Sharing some of the latest research, announcements, and resources on deep learning. By Isaac Madan (email).

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1, part 2), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there's something we should add, or if you're interested in discussing this area further.

Research & Announcements

Learning to Speak via Interaction, by Baidu Research. Teaching an AI agent to speak by interacting with a virtual agent. This represents an advancement in more closely replicating how humans learn, as well as a step toward the goal of demonstrating general artificial intelligence. Original paper here. The agent learns to speak in an interactive way, much as a baby does; in contrast, the conventional approach relies on supervised training over a large, pre-collected corpus, which is static and makes it hard to capture the interactive nature of language learning.

Deep Shimon: Robot that composes its own music, by Mason Bretan of Georgia Tech. Video here. The robot Shimon composes and performs his first deep-learning-driven piece. A recurrent deep neural network is trained on a large database of classical and jazz music; based on learned semantic relationships between musical units in this dataset, Shimon generates and performs a new musical piece.

By Pathak et al. UC Berkeley researchers demonstrate artificial curiosity via an intrinsic curiosity model that controls a virtual agent in a video game and helps it understand its environment faster, which can accelerate problem solving. Original paper and video here:
Curiosity-driven Exploration by Self-supervised Prediction.

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, by Facebook Research. Deep learning benefits from massive data sets, but this means long training times that slow down development. Original paper here. Using commodity hardware, their implementation achieves ~90% scaling efficiency when moving from 8 to 256 GPUs, enabling them to train visual recognition models on internet-scale data with high efficiency.

Resources

A gentle introduction to deep learning with TensorFlow, by Michelle Fullwood at PyCon 2017. 41-minute video; slides and GitHub here. This talk aims to gently bridge the divide by demonstrating how deep learning operates on core machine learning concepts, and gets attendees started coding deep neural networks using Google's TensorFlow library.

Deep Reinforcement Learning Demystified (Episode 0), by Moustafa Alzantot. A basic description of what reinforcement learning is, with examples of where it can be used. Covers the essential terminology of reinforcement learning and provides a quick tutorial on OpenAI Gym.

Neural Networks and Deep Learning, by Michael Nielsen. A free online book that introduces neural networks and deep learning.

You can probably use deep learning even if your data isn't that big, by Andrew Beam. In response to "Don't use deep learning your data isn't that big" by Jeff Leek, this article argues and explains how you can still use deep learning in (some) small-data settings if you train your model carefully.

Posting on ArXiv is good, flag planting notwithstanding, by Yann LeCun. In response to, and refuting, "An Adversarial Review of 'Adversarial Generation of Natural Language'" by Yoav Goldberg of Bar-Ilan University, which takes issue with deep learning researchers publishing aggressively on arXiv.

Tutorials & Data

From the University of Washington. Starts July 3; enroll now. Learn how the brain processes information:
Computational Neuroscience, a Coursera course. This course provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of vision, sensory-motor control, learning, and memory.

Core ML and Vision: Machine Learning in iOS 11 Tutorial, by Audrey Tam. iOS 11 introduces two new frameworks related to machine learning, Core ML and Vision. This tutorial walks you through how to use these new APIs to build a scene classifier.

Deep Learning CNNs in TensorFlow with GPUs, by Cole Murray. In this tutorial, you'll learn the architecture of a convolutional neural network (CNN), how to create a CNN in TensorFlow, and how to make predictions on image labels. Finally, you'll learn how to run the model on a GPU so you can spend your time creating better models, not waiting for them to converge.

Open-sourced Kinetics data set, by Google DeepMind. An annotated data set of human actions, such as playing instruments, shaking hands, and hugging. Kinetics is a large-scale, high-quality dataset of YouTube video URLs covering a diverse range of human-focused actions; it consists of approximately 300,000 video clips and covers 400 human action classes, with at least 400 clips per class.

Let's evolve a neural network with a genetic algorithm, by Matt Harvey of Coastline Automation. Applying a genetic algorithm to evolve a network, with the goal of finding optimal hyperparameters in a fraction of the time required by a brute-force search.

By Isaac Madan. Isaac is an investor at Venrock (email). If you're interested in deep learning, or there are resources I should share in a future newsletter, I'd love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Please tap or click “❤” to help promote this piece to others.
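P.S. For readers curious about the genetic-algorithm idea in Matt Harvey's post above, here is a minimal, self-contained sketch. This is my own illustration, not code from his article: the search space, helper names (`random_params`, `score`, `breed`, `evolve`), and the toy fitness function are all hypothetical stand-ins, with the expensive "train a network, return validation accuracy" step stubbed out so the loop runs instantly.

```python
import random

# Hypothetical hyperparameter search space (4 x 4 x 4 = 64 combinations).
SPACE = {
    "lr": [0.1, 0.01, 0.001, 0.0001],
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
}

def random_params():
    """Sample one random candidate from the search space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def score(params):
    """Toy stand-in for 'train the network, return validation accuracy'."""
    target = {"lr": 0.001, "layers": 3, "units": 64}
    return sum(params[k] == target[k] for k in SPACE) / len(SPACE)

def breed(mom, dad, mutation_rate=0.2):
    """Child takes each setting from one parent, with occasional mutation."""
    child = {k: random.choice([mom[k], dad[k]]) for k in SPACE}
    if random.random() < mutation_rate:
        k = random.choice(list(SPACE))
        child[k] = random.choice(SPACE[k])
    return child

def evolve(pop_size=20, generations=10):
    """Keep the fittest half each generation; refill with their children."""
    population = [random_params() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]
        children = [breed(*random.sample(parents, 2))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=score)

best = evolve()
print(best, score(best))
```

The point of the approach is that each generation evaluates only `pop_size` candidates yet concentrates the search around settings that already scored well, rather than exhaustively training all 64 combinations as a grid search would.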