Sharing some of the latest research, announcements, and resources on deep learning. By Isaac Madan (email).

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1, part 2, part 3, part 4), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Apollo by Baidu. Newly launched open source platform for building autonomous vehicles.

Neural Network Libraries by Sony. Sony demonstrates its interest in deep learning by releasing its own open source deep learning framework.

CAN (Creative Adversarial Network) — Explained by Harshvardhan Gupta. Facebook researchers propose a new system for generating art. The system generates art by looking at art and learning about style; it becomes creative by increasing the arousal potential of the generated art, deviating from the learned styles. This post walks through the paper and explains it. Original paper here.

‘Explainable Artificial Intelligence’: Cracking open the black box of AI by George Nott. One current downside, and ongoing research area, of deep neural networks is that they are black boxes, meaning their decision making and outcomes can’t be easily justified or explained. The article discusses various attempts and ongoing work in this area, including work by UC Berkeley & the Max Planck Institute described in this original paper.

Interpreting Deep Neural Networks using Cognitive Psychology by DeepMind. In a similar vein to the article above, DeepMind researchers propose a new approach to interpreting and explaining deep neural network models by leveraging methods from cognitive psychology. For example, when children guess the meaning of a word from a single example (one-shot word learning), they employ a variety of inductive biases, such as shape bias. DeepMind assesses this bias in their models to improve their interpretation of what’s happening under the hood. Original paper here.

Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes by Wu et al. Digs into the question: why do deep neural networks generalize well?

Resources, Tutorials & Data

Under the Hood of a Self-Driving Taxi by Oliver Cameron of Voyage. A helpful overview of the tech stack powering a self-driving car, digging into Voyage’s compute, power, and drive-by-wire systems.

How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native by Tim Anglade. A walk-through of how the TV show Silicon Valley built their app that famously identifies hotdogs and not hotdogs.

Machine UI, a new IDE purpose-built for machine learning with visual model representation. Video.

A 2017 Guide to Semantic Segmentation with Deep Learning by Qure.ai. Overview of the state of the art in semantic segmentation. As context, semantic segmentation is understanding an image at pixel level, i.e., we want to assign each pixel in the image an object class.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or there are resources I should share in a future newsletter, I’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Please tap or click “❤︎” to help to promote this piece to others.
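As a small postscript, the per-pixel classification framing of semantic segmentation mentioned above can be sketched in a few lines of NumPy. This is a toy illustration only: the image size, class names, and random logits are all made up, standing in for the output of a real segmentation network.

```python
import numpy as np

# Semantic segmentation as per-pixel classification: a network produces a
# score (logit) for every class at every pixel, and the predicted mask is
# the argmax over the class axis. All names/shapes below are hypothetical.

H, W, NUM_CLASSES = 4, 4, 3              # tiny 4x4 "image", 3 classes
CLASSES = ["background", "road", "car"]  # made-up label set

rng = np.random.default_rng(0)
logits = rng.normal(size=(H, W, NUM_CLASSES))  # stand-in for network output

mask = logits.argmax(axis=-1)  # (H, W) array of class indices, one per pixel
label_at_origin = CLASSES[mask[0, 0]]  # class name predicted for pixel (0, 0)
print(mask.shape, label_at_origin)
```

The key point is only the last two lines: whatever the network architecture, the output is a class decision for every pixel rather than one label for the whole image.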