Up to Speed on Deep Learning: July Update

Written by RequestsForStartups | Published 2017/07/07
Tech Story Tags: machine-learning | deep-learning | apollo | neural-network-libraries | artificial-intelligence


Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1, part 2, part 3, part 4), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Apollo by Baidu. Newly launched open-source platform for building autonomous vehicles.

Neural Network Libraries by Sony. Sony demonstrates its interest in deep learning by releasing its own open-source deep learning framework.

CAN (Creative Adversarial Network) — Explained by Harshvardhan Gupta. Facebook researchers propose a new system for generating art. The system generates art by looking at art and learning about style, and becomes creative by increasing the arousal potential of the generated art through deviation from the learned styles. This post walks through the paper and explains it. Original paper here.

‘Explainable Artificial Intelligence’: Cracking open the black box of AI by George Nott. One current downside, and ongoing research area, of deep neural networks is that they are black boxes, meaning their decision making & outcomes can’t be easily justified or explained. The article discusses various attempts and ongoing work in this area, including work by UC Berkeley & the Max Planck Institute described in the original paper here.

Interpreting Deep Neural Networks using Cognitive Psychology by DeepMind. In a similar vein to the article above, DeepMind researchers propose a new approach to interpreting/explaining deep neural network models by leveraging methods from cognitive psychology. For example, when children guess the meaning of a word from a single example (one-shot word learning), they employ a variety of inductive biases, such as shape bias. DeepMind assesses this bias in their models to improve their interpretation of what’s happening under the hood. Original paper here.

Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes by Wu et al. Digs into the question of why deep neural networks generalize well.

Resources, Tutorials & Data

Under the Hood of a Self-Driving Taxi by Oliver Cameron of Voyage. A helpful overview of the tech stack powering a self-driving car, digging into Voyage’s compute, power, and drive-by-wire systems.

How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native by Tim Anglade. A walk-through of how the Silicon Valley TV show built their app that famously identifies hotdogs and not hotdogs.

Machine UI, a new IDE purpose-built for machine learning with visual model representation. Video.

A 2017 Guide to Semantic Segmentation with Deep Learning by Qure.ai. Overview of the state-of-the-art in semantic segmentation. As context, _semantic segmentation is understanding an image at the pixel level, i.e., we want to assign each pixel in the image an object class._
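To make that definition concrete, here is a minimal sketch of what a segmentation model’s output looks like. It assumes hypothetical per-pixel class scores (in practice these come from a trained network); semantic segmentation then amounts to taking, for each pixel, the class with the highest score:

```python
import numpy as np

# Hypothetical per-pixel class scores for a tiny 4x4 image over 3 classes
# (e.g., background, road, car). A real network would produce these scores.
np.random.seed(0)
logits = np.random.rand(4, 4, 3)  # shape: (height, width, num_classes)

# Semantic segmentation: assign each pixel the class with the highest score.
segmentation_map = np.argmax(logits, axis=-1)  # shape: (height, width)

print(segmentation_map)  # each entry is a class index per pixel
```

The result has the same spatial dimensions as the input image, with one class label per pixel — in contrast to image classification, which outputs a single label for the whole image.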

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or there are resources I should share in a future newsletter, I’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.



Published by HackerNoon on 2017/07/07