Up to Speed on Deep Learning: May Update

Written by RequestsForStartups | Published 2017/05/02
Tech Story Tags: machine-learning | deep-learning | ai | up-to-speed | requests-for-startups

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: April part 2, April part 1, March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Announcements & Research

Caffe2 release by Facebook. Open-sourcing the first production-ready release of Caffe2 — a lightweight and modular deep learning framework emphasizing portability while maintaining scalability and performance. Shipping with tutorials and examples that demonstrate learning at massive scale. Deployed at Facebook.

Speech synthesis with minimal training data by Lyrebird. PhD students from the University of Montreal announce that they are developing new speech synthesis technology that, among other features, can copy a person's voice from very little data.

Understanding deep learning requires rethinking generalization by Google researchers. An ICLR 2017 Best Paper. _Through extensive systematic experiments, we show how the traditional approaches fail to explain why large neural networks generalize well in practice, and why understanding deep learning requires rethinking generalization._

The Synthetic Data Vault by MIT researchers. Describes a machine learning system that automatically creates synthetic data, with the goal of enabling data science efforts that might otherwise never get off the ground due to a lack of access to real data. The synthetic data is entirely distinct from any data produced by real users.

Resources

The Modern History of Object Recognition — Infographic by Đặng Hà Thế Hiển. Summarizes important concepts in object recognition, like bounding box regression and transposed convolution, and also outlines the history of deep learning approaches to object recognition since 2012.

The Deep Learning Roadmap by Carlos Perez. A map that categorizes the various research threads and advancements within deep learning. A useful categorization as you follow developments in the space.

Failures of Deep Learning (video) by Shai Shalev-Shwartz. Lecture on three families of problems for which existing deep learning algorithms fail. The lecture illustrates practical cases in which these failures arise and provides theoretical insight into the source of the difficulty. Slides here.

Introduction to Deep Learning by MIT. A week-long intro to deep learning methods with applications to machine translation, image recognition, game playing, image generation and more. A collaborative course incorporating labs in TensorFlow and peer brainstorming along with lectures. All lecture slides and videos available.

A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN by Dhruv Parthasarathy. An overview of CNN developments applied to image segmentation.

Deep learning for satellite imagery via image segmentation by Arkadiusz Nowaczynski. A top-performing team from a recent Kaggle competition discusses its deep learning approach to image segmentation of satellite imagery and shares lessons learned.

Keras Cheatsheet by DataCamp. A cheat sheet covering the six steps for building a neural network in Python with the Keras library.
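
For orientation, here is a minimal sketch of what those six steps typically look like in Keras code; the data and layer sizes below are illustrative placeholders, not the cheat sheet's actual example.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 1-2. Load and preprocess data (random placeholder data here).
X = np.random.rand(1000, 20)
y = np.random.randint(2, size=(1000, 1))

# 3. Define the model architecture.
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

# 4. Compile: choose optimizer, loss, and metrics.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# 5. Fit the model to the training data.
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# 6. Evaluate and predict.
loss, acc = model.evaluate(X, y, verbose=0)
preds = model.predict(X[:5])
```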

Tutorials

How to Build a Recurrent Neural Network in TensorFlow by Erik Hallström. This is a no-nonsense overview of implementing a recurrent neural network (RNN) in TensorFlow. Both theory and practice are covered concisely, and the end result is running TensorFlow RNN code.
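
For a rough sense of what such code involves, here is a minimal sketch using the TensorFlow 1.x API of the era; the shapes and names are illustrative assumptions, not the tutorial's exact code.

```python
import tensorflow as tf

batch_size, num_steps, input_dim, state_size = 32, 10, 8, 16

# A batch of input sequences, shaped [batch, time, features].
inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_dim])

# A basic RNN cell; dynamic_rnn unrolls it over the time dimension.
cell = tf.contrib.rnn.BasicRNNCell(state_size)
init_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)
# outputs: [batch, time, state_size]; final_state: [batch, state_size]
```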

Interpretability via attentional and memory-based interfaces, using TensorFlow by Goku Mohandas. A gentle introduction to attentional and memory-based interfaces in deep neural architectures, using TensorFlow. Incorporating attention mechanisms is straightforward and can add transparency and interpretability to complex models. GitHub repo here.
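
To make the idea concrete, here is a toy sketch of soft (dot-product) attention in plain NumPy; all names and shapes are illustrative, not taken from the tutorial or its repo.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

memory = np.random.rand(5, 8)   # 5 memory slots, 8-dim each
query = np.random.rand(8)       # current decoder/query state

scores = memory @ query         # one relevance score per memory slot
weights = softmax(scores)       # attention distribution (inspectable, hence interpretable)
context = weights @ memory      # weighted sum: the attended context vector
```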

Recurrent Neural Networks & LSTMs by Rohan Kapur. A gentle and detailed introduction to RNNs. See the rest of their blog for more fantastic introductory resources.

Deep Neural Network from scratch by Florian Courtial. A tutorial on how deep neural networks work, with a Python implementation using TensorFlow.
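
For a flavor of the from-scratch approach, here is a compact NumPy sketch of a one-hidden-layer network trained with manual backpropagation on XOR; this is an illustrative example under those assumptions, not Courtial's code.

```python
import numpy as np

rng = np.random.RandomState(0)
# Inputs with a constant bias column; targets are XOR of the first two columns.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = 2 * rng.rand(3, 4) - 1   # input -> hidden weights
W2 = 2 * rng.rand(4, 1) - 1   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: chain rule through the squared error and the sigmoids.
    d_out = (y - out) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Update weights to reduce the error (learning rate folded into the deltas).
    W2 += h.T @ d_out
    W1 += X.T @ d_h

print(out.round(3))  # should be close to [[0], [1], [1], [0]]
```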

The GAN Zoo by Avinash Hindupur. List of all named GANs and their respective papers.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning, we’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Please tap or click “❤” to help promote this piece to others.

