Up to Speed on Deep Learning: May Update

by Requests for Startups, May 2nd, 2017

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: April part 2, April part 1, March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Announcements & Research

Caffe2 release by Facebook. Open-sourcing the first production-ready release of Caffe2 — a lightweight and modular deep learning framework emphasizing portability while maintaining scalability and performance. Shipping with tutorials and examples that demonstrate learning at massive scale. Deployed at Facebook.
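As a flavor of the API, here is a minimal sketch in the spirit of Caffe2's introductory Python tutorial: one fully connected layer run forward on random data. Blob names, shapes, and the toy setup are illustrative assumptions, not part of the release announcement.

```python
# Minimal Caffe2 sketch (illustrative; styled after the intro tutorial).
import numpy as np
from caffe2.python import workspace, model_helper

# Feed random inputs and integer labels into the workspace.
data = np.random.rand(16, 100).astype(np.float32)
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)

# One fully connected layer feeding a softmax loss.
m = model_helper.ModelHelper(name="toy_net")
m.param_init_net.XavierFill([], "fc_w", shape=[10, 100])
m.param_init_net.ConstantFill([], "fc_b", shape=[10])
fc1 = m.net.FC(["data", "fc_w", "fc_b"], "fc1")
softmax, loss = m.net.SoftmaxWithLoss([fc1, "label"], ["softmax", "loss"])

# Initialize parameters once, then run the forward pass a few times.
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
workspace.RunNet(m.name, 10)
print(workspace.FetchBlob("loss"))
```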

Speech synthesis with minimal training data by Lyrebird. PhD students from the University of Montreal announce that they are developing new speech synthesis technology that, among other features, can copy a person's voice from very little data.

Understanding deep learning requires rethinking generalization by Google researchers. An ICLR 2017 Best Paper: through extensive systematic experiments, the authors show how traditional approaches fail to explain why large neural networks generalize well in practice, and why understanding deep learning requires rethinking generalization.
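The paper's central randomization test is easy to reproduce in miniature: a sufficiently large network drives training accuracy toward 100% even when the labels are pure noise. A hedged Keras sketch of that test (sizes and hyperparameters are ours, not the paper's):

```python
# Randomization test in miniature: fit random labels on random data.
# High training accuracy here is memorization, the phenomenon the paper
# argues classical generalization theory fails to account for.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(1024, 64)                 # random inputs
y = np.random.randint(0, 2, size=(1024, 1))  # random labels: nothing to learn

model = Sequential([
    Dense(512, activation='relu', input_dim=64),
    Dense(512, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=200, batch_size=64, verbose=0)

# Training accuracy climbs toward 1.0 despite the labels being noise.
print(model.evaluate(x, y, verbose=0))
```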

The Synthetic Data Vault by MIT researchers. Describes a machine learning system that automatically creates synthetic data, with the goal of enabling data science efforts that, for lack of access to real data, might otherwise never get off the ground. The synthetic data resembles the original statistically yet is completely distinct from any data produced by real users.
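The paper builds its table models around, among other tools, a multivariate Gaussian copula. The toy numpy sketch below captures only the flavor of that idea (our simplification, not the SDV pipeline): fit column means and covariance on real data, then sample fresh rows that share the statistics but none of the records.

```python
# Toy synthetic-data sketch: fit a multivariate Gaussian to real columns,
# then sample new rows with similar joint statistics. A simplification of
# the SDV's copula-based modeling, not its actual pipeline.
import numpy as np

def fit_and_sample(real_data, n_samples):
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return np.random.multivariate_normal(mean, cov, size=n_samples)

real = np.column_stack([
    np.random.normal(35, 10, 1000),        # e.g. an age-like column
    np.random.normal(50000, 15000, 1000),  # e.g. an income-like column
])
synthetic = fit_and_sample(real, 500)
print(np.corrcoef(real, rowvar=False))
print(np.corrcoef(synthetic, rowvar=False))  # column correlations carry over
```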

Resources

The Modern History of Object Recognition — Infographic by Đặng Hà Thế Hiển. Summarizes important concepts in object recognition, like bounding box regression and transposed convolution, and also outlines the history of deep learning approaches to object recognition since 2012.
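Of the concepts covered, transposed convolution is the one that most often trips readers up: it upsamples a feature map, roughly inverting the shape change of a strided convolution. A minimal Keras sketch (layer sizes are arbitrary):

```python
# A transposed convolution with stride 2 doubles the spatial dimensions,
# roughly inverting the shape change of a stride-2 convolution.
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2DTranspose

model = Sequential([
    Conv2DTranspose(8, kernel_size=3, strides=2, padding='same',
                    input_shape=(16, 16, 4)),
])
x = np.random.rand(1, 16, 16, 4).astype('float32')
print(model.predict(x).shape)  # (1, 32, 32, 8): 16x16 upsampled to 32x32
```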

The Deep Learning Roadmap by Carlos Perez. A map that categorizes the various research threads and advancements within deep learning. A useful categorization as you follow developments in the space.

Failures of Deep Learning (video) by Shai Shalev-Shwartz. Lecture on three families of problems for which existing deep learning algorithms fail. He illustrates practical cases in which these failures arise and provides theoretical insight into the source of the difficulty. Slides here.

Introduction to Deep Learning by MIT. A week-long intro to deep learning methods with applications to machine translation, image recognition, game playing, image generation and more. A collaborative course incorporating labs in TensorFlow and peer brainstorming along with lectures. All lecture slides and videos available.

A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN by Dhruv Parthasarathy. An overview of CNN developments applied to image segmentation.

Deep learning for satellite imagery via image segmentation by Arkadiusz Nowaczynski. A top-performing team from a recent Kaggle competition discusses its deep learning approach to segmenting satellite imagery and shares lessons learned.
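Competition-grade segmentation models (often U-Net variants) are far larger, but the shape contract is simple: image in, per-pixel mask out. A toy fully convolutional Keras sketch of that contract only, with made-up sizes:

```python
# Tiny fully convolutional sketch: image in, per-pixel mask probability out.
# Illustrative only; real satellite-segmentation models are much deeper.
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(64, 64, 3)),
    Conv2D(16, 3, padding='same', activation='relu'),
    Conv2D(1, 1, activation='sigmoid'),  # 1x1 conv: one probability per pixel
])
model.compile(optimizer='adam', loss='binary_crossentropy')
print(model.output_shape)  # (None, 64, 64, 1)
```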

Keras Cheatsheet by DataCamp. A cheatsheet covering the six steps for building neural networks in Python with the Keras library.
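Roughly, the steps are: load data, define the model, compile, fit, evaluate, predict. A compact sketch with placeholder data (our toy example, not DataCamp's):

```python
# The canonical Keras workflow: data, define, compile, fit, evaluate, predict.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 1. Load (here: fake) data.
x, y = np.random.rand(500, 20), np.random.randint(0, 2, (500, 1))
# 2. Define the model.
model = Sequential([Dense(32, activation='relu', input_dim=20),
                    Dense(1, activation='sigmoid')])
# 3. Compile.
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])
# 4. Fit.
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
# 5. Evaluate.
loss, acc = model.evaluate(x, y, verbose=0)
# 6. Predict.
preds = model.predict(x[:3])
```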

Tutorials

How to Build a Recurrent Neural Network in TensorFlow by Erik Hallström. This is a no-nonsense overview of implementing a recurrent neural network (RNN) in TensorFlow. Both theory and practice are covered concisely, and the end result is running TensorFlow RNN code.
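For reference, the core of an RNN in 2017-era TensorFlow fits in a few lines: build a cell and unroll it over the time dimension with dynamic_rnn. Shapes below are placeholders, and this is a generic skeleton rather than Hallström's code:

```python
# Minimal RNN skeleton in TensorFlow 1.x: one cell unrolled over time.
import tensorflow as tf

batch_size, num_steps, input_dim, state_size = 32, 10, 8, 16

inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_dim])
cell = tf.contrib.rnn.BasicRNNCell(state_size)
# outputs: [batch, time, state_size]; final_state: [batch, state_size]
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```

Swapping BasicRNNCell for tf.contrib.rnn.BasicLSTMCell upgrades the skeleton to an LSTM with no other changes.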

Interpretability via attentional and memory-based interfaces, using TensorFlow by Goku Mohandas. A gentle introduction to attentional and memory-based interfaces in deep neural architectures, using TensorFlow. Incorporating attention mechanisms is very simple and can offer transparency and interpretability to our complex models. GitHub repo here.
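At its simplest, dot-product attention is a softmax-weighted average: score each memory slot against a query, normalize, and mix. A numpy sketch of just that idea (the tutorial itself goes much further):

```python
# Dot-product attention in numpy: score memory rows against a query,
# softmax the scores, return the weighted average of the values.
import numpy as np

def attend(query, keys, values):
    scores = keys @ query                    # one score per memory slot
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights         # context vector + weights

keys, values = np.random.rand(5, 8), np.random.rand(5, 8)
query = np.random.rand(8)
context, weights = attend(query, keys, values)
print(weights)  # inspecting these weights is the interpretability win
```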

Recurrent Neural Networks & LSTMs by Rohan Kapur. A gentle and detailed introduction to RNNs. See the rest of their blog for more fantastic introductory resources.

Deep Neural Network from scratch by Florian Courtial. Tutorial on how deep neural networks work and a Python implementation with TensorFlow.
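The from-scratch mechanics reduce to a forward pass plus gradient descent via backpropagation. A hedged two-layer numpy sketch (our toy version, not Courtial's code):

```python
# Two-layer network from scratch: forward pass, backprop, weight update.
import numpy as np

np.random.seed(0)
x = np.random.rand(64, 3)                       # inputs
y = (x.sum(axis=1, keepdims=True) > 1.5) * 1.0  # toy binary targets

w1, w2 = np.random.randn(3, 8) * 0.5, np.random.randn(8, 1) * 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass.
    h = sigmoid(x @ w1)
    out = sigmoid(h @ w2)
    # Backprop of squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * h * (1 - h)
    # Gradient descent update.
    w2 -= 0.5 * h.T @ d_out
    w1 -= 0.5 * x.T @ d_h

print(np.mean((out > 0.5) == (y > 0.5)))  # training accuracy
```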

The GAN Zoo by Avinash Hindupur. List of all named GANs and their respective papers.
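Every entry in the zoo shares one skeleton: a generator maps noise to samples, a discriminator scores real versus fake, and the two train adversarially. A minimal hedged Keras sketch of that skeleton on 1-D data (our toy setup):

```python
# Minimal GAN skeleton in Keras: generator vs. discriminator on 1-D data.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Discriminator: scalar sample in, real/fake probability out.
D = Sequential([Dense(16, activation='relu', input_dim=1),
                Dense(1, activation='sigmoid')])
D.compile(optimizer=Adam(0.001), loss='binary_crossentropy')

# Generator: noise in, scalar sample out.
G = Sequential([Dense(16, activation='relu', input_dim=8), Dense(1)])

# Stacked model trains G through a frozen D.
D.trainable = False
GAN = Sequential([G, D])
GAN.compile(optimizer=Adam(0.001), loss='binary_crossentropy')

for step in range(1000):
    noise = np.random.rand(32, 8)
    fake = G.predict(noise)
    real = np.random.normal(4.0, 1.0, (32, 1))  # the "real" distribution
    # 1) Train D: real samples get label 1, generated samples label 0.
    D.train_on_batch(np.vstack([real, fake]),
                     np.vstack([np.ones((32, 1)), np.zeros((32, 1))]))
    # 2) Train G (through frozen D) to make its fakes look real.
    GAN.train_on_batch(noise, np.ones((32, 1)))
```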

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning, we’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Please tap or click “❤” to help promote this piece to others.