
Up to Speed on Deep Learning: June Update, Part 2

by Requests for Startups, June 9th, 2017

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1), May, April (part 1, part 2), March (part 1), February, November, September (part 2) & October (part 1), September (part 1), August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Scalable and Sustainable Deep Learning via Randomized Hashing by Spring and Shrivastava of Rice University. The researchers adapted a widely used technique for rapid data lookup to slash the amount of computation, and thus energy and time, required for deep learning. “This applies to any deep learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be,” said Shrivastava. News article here.
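To make the idea concrete, here’s a toy sketch (not the authors’ code; all names and sizes are illustrative) of hashing-based neuron selection in NumPy: each neuron’s weight vector is indexed once into a SimHash table, and at inference time only the neurons whose signatures collide with the input’s are evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N, K = 64, 1024, 8                 # input dim, neuron count, hash bits (illustrative)
W = rng.standard_normal((N, D))       # layer weights, one row per neuron
planes = rng.standard_normal((K, D))  # random hyperplanes for SimHash

def simhash(v):
    # K-bit signature: the sign pattern of v projected onto the random hyperplanes
    return tuple((planes @ v > 0).astype(int))

# Index every neuron's weight vector into a hash table (done once, up front)
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    # Evaluate only the neurons whose weights hash to the same bucket as the input;
    # by the LSH property these are the ones most likely to activate strongly.
    active = table.get(simhash(x), [])
    out = np.zeros(N)
    out[active] = W[active] @ x
    return np.maximum(out, 0)         # ReLU over the selected neurons

x = rng.standard_normal(D)
print(f"evaluated {len(table.get(simhash(x), []))} of {N} neurons")
```

Most inputs land in a sparsely populated bucket, so only a small fraction of the 1,024 neurons is ever touched; that selective evaluation is the source of the sublinear scaling the paper describes.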

A neural approach to relational reasoning by DeepMind. Relational reasoning is the process of drawing conclusions about how things are related to one another, and is central to human intelligence. A key challenge in developing artificial intelligence systems with the flexibility and efficiency of human cognition is giving them a similar ability — to reason about entities and their relations from unstructured data. These papers show promising approaches to understanding the challenge of relational reasoning. Original papers here and here.
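The core construction in the paper is the Relation Network, RN(O) = f_φ(Σ over pairs (i, j) of g_θ(o_i, o_j)): a small network g_θ scores every pair of objects, and a second network f_φ aggregates the summed pair codes into an answer. A minimal NumPy sketch of that equation, with untrained random-weight MLPs standing in for g_θ and f_φ:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # A tiny random-weight MLP with ReLU, standing in for a trained network.
    Ws = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.maximum(x @ W, 0)
        return x @ Ws[-1]
    return f

n_obj, d = 5, 8                      # 5 objects with 8 features each (illustrative)
objects = rng.standard_normal((n_obj, d))

g = mlp([2 * d, 32, 32])             # g_theta: reasons about one pair of objects
f = mlp([32, 32, 4])                 # f_phi: aggregates pair codes into an answer

# RN(O) = f( sum over all pairs (i, j) of g(o_i, o_j) )
pair_sum = sum(g(np.concatenate([objects[i], objects[j]]))
               for i in range(n_obj) for j in range(n_obj))
print(f(pair_sum))
```

Because the sum runs over all pairs, the output is invariant to the order of the objects, which is what pushes the model to learn relations rather than positions.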

Resources

Applying deep learning to real-world problems by Rasmus Rothe of Merantix. A must-read on key learnings when using deep learning in the real world. Discussion of the value of pre-training, caveats of real-world label distributions, and understanding black box models.
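On the pre-training point, the standard recipe is to reuse a network trained on a large dataset and retrain only a small head on your own labels. A minimal Keras sketch of that recipe (assumes TensorFlow is installed; the 10-class head and input size are illustrative):

```python
# Transfer learning: freeze an ImageNet-pretrained backbone, train a new head.
from tensorflow.keras import applications, layers, models

base = applications.VGG16(weights="imagenet", include_top=False,
                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # keep the pre-trained features fixed

model = models.Sequential([
    base,
    layers.Dense(10, activation="softmax"),  # illustrative 10-class task
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # with your own data
```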

CuPy by Preferred Networks. An open-source matrix library accelerated with NVIDIA CUDA, designed as a drop-in replacement for NumPy. GitHub repo here.
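Being a drop-in replacement means existing NumPy-style code mostly needs only array transfers at the boundaries. A small sketch (assumes an NVIDIA GPU with CuPy installed):

```python
import numpy as np
import cupy as cp  # requires an NVIDIA GPU and CUDA

x = np.random.rand(1000, 1000).astype(np.float32)

x_gpu = cp.asarray(x)            # host -> device copy
y_gpu = x_gpu @ x_gpu.T          # same ndarray API as NumPy, runs on the GPU
y = cp.asnumpy(y_gpu)            # device -> host copy

print(y.shape, y.dtype)
```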

Speaker Resources 2017 by The AI Conference. Various news articles, academic papers, and datasets shared by folks involved in and enthusiastic about AI. (h/t Michelle Valentine)

Neural Network Architectures by Eugenio Culurciello. An in-depth overview & history of neural network architectures in the context of deep learning, spanning LeNet5, AlexNet, GoogLeNet, Inception, and a discussion of where things are headed in the future. Original paper here.
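As a reference point for the earliest architecture in the survey, here’s a rough LeNet5-style network in Keras (layer sizes follow the 1998 design; the activations and pooling here are approximations of the original, not an exact reproduction):

```python
from tensorflow.keras import layers, models

lenet = models.Sequential([
    layers.Conv2D(6, 5, activation="tanh", input_shape=(32, 32, 1)),  # C1: 6 5x5 filters
    layers.AveragePooling2D(2),                                       # S2: subsampling
    layers.Conv2D(16, 5, activation="tanh"),                          # C3: 16 5x5 filters
    layers.AveragePooling2D(2),                                       # S4: subsampling
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                             # C5
    layers.Dense(84, activation="tanh"),                              # F6
    layers.Dense(10, activation="softmax"),                           # 10 digit classes
])
lenet.summary()
```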

Model Zoo by Sebastian Raschka. A collection of standalone TensorFlow models in Jupyter Notebooks, including classifiers, autoencoders, GANs, and more. The broader repo for Sebastian’s book is also useful, here.

Tutorials & Data

Sketch-RNN: A Generative Model for Vector Drawings by Google. A TensorFlow recurrent neural network model for teaching machines to draw. Overview of the model and how to use it. Described in greater depth by Google here and here.
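Sketch-RNN represents a drawing as a sequence of pen offsets rather than pixels; the paper’s “stroke-5” format encodes each step as (Δx, Δy, p1, p2, p3), where exactly one of the three pen states is active. A tiny hand-built example:

```python
import numpy as np

# Stroke-5 format: each row is (dx, dy, p1, p2, p3), where
# p1 = pen down (drawing), p2 = pen up (lift), p3 = end of sketch.
square = np.array([
    [  0,   0, 1, 0, 0],   # start at the origin, pen down
    [ 10,   0, 1, 0, 0],   # right
    [  0,  10, 1, 0, 0],   # up
    [-10,   0, 1, 0, 0],   # left
    [  0, -10, 0, 1, 0],   # back to the start, then lift the pen
    [  0,   0, 0, 0, 1],   # end-of-sketch token
], dtype=np.float32)

# Recover absolute pen positions from the relative offsets
points = np.cumsum(square[:, :2], axis=0)
print(points)
```

The recurrent network is trained to predict the next offset-and-pen-state step, which is what lets it generate new drawings stroke by stroke.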

Exploring LSTMs by Edwin Chen. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind many of the amazing achievements deep learning has made in the past few years. Chen presents them as intuitively as possible, in such a way that you could have discovered them yourself: an overview of long short-term memory networks, and a tutorial on their use.
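That “simple extension” framing is easy to see in code: one LSTM step is just four feed-forward layers applied to the concatenated input and previous hidden state, with the gates deciding what to write to and read from the cell state. A bare NumPy sketch of a single step (all sizes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    # Four gates computed in one matrix multiply over [x, h]
    z = np.concatenate([x, h]) @ W + b             # shape (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate memory
    c = f * c + i * g                              # update the cell state
    h = o * np.tanh(c)                             # expose part of it as output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.standard_normal((n_in + n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((7, n_in)):           # run a 7-step sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```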

Vistas Dataset by Mapillary. Free for research, the MVD is the world’s largest manually annotated semantic segmentation training dataset for street-level imagery. It is primarily used to train deep neural nets for object detection, semantic segmentation, and scene understanding in ADAS and autonomous driving. (h/t Andrew Mahon)

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or there are resources I should share in a future newsletter, I’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Please tap or click “❤” to help promote this piece to others.