Up to Speed on Deep Learning: June Update, Part 4

by Requests for Startups, June 27th, 2017
Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1, part 2, part 3), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Grounded Language Learning in a Simulated 3D World by Google DeepMind. Teaching an AI agent to learn & apply language. Here we present an agent that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions. The agent’s comprehension of language extends beyond its prior experience, enabling it to apply familiar language to unfamiliar situations and to interpret entirely novel instructions.

One Model To Learn Them All by Google. Getting a deep learning model to work well for a specific task like speech recognition, translation, etc. can take lots of time researching architecture & tuning. A generalizable model that works well across various tasks would thus be quite useful — Google presents one such model. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task.

Tensor2Tensor by Google Brain. An open-source system for training deep learning models in TensorFlow. T2T facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning, and more, enabling the exploration of various ideas much faster than previously possible. This release also includes a library of datasets and models, including the best models from a few recent papers. GitHub repo here.

TensorFlow Object Detection API by Google. Last year Google demonstrated state-of-the-art results in object detection, won the COCO detection challenge, and featured the work in products like the NestCam. They're now open-sourcing this work: a framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models. Their goal in designing the system was to support state-of-the-art models while allowing for rapid exploration and research. A minimal inference sketch follows below.
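
To give a sense of what using an exported model looks like, here is a minimal, hypothetical sketch of loading a frozen detection graph and running it once in TensorFlow 1.x. The file name and tensor names are assumptions based on what the Object Detection API typically exports, not code from Google's release; adjust them to your own model.

```python
import numpy as np
import tensorflow as tf

# Minimal, hypothetical sketch: load a frozen detection graph and run it once.
# The file name and tensor names below are assumptions -- adjust them to the
# model you actually export.
PATH_TO_FROZEN_GRAPH = "frozen_inference_graph.pb"

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # Stand-in for a real RGB image batch of shape [1, height, width, 3].
    image = np.zeros((1, 480, 640, 3), dtype=np.uint8)
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
    print(boxes.shape, scores.shape, classes.shape)
```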

MobileNets: Open-Source Models for Efficient On-Device Vision by Google. It's hard to run visual recognition models accurately on mobile devices given their limited computational power and memory. To address this, Google released MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to maximize accuracy while being mindful of the restricted resources of on-device and embedded applications.

deeplearning.ai by Andrew Ng. A new project launching in August 2017; no details are provided on the site yet.

Image: Me trying to classify some random stuff on my desk :)

Resources, Tutorials & Data

Building a Real-Time Object Recognition App with Tensorflow and OpenCV by Dat Tran. In this article, I will walk through how you can easily build your own real-time object recognition application with TensorFlow's (TF) new Object Detection API and OpenCV in Python 3 (specifically 3.5). The focus will be on the challenges that I faced when building it. GitHub repo here. A rough sketch of the capture-and-detect loop follows below.
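
Building on the previous snippet, a rough sketch of the real-time loop might look like the following. This is not code from Dat Tran's repo; the tensor names, webcam index, and score threshold are assumptions, and `sess` and `graph` are assumed to hold the frozen detection model loaded as shown earlier.

```python
import cv2
import numpy as np

# Rough sketch of the real-time loop, assuming `sess` and `graph` hold a
# frozen detection model loaded as in the previous snippet. Tensor names,
# webcam index, and threshold are assumptions, not code from the repo.
def run_camera(sess, graph, score_threshold=0.5):
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    boxes_t = graph.get_tensor_by_name("detection_boxes:0")
    scores_t = graph.get_tensor_by_name("detection_scores:0")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The model expects a batch of RGB images; OpenCV captures BGR.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        boxes, scores = sess.run(
            [boxes_t, scores_t],
            feed_dict={image_tensor: np.expand_dims(rgb, 0)})

        h, w = frame.shape[:2]
        for box, score in zip(boxes[0], scores[0]):
            if score < score_threshold:
                continue
            ymin, xmin, ymax, xmax = box  # normalized coordinates
            cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                          (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
```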

What Can’t Deep Learning Do? by Bharath Ramsundar. A tweetstorm listing some of the known failure modes of deep learning methods. Helpful in understanding where future research may be directed.

Generative Adversarial Networks for Beginners by O’Reilly. Build a neural network that learns to generate handwritten digits. GANs are neural networks that learn to create synthetic data similar to some known input data. GitHub repo here.
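
To make the two-network idea concrete, here is a minimal, hypothetical GAN sketch in TensorFlow 1.x. It is not the O'Reilly tutorial's code; the layer sizes, learning rate, and scope names are arbitrary assumptions.

```python
import tensorflow as tf

# Minimal, hypothetical GAN sketch (not the O'Reilly tutorial's code):
# a generator maps 100-d noise to 784-d "images" (e.g. flattened MNIST digits),
# and a discriminator scores images as real or generated.

def generator(z, reuse=False):
    with tf.variable_scope("gen", reuse=reuse):
        h = tf.layers.dense(z, 128, activation=tf.nn.relu)
        return tf.layers.dense(h, 784, activation=tf.nn.sigmoid)

def discriminator(x, reuse=False):
    with tf.variable_scope("disc", reuse=reuse):
        h = tf.layers.dense(x, 128, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)  # logits

real = tf.placeholder(tf.float32, [None, 784])
noise = tf.placeholder(tf.float32, [None, 100])

fake = generator(noise)
d_real = discriminator(real)
d_fake = discriminator(fake, reuse=True)

# Discriminator: label real samples 1 and generated samples 0.
d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real, labels=tf.ones_like(d_real)) +
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=tf.zeros_like(d_fake)))
# Generator: try to make the discriminator label generated samples 1.
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=tf.ones_like(d_fake)))

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="disc")
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="gen")
d_step = tf.train.AdamOptimizer(1e-4).minimize(d_loss, var_list=d_vars)
g_step = tf.train.AdamOptimizer(1e-4).minimize(g_loss, var_list=g_vars)

# Training alternates the two steps, feeding real image batches and fresh
# Gaussian noise each iteration.
```

The two losses pull in opposite directions, which is the adversarial dynamic the tutorial builds up to with handwritten digits.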

Measuring the Progress of AI Research by Electronic Frontier Foundation. Tracking what’s state-of-the-art in ML/AI and understanding how a specific subfield is progressing can get complicated. This pilot project collects problems and metrics/datasets from the AI research literature, and tracks progress on them.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or have resources I should share in a future newsletter, I’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Please tap or click “❤” to help promote this piece to others.