By Isaac Madan
Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.
High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis by Chao Yang et al. of USC. A novel deep learning approach that effectively fills in large holes in images. GitHub repo here.
Pixel Recursive Super Resolution by Dahl et al. of Google. Researchers at Google Brain demonstrate a deep learning method for generating higher resolution images from lower resolution, pixelated images.
Domain Transfer Network implementation by Yunjey Choi. TensorFlow implementation of Unsupervised Cross-Domain Image Generation — generating novel images of previously unseen entities, while preserving their identity.
Deep Learning Summer School and Reinforcement Learning Summer School by University of Montreal, organized by Yoshua Bengio. A conference aimed at graduate students, industrial engineers, and researchers who already have some basic knowledge of machine learning (and possibly, but not necessarily, of deep learning) and wish to learn more about this rapidly growing field of research. June 26th to July 4th, 2017. Applications due March 20th; apply here.
The AWS Deep Learning AMI, Now with Ubuntu by Joseph Spisak of AWS. Amazon announces that you can now run deep learning in the cloud via Ubuntu instances that have popular frameworks pre-installed, like TensorFlow, Caffe, etc.
Announcing TensorFlow 1.0 by Google. At the TensorFlow Developer Summit in mid-February in Mountain View, Google announces the official 1.0 release of TensorFlow. The framework is now being used in over 6000 open source repositories.
Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) by Dev Nag, CTO of Wavefront. An explanation of GANs and a simple way to get started with them. GitHub repo here.
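The adversarial loop at the heart of Nag's post can be sketched even without a framework. Below is a toy, framework-free NumPy version (the 1-D Gaussian task, the one-parameter affine generator, and all variable names here are illustrative assumptions, not taken from the post): a logistic-regression discriminator tries to tell real samples from generated ones, while the generator is updated to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Discriminator D(x) = sigmoid(wd*x + bd); Generator G(z) = wg*z + bg.
# Real data is drawn from N(4, 1); the generator starts near N(0, 1).
wd, bd = 0.1, 0.0
wg, bg = 1.0, 0.0
lr = 0.01

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    x_fake = wg * z + bg

    # Discriminator step: descend -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    grad_wd = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_bd = np.mean(-(1 - d_real) + d_fake)
    wd -= lr * grad_wd
    bd -= lr * grad_bd

    # Generator step: descend the non-saturating loss -log D(fake).
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    grad_wg = np.mean(-(1 - d_fake) * wd * z)
    grad_bg = np.mean(-(1 - d_fake) * wd)
    wg -= lr * grad_wg
    bg -= lr * grad_bg
```

After training, the generator's offset drifts toward the real data's mean — the same push-and-pull that Nag implements in PyTorch with proper neural networks and autograd in place of these hand-derived gradients.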
Dissecting Reinforcement Learning by Massimiliano Patacchiola of Plymouth University. An in-depth explanation of reinforcement learning, with an accompanying GitHub repo for code and resources discussed — found here.
Training a deep learning model to steer a car in 99 lines of code by Matt Harvey. Using Udacity's self-driving car simulator to train a generalized steering model in under 100 lines of code.
10 Deep Learning Terms Explained in Simple English by Mike Waldron of Data Science Central. Brief explanations of common deep learning terms like backpropagation and gradient descent. Also helpful from the same site, 15 Deep Learning Tutorials.
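Of the terms Waldron covers, gradient descent is the easiest to make concrete in a few lines. A minimal sketch (the quadratic objective and learning rate here are illustrative choices, not from the article): repeatedly step a parameter opposite its gradient until it settles at the minimum.

```python
# Minimize f(w) = (w - 3)**2 by gradient descent.
# The gradient f'(w) = 2*(w - 3) points uphill, so we step the other way.
w = 0.0    # initial guess
lr = 0.1   # learning rate
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad
# w is now very close to the minimizer, 3
```

Backpropagation, another term on the list, is just this same idea applied to every weight in a network at once, with the chain rule supplying each gradient.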
Creating Human-Level AI by Yoshua Bengio (video). Exploring the path forward to human-level AI at the Beneficial AI 2017 Conference organized by the Future of Life Institute. Also, A Path to AI by Yann LeCun at the same conference.
Deep Reinforcement Learning: An Overview by Yuxi Li. Overview of recent advancements in deep reinforcement learning, alongside background explanation.
Learn TensorFlow and deep learning, without a PhD by Martin Gorner of Google Cloud Platform. Course for developers on deep learning basics and leveraging TensorFlow.
Duplicate Question Detection with Deep Learning on Quora Dataset by Eren Golge. Quora launched their first publicly available dataset in late January for developers to get a sense of the challenges in building a knowledge-sharing network. This exploration takes a deep learning approach to identifying duplicate questions.
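Before reaching for deep learning, a useful baseline for the Quora task is plain bag-of-words cosine similarity between the two questions. This sketch is a simple baseline of that kind, not the deep learning approach from Golge's post:

```python
import math
from collections import Counter

def cosine_similarity(q1, q2):
    """Cosine similarity between bag-of-words vectors of two questions."""
    a, b = Counter(q1.lower().split()), Counter(q2.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("how do I learn python", "how can I learn python"))
# high score despite one differing word
```

Pairs scoring above a tuned threshold would be flagged as duplicates; the deep learning models in the post aim to beat exactly this kind of surface-overlap baseline on paraphrases that share few words.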
Recurrent Neural Networks for Steering Through Time by MIT (video). Lecture 4 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017 at MIT. Course website here. Lecture 4 slides here.