You can find a video version of this article on my YouTube channel.
So you’ve seen the recent news about how artificial intelligence (AI) is changing everything. However, the idea of AI has been around for a long time. Machines that think and talk like humans have been the inspiration for movies and stories for decades.
But what’s the deal? Why has AI been getting better and better over the past few years?
One of the major driving forces of the recent boom is the remixing of new technologies with tried and true ideas. Enter deep learning.
Deep learning in a sentence: The layered extraction of features out of an information source.
This definition will vary depending on where you look but for now, it will suffice.
Deep learning utilises multiple layers of neural networks to abstract information from an input source to a more structured output source. The key words here are multiple layers.
The ‘deep’ in deep learning refers to neural networks with multiple internal layers.
The idea of neural networks has been around since the 1940s. So why have they only recently made such a big resurgence?
Two reasons.
1. More data. 2. More compute power.
For a deep learning system to gather tangible insights from a body of information, there needs to be a lot of it (although people are actively working to reduce how much is needed). And everywhere you look, the world is being converted into data: text, video, audio. By some estimates, we've recorded more information in the past five years than in all of prior human history.
Okay, cool. We’ve got plenty more data than ever. But I’ve got a shelf of books at home and they don’t make me smarter just sitting there. I have to read them to learn what’s inside.
This is where more computing power comes in. Our bandwidth is limited. We can only read at a certain speed. A good book may take a month or longer to get through.
There’s no way, even with all the human brains on the planet we could process all the data we’ve been collecting.
Computers to the rescue!
Breakthroughs in computing hardware and accessibility have made crunching through all the extra information we’ve collected with deep learning easier than ever. Using our laptops, you and I can now load up an access point to a warehouse of computers, all from the comfort of our favourite lounge chairs.
All of a sudden, if we've got a large dataset we'd like to gather insights from, we can do what used to take thousands of human hours (potentially years) in the time it takes to have a good nap (some things will take a little longer).
Alright, enough with the technology overview. So you're interested in learning deep learning? Well, this article is here to help. It's an overview of one of the best deep learning courses available to you right now.
Seriously, if you want to save yourself time, head over to Coursera and search ‘deep learning’ right now, choose the deeplearning.ai specialisation and get amongst it.
Still here? Sweet. Let’s start with why.
You’ve probably experienced some of the results of deep learning already. Perhaps without even knowing it.
Facebook's photo tagging system uses it. Their facial recognition is about as good as a human's.
All (or close to all) of Google's products use it.
Your smartphone probably uses a version of it to improve its battery life over time.
Uber use it to make sure you’re connected with the right driver at the right time.
But as the amount of information we have about the world increases, so do the use cases of deep learning.
Lawyers are using it to make better decisions about legal cases, real estate agents are using it to price houses more accurately, and doctors are using it to help them make better diagnoses.
Andrew Ng, the course instructor, refers to AI as the new electricity.
AI is the new electricity. — Andrew Ng
Right now, most of the AI we notice lives in our smartphones and smart speakers, but soon enough it will be injected into everything we interact with. And, like now, most of it will be behind the scenes.
The deeplearning.ai specialization is dedicated to teaching you state-of-the-art techniques and how to build them yourself.
If you’re a software developer who wants to get into building deep learning models or you’ve got a little programming experience and want to do the same, this course is for you.
If you’re just looking to understand some use cases and how it might affect your industry, I’d look elsewhere.
Deep learning and machine learning skills are in demand. If you’re after a career change, as I was, this course will set you on the path.
Whatever your reason, make sure you have one before starting. Write it down. It will give you something to refer back to when the learning gets hard and a reminder of why you started.
You’ve got your why. You want to build technologies which impact the world. Or you want to gather better insights out of your business’s data. Or you want a new job. Great. They’re all valid reasons.
Having a reason is the first step. Now, what do you need to start?
The course page lists programming experience and a basic knowledge of mathematics and machine learning as prerequisites.
Python is the language of choice for the course and for much of deep learning. So if you've got at least a few months' Python experience, or are experienced with other programming languages and paradigms, you should be in a good place.
As for the math, I’ve never taken a math course outside of high school. If I needed to learn some math for the course, I went to Khan Academy.
Any math topics covered in the course that you need to brush up on can be found on Khan Academy.
Before starting the course, I didn't understand all of them in-depth. Andrew Ng, the main lecturer, does a great job of explaining enough of the math to get you started during the lectures. For anything deeper, Khan Academy is a great help.
As for machine learning experience, I'd completed Andrew's Machine Learning Course on Coursera prior to starting. Is it 100% required? No. But it did help with a few concepts here and there. The course is free; however, it is taught in MATLAB/Octave, which I found a bit more difficult since I was used to Python.
Overall, if you've got a high school math education and are comfortable writing functions a few lines long in Python, you've got enough to start.
So you’re ready to start. Epic. What will you actually learn?
The course is broken into five parts. Each can be done individually, but I found them to be great complements to each other.
Each part runs for 2–4 weeks with a recommended study time of 4–5 hours per week. I could usually do a week's worth of classes and assessment in one 6–8 hour day, including breaks. Meaning, the entire course took me about 4–5 weeks.
This section introduces the concept of neural networks and deep learning. Kind of like the introduction of this post but with actual code and far more depth.
You’ll start out by building your own neural networks from scratch and learn a thing or two about Python’s numerical library NumPy.
What does from scratch mean?
From scratch means without using any frameworks. Imagine a framework as a collection of code someone else has written to make the code you write smaller (fewer lines). Some popular deep learning frameworks are Keras, TensorFlow and PyTorch. Whenever you see an article titled, "Best results ever in 11 lines of code," the article probably uses one of these frameworks.
Those same 11 lines of code may turn out to be 50+ lines in NumPy/pure Python. This first section runs you through the full 50 lines so you understand what frameworks are doing behind the scenes.
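To give you a feel for what that looks like, here's a rough sketch of a single forward pass through a tiny two-layer network in plain NumPy. The layer sizes, weights and variable names are placeholders of mine for illustration, not the course's assignment code.

```python
import numpy as np

# A rough sketch of one forward pass through a tiny two-layer network.
# Layer sizes, weights and names here are illustrative only.
np.random.seed(0)

x = np.random.randn(3, 1)           # a single input with 3 features
W1 = np.random.randn(4, 3) * 0.01   # weights for a hidden layer of 4 units
b1 = np.zeros((4, 1))
W2 = np.random.randn(1, 4) * 0.01   # weights for a single output unit
b2 = np.zeros((1, 1))

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z1 = W1 @ x + b1      # linear step, hidden layer
a1 = relu(z1)         # non-linear activation
z2 = W2 @ a1 + b2     # linear step, output layer
y_hat = sigmoid(z2)   # prediction between 0 and 1

print(y_hat)
```

The assignments build the backward pass (the gradients) and a training loop on top of something like this, which is where most of the extra lines come from.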
Deep learning is often referred to as a black box, meaning, your model learns things but you’re not quite sure how it learns them.
The issue with this is that it can be hard to improve your model if it isn't working as you'd hoped.
Working through part 2 you’ll learn about common deep learning tidbits such as hyperparameter tuning, initialisation, optimisation, mini-batch gradient descent and regularisation.
Woah. Slow down. What’s all this jargon?
For now, just think of them as ways to get the most out of your neural network.
You'll also get a taste of what it's like to split your datasets into training, validation and test sets. The training set is where your neural network learns, and the validation and test sets are where you check how robust your network is on unseen data.
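To make that idea concrete, here's a minimal sketch of splitting a dataset with NumPy. The 80/10/10 ratio, array shapes and names are assumptions of mine, not what the course prescribes.

```python
import numpy as np

# Hypothetical dataset: 1,000 examples with 20 features each.
X = np.random.randn(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Shuffle once so all three sets come from the same distribution.
idx = np.random.permutation(len(X))
X, y = X[idx], y[idx]

# An illustrative 80/10/10 split.
n_train, n_val = 800, 100
X_train, y_train = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:n_train + n_val], y[n_train:n_train + n_val]
X_test, y_test = X[n_train + n_val:], y[n_train + n_val:]
```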
Where part 1 started with Python and NumPy, part 2 will expose you to one of the most popular deep learning libraries, TensorFlow.
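As a taste of what a framework buys you, here's a hedged sketch of a small network using the Keras API that ships with TensorFlow (the course's own TensorFlow exercises are structured differently). The layer sizes, regularisation strength and optimiser settings are placeholder choices of mine, but they show how the jargon from earlier (initialisation, optimisation, mini-batches, regularisation) turns into a few arguments once a framework handles the details.

```python
import tensorflow as tf

# A small illustrative model; layer sizes and settings are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_initializer="he_normal",                       # initialisation
        kernel_regularizer=tf.keras.regularizers.l2(0.01)),   # regularisation
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # optimisation
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=32, epochs=10)   # mini-batches of 32 examples
```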
Alright, so you now know how to build your own deep neural network and you've got ideas for some projects you want to take on. But how? Part 3 has your back.
This section is a big one, not in terms of length but in terms of practicality. It's one thing to be able to build machine learning systems, but another to be able to diagnose them when they go wrong and improve them for future use.
Part 3 takes you through two case studies. You're put in the driver's seat to decide how a deep learning system could be used to solve a problem, or how an existing deep learning system could be improved.
I’ve seen teams waste months or years through not understanding the principles taught in this course. — Andrew Ng
Courses two and three are unique to the deeplearning.ai specialization. I haven't seen many other courses cover these topics in the way Andrew does.
When it comes to computer vision, convolutional neural networks (CNNs) are the bee's knees.
Deep learning, and CNNs in particular, have given rise to the incredible improvements in facial recognition (remember Facebook's facial recognition?), the classification of X-ray reports, and self-driving car systems such as Tesla's Autopilot.
In week 1, you’ll learn about all the parts that make up a convolutional neural net before a programming assignment involving building your very own model step by step.
For the next three weeks, Andrew shows you how to turn the CNN you made in week 1 into a deep convolutional model (by adding more layers). It's from here that you'll be exposed to Keras, a deep learning framework built on top of TensorFlow.
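For a sense of what stacking convolutional layers looks like in Keras, here's a minimal sketch of a small image classifier. The input size, filter counts and ten output classes are placeholder assumptions of mine, not the architecture from the course assignments.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal illustrative CNN; shapes and layer counts are placeholders.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                  # e.g. 64x64 RGB images
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),     # a deeper conv layer
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),           # e.g. 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```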
You'll practice implementing deep CNNs such as the YOLOv2 algorithm to detect objects in images and videos. You can even upload your own images; I had a bunch of fun with this.
The final project involves building a facial recognition system to only allow people who are smiling into your home. Turning this one into a reality may be handy if you’re hosting a party.
After part 4, you'll know how to make computers see. But what if you've got a bunch of audio data? How about text? CNNs could be used for this, but sequence models are usually preferred.
You're in Spain. And you've had a few too many Spanish wines. Now you need to find a bathroom, but you can't speak a word of Spanish. Not to worry, you pull out your phone and type 'bathroom please' into Google Translate. You find the nearest person you can and point to your phone. Pressing the microphone button, your phone reads out 'baño por favor'. Your new Spanish friend smiles and points you up the street.
This whole interaction was powered by wine and deep learning. Google Translate is only one example of a product that uses deep sequence models.
Sequence models are a kind of model that takes in data which occurs as a sequence. Think of audio waves over time or the words in a sentence.
You'll start by being introduced to recurrent neural networks (RNNs) step-by-step, a popular type of sequence model. Then you'll use long short-term memory cells (LSTMs) to build your own jazz improvisation model. I was never great at playing instruments but thanks to deep learning, I was able to teach a computer to play a jazz solo for me.
Next, you’ll cover natural language processing (NLP). You’ll learn how to represent words with numbers and then how you can train recurrent neural networks to understand them. Computers understand numbers far better than words.
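As a rough illustration of representing words with numbers, here's a hedged sketch of a tiny text model in Keras: each word becomes an integer ID, an embedding layer turns each ID into a vector, and an LSTM reads the sequence of vectors. The vocabulary size, dimensions and sentiment-style task are assumptions of mine, not the course's models.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative only: vocabulary size, dimensions and task are assumptions.
vocab_size = 10000   # number of distinct words we can represent
embed_dim = 50       # each word ID becomes a vector of 50 numbers

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, embed_dim),  # word IDs -> word vectors
    layers.LSTM(64),                          # reads the vectors in order
    layers.Dense(1, activation="sigmoid"),    # e.g. positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A "sentence" here is just a sequence of word IDs, e.g. [12, 845, 3, 90]
```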
To finish off, Andrew takes you through the use of RNNs in speech recognition and trigger word detection. These two techniques are what allow us to say 'Hey Siri, find me directions to the closest cafe.' 'Hey Siri' is the trigger word, and the whole process of turning sound waves in the air into something your iPhone can understand (a sequence of numbers) is speech recognition.
Deep learning is making computers better at recognising speech and natural language, but there's still plenty of room for improvement. Conversations with Alexa or Google Home that go beyond a few basic sentences quickly begin to break down. These kinds of problems make NLP one of the most exciting areas of deep learning research.
At the end of each week of lectures, there’s a quiz and programming assignment associated with what you’ve learned in the previous week.
The assignments are hosted in Jupyter Notebooks within the same web browser the course lives in. Jupyter Notebooks are a beautiful interface for many different types of coding projects, especially data science and deep learning. Another bonus of completing your assessment right in the browser is that the grading is almost instantaneous; you can see where you went wrong straight away.
Each piece of assessment has an 80% pass threshold. Failed an assignment or quiz? No problem. You get three submission attempts every eight hours. If you keep failing, take a break and come back.
I can’t share my code from each of the assignments because that violates the course guidelines. No code can be shared on forums or other online sources. However, you can ask questions on the forums using pseudo-code (code which is similar to your problem but doesn’t reveal the exact details).
For the first half of the specialisation, at the end of each week of classes, there’s an interview with a deep learning superhero. Andrew sits down with people such as Yann LeCun and Geoffrey Hinton to discuss the current state of deep learning and where the field is heading. These interviews were one of my favourite parts of the course.
Alright, you know why you want to study deep learning and what you’ll be learning but where do you go to find all this beautiful information?
All of the video interviews and lectures are available free on the deeplearning.ai YouTube channel.
For access to the course forums and assessment, you’ll need to sign up on Coursera.
The beauty of online learning means you can do it anywhere. You’ll need a computer with an internet connection for the assessment but you can watch the video lectures offline through Coursera’s mobile applications.
If you’re a good solo learner, learning online may be perfect for you. I spent most of my time studying alone in my bedroom. But if you prefer learning in a group, you may want to convince some of your friends to buckle down on some deep learning with you.
Just like Netflix releasing whole seasons at once, when you sign up for the course, you don't have to wait to access the material. You could do it all in a 3-day marathon if you wanted. I'd be impressed if you did, but sleep does help cement what you've learned.
A new cohort starts every couple of weeks. You can start anytime you want but if you pick one of these dates, you’ll have other people all over the world studying alongside you. This is helpful because it means the forums will have people asking questions and answering them around the same time you are.
You’ve got the details. You know when to start, you know what you’re going to learn, now how do you actually do it?
There is no right way to answer this. What worked for me might not work for you.
I used Trello to track my progress. Trello is a free application which helps you visualise your plans. Before starting the course, I mapped out the curriculum onto a board.
Here’s what an example board might look like.
You can view a public version of the full board here.
On the far left, you’ll have resources which you need access to. The following column contains parts of the course you haven’t completed yet.
Part 2 is in the Doing column because it’s what you’re currently working on. And Part 1 gets moved to the Done column because you’ve finished it.
Within each part, you have a list of the lectures and assessment items you need to complete and tick them off as you go.
Trello helps me visualise what I’ve done and what I need to do next. The due dates also help me stay up-to-date with the rest of the cohort.
There's no compulsory schedule; you can do it as fast or as slow as you want. You'll get reminders of assessment due dates, but these aren't binding. They're there to keep you encouraged and on track.
For perspective, I finished the course in about 4–5 weeks studying 6–8 hours a day 3–4 days per week.
It’s likely you’ll eventually run into a problem with your programming assignment. Or perhaps you want to know more about a certain topic. When this occurs, the forums are your friend.
Since the course has been out for a few months now, there are a bunch of questions that have already been answered. A quick search of your problem will often result in a few students having similar issues.
If your question isn’t answered, don’t be afraid to ask for help. No matter how pointless you think your problem may be, it’s still a problem. Thinking my question was stupid has held my learning back in the past. Now I realise, thinking your question is stupid, is stupid.
When you ask a question, be as clear as possible. Read it out loud to see if it makes sense. Often I've been so caught up in a problem that I forget to communicate it clearly, which lowers my chances of being helped.
You can watch all of the course lectures for free on YouTube. But if you sign up for the full Specialisation on Coursera, the cost is $64AUD per month (around $47USD).
The full specialisation is suggested to take 4–5 months to complete, but since I finished it faster, I only paid for two months ($128AUD total).
Coursera often offers a 7-day trial with their specialisations. So you could sign up, see if the course is for you and if not, cancel your subscription.
There is financial aid available for those who are eligible. To receive this, you have to apply when signing up for the course.
The specialisation is broken down into five parts. Each part is broken down into 2–4 weeks worth of lectures and assessment.
Lectures range from 5 to 20 minutes long and average around 10 minutes each.
I found I could complete a whole week's worth of lectures in about 3 hours, listening at 1.5x speed.
Andrew uses slides and a voiceover to teach the content. He often annotates the slides as he's talking to further illustrate a point. I really enjoyed this teaching style.
An example lecture slide. The blue writing was annotated alongside Andrew’s voice over.
At the end of each week of content, there's a quiz and a programming assignment. Once you've completed all the quizzes and programming assignments within a section, you'll be able to view your certificate of completion for that section.
Completing all five sections of the specialisation means you'll receive a certificate of completion from Coursera, which makes for a great addition to your LinkedIn profile. Don't be afraid to share it; you worked hard for it.
You'll have access indefinitely. The lectures are on YouTube, and I still have access to them on Coursera even though I'm no longer paying for the specialisation.
As for assessment, I found it hard to get back into the Jupyter Notebooks after finishing a programming assignment, so I made sure to download them for future reference whilst I was still working on them.
Out of all the courses I've done, this is by far one of the best. Andrew is a practitioner who weaves the knowledge he's gained through experience into each of the lessons. He's got skin in the game, so you know what you're being taught has been put into practice.
It was difficult at times. I ran into a bunch of roadblocks. But with the forums and the rest of the internet, I knew I could solve problems as they arose, as long as I was patient.
My two favourite sections were part 3 and part 5. Anything to do with language and communication fascinates me, so part 5 was a real highlight. And it was great to hear Andrew’s insider wisdom on how to get the most out of machine learning projects during part 3.
Woah! You did it. Congratulations. You made it through the course. But you were always going to, it just took a little effort.
Everything probably seems like a blur.
“Did I actually learn anything?” I asked myself when I finished.
The course was so jam-packed with knowledge. I found myself asking questions in part 5 that were answered in lectures in part 2. But that’s fine. I remember 1% of what I learn so going back over something helps to rebuild the connections in my brain a little stronger.
Now I know, when I come across a deep learning problem, I can refer back to the lectures and refresh my memory. But I also know how fast the field moves and what’s in the lectures could be outdated. This is where supplementary learning comes in.
I found completing some other deep learning courses from Udacity and fast.ai in parallel with the deeplearning.ai specialization was a great help. Where one course fell down, the other picked up, and vice versa.
Finishing the specialisation is just the beginning of a new learning journey.
Whether you're applying your newfound skills to build your own project or using them to improve a current one, you'll probably have to keep learning. Eventually, you'll come across something you'll have to find your own answer to.
If you make something, be sure to share your work, write a blog post, or make a video about it. That way you’ll be able to practice communicating the skills you’ve learned and if your project needs improving, other people can offer their advice.
It’s an exciting time to be alive. Artificial intelligence is proving to be one of the most impactful technologies in history. And deep learning is a big reason for many recent advances and likely many more to come.
So if you want to join in and ride the tailwinds of society, get learning. Or better yet, sign up for the deeplearning.ai specialisation on Coursera and get deep learning.
Liked what you read? There's a video version on my YouTube channel if you're more visually inclined.
I completed this course as part of my own AI Masters Degree.