Neural networks can be intimidating, especially for people new to machine learning. However, this tutorial will break down exactly how a neural network works, and by the end you’ll have a working, flexible neural network of your own. Let’s get started!
With approximately 100 billion neurons, the human brain sends signals at speeds as fast as 268 mph! In essence, a neural network is a collection of neurons connected by synapses. This collection is organized into three main layers: the input layer, the hidden layer, and the output layer. You can have many hidden layers, which is where the term deep learning comes into play. In an artificial neural network, there are several inputs, called features, which together produce a single output, called a label.
The circles represent neurons while the lines represent synapses. The role of a synapse is to multiply the inputs by the weights. You can think of weights as the “strength” of the connection between neurons. Weights primarily define the output of a neural network, yet they are highly flexible. Afterwards, an activation function is applied to return an output.
Here’s a brief overview of how a simple feedforward neural network works:
At their core, neural networks are simple. They just perform a dot product of the inputs and weights and apply an activation function. When the weights are adjusted via the gradient of the loss function, the network adapts to produce more accurate outputs.
Our neural network will have two inputs, a single hidden layer, and one output. We will be predicting the score on an exam based on how many hours we studied and how many hours we slept the day before. Our test score is the output. Here’s the sample data we’ll be training our neural network on:
Original example via Welch Labs
As you may have noticed, the ? represents what we want our neural network to predict. Here, we are predicting the test score of someone who studied for four hours and slept for eight hours, based on their prior performance.
Let’s start coding this bad boy! Open up a new Python file. You’ll want to import numpy, as it will help us with certain calculations.
First, let’s import our data as numpy arrays using np.array. We'll also want to normalize our units as our inputs are in hours, but our output is a test score from 0-100. Therefore, we need to scale our data by dividing by the maximum value for each variable.
<a href="https://medium.com/media/ca1690714f5ade6a1ed1e526e38e5c95/href">https://medium.com/media/ca1690714f5ade6a1ed1e526e38e5c95/href</a>
Next, let’s define a Python class and write an __init__ function where we’ll specify our parameters, such as the sizes of the input, hidden, and output layers.
<a href="https://medium.com/media/a4e69278ad8229cad6b356799df27006/href">https://medium.com/media/a4e69278ad8229cad6b356799df27006/href</a>
It is time for our first calculation. Remember that our synapses perform a dot product, or matrix multiplication, of the input and weights. Note that the weights are generated randomly (in the worked example below they happen to fall between 0 and 1).
In the data set, our input data, X, is a 3x2 matrix. Our output data, y, is a 3x1 matrix. Each element in matrix X needs to be multiplied by a corresponding weight and then added together with all the other results for each neuron in the hidden layer. Here's how the first input data element (2 hours studying and 9 hours sleeping) would calculate an output in the network:
This is all a Neural Network actually does!
This image breaks down what our neural network actually does to produce an output. First, the products of the randomly generated weights (.2, .6, .1, .8, .3, .7) on each synapse and the corresponding inputs are summed to arrive at the first values of the hidden layer. These sums are in a smaller font as they are not the final values for the hidden layer.
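If you want to sanity-check that arithmetic yourself, here is a quick sketch; the weight layout below is just one assumed arrangement of the six weights from the figure:

```python
import numpy as np

# one training example: 2 hours studied, 9 hours slept
x = np.array([2, 9])

# each column holds the two weights feeding one hidden neuron (assumed layout)
W1 = np.array([[0.2, 0.6, 0.1],
               [0.8, 0.3, 0.7]])

hidden_sums = x.dot(W1)  # raw sums arriving at the three hidden neurons
print(hidden_sums)       # [7.6 3.9 6.5]
```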
<a href="https://medium.com/media/6cd50e440085e93247e097592ba33e63/href">https://medium.com/media/6cd50e440085e93247e097592ba33e63/href</a>
To get the final value for the hidden layer, we need to apply the activation function. The role of an activation function is to introduce nonlinearity. An added benefit is that the output is mapped to a range between 0 and 1, making it easier to alter the weights in the future.
There are many activation functions out there. In this case, we’ll stick to one of the more popular ones — the sigmoid function.
<a href="https://medium.com/media/70db7b8ab616a76dd398f8beb253dd11/href">https://medium.com/media/70db7b8ab616a76dd398f8beb253dd11/href</a>
Now, we need to use matrix multiplication again, with another set of random weights, to calculate our output layer value.
<a href="https://medium.com/media/d6e51f43180744df5f9dda7f3849da8c/href">https://medium.com/media/d6e51f43180744df5f9dda7f3849da8c/href</a>
Lastly, to normalize the output, we just apply the activation function again.
<a href="https://medium.com/media/2ba6028c0c64676f922049bdb75d2b58/href">https://medium.com/media/2ba6028c0c64676f922049bdb75d2b58/href</a>
And, there you go! Theoretically, with those weights, our neural network will calculate .85 as our test score! However, our target was .92. Our result isn’t poor, it just isn’t the best it can be; we simply got a little lucky with the random weights chosen for this example.
How do we train our model to learn? Well, we’ll find out very soon. For now, let’s continue coding our network.
If you are still confused, I highly recommend you check out this informative video which explains the structure of a neural network with the same example.
Now, let’s generate our weights randomly using np.random.randn(). Remember, we'll need two sets of weights. One to go from the input to the hidden layer, and the other to go from the hidden to output layer.
<a href="https://medium.com/media/254adf290168eb64fec13300f83ab4c5/href">https://medium.com/media/254adf290168eb64fec13300f83ab4c5/href</a>
Once we have all the variables set up, we are ready to write our forward propagation function. Let’s pass in our input, X, and use the variable z (along with z2 and z3) to track the activity between the input and output layers. As explained, we need to take a dot product of the inputs and weights, apply an activation function, take another dot product of the hidden layer and the second set of weights, and lastly apply a final activation function to receive our output:
<a href="https://medium.com/media/9794593fee20ea0474f5bafb591dfffc/href">https://medium.com/media/9794593fee20ea0474f5bafb591dfffc/href</a>
Lastly, we need to define our sigmoid function:
<a href="https://medium.com/media/0484022c97f531a6a52a832fd7d25cc7/href">https://medium.com/media/0484022c97f531a6a52a832fd7d25cc7/href</a>
And, there we have it! An (untrained) neural network capable of producing an output.
<a href="https://medium.com/media/98a08b826c403d8e6db7c77b8f92583b/href">https://medium.com/media/98a08b826c403d8e6db7c77b8f92583b/href</a>
As you may have noticed, we need to train our network to calculate more accurate results.
Since we started with a random set of weights, we need to alter them so that our inputs produce the corresponding outputs from our data set. This is done through a method called backpropagation.
Backpropagation works by using a loss function to calculate how far the network was from the target output.
One way of representing the loss function is by using the mean sum squared loss function:
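Written out, one common way to express it is as the average of the squared differences between the actual and predicted outputs:

Loss = (1/n) * Σ (y − o)²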
In this function, o is our predicted output, and y is our actual output. Now that we have the loss function, our goal is to get it as close as we can to 0. That means we will need to have close to no loss at all. As we are training our network, all we are doing is minimizing the loss.
To figure out which direction to alter our weights, we need to find the rate of change of our loss with respect to our weights. In other words, we need to use the derivative of the loss function to understand how the weights affect the loss.
In this case, we will be using partial derivatives, since the loss depends on several weights and we want to measure the effect of each one separately.
This method is known as gradient descent. By knowing which way to alter our weights, we can steadily nudge our outputs closer to the target.
Here’s how we will calculate the incremental change to our weights:
1. Find the margin of error of the output layer (o) by taking the difference between the predicted output and the actual output (y).
2. Apply the derivative of our sigmoid activation function to the output layer error. We call this result the delta output sum.
3. Use the delta output sum to figure out how much our hidden layer contributed to the output error by performing a dot product with our second weight matrix, then apply the derivative of the sigmoid again to get the hidden layer’s delta.
4. Adjust the weights: update the first set with the dot product of the input layer and the hidden layer’s delta, and the second set with the dot product of the hidden layer and the output delta.
Calculating the delta output sum and then applying the derivative of the sigmoid function are very important to backpropagation. The derivative of the sigmoid, also known as sigmoid prime, gives us the rate of change, or slope, of the activation function at the output sum.
Let’s continue to code our Neural_Network class by adding a sigmoidPrime (derivative of sigmoid) function:
<a href="https://medium.com/media/99dc46d88dd51da3fbe535009815eea3/href">https://medium.com/media/99dc46d88dd51da3fbe535009815eea3/href</a>
Then, we’ll want to create our backward propagation function that does everything specified in the four steps above:
<a href="https://medium.com/media/2b4734a7a4484bfc4e16a4af5b655a34/href">https://medium.com/media/2b4734a7a4484bfc4e16a4af5b655a34/href</a>
We can now define our output by initiating forward propagation, and trigger the backward pass by calling both inside a train function:
<a href="https://medium.com/media/6c72b28749cd7b263c395e24ce704d3d/href">https://medium.com/media/6c72b28749cd7b263c395e24ce704d3d/href</a>
To run the network, all we have to do is run the train function. Of course, we’ll want to do this many times, perhaps thousands. So, we’ll use a for loop.
<a href="https://medium.com/media/a8ffd84921eb019603698ff919951f11/href">https://medium.com/media/a8ffd84921eb019603698ff919951f11/href</a>
Here’s the full 60 lines of awesomeness:
<a href="https://medium.com/media/cf7225cdf897eb37ca396817c406baab/href">https://medium.com/media/cf7225cdf897eb37ca396817c406baab/href</a>
There you have it! A full-fledged neural network that can learn from inputs and outputs. While we thought of our inputs as hours studying and sleeping, and our outputs as test scores, feel free to change these to whatever you like and observe how the network adapts! After all, all the network sees are the numbers. The calculations we made, as complex as they seemed to be, all played a big role in our learning model. If you think about it, it’s super impressive that your computer, an object, managed to learn by itself!
Stay tuned for more machine learning tutorials on other models like Linear Regression and Classification!
Special thanks to Kabir Shah for his contributions to the development of this tutorial.