Today, with open source machine learning libraries such as TensorFlow, Keras, or PyTorch, we can create a neural network, even one with high structural complexity, in just a few lines of code. Having said that, the Math behind neural networks is still a mystery to some of us, and knowing it helps us understand what is actually happening inside a neural network. It is also helpful in architecture selection, fine-tuning of Deep Learning models, hyperparameter tuning and optimization.

Introduction

I ignored the Math behind neural networks and Deep Learning for a long time, as I didn't have a good knowledge of algebra or differential calculus. A few days ago, I decided to start from scratch and derive the methodology and Math behind neural networks and Deep Learning, to know how and why they work. I also decided to write this article, which should be useful to people like me who find it difficult to understand these concepts.

Perceptrons

Perceptrons, invented by Frank Rosenblatt in 1957, are the simplest neural network: they consist of n inputs, only one neuron and one output, where n is the number of features of our dataset. The process of passing the data through the neural network is known as forward propagation, and the forward propagation carried out in a Perceptron is explained in the following three steps.

Step 1: For each input xᵢ, multiply the input value by its weight wᵢ and sum all the multiplied values. Weights represent the strength of the connection between neurons and decide how much influence the given input will have on the neuron's output. If the weight w₁ has a higher value than the weight w₂, then the input x₁ will have more influence on the output than x₂.

The row vectors of the inputs and weights are x = [x₁, x₂, …, xₙ] and w = [w₁, w₂, …, wₙ] respectively, and their dot product is given by

x · w = x₁w₁ + x₂w₂ + … + xₙwₙ

Hence, the summation is equal to the dot product of the vectors x and w.

Step 2: Add the bias b to the summation of multiplied values, and let's call this z. Bias, also known as the offset, is necessary in most cases to move the entire activation function to the left or right in order to generate the required output values.

z = x · w + b

Step 3: Pass the value of z to a non-linear activation function. Activation functions are used to introduce non-linearity into the output of the neurons, without which the neural network would just be a linear function. Moreover, they have a significant impact on the learning speed of the neural network. Perceptrons have the binary step function as their activation function. However, we shall use the Sigmoid, also known as the logistic function, as our activation function:

ŷ = σ(z) = 1 / (1 + e⁻ᶻ)

where σ denotes the Sigmoid activation function. The output we get after the forward propagation is known as the predicted value ŷ.
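To make the three steps concrete, here is a minimal NumPy sketch of the forward propagation described above. The function and variable names (sigmoid, forward, x, w, b) and the example values are my own illustrative choices, not code from any particular library.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid (logistic) activation: squashes z into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, b):
    # Step 1: weighted sum of the inputs (dot product of x and w)
    # Step 2: add the bias to obtain z
    z = np.dot(x, w) + b
    # Step 3: pass z through the non-linear activation to get ŷ
    return sigmoid(z)

# Example with n = 3 features (values chosen arbitrarily)
x = np.array([0.5, 1.0, -1.5])   # inputs
w = np.array([0.2, -0.4, 0.1])   # weights
b = 0.3                          # bias
print(forward(x, w, b))          # predicted value ŷ
```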
Learning Algorithm

The learning algorithm consists of two parts: Backpropagation and Optimization.

Backpropagation: Backpropagation, short for backward propagation of errors, refers to the algorithm for computing the gradient of the loss function with respect to the weights. However, the term is often used to refer to the entire learning algorithm. The backpropagation carried out in a Perceptron is explained in the following two steps.

Step 1: To estimate how far we are from the desired solution, a loss function is used. Generally, Mean Squared Error is chosen as the loss function for regression problems and cross entropy for classification problems. Let's take a regression problem and let its loss function be Mean Squared Error, which squares the difference between the actual value (yᵢ) and the predicted value (ŷᵢ):

L = (yᵢ − ŷᵢ)²

The loss is calculated over the entire training dataset, and its average is called the Cost function C:

C = MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²

Step 2: In order to find the best weights and bias for our Perceptron, we need to know how the cost function changes in relation to the weights and bias. This is done with the help of the gradients (rates of change: how one quantity changes in relation to another quantity). In our case, we need to find the gradient of the cost function with respect to the weights and the bias.

Let's calculate the gradient of the cost function C with respect to the weight wᵢ using partial derivatives. Since the cost function is not directly related to the weight wᵢ, let's use the chain rule:

∂C/∂wᵢ = (∂C/∂ŷ) · (∂ŷ/∂z) · (∂z/∂wᵢ)

Now we need to find these three gradients.

Let's start with the gradient of the Cost function (C) with respect to the predicted value (ŷ). Let y = [y₁, y₂, …, yₙ] and ŷ = [ŷ₁, ŷ₂, …, ŷₙ] be the row vectors of actual and predicted values. Then the gradient simplifies to

∂C/∂ŷ = (2/n) (ŷ − y)

Now let's find the gradient of the predicted value with respect to z. This is the derivative of the Sigmoid, which works out to

∂ŷ/∂z = σ(z) (1 − σ(z)) = ŷ (1 − ŷ)

The gradient of z with respect to the weight wᵢ is simply the corresponding input:

∂z/∂wᵢ = xᵢ

Therefore we get

∂C/∂wᵢ = (2/n) (ŷ − y) · σ(z) (1 − σ(z)) · xᵢ

What about the bias? The bias is theoretically considered to have an input of constant value 1. Hence,

∂C/∂b = (2/n) (ŷ − y) · σ(z) (1 − σ(z))

Optimization: Optimization is the selection of the best element from some set of available alternatives, which in our case is the selection of the best weights and bias of the Perceptron. Let's choose gradient descent as our optimization algorithm, which changes the weights and bias proportionally to the negative of the gradient of the Cost function with respect to the corresponding weight or bias. The learning rate α is a hyperparameter used to control how much the weights and bias are changed.

The weights and bias are updated as follows:

wᵢ ← wᵢ − α · ∂C/∂wᵢ
b ← b − α · ∂C/∂b

Backpropagation and gradient descent are repeated until convergence.

Conclusion

I hope that you have found this article useful and understood the maths behind Neural Networks and Deep Learning. I have explained the working of a single neuron in this article; however, these basic concepts are applicable to all kinds of Neural Networks with some modifications. If you have any questions or if you found a mistake, please let me know in the comments.
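For readers who prefer code to equations, the sketch below puts the whole derivation together: forward propagation, the chain-rule gradients, and the gradient-descent updates for a single sigmoid neuron trained with the MSE cost. It is a minimal illustration, not production code; the toy dataset, learning rate and epoch count are arbitrary assumptions of mine.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, alpha=0.1, epochs=1000):
    """Train a single sigmoid neuron with batch gradient descent and MSE cost."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        # Forward propagation
        z = X @ w + b                       # z = x · w + b for every sample
        y_hat = sigmoid(z)                  # predicted values ŷ
        # Backpropagation: ∂C/∂w = ∂C/∂ŷ · ∂ŷ/∂z · ∂z/∂w
        dC_dyhat = (2.0 / n_samples) * (y_hat - y)
        dyhat_dz = y_hat * (1.0 - y_hat)    # derivative of the Sigmoid
        delta = dC_dyhat * dyhat_dz
        dC_dw = X.T @ delta                 # ∂z/∂wᵢ = xᵢ
        dC_db = delta.sum()                 # the bias "input" is the constant 1
        # Gradient descent updates
        w -= alpha * dC_dw
        b -= alpha * dC_db
    return w, b

# Toy dataset, assumed purely for illustration
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 1.0, 0.0])
w, b = train(X, y)
print("weights:", w, "bias:", b)
```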