
A Beginner's Guide to the Gradient Descent Algorithm

by Ashwin Sharma P, October 15th, 2020

Too Long; Didn't Read

The gradient descent algorithm is an approach to finding the minimum point or optimal solution for a given dataset. In machine learning, it is used to update the coefficients of a model so that the predicted output is close to the expected output. The weights are updated by stepping the cost function down along the gradient (derivative) of the weight function. On most real-world datasets, the descent moves in a zig-zag manner.

The gradient descent algorithm is an approach to finding the minimum point or optimal solution for a given dataset. It follows the steepest-descent approach: starting from a random point, it moves in the direction of the negative gradient to find a local or global minimum. We use gradient descent to reach the lowest point of the cost function.

In machine learning, it is used to update the coefficients of our model and to ensure that the predicted output value is close to the expected output.

For example, consider Linear Regression, where we model the output with a linear equation.

Let us say we have an equation y=mx+c.

Here, m stands for the slope and c for the y-intercept. Gradient descent optimizes these two values to reduce the error in the cost function (the difference between the expected and predicted output).
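The article does not include code, but the idea above can be sketched in plain Python. This is a minimal illustration (the toy data, learning rate, and epoch count are my assumptions, not the author's): we fit m and c to points that roughly follow y = 2x + 1 by repeatedly stepping down the gradient of the mean squared error.

```python
# Toy data roughly following y = 2x + 1 -- illustrative values,
# not from the article.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]

def gradient_descent(xs, ys, lr=0.01, epochs=1000):
    m, c = 0.0, 0.0          # start from an arbitrary point
    n = len(xs)
    for _ in range(epochs):
        # Errors of the current line y = m*x + c against the data.
        errors = [(m * x + c) - y for x, y in zip(xs, ys)]
        # Gradients of the mean squared error with respect to m and c.
        dm = (2 / n) * sum(e * x for e, x in zip(errors, xs))
        dc = (2 / n) * sum(errors)
        # Step in the negative gradient direction.
        m -= lr * dm
        c -= lr * dc
    return m, c

m, c = gradient_descent(xs, ys)
```

After enough iterations, m and c land close to the slope and intercept of the underlying line.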

Weight update

So let us see how the weight update works in gradient descent. Consider a graph of cost function versus weight. To improve our model, we must bring down the value of the cost function, so the lowest point on the graph is the winner: the cost function is minimal there.

In the above diagram, we can see that with each iteration the function brings the cost value down. But that is not the case with real-world datasets: on most of them, the descent moves in a zig-zag manner. The graph for real-world cases is shown below.

What is the maths behind it?

The weights are updated by stepping the cost function down along the gradient (derivative) of the weight function. The equation used for the weight update is:

ω = ω − η (∂J/∂ω)

Here ω is the weight, J the cost function, and η the learning rate, which determines how large a step the descent takes in each iteration.

For every cross mark shown in the graph, we calculate the slope, and based on that slope value we update the weights using the above equation.
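The update rule can be demonstrated in isolation with a one-dimensional example (the cost function J(ω) = (ω − 3)² and the learning rate are my own illustrative choices): at each step we compute the slope dJ/dω and subtract η times that slope from ω, which walks the cost steadily downhill toward the minimum at ω = 3.

```python
def update(w, grad, lr):
    # One gradient descent step: w <- w - eta * dJ/dw
    return w - lr * grad

# Illustrative convex cost J(w) = (w - 3)**2 with derivative
# dJ/dw = 2*(w - 3); its minimum sits at w = 3.
w = 0.0
lr = 0.1
history = []
for _ in range(50):
    grad = 2 * (w - 3)       # the "slope" at the current point
    w = update(w, grad, lr)
    history.append((w - 3) ** 2)

# On this convex bowl the cost shrinks with every step,
# and w converges toward 3.
```

On real datasets the cost surface is rarely this smooth, which is why the descent zig-zags rather than sliding straight down.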

Conclusion

Here is a sequence diagram summarizing the process of updating the weights. This update to the weights leads to a reduction in the cost function.


I hope you learnt something from this article. Thank you for taking the time to read it.

Previously published at https://ashwinsharmap.hashnode.dev/what-is-gradient-descent-algorithm