Implementing different variants of Gradient Descent Optimization Algorithm in Python using Numpy

by NiranjanKumar (@NKumar), April 21st, 2019

Too Long; Didn't Read

This is a follow-up to my previous post on optimization algorithms. Learn how frameworks such as TensorFlow and PyTorch implement optimization algorithms under the hood by writing them yourself in NumPy, and create animations of their behavior using Matplotlib. We will implement different variants of the gradient descent optimization technique and visualize how the update rule works for each of them. Gradient descent is one of the most commonly used techniques for optimizing neural networks: it updates the parameters by moving in the direction opposite to the gradient of the objective function with respect to the network parameters.
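To make that update rule concrete before diving in, here is a minimal NumPy sketch of vanilla (batch) gradient descent on a single sigmoid neuron. The toy data, the squared-error loss, and the learning rate eta are illustrative assumptions, not the exact setup used in the post.

```python
import numpy as np

def sigmoid(x, w, b):
    # Prediction of a single sigmoid neuron: 1 / (1 + e^-(wx + b))
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def gradients(X, Y, w, b):
    # Gradients of the squared-error loss w.r.t. w and b,
    # accumulated over the whole dataset (batch gradient descent)
    dw, db = 0.0, 0.0
    for x, y in zip(X, Y):
        y_pred = sigmoid(x, w, b)
        dw += (y_pred - y) * y_pred * (1 - y_pred) * x
        db += (y_pred - y) * y_pred * (1 - y_pred)
    return dw, db

# Toy 1-D data (illustrative only)
X = np.asarray([0.5, 2.5])
Y = np.asarray([0.2, 0.9])

w, b, eta, epochs = -2.0, -2.0, 1.0, 1000
for _ in range(epochs):
    dw, db = gradients(X, Y, w, b)
    # Update rule: step opposite to the gradient of the loss
    w = w - eta * dw
    b = b - eta * db
```

The variants covered in the article (momentum, NAG, mini-batch, stochastic, Adagrad, RMSProp, Adam) all modify how this basic step is computed or scaled.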

About Author

NiranjanKumar (@NKumar)
DeepLearning Enthusiast. Data Science Writer @marktechpost.com
