Zen of Stochastic Gradient Descent Principle

by Samir Madhavan, August 12th, 2019
Too Long; Didn't Read

Stochastic Gradient Descent is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It is called stochastic because the method uses randomly selected (or shuffled) samples to evaluate the gradients. The key is to do something that is not perfect but close enough to minimise your cost function. The next step is to start off with basic research, choose one option, and begin, for example by joining a basic exercise club.
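The idea described above, shuffling the samples and nudging the parameters a little after each one rather than computing a perfect full-batch gradient, can be sketched in a few lines of Python. This is a minimal illustration on a made-up linear-regression problem (the data, learning rate, and epoch count are assumptions for the example, not from the article):

```python
import random

# Toy data for the hypothetical target y = 2x + 1 (illustrative only)
data = [(x, 2 * x + 1) for x in [i / 10 for i in range(-20, 21)]]

w, b = 0.0, 0.0   # parameters of the model y_hat = w*x + b
lr = 0.05         # learning rate: size of each imperfect step
random.seed(0)

for epoch in range(200):
    random.shuffle(data)        # "stochastic": visit samples in random order
    for x, y in data:
        y_hat = w * x + b
        err = y_hat - y         # gradient of 0.5 * (y_hat - y)^2 w.r.t. y_hat
        w -= lr * err * x       # per-sample gradient step, noisy but cheap
        b -= lr * err

# Each single-sample step is "not perfect", yet the parameters
# drift close enough to the minimum of the cost function.
print(round(w, 2), round(b, 2))
```

No single update here points exactly downhill, but averaged over many shuffled passes the parameters converge near the true values (w close to 2, b close to 1), which is the "close enough" behaviour the principle describes.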

Samir Madhavan (@samzer), Founder at Modelchimp. 9 years in data science.