Zen of Stochastic Gradient Descent Principle

Written by samzer | Published 2019/08/12
Tech Story Tags: artificial-intelligence | machine-learning | philosophy | datascience | optimization | stochastic-gradient-descent | sgd | hackernoon-top-story

TLDR: Stochastic Gradient Descent (SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It is called stochastic because the method uses randomly selected (or shuffled) samples to evaluate the gradients. The key idea is to take steps that are individually imperfect but that, together, come close enough to minimising your cost function. Applied to life, the principle works the same way: rather than over-researching the perfect option, do some basic research, pick one (say, an exercise routine), and start.
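A minimal sketch of the idea (an illustration, not code from the article; the toy linear-regression data, learning rate, and epoch count are assumptions): each update uses the gradient from a single randomly chosen sample, so every step is noisy, yet on average the parameters still move toward the minimum.

```python
import numpy as np

# Illustrative SGD sketch: fit y = w*x + b one shuffled sample at a time,
# instead of computing the gradient over the full dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 1.0 + rng.normal(0, 0.1, size=100)  # noisy line, true (w, b) = (3, 1)

w, b = 0.0, 0.0
lr = 0.1  # learning rate (assumed value)

for epoch in range(20):
    for i in rng.permutation(len(X)):   # shuffled samples, one per step
        err = (w * X[i] + b) - y[i]     # gradient of 0.5 * err**2
        w -= lr * err * X[i]            # imperfect single-sample step...
        b -= lr * err                   # ...but close enough, on average

print(f"w ~ {w:.2f}, b ~ {b:.2f}")      # roughly (3.0, 1.0)
```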


Written by samzer | Founder at Modelchimp. 9 years in data science.