
Understanding Stochastic Average Gradient

by Andrey Kustarev · 4 min read · 2024/06/06

Too Long; Didn't Read

Gradient descent is a popular optimization technique used to locate the global minima of given objective functions. The algorithm uses the gradient of the objective function to traverse the function's slope until it reaches its lowest point. Full Gradient Descent (FG) and Stochastic Gradient Descent (SGD) are two popular variations of the algorithm. FG uses the entire dataset during each iteration, providing a high convergence rate at a high computational cost. In each iteration, SGD uses a subset of the data to run the algorithm. It is far more efficient but has uncertain convergence. Stochastic Average Gradient (SAG) is another variation that provides the benefits of both earlier algorithms. It uses the average of past gradients and a subset of the dataset to deliver a high convergence rate with low computation. The algorithm can be further modified to improve its efficiency using vectorization and mini-batches.



Gradient descent is the most popular optimization technique in machine learning (ML) modeling. The algorithm minimizes the error between the predicted values and the ground truth. Since the technique considers each data point to understand and minimize the error, its performance depends on the size of the training data. Techniques such as Stochastic Gradient Descent (SGD) are designed to improve computational performance, but at the cost of convergence accuracy.


Stochastic Average Gradient balances the classic approach, known as Full Gradient Descent, with SGD and offers the benefits of both. But before we can use the algorithm, we first need to understand its significance for model optimization.

Optimizing Machine Learning Objectives With Gradient Descent

Every ML algorithm has an associated loss function that it aims to minimize in order to improve the model's performance. Mathematically, the loss can be defined as:
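For a single prediction, writing $y$ for the ground truth and $\hat{y}$ for the model output, a minimal form of this definition is:

$$\text{loss} = \hat{y} - y$$

(In practice a squared or absolute difference is typically used so that positive and negative errors do not cancel; the exact form is a modeling choice, not fixed by the article.)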


It is simply the difference between the actual and the predicted output, and minimizing this difference means that our model comes closer to the ground-truth values.


The minimization algorithm uses gradient descent to traverse the loss function and find a global minimum. Each traversal step involves updating the algorithm's weights to optimize the output.


Plain Gradient Descent

The conventional gradient descent algorithm uses the average of all the gradients computed across the entire dataset. The lifecycle of a single training example looks as follows:



The weight update equation looks as follows:
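In its usual form, with $\alpha$ denoting the learning rate (a symbol introduced here for illustration), the update can be written as:

$$W := W - \alpha \, \frac{dJ}{dW}$$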

Where W represents the model weights and dJ/dW is the derivative of the loss function with respect to the model weights. The conventional method has a high convergence rate but becomes computationally expensive when dealing with large datasets comprising millions of data points.
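A minimal sketch of this batch update on a least-squares objective is shown below; the NumPy usage, the synthetic data, and the learning-rate value are illustrative choices, not prescribed by the article.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))                   # 1,000 data points, 5 features
true_w = np.array([2.0, -1.0, 0.5, 3.0, -2.0])
y = X @ true_w + 0.1 * rng.normal(size=1_000)

def full_gradient_descent(X, y, lr=0.1, n_iters=200):
    """Full (batch) gradient descent: every update uses dJ/dW averaged over the whole dataset."""
    n, d = X.shape
    W = np.zeros(d)
    for _ in range(n_iters):
        residual = X @ W - y                      # predicted minus ground-truth outputs
        grad = (2.0 / n) * (X.T @ residual)       # gradient of the mean squared error
        W -= lr * grad                            # W := W - lr * dJ/dW
    return W

W_fg = full_gradient_descent(X, y)
```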

Stochastic Gradient Descent (SGD)

The SGD methodology remains the same as plain GD, but instead of using the entire dataset to calculate the gradients, it uses a small batch drawn from the inputs. The method is much more efficient, but it may hop around the global minima too much, since each iteration uses only a portion of the data for learning.
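Continuing the least-squares sketch above, the only change is that each update is computed from a small random batch; the batch size and learning rate below are arbitrary illustrative values.

```python
def stochastic_gradient_descent(X, y, lr=0.05, n_iters=2_000, batch_size=16):
    """SGD: each update uses the gradient of a small random batch instead of the full dataset."""
    n, d = X.shape
    W = np.zeros(d)
    rng = np.random.default_rng(1)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)   # random subset of the inputs
        residual = X[idx] @ W - y[idx]
        grad = (2.0 / batch_size) * (X[idx].T @ residual)     # noisy estimate of dJ/dW
        W -= lr * grad
    return W
```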

Stochastic Average Gradient

The Stochastic Average Gradient (SAG) approach was introduced as a middle ground between GD and SGD. It selects a random data point and updates its value based on the gradient at that point and a weighted average of the past gradients stored for that particular data point.


Similar to SGD, SAG models every problem as a finite sum of convex, differentiable functions. At any given iteration, it uses the present gradients and the average of previous gradients for the weight update. The equation takes the following form:
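A standard way to write this update, following the notation of Schmidt et al. (the source cited below), with step size $\alpha_k$:

$$x^{k+1} = x^{k} - \frac{\alpha_k}{n}\sum_{i=1}^{n} y_i^{k}, \qquad
y_i^{k} =
\begin{cases}
\nabla f_i(x^{k}) & \text{if } i = i_k,\\
y_i^{k-1} & \text{otherwise,}
\end{cases}$$

where $i_k$ is the data point sampled at iteration $k$, so only one stored gradient is refreshed per step while the average over all $n$ stored gradients drives the update. A minimal sketch of this rule on the same least-squares example as above (the memory layout and hyperparameter values are illustrative assumptions):

```python
def stochastic_average_gradient(X, y, lr=0.01, n_iters=5_000):
    """SAG: remember the last gradient seen for every data point and step with their average."""
    n, d = X.shape
    W = np.zeros(d)
    grad_memory = np.zeros((n, d))    # last gradient stored for each data point
    grad_sum = np.zeros(d)            # running sum of all stored gradients
    rng = np.random.default_rng(2)
    for _ in range(n_iters):
        i = rng.integers(n)                           # pick one random data point
        g_i = 2.0 * (X[i] @ W - y[i]) * X[i]          # fresh gradient at that point
        grad_sum += g_i - grad_memory[i]              # swap the old stored gradient for the new one
        grad_memory[i] = g_i
        W -= lr * grad_sum / n                        # step with the average of stored gradients
    return W
```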



Convergence Rate

Of the two popular algorithms, full gradient (FG) and stochastic gradient descent (SGD), the FG algorithm has the better convergence rate since it utilizes the entire dataset during each iteration for the calculation.

Although SAG has a structure similar to SGD, its convergence rate is comparable to, and sometimes better than, the full gradient approach. Table 1 below summarizes the results from the experiments of Schmidt et al.

Source: https://arxiv.org/pdf/1309.2388

Further Modifications

Despite its impressive performance, several modifications have been proposed to the original SAG algorithm to help improve performance.


  • Re-weighting in early iterations: SAG convergence remains slow during the first few iterations, since the algorithm normalizes the direction by n (the total number of data points). This gives an inaccurate estimate, as the algorithm has not yet seen many of the data points. The modification suggests normalizing by m instead of n, where m is the number of data points seen at least once up to that particular iteration.
  • Mini-batches: The stochastic gradient approach uses mini-batches to process multiple data points at once. The same can be applied to SAG, as sketched after this list. This allows for vectorization and parallelization, improving computational efficiency. It also reduces the memory load, a prominent challenge for the SAG algorithm.
  • Step-size experimentation: The step size mentioned earlier (1/16L, where L denotes the Lipschitz constant of the gradients) provides impressive results, but the authors further experimented with a step size of 1/L. The latter provided even better convergence. However, the authors were unable to present a formal analysis of the improved results. They conclude that the step size should be experimented with to find the optimal one for the specific problem.
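As a sketch of the mini-batch modification mentioned in the list above, the SAG function can refresh the stored gradients of a whole batch per iteration, which vectorizes the inner step; the batch size and other values are again illustrative assumptions rather than settings from the paper.

```python
def sag_minibatch(X, y, lr=0.01, n_iters=2_000, batch_size=32):
    """Mini-batch variant of the SAG sketch above: refresh a whole batch of stored gradients at once."""
    n, d = X.shape
    W = np.zeros(d)
    grad_memory = np.zeros((n, d))
    grad_sum = np.zeros(d)
    rng = np.random.default_rng(3)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        residual = X[idx] @ W - y[idx]
        g = 2.0 * residual[:, None] * X[idx]                       # fresh gradients for the batch
        grad_sum += g.sum(axis=0) - grad_memory[idx].sum(axis=0)   # replace old stored gradients
        grad_memory[idx] = g
        W -= lr * grad_sum / n
    return W
```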


Final Thoughts

Gradient descent is a popular optimization technique used to locate the global minima of given objective functions. The algorithm uses the gradient of the objective function to traverse the function's slope until it reaches its lowest point.

Full Gradient Descent (FG) and Stochastic Gradient Descent (SGD) are two popular variations of the algorithm. FG uses the entire dataset during each iteration, providing a high convergence rate at a high computational cost. In each iteration, SGD uses a subset of the data to run the algorithm. It is far more efficient but has uncertain convergence.


Stochastic Average Gradient (SAG) is another variation that provides the benefits of both earlier algorithms. It uses the average of past gradients and a subset of the dataset to deliver a high convergence rate with low computation. The algorithm can be further modified to improve its efficiency using vectorization and mini-batches.



About Author

Andrey Kustarev (@kustarev)
Director of Portfolio Management at WorldQuant. Expert in quantitative finance.
