Intuitive RL: Intro to Advantage-Actor-Critic (A2C)

by Rudy Gilman, January 9th, 2018

Reinforcement learning (RL) practitioners have produced a number of excellent tutorials. Most, however, describe RL in terms of mathematical equations and abstract diagrams. We like to think of the field from a different perspective. RL itself is inspired by how animals learn, so why not translate the underlying RL machinery back into the natural phenomena it's designed to mimic? After all, humans learn best through stories.

This is a story about the Advantage Actor-Critic (A2C) model. Actor-Critic models are a popular form of Policy Gradient method, one of the foundational families of RL algorithms. If you understand the A2C, you understand deep RL.
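For readers who want to connect the story to code, here is a minimal sketch of the A2C update in PyTorch. The article itself contains no code, so the network shape and coefficients below are illustrative assumptions, not the author's implementation: an actor head outputs a policy over actions, a critic head estimates the state's value, and the advantage is how much better an action turned out than the critic expected.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared body with two heads: a policy (actor) and a value estimate (critic)."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # policy logits
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs):
        h = self.body(obs)
        return self.actor(h), self.critic(h)

def a2c_loss(model, obs, actions, returns):
    """One A2C update: push the policy toward actions that did better
    than the critic expected (positive advantage)."""
    logits, values = model(obs)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.squeeze(-1)              # better or worse than expected?
    policy_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()                  # regress critic toward returns
    entropy_bonus = dist.entropy().mean()                 # keeps the policy exploring
    # 0.5 and 0.01 are common, but assumed, weighting coefficients
    return policy_loss + 0.5 * value_loss - 0.01 * entropy_bonus
```

The key design choice this sketch illustrates: the actor and critic learn jointly from the same experience, with the critic's value estimate serving only as a baseline (detached from the policy gradient) that reduces the variance of the actor's updates.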

After you’ve gained an intuition for the A2C, check out:

Illustrations by @embermarke