

"Reinforcement Learning is simply the Science of Decision Making. That's what makes it so General." - David Silver
You might have observed a level of saturation in Machine Learning recently. Well, that's actually saturation in "Supervised Learning" (poor Kaggle).
Most of us don't know any learning algorithm other than Back-Propagation. There are a few recent ones like "Equilibrium Propagation" and "Synthetic Gradients", but they more-or-less fall under the same paradigm as Back-Propagation.
Meanwhile, Unsupervised Learning and Reinforcement Learning remain relatively unscratched (in the mainstream, at least).
In this blog we'll be diving into Reinforcement Learning, or as I like to call it, "Stupidity-followed-by-Regret" or "What-If" learning. Those are actually pretty accurate names.
Reinforcement Learning is interaction-based learning in a "cause-effect" environment.
The effect (or consequence) depends on the actions taken, and the cause is the goal to be achieved.
Much like talking to your crush. The goal is to impress her but every word you say will have a consequence.
She'll either:
#BelieveMeIAmAGuy #IntrovertProblems
RL is goal-oriented learning, where the learner (referred to as the "agent" in RL) is not dictated what action to take; rather, it is encouraged to explore and discover the actions that yield the best results. Vaguely speaking, exploration is based on a trial-and-error approach.
In RL, your present action not only affects your immediate reward (getting that kiss) but also your future rewards (her agreeing to a second date). So the agent might opt for higher future rewards instead of a high current reward.
Remember the following statement, you'll understand it better later:
RL is not defined by characterising a learning method but by characterising the learning problem itself.
The learner in RL is referred to as agent.
An "agent" has a sense of its environment and is able to take actions, in the form of interactions with the environment, in order to maximise the reward received from the environment.
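Just to make that interaction concrete, here's a minimal Python sketch of the agent-environment loop. The `Environment` and `Agent` classes are made-up placeholders (not any particular library's API), just enough to show state, action, consequence and reward.

```python
# A minimal sketch of the agent-environment loop. `Environment` and `Agent`
# are made-up placeholder classes, not a specific library's API.

class Environment:
    def reset(self):
        """Return the initial state."""
        return "start"

    def step(self, action):
        """Apply the action; return (next_state, reward, done)."""
        reward = 1.0 if action == "good_move" else 0.0   # toy cause-effect rule
        return "end", reward, True                        # one-step toy episode


class Agent:
    def act(self, state):
        """Choose an action based on the sensed state."""
        return "good_move" if state == "start" else "wait"


env, agent = Environment(), Agent()
state, total_reward, done = env.reset(), 0.0, False
while not done:
    action = agent.act(state)                # agent senses the state and acts
    state, reward, done = env.step(action)   # environment answers with a consequence
    total_reward += reward                   # the quantity the agent tries to maximise
print(total_reward)                          # 1.0
```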
Q) Who is a Supervisor?
Ans) In Supervised learning, the training set supervises the learning of the learner. Hence it acts like a supervisor.
Q) How and when is an RL Agent better than a Supervisor?
Ans) In uncharted territory, a supervisor (your ever-single friend) does not have much idea about the correct actions (steps to impress your crush). In this case, much better learning can be done from the agent's own experiences.
Obviously there is a trade-off, there is always a trade-off!
Supervised learning has Bias-Variance trade-off and RL has Exploration-Exploitation trade-off.
To obtain high reward, the agent can either prefer exploitation (sticking to actions it has already tried and found rewarding) or exploration (trying new actions that might turn out to be even better).
Let's break it down:
In stochastic tasks, an action has to be tried multiple times to gain a reliable estimate of its reward. Yet even this can't ensure which action is the best possible one. The agent can't explore and exploit at the same step, hence the trade-off.
Information gained during exploration can be exploited multiple times during future steps.
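One classic way to handle this trade-off is the epsilon-greedy rule: mostly exploit the best-looking action, but explore a random one a small fraction of the time. Here's a rough sketch on a toy 3-armed bandit; the reward probabilities and the epsilon value are made up for illustration.

```python
import random

# Epsilon-greedy on a toy 3-armed bandit: explore with probability epsilon,
# otherwise exploit the action with the best estimated reward so far.
# The true reward probabilities below are made up for illustration.
true_reward_prob = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]   # running average reward per action
counts = [0, 0, 0]
epsilon = 0.1

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)                        # explore: try a random action
    else:
        action = max(range(3), key=lambda a: estimates[a])  # exploit: best estimate so far
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average: trying an action many times gives a reliable estimate
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # should approach [0.2, 0.5, 0.8]
```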
At times RL involves supervised learning. This is done to determine which capabilities are critical for achieving rewards.
Example:
Evolution makes sure you know how to breathe, otherwise, no matter how awesome you are, you won't be able to achieve anything.
Growing up, whether you achieve a Nobel Prize or keep playing with fidget spinners is not the concern of the supervised part.
Algorithms that combine RL with deep neural networks are categorised as Deep Reinforcement Learning.
More on that later; for now, just have a look at how well they can perform!
RL has 4 fundamental concepts:
It defines the behaviour of the agent at any given time.
Roughly, a policy is a mapping from currently known states to the actions to be taken when in those states.
In terms of psychology, it corresponds to stimulus-response rules.
A policy can be in the form of a look-up table or an extensive search process.
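To keep it concrete, here's what a look-up-table policy could literally look like. The states and actions below are invented purely for illustration (yes, crush-themed).

```python
# A policy in its simplest form: a look-up table mapping known states to actions.
# The states and actions here are invented purely for illustration.
policy = {
    "she_smiled":      "keep_talking",
    "awkward_silence": "change_topic",
    "she_frowned":     "apologise",
}

def act(state, default_action="stay_quiet"):
    # stimulus -> response: look the state up, fall back to a default for unknown states
    return policy.get(state, default_action)

print(act("she_smiled"))   # keep_talking
print(act("she_yawned"))   # stay_quiet (unseen state)
```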
Reward is the gain that indicates what a good action is in the immediate sense.
Value is the cumulative gain that estimates how good an action will be in the long run.
Example:
Eating cake will make me happy right now. Though in the long run, cake is bad for my health.
The Reward System combines reward and value under a single concept and conveys:
In a biological sense, it refers to pleasure and pain.
In a relationship sense, it refers to her saying "you are looking cute" and "you should shave that beard off" (it still hurts).
Mostly, the reward system is unaltered by the agent, though it may serve as the basis for altering the policy.
Example:
Falling off the cycle hurts, so it is very low rewarding. I can't alter the reward system to take pleasure in that pain. BUT. I can alter the way I ride the cycle so I don't fall again.
Our main concern is the Value and not Reward.
Q) Why?
Ans) Reward only makes sure that the immediate gain is maximal; future gains could be way worse. Value makes sure that we have the maximum overall gain.
Q) Whatâs the problem then?
Ans) Usually immediate gains have more surety than future gains.
Q) What are value functions?
Ans) As the name suggests, value functions are functions used to compute the value of an action. The tricky part is how they do it. They usually use complex inference-based algorithms.
Example:
You buy a stock. The current market status is good, so with much surety you can sell it tomorrow and get a good reward. Now, if you keep the shares for another year, the value might sky-rocket many fold, but one can never be very sure about that.
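One common way (a rough sketch, not the only one) to weigh a sure immediate reward against an uncertain future one is a discounted return: rewards that arrive t steps in the future get multiplied by gamma^t, with gamma between 0 and 1. The reward numbers and gamma below are assumptions for illustration.

```python
# Value weighs future rewards against immediate ones, commonly via a discount
# factor gamma in (0, 1): the less sure we are about the future, the lower gamma.
# The reward sequences and gamma below are assumptions for illustration only.

def discounted_return(rewards, gamma=0.9):
    """Sum of gamma**t * r_t over the reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

sell_tomorrow = [10]               # sure, immediate reward
hold_a_year   = [0] * 12 + [200]   # nothing for a year, maybe a big payoff later

print(discounted_return(sell_tomorrow))  # 10.0
print(discounted_return(hold_a_year))    # ~56.5: large future reward, heavily discounted
```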
Although they sound very similar, RL and Evolutionary algorithms are not the same thing.
Evolutionary algorithms don't use a value function. They search directly in the space of policies.
Pros:
Cons:
This was just the first chapter, a basic intro to RL. We still have a world to explore in upcoming chapters.