Deep Learning isn’t the brain

by Sean Aubin, October 8th, 2016

(but sometimes the results look like one)

[epistemic status: I work in a lab dedicated to biologically plausible neural circuits, so I’m informed on the problem, but probably still biased. There’s probably going to be a follow-up post to this once I get a bunch of rebuttals from people smarter than me.]

Update: As expected, I got a bunch of rebuttals and have adjusted my position accordingly.

I keep seeing academic and non-academic articles comparing Deep Learning (DL) and the brain. This offends my sensibilities a bit, because although many results from DL resemble certain areas of the brain, DL is not a good overall description of the brain. DL explicitly passes the buck on biological plausibility (like almost every other cognitive modelling approach) and implies that its “neurons” could be implemented biologically, it’s just that no one has bothered yet. I think the problem goes much deeper: DL is missing a lot of the key features of the brain, which makes it a poor analogical target.

The brain is low power

DL is power hungry. AlphaGo consumed the power of 1202 CPUs and 176 GPUs, not to train, but just to run. The Tensor Processing Unit is an attempt to satiate this hunger, but it’s still not even close to the brain’s power consumption of 20W. IBM’s TrueNorth chip is another attempt at low-power neural computation, but its capabilities are quite limited compared to other neuromorphic hardware: TrueNorth only implements feed-forward networks and has no on-chip learning.
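
For a rough sense of scale, here’s a back-of-the-envelope calculation. The per-device wattages are my own assumptions for illustration, not published figures:

```python
# Back-of-the-envelope comparison of AlphaGo's power draw vs. the brain.
# The per-device wattages are rough assumptions, not published numbers.
CPU_WATTS = 100   # assumed average draw per server CPU
GPU_WATTS = 200   # assumed average draw per GPU
BRAIN_WATTS = 20  # commonly cited figure for the human brain

alphago_watts = 1202 * CPU_WATTS + 176 * GPU_WATTS
print(f"AlphaGo: ~{alphago_watts / 1000:.0f} kW")                   # ~155 kW
print(f"Ratio:   ~{alphago_watts / BRAIN_WATTS:,.0f}x the brain")   # ~7,770x
```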

The brain can’t do back-prop

Back-propagation is the foundation of all DL. Although there is evidence that the brain does propagate errors through multiple layers, no one has come up with a method for back-propagation (back-prop) that doesn’t require error information to travel backwards through synapses, which are unidirectional. I personally think it’s only a matter of time before a biologically plausible method is discovered, but until then it is unwise to ignore this implementation detail and the restrictions it might place on what can be learned.
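
To make the problem concrete, here’s a minimal sketch of back-prop for a one-hidden-layer network in plain NumPy (a toy example of mine, not anyone’s production code). Notice that the backward pass sends the error through the transposed forward weights, W2.T: the error retraces the forward connections in reverse, which real, unidirectional synapses cannot do. This is known as the weight-transport problem.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))        # toy inputs
Y = rng.normal(size=(32, 1))         # toy targets
W1 = rng.normal(size=(10, 20)) * 0.1
W2 = rng.normal(size=(20, 1)) * 0.1

# Forward pass
h = np.tanh(X @ W1)
y_hat = h @ W2

# Backward pass: the error travels backwards through W2.T --
# the exact step with no accepted biological mechanism.
dy = (y_hat - Y) / len(X)            # gradient of mean squared error
dW2 = h.T @ dy
dh = dy @ W2.T                       # <-- "weight transport"
dW1 = X.T @ (dh * (1 - h ** 2))      # tanh derivative

W1 -= 0.1 * dW1
W2 -= 0.1 * dW2
```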

The brain uses spikes to communicate

Although it is possible to convert DL networks into spiking neurons for use on neuromorphic hardware, these spikes are not leveraged for any specific computational advantage. As far as I know (and I still have some reading to do), spiking computation has yet to be used anywhere for learning in DL.
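
For readers who haven’t met spiking neurons: here’s a toy leaky integrate-and-fire (LIF) neuron, the simplest spiking model, in a sketch of my own. Instead of outputting a continuous activation, it communicates through the timing of all-or-nothing spikes:

```python
import numpy as np

def lif_neuron(current, dt=0.001, tau=0.02, v_thresh=1.0):
    """Simulate a leaky integrate-and-fire neuron driven by `current`.

    Returns an array of 0s and 1s: 1 marks a spike at that timestep.
    """
    v = 0.0
    spikes = np.zeros(len(current))
    for i, J in enumerate(current):
        v += dt / tau * (J - v)   # leaky integration of the input current
        if v >= v_thresh:         # threshold crossing -> emit a spike
            spikes[i] = 1.0
            v = 0.0               # reset the membrane voltage
    return spikes

# A constant input produces a regular spike train whose rate
# encodes the input's magnitude.
spikes = lif_neuron(np.full(1000, 1.5))  # 1 second at dt=1ms
print("Spike count over 1s:", int(spikes.sum()))
```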

Neurotransmitters aren’t just spike transporters and neurons aren’t just spike-machines

DL completely ignores the role of neurotransmitters. However, neurotransmitters have been shown to be computationally significant in adapting the receptive fields of networks on the fly, something DL has a really hard time doing.
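
As a cartoon of what neuromodulation buys you (entirely my own illustration, not a model of any real neurotransmitter), imagine a single gain parameter reshaping a neuron’s response on the fly, with no change to any synaptic weight:

```python
import numpy as np

def tuning_curve(stimulus, preferred=0.0, width=0.5, gain=1.0):
    """Gaussian tuning curve; `gain` stands in for a neuromodulator
    that rescales the response without touching any weights."""
    return gain * np.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

stimuli = np.linspace(-2, 2, 5)
print(tuning_curve(stimuli, gain=1.0))  # baseline responses
print(tuning_curve(stimuli, gain=2.5))  # same neuron, modulated on the fly
```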

The brain is noisy

Given the choice between neuron redundancy and neuron performance, evolution chose to make the brain redundant. Neurons are noisy, which isn’t surprising when you consider the warm, biologically variable environment they’re in. Although certain DL networks can cope with the loss of some of their nodes, DL isn’t known for its robustness to noisy input or noisy training data.
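
The standard intuition for why redundancy works is population averaging: the independent noise of N neurons averages out roughly as 1/sqrt(N). A quick sketch with toy numbers of my own:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 1.0
for n_neurons in (1, 10, 100, 1000):
    # Each neuron reports the signal plus its own independent noise.
    responses = signal + rng.normal(scale=0.5, size=(10000, n_neurons))
    estimate_error = (responses.mean(axis=1) - signal).std()
    print(f"{n_neurons:5d} neurons -> estimate error {estimate_error:.3f}")
```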

In conclusion, the brain has a lot of features that DL omits, so using DL as an analogy for neural circuitry isn’t ideal. The alternative is a modelling paradigm that takes these challenges into account. At the time of writing, the only approach I know of is the Neural Engineering Framework (NEF) from the laboratory I belong to, but I’m sure other frameworks will emerge as research marches forward.
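
For a taste of the NEF, here’s a minimal model of my own in Nengo, the CNRG’s simulator: a population of spiking LIF neurons represents a sine wave, and the connection to a second population is solved for, rather than learned, so that it computes the square of the represented value.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # 100 spiking LIF neurons collectively represent a 1-D value.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    # The NEF solves for connection weights that compute x**2
    # directly from the neural activity -- no back-prop involved.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print("Decoded estimate of sin(t)**2 at t=1s:", sim.data[probe][-1])
```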

“But Sean,” you cry with a gleam of mischief in your eye, “don’t most NEF models from your lab suffer from the same problems as DL? The models usually stop at LIF neurons, which aren’t realistic neurons at all! Why don’t you use more complex neuron models, e.g. dendritic computation, multi-compartmental neurons, glial cells and neurogenesis?”

That’s a work in progress. Two of the sixteen members of the Computational Neuroscience Research Group (CNRG) that I belong to, Aaron Voelker and Peter Duggins, are working on more complex neuron models. Eric Hunsberger is working on biologically plausible back-prop, and the moment he has a breakthrough you can be sure I’ll be shoving it in everyone’s face.

As for neurogenesis, there’s no good computational model of what it does, and the CNRG lacks the resources for that sort of basic research. No one is working on dendritic or glial-cell computation, because we’re only sixteen people. If that bothers you, maybe you want to join us?
