
Pruning neurons with the Socratic Method

by Andrew Lucker, February 28th, 2017

By now most technically minded people have heard of neural networks: a technology that is already widely adopted while still being an active area of research. It is an exciting time to do research in CS.

However, this article does not expect the reader to know much about theory or applied methods in AI. All that is required is to imagine a neural network as layers of neurons connected to one another, sending signals across those connections (synapses).

First, let's look at depth and breadth. Hardware, at least the kind you would need to model the human brain, is inaccessible. No one has that sort of computing power yet; there are plans to build toward it, but that is another story. The issue is simply that when building neural networks with many layers (depth) and many pathways (breadth), you run out of space. This is why we need to downsize, or prune, the networks regularly.
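
To make "running out of space" concrete, here is a minimal sketch under assumptions of my own: the layer sizes are arbitrary, and magnitude pruning stands in as one common baseline rather than the approach this article goes on to discuss. It counts the parameters in a toy stack of dense layers, then zeroes out the smallest 90% of weights in each layer.

```python
# A minimal sketch, not the method discussed below: parameter count grows with
# both depth and breadth, and magnitude pruning is one common way to thin a
# network. All sizes and the 90% threshold are arbitrary choices.
import numpy as np

widths = [1024] * 8                          # a "deep and broad" toy stack
weights = [np.random.randn(a, b) for a, b in zip(widths, widths[1:])]
total = sum(w.size for w in weights)
print(f"dense parameters: {total:,}")        # about 7.3 million here

pruned = []
for w in weights:
    threshold = np.quantile(np.abs(w), 0.9)  # keep only the largest 10%
    pruned.append(np.where(np.abs(w) >= threshold, w, 0.0))

kept = sum(int(np.count_nonzero(w)) for w in pruned)
print(f"weights kept after pruning: {kept:,} ({kept / total:.0%})")
```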

Second, our philosophy of mind is prepubescent. We have no rigorous theory of how computation and simulated brains map onto real human minds. We don't even know what we don't know. This manifests as a hit-or-miss research agenda, one that had nearly abandoned deep learning before we realized it was actually amazing.

Taken together, we have giant networks of essentially banana mush that are driving cars and running the financial system. If this sounds ridiculous to you, it's because it is. Everyone knows it is crazy, but it is still state of the art, and nothing short of nuclear war will stop this sort of bandwagon momentum.

This is where pruning comes in. Everyone knows that there is inefficiency in the models and duplication in the code, but we don’t know where. So we can start looking.

So we start with the Socratic method: let the machines ask themselves questions, and prune the models so that they stay correct while becoming as compressed as possible in application.

First question: if two subnets of a network are equivalent in both input and output behavior, can we remove one and replace it with a connection to the other? The answer is context sensitive.
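
Here is a minimal sketch of what "equivalent in outcome" might mean in code. The subnets are stand-in functions and the inputs are randomly sampled, so everything named here (subnet_a, subnet_b, samples) is hypothetical.

```python
# A minimal sketch, not the article's algorithm: treat two subnets as plain
# functions and check that they agree (within a tolerance) on sampled inputs
# before rerouting one's consumers to the other.
import numpy as np

def outputs_match(subnet_a, subnet_b, sample_inputs, tol=1e-5):
    """True if both subnets agree on every sampled input, within tol."""
    return all(np.allclose(subnet_a(x), subnet_b(x), atol=tol)
               for x in sample_inputs)

# Toy "subnets": two different parameterizations of the same affine map.
def subnet_a(x):
    return 2.0 * x + 1.0

def subnet_b(x):
    return (4.0 * x + 2.0) / 2.0

samples = [np.random.randn(8) for _ in range(100)]
if outputs_match(subnet_a, subnet_b, samples):
    # "Prune" subnet_b by aliasing it to subnet_a, so downstream consumers
    # share one computation instead of duplicating it.
    subnet_b = subnet_a
```

Agreement on a finite sample is only evidence of equivalence, not proof, and a merged subnet may behave differently once the rest of the network adapts around it; that is exactly why the answer stays context sensitive.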

Second question: can we enumerate all possible networks and refer to them by ordinal values? The answer is yes (see the Church-Turing thesis).
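
To make the enumeration concrete, here is one illustrative encoding of my own. It is a heavy simplification: weights are ignored and a network is treated as nothing more than a tuple of layer widths. List the shapes in a fixed canonical order, and a network's ordinal is its position in that list.

```python
# A minimal sketch: enumerate network "shapes" (tuples of layer widths) in a
# fixed canonical order, so every shape has a unique ordinal and every ordinal
# names exactly one shape. This only illustrates that the space is countable.
from itertools import count, product

def architectures():
    """Yield every layer-width tuple, ordered by total width budget."""
    for budget in count(1):                                  # 1, 2, 3, ...
        for depth in range(1, budget + 1):
            for widths in product(range(1, budget + 1), repeat=depth):
                if sum(widths) == budget:
                    yield widths

def ordinal_of(target, limit=1_000_000):
    """Linear search for the ordinal of a given shape."""
    for i, arch in enumerate(architectures()):
        if arch == target:
            return i
        if i >= limit:
            raise ValueError("not found within the search limit")

print(ordinal_of((2, 1)))   # -> 5 under this particular ordering
```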

Third question: can we order all of these networks such that the most prevalent networks get the lowest, and thus most compact, ordinals? The answer is yes (by the same argument from Church).
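
And a sketch of the reordering, using a made-up corpus of observed shapes: rank them by how often they appear, so the most prevalent shape gets the lowest ordinal and therefore the most compact description.

```python
# A minimal sketch with made-up observations: re-index architectures so the
# most frequently seen shapes receive the smallest ordinals, i.e. the
# shortest descriptions.
from collections import Counter

observed = [
    (784, 128, 10), (784, 128, 10), (784, 256, 10),
    (784, 128, 10), (32, 32), (784, 256, 10),
]

# Rank by descending frequency; break ties by the shape itself for determinism.
by_prevalence = sorted(Counter(observed).items(), key=lambda kv: (-kv[1], kv[0]))
codebook = {arch: rank for rank, (arch, _count) in enumerate(by_prevalence)}

print(codebook[(784, 128, 10)])   # -> 0: the most common shape, the smallest code
```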

So in three questions, if you accept all of the assumptions, we have found an algorithm for creating perfectly compact neural networks from any input. In practice, nothing is this simple. The search for our neural Platonic solids continues.
