Deep learning is the opposite of where I want to be. Here’s why:
Deep learning and neural nets are famous because they just work. The unstated assumption is that you have a whole data center's worth of resources to burn. Simply put, I don't have those resources.
What I do have is insight into what might be happening inside those expansive models. I am betting that those models are several orders of magnitude slower to train, and overall more expensive to build, than they need to be. That gap is my margin.
I’ve already tested simple heuristics on several OpenAI Gym games. It is amazing how much simple methods can accomplish on complex tasks:
The demo involved only a left/right decision based on input from a 4-beam LIDAR-style array. The array is extracted from raw video, where the concept of LIDAR isn’t even well defined, yet it still works, sort of.
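To make that concrete, here is a minimal sketch of the kind of heuristic I mean. The frame-to-beam extraction, the sample columns, and the brightness threshold below are all illustrative stand-ins, not the demo’s actual code:

```python
import numpy as np

def lidar_from_frame(frame: np.ndarray) -> np.ndarray:
    """Fake a 4-beam LIDAR from a grayscale video frame (H x W).

    Each 'beam' is the distance, in pixels, from the bottom of the
    frame to the nearest bright obstacle pixel in one of four sample
    columns. A simplified stand-in for the real extraction step.
    """
    h, w = frame.shape
    columns = [w // 8, 3 * w // 8, 5 * w // 8, 7 * w // 8]
    beams = []
    for c in columns:
        hits = np.nonzero(frame[:, c] > 128)[0]  # rows with obstacle pixels
        beams.append(float(h - hits.max()) if hits.size else float(h))
    return np.array(beams)

def act(beams: np.ndarray) -> int:
    """Steer toward the side with more clearance: 0 = left, 1 = right."""
    left_clearance = beams[:2].sum()
    right_clearance = beams[2:].sum()
    return 0 if left_clearance >= right_clearance else 1
```

The whole policy is a comparison of two sums. There is nothing to train, which is exactly the point.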
This is the start of my search for the Neural Platonic Solids. I believe there are recurring patterns in neural networks that, once deduplicated, will lead to at least a 1000x improvement in training time, memory, and compute requirements.
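To make “deduplicated” concrete, here is a toy sketch of the kind of saving I am imagining: greedily merge near-identical filters in a weight matrix and share one prototype among them. The cosine threshold, the layer shapes, and the greedy clustering are all illustrative assumptions on my part, not a worked-out method:

```python
import numpy as np

def dedupe_filters(W: np.ndarray, threshold: float = 0.99):
    """Greedily merge rows of W whose cosine similarity exceeds threshold.

    Returns the prototype filters and an index map so the original layer
    can be rebuilt as prototypes[index_map]. A toy illustration of sharing
    recurring patterns, not a production compression method.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    unit = W / np.maximum(norms, 1e-12)
    prototypes, index_map = [], []
    for row, u in zip(W, unit):
        for i, p in enumerate(prototypes):
            pu = p / max(np.linalg.norm(p), 1e-12)
            if float(u @ pu) > threshold:
                index_map.append(i)  # reuse an existing prototype
                break
        else:
            index_map.append(len(prototypes))
            prototypes.append(row.copy())  # genuinely new pattern
    return np.array(prototypes), np.array(index_map)

# Example: a layer built from a handful of repeated patterns compresses well.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 64))                 # 8 distinct patterns
W = base[rng.integers(0, 8, size=256)]          # repeated 256 times
W += 1e-3 * rng.normal(size=W.shape)            # small perturbations
protos, idx = dedupe_filters(W)
print(W.shape, "->", protos.shape)              # (256, 64) -> roughly (8, 64)
```

On a layer like that, 256 filters collapse to about 8 prototypes plus an index map. That is the flavor of saving I am chasing, scaled up.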
It is awesome to see what is coming down the pipeline. Looking at frontier research and products is exciting, no doubt. But I will stick to my methods because I am a spoilsport like that. I will just continue to clean and comb the nets into their rightful, original, and idyllic form. Hopefully it won’t involve too much math, because that might go over my head.