All scientific ambition is grounded in the assumption that the world is a complex system of simple interactions.
Measuring and testing hypotheses through the scientific method is sort of like a “guess the next number in the sequence” game. 1, 2, 3, 4: what comes next? The game seems simple until you realize that there are infinitely many sequences that fit the constraints, and the next number could be literally any number.
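This is not hyperbole: for any finite sequence and any continuation you like, there is a polynomial that fits all of it. A minimal sketch, using exact rational arithmetic and Lagrange interpolation (the function name `poly_through` is just an illustration):

```python
from fractions import Fraction

def poly_through(points):
    """Return the Lagrange polynomial passing through the given (x, y) points."""
    def p(x):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# The sequence 1, 2, 3, 4 followed by any number you like:
for next_value in (5, 42, -1000):
    p = poly_through([(1, 1), (2, 2), (3, 3), (4, 4), (5, next_value)])
    print([int(p(x)) for x in range(1, 6)])
```

Each printed sequence begins 1, 2, 3, 4, yet the fifth term is whatever we asked for; every continuation is consistent with some rule.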
Machine learning systems, from early perceptrons to today's deep neural networks, have historically, and persistently to this day, tended to approximate rather than reproduce their input. First it was XOR; now it is optical illusions. Every approach to learning has limits, limits that reach to the very core of what we know as humans, not just as machines.
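The XOR limit is easy to see concretely: a single linear threshold unit computes sign(w1·x1 + w2·x2 + b), and no choice of weights reproduces XOR because the function is not linearly separable. A minimal sketch (the brute-force grid below is only an illustration; the impossibility holds for all real-valued weights):

```python
# XOR truth table: output is 1 exactly when the inputs differ.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def fits_xor(w1, w2, b):
    # Does the linear threshold unit agree with XOR on all four inputs?
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in XOR.items())

# Search a coarse grid of weights in [-5, 5]; none will work.
grid = [i / 2 for i in range(-10, 11)]
found = any(fits_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print(found)  # False: no linear unit on this grid computes XOR
```

It took adding a hidden layer, a richer model, to cross this particular limit, which is exactly the pattern the essay describes: each model works until we find the input it cannot fit.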
However, this problem has never been a huge obstacle. We believe that the world is simple, and all evidence to date suggests that it is simple down to some fundamental level. This means that the parsimonious mathematical models we generate with Occam’s razor should be sufficient, or at least should last until we look really, really close or really, really far away. The most interesting and tantalizing part of the scientific method, its almost religious undertone, is that so far every one of our models has been flawed from the beginning.
The resulting point of view is that not just our fledgling machines, but our entire society, is built on a rough approximation of true reality. We already expect the next model-breaking discovery. We already know that we are still in a cave, watching the shadows of reality dance before us. So what do we do? We look closer, and we look farther. Closer and farther than ever before. So far, in fact, that we need machines to help us sift through all the data we gather.
And that is where our machines are now: stuck in Plato’s cave, blindly scanning the wall for a better view of ourselves.