No, We’re Probably Not In A Simulation

Written by dandisagrees | Published 2016/06/09
Tech Story Tags: simulation | not-in-a-simulation | in-a-simulation | elon-musk | computers

Elon Musk famously stated that he thinks the chances are 1 in billions that we aren’t in a simulation. He is not the first to think or talk about this, but he is a famous person these days, so I’m going with him. The argument boils down to this:

  1. Compute power is growing exponentially.
  2. We are very interested in simulations for both entertainment and technical reasons.
  3. With both of these holding true, eventually we’ll be able to construct arbitrarily good simulations of entire universes.
  4. If you’re in a good-enough simulated universe, nothing is stopping you from simulating “one deeper.”
  5. If all of this is true, there would be far more simulated realities than base realities, making it statistically more likely that we are in a simulated one (a quick sketch of the counting math follows below).
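
To see why step 5 does the statistical heavy lifting, here’s a minimal sketch of the counting argument in Python. The numbers (simulations per reality, nesting depth) are entirely made up for illustration:

```python
# If one base reality hosts many simulated realities, and each of those
# hosts more, a randomly chosen observer is almost certainly simulated.

def p_base_reality(sims_per_reality: int, depth: int) -> float:
    """Probability of being in base reality, assuming every reality runs
    `sims_per_reality` simulations, nested `depth` levels deep, with
    observers spread uniformly across all realities."""
    total_realities = sum(sims_per_reality ** level for level in range(depth + 1))
    return 1 / total_realities

# With 1,000 sims per reality and 3 levels of nesting, the odds of being
# in base reality come out to roughly one in a billion -- "1 in billions."
print(p_base_reality(sims_per_reality=1000, depth=3))  # ~1e-09
```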

When smart people say things that don’t make intuitive sense, I can’t help but endlessly think and re-think through the details. Having studied physics and now working in software, I’m familiar with exploring alien, complex, and non-intuitive domains and finding great new truths there. After much thought, I don’t think there’s much here; I think Elon et al. are out to lunch. Sorry.

There’s a lot of interesting armchair philosophy involving brains, vats, and the like, all of which has been explored ad nauseam (watch The Matrix if you haven’t already), but I don’t really want to go there right now. What I want to talk about are the assumptions that lead people like Elon to conclude that we are likely living in a simulated reality.

Assumption one: compute power is growing exponentially and will continue to do so forever. Even a slow-growing exponential curve will do surprising things if you run it for long enough. If you fold a piece of paper in half repeatedly (doubling the thickness each time), it will be as thick as the observable universe after about a hundred folds. Assuming that the past 100 or so years of compute-power doubling will continue at some pace or another for the next million years is troubling for two reasons. The first is that it’s shaky to assume a trend will last vastly longer than it already has. The second is plain old physical reality: an average sheet of paper can’t actually be folded in half more than about seven times. You can squeeze in a few more by bending the rules, but eventually you just run out of room. Even Gordon Moore thinks that his eponymous law will start winding down around 2025. In the end, I think it’s pretty reasonable to assume that “computation density” has a physical upper limit, and that we’ll start bumping up against it in the next 50 or 100 years.
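
As a sanity check on the folding claim, here’s the arithmetic, assuming a sheet roughly 0.1 mm thick and an observable universe roughly 8.8 × 10^26 m across:

```python
import math

paper_thickness_m = 1e-4        # ~0.1 mm per sheet (assumed)
universe_diameter_m = 8.8e26    # observable universe, roughly

# Each fold doubles the thickness, so we need log2 of the ratio.
folds = math.ceil(math.log2(universe_diameter_m / paper_thickness_m))
print(folds)  # 103 -- "about a hundred folds"
```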

Assumption two: computation will continue to be the rate-limiting factor. Right now, many of the things we do are limited by computation speed; we have to optimize simulations to get what we want done in finite time on the computers we have. To see where this might go in a thousand or a million years, consider a strange analogy: sand. There are (in round numbers) ten billion tons of sand in the world, at $10 per ton, so about $100 billion worth of sand. Apple (the computer company) could, in principle, buy all of it. Why don’t they? Because they just don’t need that much damn sand! Similarly, in a few hundred, thousand, or million years there might be enough compute power to run a competent universe simulation on the budget of a small country or even a rich individual, but it will still cost something. At some point, someone has to decide to hit “go” on the giant simulation and source and/or pay for the power it requires. What would motivate someone to do that?
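
For what it’s worth, here’s the sand arithmetic with the round numbers above. Apple’s cash pile is my own rough assumption, ballparked at $200B for the era:

```python
sand_tons = 10e9             # ~10 billion tons (round number from above)
price_per_ton = 10           # ~$10/ton
apple_cash = 200e9           # Apple's cash reserves, very roughly (assumed)

sand_market_value = sand_tons * price_per_ton
print(f"All the sand: ${sand_market_value / 1e9:.0f}B")    # $100B
print(f"Affordable? {apple_cash > sand_market_value}")     # True -- but why bother?
```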

Assumption three: we’ll get close to perfect simulations. Simulation is an incredible tool for understanding our world. We’ve gotten better and better at it, and we’ll continue to do so. At some point, though, it’s just easier to do the thing in real life than to simulate it. Back in the 1940s, the Mississippi River Basin Model was constructed to better understand how rainfall and other environmental factors affected flooding; they built a physical model because it was easier than doing the calculations by hand. Today, a lot of work that used to be done in wind tunnels is done on desktop workstations and supercomputers, and as our understanding and compute power grow, more and more can be done in software. However, it’s crucial to recognize the role of “constraints”: limitations added to a simulation to make it tractable. You have to pick and choose what to simulate or test at any given time; you just can’t do it all at once. A wind tunnel test is not a crash test, but the real world is both, all the time. Removing all the constraints means that actually doing something is more efficient than simulating it. This isn’t a question of shrinking computers or making them more powerful: you’ll never be able to perfectly simulate a large system with any degree of efficiency.
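
To make “you can’t remove all the constraints” concrete, here’s a deliberately optimistic back-of-envelope estimate of simulating just a few grams of matter atom-by-atom. Every number here is an assumption; the point is the order of magnitude:

```python
atoms = 6.0e23               # ~1 mole: a few grams of matter
steps_per_sim_second = 1e15  # femtosecond timesteps, typical for molecular dynamics
flops_per_atom_step = 100    # generously cheap per-atom update (assumed)
machine_flops = 1e18         # a hypothetical exascale computer

flops_per_sim_second = atoms * steps_per_sim_second * flops_per_atom_step
years_per_sim_second = flops_per_sim_second / machine_flops / (3600 * 24 * 365)
print(f"~{years_per_sim_second:.0e} years of compute per simulated second")  # ~2e+15
```

Two quadrillion years of exascale compute per simulated second, for a lump of matter the size of a sugar cube. That’s what dropping the constraints looks like.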

Assumption four: nested simulations. Even if I grant that you might get somewhere close to a high-efficiency simulation, I don’t see any way to argue that a simulation running inside another simulation would be more efficient than one running in base reality. Nesting a simulation doesn’t somehow give you free computation; you still have to do the calculations somewhere in the base reality. Picturing a base reality with billions of nested simulations running inside it makes no sense whatsoever. Even if it were physically possible, what would possess someone to do nothing but build and maintain a gigantic supercomputer responsible for simulating billions of sub-realities? Do you really expect me to believe that an average computer might be so powerful that this could happen accidentally?
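
A toy model of why nesting can’t help: if each simulated level costs its host some overhead factor (any factor greater than 1 will do; the 10× here is invented), the compute available to each level shrinks geometrically with depth:

```python
base_flops = 1e18   # hypothetical machine in base reality
overhead = 10       # assumed slowdown per level of emulation

for depth in range(5):
    effective = base_flops / overhead ** depth
    print(f"depth {depth}: {effective:.0e} effective FLOP/s")

# All the nested simulations combined can never exceed base_flops:
# every instruction at every depth ultimately runs on the base machine.
```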

I’m arguing not just that one of these assumptions is implausible, but that all of them are fatally flawed. No amount of waving the “exponential growth” magic wand gets you past the fundamental issues; the argument just doesn’t hold water. To me, this is a clear-cut case for reductio ad absurdum: if reasoning logically from your premises leads you somewhere absurd, then one of your premises is wrong.

All that being said, I’d love to hear flaws in my assumptions, logic, or conclusions — I mostly wrote this all down because I kind of hope someone will tell me I’m missing something!

