I never thought I’d say this…

I never thought I’d say this, but the recent press reports that the Simulation Hypothesis has been ruled out are bullshit. Yes, the work that has been published in Science Advances is interesting and utterly worthwhile, but the press coverage has been execrable. The news posted recently on Seeker, Newsweek, and numerous other outlets is basically wrong. The fairest coverage I’ve seen so far is at Boing Boing, unsurprisingly, and even that article misses the point.

What am I talking about? I’m talking about the idea that we’re all living in a computer simulation. Why are we talking about it? Because Elon Musk has an opinion on it. Why should you care? Certainly not because of the Simulation Hypothesis itself, which is ridiculous. But, rather, because public statements about what kinds of approaches can and can’t be used to model nature determine what science gets funded. And that determines what we as a society bother exploring. And that determines whether your grandchildren get warp drive and teleportation, or nuclear fallout and cholera.

Why do I believe I’m qualified to speak on this topic? Because probably nobody else alive at this point has put in as much time or effort as I have simulating universes using completely discretized, algorithmic methods.

Sure, luminaries like Ed Fredkin and Stephen Wolfram have put in their time and done great work, but none of them has actually sat down and reproduced experimental results from quantum mechanics in their models. I have spent about twenty years on that pointless quest. When Elon made his original remarks, I went off on a rant explaining just how unlikely the Simulation Hypothesis was. Now I feel the need to do the same again in the wake of this ‘refutation’.

What has been proved? That certain quantum effects are vastly, unreasonably hard to model using Monte Carlo methods. What this means is that any simulation trying to reproduce QM the way we currently think about it would require insanely huge computers that wouldn’t fit in our universe. This result comes as no surprise to me, and I trust its validity absolutely.
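To give a flavor of why, here’s a toy Python sketch of the kind of trouble involved, the so-called sign problem. It has nothing to do with the Science Advances model itself, and every number in it is invented: the point is just that when your Monte Carlo samples carry signs, the answer you want is a ratio of signed sums, and as the average sign shrinks, the estimate drowns in noise.

```python
import numpy as np

# Toy illustration of the Monte Carlo "sign problem": we estimate
# <A> = E[s*A] / E[s] from samples that each carry a sign s = +/-1.
# As the average sign <s> shrinks, the denominator becomes a tiny
# difference of huge counts, and the spread of the estimate explodes.
# All numbers here are invented for illustration only.

rng = np.random.default_rng(0)

def signed_estimate(mean_sign, n_samples):
    p_plus = (1 + mean_sign) / 2                         # P(s = +1)
    signs = np.where(rng.random(n_samples) < p_plus, 1.0, -1.0)
    a = rng.normal(1.0, 1.0, n_samples)                  # observable, true mean 1.0
    return (signs * a).sum() / signs.sum()

for mean_sign in (0.5, 0.05, 0.01):
    estimates = [signed_estimate(mean_sign, 100_000) for _ in range(20)]
    print(f"<s> = {mean_sign:<5}: spread of estimates = {np.std(estimates):.4f}")
```

Run it and watch the spread balloon as the average sign drops, even though the true answer is exactly 1.0 in every case. That, in miniature, is why ‘insanely huge computers’ show up in the result.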

What’s wrong with the conclusions the journalists have drawn? Well, for starters, the theory of QM being tested against is one that assumes that everything is connected up, perfectly interdependent, via what’s called a Hilbert space. But here’s the problem: we have zero evidence that the underlying mechanics of the universe actually work that way.
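Just to pin down what that interdependence costs a simulator: because every particle’s state is entangled with every other’s, the bookkeeping multiplies rather than adds, 2^n amplitudes for n two-state particles. A quick back-of-envelope:

```python
# Back-of-envelope: a state vector over n two-state particles needs 2**n
# complex amplitudes. At 16 bytes per complex number, the storage cost
# outruns the observable universe (~10**80 atoms) somewhere around a few
# hundred particles.
for n in (10, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:>3} particles: {amplitudes:.2e} amplitudes, "
          f"~{amplitudes * 16:.2e} bytes")
```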

Now, don’t get me wrong, I’m not denying that QM is the most effective theory that humanity has ever come up with, or that it gets the right answers basically all the time. And neither am I saying I have something better. Rather, the problem is this: you need the Hilbert space to predict what nature is going to do. But nature itself only has to pursue a single course of events, despite popular assertions to the contrary.

It’s like this: imagine that you were building a mathematical tool that told you when buses were going to show up at your stop, and all you could ever see was the buses arriving. To predict all the things the buses might do, you’d have to go to crazy-town adding in features. You might even come to believe that buses were perfectly mathematically distributed across all possible locations until you saw them arrive. But the actual buses? They don’t give a shit about your model. They either show or they don’t.
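If you want that distinction in code, here’s a throwaway sketch with made-up numbers: the predictive tool has to carry a whole distribution over arrival times, while the bus itself only ever realizes one outcome.

```python
import numpy as np

# The bus analogy as a toy: the *model* maintains a probability
# distribution over possible arrival times; "reality" just produces a
# single arrival. Every number here is invented.

rng = np.random.default_rng(1)

minutes = np.arange(60)
# The model's belief: the bus is "spread" over many possible times.
belief = np.exp(-0.5 * ((minutes - 20) / 5.0) ** 2)
belief /= belief.sum()

# The actual bus doesn't carry the distribution; it just shows up.
actual_arrival = rng.choice(minutes, p=belief)

print("model's expected arrival:", (minutes * belief).sum())
print("the one bus that actually came:", actual_arrival)
```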

Similarly, when considering a Monte Carlo model of a particle system, the assumption is that the number of possible interactions always scales perfectly with the number of particles. But you can build models of reality that match observed results in which this is not true, so long as you can’t tell which connections are being dropped from inside the simulation. Why do I believe that? Because I’ve built them.
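To be clear, what follows is not one of those models, just a generic toy, with a hypothetical force law and a hypothetical drop rate, that shows the flavor of the trick: evaluate only a random fraction of the pairwise connections each step and rescale, so the expected force on every particle is unchanged even though most connections never get computed.

```python
import numpy as np

# Toy "dropped connections" sketch: instead of evaluating all O(N^2)
# pairwise interactions each step, sample a random subset of pairs and
# rescale, so the *expected* force on each particle matches the full sum.
# The potential and the keep rate are made up for illustration.

rng = np.random.default_rng(2)

N = 200
pos = rng.normal(size=(N, 2))
vel = np.zeros((N, 2))
dt, keep_fraction = 0.01, 0.1          # evaluate only 10% of pairs per step

def sparse_forces(pos):
    forces = np.zeros_like(pos)
    i, j = np.triu_indices(len(pos), k=1)        # all unordered pairs
    mask = rng.random(len(i)) < keep_fraction    # drop most connections
    i, j = i[mask], j[mask]
    d = pos[i] - pos[j]
    r2 = (d ** 2).sum(axis=1, keepdims=True) + 1e-6
    f = d / r2                                   # toy repulsive force
    f /= keep_fraction                           # rescale: unbiased in expectation
    np.add.at(forces, i, f)
    np.add.at(forces, j, -f)
    return forces

for _ in range(100):                             # crude integration loop
    vel += dt * sparse_forces(pos)
    pos += dt * vel
print("mean radius after 100 steps:", np.linalg.norm(pos, axis=1).mean())
```

From inside a simulation like this, an observer averaging over many steps would have a hard time telling which connections were skipped on any given one.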

Are these nice models? No. Are they scientifically useful? Probably not. Do they also slow down due to rapidly ramping computational complexity? Yes. Yes they do. But the point is that the space of algorithmic models for QM processes that’s been explored so far is pitifully small. We have no idea what’s possible and what’s not at this point. Super-cheap physics, while very unlikely, hasn’t actually been ruled out.

But this hasn’t been noticed, because these algorithmic models require a violent shift in perspective from the way people normally think about physics, one that the academic world is transparently not ready for. Despite the slow accumulation of circumstantial evidence that the universe is ‘real and non-local’ as opposed to the other way around, setting down the existing set of QM modeling tools basically forces you to do about two hundred years of theoretical catch-up, and nobody has that much spare time between grant applications.

But for those of you who might be cheering over a reprieve for the Simulation Hypothesis, not so fast. It is still effectively ruled out by the minimality argument that I’ve put forward numerous times. We’re not living in a simulation because physics is predictable, not for a reason any more glamorous or complicated than that.

Furthermore, it may still be absolutely true that computing the universe bit-wise is horrendously computationally expensive, but compared to the infinite cost of calculation implied by current continuously modeled theories, that’s insanely cheap. In fact, the moment you start talking about computational cost and its implications for model plausibility is exactly the point at which continuum models should be disregarded.

So, to sum up. 1: The recent research has ruled out nothing conclusively. 2: The Simulation Hypothesis is exactly as unlikely as it was two weeks ago. And 3: algorithmic models of spacetime are still a far more plausible, more robust bet than the models currently under consideration.

Thank you for your kind attention. Rant complete.
