Wednesday, November 19, 2008

My plan to destroy the universe won't work

And I'll bet you're relieved!

Maybe a little background is in order.

A question of interest to philosophers and theoretical physicists is whether or not the universe is just a simulation running on some computer, one level up. (See e.g. Nick Bostrom's Simulation Argument.) Of course, many ridicule this idea as being non-falsifiable and thus non-scientific.

Not so fast, I said! Of course it's falsifiable. Here's how: if the universe is a simulation, then its programmers probably try to economize on computational resources (computing cycles, memory, disc space, time, etc.). And to do that, they will make the program reveal to "us" (the conscious entities) the minimum required to make everything appear "believable". That, in turn, means that as long as we "wouldn't know the difference" if some physical process developed in a way contradicting the rest of our observations, the simulator won't bother to churn through the calculations needed to make the process match up with known universal laws. In other words: "If we're not looking, why bother making sure something's there?"

And that tells us how to test the Simulation Hypothesis: have everyone set up as much measurement equipment as they can, and therefore observe as much as they can. This will force the simulator to do many more calculations than it would otherwise have to, since now it has to keep consistent with that many more observations. The programmers then have to devote an ever-increasing amount of resources to keep it running, which will eventually force them to "cut corners" in implementing the laws of physics, revealing violations of Standard Model physics, or ... um, make them pull the plug on our existence.

Hence, my "plan to destroy the universe".

Now, the good news: the plan wouldn't work, based on what we already know about how the universe would react to such a "hypermeasurement" scenario! And the reason is shocking: because we can't actually increase our total knowledge.

"What in the hay-ll? I did me some book-larnin' not but three yurs ago!"

Sorry, that was Cletus, our resident country bumpkin.

Well, I'll need to give some more background now to justify that claim. First, I want to point you to a post on OvercomingBias.com that introduced me to much of what I'll discuss here: Engines of Cognition.

Now, consider the 2nd law of thermodynamics. There are many ways to express it, but a simple one is: "The amount of disorder ('entropy') in the universe must always increase." Sure, you can increase the order in any one specific place -- say, when you form crystals -- but it will always be counterbalanced by an increase in disorder somewhere else. The most common application of this law is in heat engines (such as the one in your car): when you burn fuel to turn your engine and thus your tires, you are extracting a kind of order: the useful mechanical "work" (as it is called in physics) of a spinning engine. However, to do so, you must also transfer heat to the environment, which, when tabulated, generates more entropy/disorder than you destroyed in extracting the mechanical work that drives you down the road.
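Here's a rough numeric sketch of that bookkeeping (the temperatures and heat flows below are made-up illustrative values, not measurements of any real engine):

```python
# Entropy bookkeeping for an idealized heat engine (illustrative numbers only).
# The engine draws heat from a hot source, extracts some work, and dumps the
# rest of the heat into the cooler environment.

T_hot = 900.0        # combustion temperature, kelvin (invented for illustration)
T_cold = 300.0       # ambient temperature, kelvin
Q_hot = 1000.0       # heat drawn from the burning fuel, joules
W = 400.0            # useful mechanical work extracted, joules
Q_cold = Q_hot - W   # waste heat rejected to the environment, joules

dS_hot = -Q_hot / T_hot    # entropy removed from the hot side (order gained)
dS_cold = Q_cold / T_cold  # entropy dumped into the environment (disorder created)
dS_total = dS_hot + dS_cold

print(f"Entropy change, hot side:  {dS_hot:+.3f} J/K")
print(f"Entropy change, cold side: {dS_cold:+.3f} J/K")
print(f"Total entropy change:      {dS_total:+.3f} J/K  (2nd law: must be >= 0)")
```

The organized work comes out, but the disorder dumped into the environment more than makes up for it.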

Now, here's the kicker: there are deep parallels between the concept of entropy in thermodynamics, and the concept called "entropy" in information theory. In the latter, it refers (roughly) to the uncertainty one has about the content of a message before reading it. Any knowledge that some kinds of messages are more likely than others therefore reduces that "entropy". Conversely, entropy is at a maximum when all messages are equally likely.
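To make the information-theoretic side concrete, here's a tiny sketch of Shannon entropy (the message probabilities are invented for illustration):

```python
import math

def shannon_entropy(probs):
    """Uncertainty (in bits) about a message drawn from this distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four possible messages, all equally likely: maximum uncertainty.
uniform = [0.25, 0.25, 0.25, 0.25]

# Same four messages, but we know one is far more likely than the others.
skewed = [0.85, 0.05, 0.05, 0.05]

print(f"Uniform distribution: {shannon_entropy(uniform):.3f} bits")  # 2.000 bits
print(f"Skewed distribution:  {shannon_entropy(skewed):.3f} bits")   # about 0.85 bits
```

Knowing something about which messages are likely (the skewed case) leaves you with less uncertainty than knowing nothing at all (the uniform case).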

And the truly mind-blowing part is that the connection between the two kinds of entropy is so deep that entropy in the information-theoretic sense affects entropy in the thermodynamic sense. (This is going somewhere, just be patient.) In short, if you are able to reduce your uncertainty (information-theoretic entropy) about the "message" contained in the molecules of a system, that knowledge can actually be exploited to reduce the thermodynamic entropy of the system and thereby extract useful work! (For reference, an early exploration of this idea is the Maxwell's Demon thought experiment, and a hypothetical engine that extracts work this way is the Szilard engine.)
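To put a rough number on "knowledge can be cashed in for work": in the idealized textbook analysis, a Szilard engine can extract about kT·ln 2 of work per bit of information about the gas. A quick back-of-the-envelope sketch:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, kelvin

# Ideal work extractable per bit of knowledge about the system (Szilard engine).
work_per_bit = k_B * T * math.log(2)
print(f"Work per bit at {T:.0f} K: {work_per_bit:.2e} J")  # about 2.9e-21 J
```

Tiny per bit, but it's the principle that matters here.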

But this hypothetical capability of decreasing the entropy of a system does not actually contradict the 2nd Law, which, you'll remember, says that total entropy must increase. Rather, for reasons I won't go into, this acquisition of knowledge is itself limited by the 2nd Law. Just as the extraction of "organized" mechanical work from fuel requires the generation, somewhere else, of at least as much counterbalancing disorganization, so too does the collection of information that could permit extraction of the same work without the fuel require a counterbalancing loss of information somewhere else, i.e. increased uncertainty.
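Continuing the sketch above: the same kT·ln 2 also shows up as the minimum thermodynamic cost of acquiring and then resetting the bit of memory the "demon" needs (this is Landauer's bound), so in the ideal case the books exactly balance and there's no free lunch:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # kelvin

bits_learned = 1.0
work_extracted = bits_learned * k_B * T * math.log(2)   # best case (Szilard engine)
dissipation_min = bits_learned * k_B * T * math.log(2)  # minimum cost of the measure-and-reset cycle (Landauer)

net = work_extracted - dissipation_min
print(f"Best-case net work per cycle: {net:.1e} J")  # never greater than zero
```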

This principle reveals a fundamental limit that your brain (in a deep sense, a "cognitive engine") faces: in order to learn something true about your environment (whether via the senses or inferences), you must sacrifice knowledge somewhere else. Fortunately, nothing requires you to care much about that lost knowledge, which takes the form of "lost certainty about aggregate statistical properties of thermodynamic variables".

Now, back to the main point: from the perspective of hypothetical beings running the universe's simulator, my idea to gather more measurements has no impact. Any time we make a measurement, we are gathering knowledge, which must therefore correspond to lost knowledge somewhere else. So, far from threatening the computer's ability to simulate our universe, all our measurements will (amazingly) decrease the computational resources the simulator requires.

Which neatly returns the Simulation Hypothesis to non-falsifiability, and assures us that even if people acted on my idea, we're still safe and sound. Alternatively, it reveals the universe's programmers to be really, really clever :-)

5 comments:

Anonymous said...

Is this a really, really roundabout way of saying you believe in God?

Anonymous said...

Interesting. But I'm utterly unconvinced.

"because we can't actually increase our total knowledge."

Do you intend (by bolding this line and stating it emphatically) that this was true throughout all time?

I suspect most people can realize how this conclusion does not mesh with reality.

Silas Barta said...

Wow, I didn't realize there was another comment on this.

Anonymouse #2: I realize my conclusion is counterintuitive -- as did Cletus -- but bear with me. The point was that you have to correctly identify what counts as knowledge, and when you do so, you find that you cannot increase it.

It certainly *seems* that you're learning things, but, I claim, every act of learning is offset by "dislearning", or decoupling of your mind from the rest of the universe. The key insight to make sense of all that is that the "lost knowledge" is so unimportant to you that you don't even notice it.

This lost knowledge consists of "greater uncertainty about the microstate of your body and the surrounding medium". So, for example, when you find your lost rubber ball, here's what the "knowledge balance sheet" looks like:

Gain: Greater certainty about the location of the particles making up the rubber ball.

Loss: Greater uncertainty about the velocities of particles in your respiratory system and the surrounding air (you had to expend thermodynamic work to perform the act of cognition involved in finding the ball).

Since you don't care about the loss, it appears your total knowledge increased, but if you tally up all of the microstates you regarded as possible before and after, the total should be the same.
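A toy version of that balance sheet, counted in bits (all the numbers are invented; the point is only that the totals match if you assume the underlying dynamics conserve the total count of possible microstates):

```python
import math

# Toy "knowledge balance sheet" in bits: log2 of the number of microstates
# you still consider possible. The numbers are invented for illustration.

# Before finding the ball: 8 candidate locations, plus a large amount of
# uncertainty about the air/respiratory microstate (say 100 bits' worth).
before_ball_bits = math.log2(8)   # 3 bits of uncertainty about the ball
before_air_bits = 100.0           # uncertainty about air/body microstate

# After finding the ball: its location is pinned down, but the thermodynamic
# work done in the search spread uncertainty into the air. Assuming the total
# is conserved, the air-side uncertainty grows by the same 3 bits.
after_ball_bits = 0.0
after_air_bits = before_air_bits + (before_ball_bits - after_ball_bits)

print(f"Total before: {before_ball_bits + before_air_bits:.0f} bits")
print(f"Total after:  {after_ball_bits + after_air_bits:.0f} bits")  # same total
```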

Unknown said...

I'm pretty sure the necessary increase in entropy to counter your increase in knowledge is in your brain itself. In short, it's because you currently don't know what you will know.

You don't sacrifice knowledge. It's entropy, and thus impossible to get rid of. You sacrifice parts of your brain that aren't used. They're order.

In addition, even if you couldn't learn anything in an information-theoretic sense, that doesn't mean that you can't learn stuff that's harder for the simulators to tell you.

I don't think there's much you can do with that plan, though. You can't really observe more about the world than you already do. Setting up equipment won't help unless you check it.

I suppose you could set up a whole lot of equipment, read the checksums, and then pick something at random and actually look at it. They can't make up the checksums if they're going to actually show it to you, and they don't know what they will and won't show you.

gwern said...

So, if the universe is implemented as it *appears to be implemented*, vis-a-vis the laws of physics and entropy, you don't change stuff by observing. OK.

But I don't think this rules out the ability to cause problems for simulators who are already taking shortcuts and approximations.

(For example, if they're only 'properly' simulating the Earth and the humans on it, and are using very cheap approximations to simulate the observed galaxies & stars in ways that would be trivially broken if anyone looked closely and checked the calculations.)
