And I'll bet you're relieved!
Maybe a little background is in order.
A question of interest to philosophers and theoretical physicists is whether or not the universe is just a simulation running on some computer, one level up. (See e.g. Nick Bostrom's
Simulation Argument.) Of course, many ridicule this idea as being non-falsifiable and thus non-scientific.
Not so fast, I said: of
course it's falsifiable. Here's how: if the universe is a simulation, then its programmers probably try to economize on computational resources (computing cycles, memory, disk space, time, etc.). And to do that, they will make the program reveal to "us" (the conscious entities) only the minimum required to make everything appear "believable". That, in turn, means that as long as we "wouldn't know the difference" if some physical process developed in a way contradicting the rest of our observations, the simulator won't bother to churn through the calculations needed to make the process match up with known universal laws. In other words: "If we're not looking, why bother making sure something's there?"
And that tells us how to test the Simulation Hypothesis: have everyone set up as much measurement equipment as they can, and thereby observe as much as they can. This will force the simulator to do many more calculations than it would otherwise have to, since now it has to stay consistent with that many more observations. The programmers then have to devote an ever-increasing amount of resources to keep it running, which will eventually force them to "cut corners" in implementing the laws of physics, revealing violations of Standard Model physics, or ... um, make them pull the plug on our existence.
Hence, my "plan to destroy the universe".
Now, the good news: the plan wouldn't work, based on what we already know about how the universe would react to such a "hypermeasurement" scenario! And the reason is shocking:
because we can't actually increase our total knowledge.
"What in the hay-ll? I did me some book-larnin' not but three yurs ago!"
Sorry, that was Cletus, our resident country bumpkin.
Well, I'll need some more background now to justify that claim. First, I want to point you to a post on OvercomingBias.com that introduced me to a lot of what I'll discuss here:
Engines of Cognition.
Now, consider the
2nd law of thermodynamics. There are many ways to express it, but a simple one is: "The amount of disorder ('entropy') in the universe must always increase." Sure, you can increase the order in any one specific place -- say, when you form crystals -- but it will always be counterbalanced by an increase in disorder somewhere else. The most common application of this law is in heat engines (such as the one in your car): when you burn fuel to turn your engine and thus your tires, you are extracting a kind of order: the useful mechanical "work" (as it is called in physics) of a spinning engine. However, to do so, you burn fuel and transfer heat to the environment, which, when tabulated, generates more entropy/disorder than the order you extracted from the system as mechanical work.
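If you like, you can check that bookkeeping with a toy calculation. The numbers below are made up purely for illustration (they aren't from any real engine), but the accounting is the standard one: entropy Q/T leaves the hot side, Q/T enters the cold side, and the total change comes out positive:

```python
# Entropy bookkeeping for a toy heat engine (illustrative numbers only)
Q_hot  = 1000.0     # heat drawn from the burning fuel, joules
T_hot  = 600.0      # combustion temperature, kelvin
W      = 300.0      # useful mechanical work extracted, joules
T_cold = 300.0      # environment temperature, kelvin
Q_cold = Q_hot - W  # waste heat dumped to the environment

dS_hot  = -Q_hot / T_hot     # entropy removed from the hot reservoir
dS_cold = +Q_cold / T_cold   # entropy added to the environment
total   = dS_hot + dS_cold   # net entropy change of the universe

print(total)  # positive, as the 2nd Law demands
```

The work W is the "order" you walked away with; the positive `total` is the extra disorder you paid for it.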
Now, here's the kicker: there are deep parallels between the concept of entropy in thermodynamics and the concept called "entropy" in information theory. In the latter, it refers (roughly) to the uncertainty one has about the content of a message before reading it. Any knowledge that some kinds of messages are more likely than others reduces that "entropy"; correspondingly, entropy is at a maximum when all messages are equally likely.
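For the curious, the information-theoretic entropy I'm gesturing at has an exact formula (Shannon's), and a few lines of Python show exactly the behavior I just described:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two equally likely messages: maximum uncertainty -> 1 full bit.
print(shannon_entropy([0.5, 0.5]))

# Knowing one message is far more likely reduces the entropy:
print(shannon_entropy([0.9, 0.1]))  # about 0.47 bits

# A message you can predict with certainty carries no uncertainty at all:
print(shannon_entropy([1.0]))       # 0.0
```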
And the truly mind-blowing part is that the connection between the two kinds of entropy is so deep that entropy in the information-theoretic sense
affects entropy in the thermodynamic sense. (This is going somewhere, just be patient.) In short, if you are able to reduce your uncertainty (information-theoretic entropy) about the "message" contained in the molecules of a system, that knowledge can actually be exploited to reduce the thermodynamic entropy of the system and thereby extract useful work! (For reference, an early exploration of this idea is the
Maxwell's Demon thought experiment, and a hypothetical engine that extracts work this way is the Szilard engine.)
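The Szilard engine even puts a number on it: one bit of knowledge (say, which half of the box a single gas molecule is in) lets you extract at most k_B * T * ln(2) of work. That's the standard textbook formula, sketched here:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def szilard_work_per_bit(temperature_kelvin):
    """Maximum work extractable from one bit of knowledge about a
    single-molecule gas (Szilard engine): W = k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), one bit buys you only a few zeptojoules:
print(szilard_work_per_bit(300.0))  # about 2.87e-21 J
```

Minuscule, but nonzero: information really does convert into work.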
But this hypothetical capability of decreasing the entropy of a system does not actually contradict the 2nd Law, which, you'll remember, says that total entropy must increase. Rather, for reasons I won't go into, this acquisition of knowledge is
itself limited by the 2nd Law. Just as the extraction of "organized" mechanical work from fuel requires the generation, somewhere else, of at least as much counterbalancing disorganization, so too does the collection of information
that could permit extraction of the same work without the fuel require a counterbalancing
loss of information somewhere else, i.e., increased uncertainty.
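The usual way this accounting is made precise is Landauer's principle: erasing the demon's one bit of memory dissipates at least k_B * T * ln(2) of heat, which (at best) exactly cancels the work that bit bought you. As a toy ledger, using best-case numbers purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # temperature of demon and gas, K

bit_energy = K_B * T * math.log(2)

# The demon's best possible cycle, per measured bit:
work_extracted = bit_energy  # Szilard engine: at most k_B*T*ln(2) out
erasure_cost   = bit_energy  # Landauer: at least k_B*T*ln(2) to reset
                             # the demon's memory for the next cycle

net = work_extracted - erasure_cost
print(net)  # never positive: the 2nd Law survives the demon
```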
This principle reveals a fundamental limit that your brain (in a deep sense, a "cognitive engine") faces: in order to learn something true about your environment (whether via the senses or by inference), you must sacrifice knowledge somewhere else. Fortunately, nothing requires you to care much about that lost knowledge, which takes the form of "lost certainty about aggregate statistical properties of thermodynamic variables".
Now, back to the main point: from the perspective of hypothetical beings running the universe's simulator, my idea to gather more measurements has no impact. Any time we make a measurement, we are gathering knowledge, which must therefore correspond to lost knowledge somewhere else. So, far from threatening the computer's ability to simulate our universe, all our measurements will (amazingly)
decrease the computational resources the simulator requires.
Which neatly returns the Simulation Hypothesis to non-falsifiability, and assures us that even if people acted on my idea, we're still safe and sound. Alternatively, it reveals the universe's programmers to be really, really clever :-)