Optimization Principle
The Logic

The Simulation Argument, Step by Step


Could someone build a universe? Not metaphorically. Actually build one, the way you build a computer. Nick Bostrom asked a version of this in 2003, and it changed philosophy. If the answer is yes, that single fact leads somewhere strange. Each step below follows from the one before it; break any single link and the chain fails.

Step 1: can you build a universe?

This one's actually solid. We already do it.

We already create simulated worlds vastly larger than the hardware running them, on consumer devices. AI systems already live inside classical simulations. These simulations only render what's being observed. Quantum mechanics does the same thing: unobserved properties remain in superposition rather than committing to definite values. The universe doesn't compute what nobody is measuring.

That's classical simulation. Quantum simulation would extend it to full physics, and it's active research right now. You don't need to build a physical universe out of atoms. You just need compute. And the universe already provides that. See Universe Creation for the full picture.

Step 2: would anyone ever try?

Think about where computing was 50 years ago. Now think about where it'll be in 500 years. Or 5,000. Or a million.

The argument only needs ONE civilization, anywhere, at any point in the entire history of the cosmos, to attempt this. Not in our universe. Not in our galaxy. Not in our species. Just once, across all of existence. That's a low bar.

The counterargument: maybe every civilization destroys itself before it gets there. That's possible. But "every civilization in all of existence self-destructs" is a strong claim, too. You'd need a universal filter that no civilization in any reality ever escapes.

Step 3: then the numbers take over

Here's where it gets weird.

Say one civilization builds a self-optimizing universe. That universe, if it's any good at optimizing, eventually produces its own civilizations. Some of them build their own universes. Those produce more civilizations. More universes. On and on.

If each created universe produces even two more that eventually do the same thing, you get an explosion. After 100 levels of doubling: roughly 10^30 created universes. After 1,000 levels: odds of roughly 10^300 to one against the original. One original, trillions upon trillions of copies.

So if you're picking a random conscious experience to find yourself in, the odds are overwhelming that it's inside a created universe, not the original.
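The branching arithmetic above is easy to check directly. A minimal sketch, where the branching factor of two is the article's illustrative assumption, not a measured quantity:

```python
def created_universes(levels: int, branching: int = 2) -> int:
    """Total created universes after `levels` generations of branching.

    Geometric series: b + b^2 + ... + b^levels.
    """
    return sum(branching ** k for k in range(1, levels + 1))

# After 100 levels of doubling: roughly 10^30 created universes.
after_100 = created_universes(100)
print(f"after 100 levels: ~10^{len(str(after_100)) - 1} created universes")

# A randomly chosen universe is the original with odds of about 1 in 10^30.
total = 1 + after_100  # the original plus all its descendants
print(f"chance of being the original: 1 in ~10^{len(str(total)) - 1}")
```

With 1,000 levels instead of 100, the same sum lands near 10^301, matching the "10^300 to one" figure in the text.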

The cascade is already partially demonstrated (see Universe Creation for the step-by-step chain). The cascade doesn't require proving consciousness in simulated entities. It requires entities inside simulations to optimize and create, which they already do. See How Deep Does It Go? for the numbers and Consciousness for why the consciousness question is definitional, not empirical.

Step 4: what's it for?

Here's where it gets interesting. If we're in a created universe, what's it created FOR? (For the full seven-fork thought experiment version of this question, see Can You Build a God?.)

Two independent arguments point to optimization.

The engineering argument. Among all possible simulation designs, self-optimizing beats everything else. A self-optimizing universe explores all possibilities, selects the best outcomes, and gets better at getting better. Whatever your original purpose (science, entertainment, resource production), a reality engine achieves it better than a fixed-purpose design.

It's the difference between a calculator and a general-purpose computer. Nobody who can build a computer chooses the calculator. The rational engineering move is always "build the thing that solves all problems, including the ones you haven't thought of yet." The AI industry already accepts this logic for superintelligence: how much would you spend on a system that makes all other tools redundant? Everything.

Think about what a controlled universe would look like: a creator who wants obedient beings has no need for 120 orders of magnitude of fine-tuning precision, dark energy, vast empty space, or billions of years of evolution. A controlled universe is just Boltzmann brains or video game NPCs. Effortless to build. No optimization structure needed. Our universe looks nothing like that.

The observer-counting argument. Even if some simulations are built for other purposes, the self-optimizing ones dominate the count. A self-optimizing universe produces civilizations that create more universes (the cascade from Step 3). An entertainment simulation or a controlled experiment does not. Over time, self-optimizing universes contain exponentially more observers than fixed-purpose ones. By the same observer-counting math that powers Step 3, you're overwhelmingly likely to be inside the type that produces the most observers: the self-optimizing kind.
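The observer-counting gap can be made concrete with a toy comparison. All numbers here are illustrative assumptions: a fixed-purpose simulation hosts one universe and never spawns successors, while a self-optimizing one doubles each generation:

```python
# Toy observer-counting comparison (every number is an assumption
# chosen for illustration, not a claim about actual simulations).
generations = 50
observers_per_universe = 10 ** 9  # assumed population per universe

# Fixed-purpose simulation: a single universe, no successors.
fixed_total = observers_per_universe

# Self-optimizing simulation: universes double each generation,
# so the count across all generations is 2^0 + 2^1 + ... + 2^50.
self_opt_universes = sum(2 ** k for k in range(generations + 1))
self_opt_total = self_opt_universes * observers_per_universe

ratio = self_opt_total / fixed_total
print(f"self-optimizing observers outnumber fixed-purpose by ~{ratio:.2e}x")
```

Even with these modest made-up numbers, the replicating design outweighs the fixed one by fifteen orders of magnitude, which is the whole force of the observer-counting move.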

The empirical check. If we're in a self-optimizing universe, everything should serve optimization. The test: find one phenomenon that doesn't fit. One thing, anywhere, that can't be explained as optimizing the process of optimization itself. Nobody's found one yet. See Why 100% for the full falsification protocol and the counterexample challenge if you want to try.

Steps 5-7: what follows

If steps 1-4 hold, three consequences follow:

If someone built this universe to optimize, they'd probably keep an eye on how it's doing. Not constant intervention (that defeats the purpose of letting the system explore), but selection pressure built into the rules. See Safety Systems.

Under the transactional interpretation of quantum mechanics, the way things turn out in the future reaches back and influences which present becomes real. You end up on the best available path, not because something guides you there, but because the physics selects for it.

And suffering? It's an emergent property of biology, the same way evolution is. Organisms with pain outcompete organisms without it. The system minimizes suffering to the functional level (homeostasis), not to zero. Nobody is being punished. See Suffering & Struggle.

Where the chain can break

Each step has something that would kill it:

Step 1 dies if someone proves a simulation can never produce consciousness, or that there's a fundamental limit to what computation can do. Nobody's proved either.

Step 2 dies if every civilization in all of existence destroys itself before reaching this capability. Possible, but a very strong claim.

Step 3 weakens if created universes produce fewer than one successor each. But we already exhibit the recursive property: we create simulations that help create more simulations. The chain is already happening at the classical level.

Step 4 dies if someone finds ONE counterexample to the optimization claim under the fixed falsification constraints. The challenge is open.

Step 1 is the strongest link. We already build simulated universes. We already exhibit the start of the cascade. Try to break any link.

The universe is structured for optimization and you're part of that process. Your highest-impact move: creating things that create better things. Builders building better builders.

Try to Break This

Steel-manned objections: strongest counterarguments first.

That's the point. Each link is falsifiable. Simulation is already demonstrated. The cascade is already happening. Try to break a specific link.

Pascal's Wager says "believe because the payoff is better if you're right." This chain says "examine each step on its evidence and see if any fails." It doesn't ask for belief. It asks for falsification attempts. If you can't break any link, the conclusion follows regardless of whether you'd prefer it to be true.