How Deep Does Simulation Theory Go?
By Eugene Sandugey · 2 min read
If simulated universes create civilizations that create more simulated universes, the originals are outnumbered. At depth 100 with just two successors per level, the ratio is roughly 10^30 to 1. At depth 1000, it's 10^300 to 1. The numbers get absurd fast.
Nick Bostrom (2003) established the basic trilemma: either civilizations almost never reach simulation capability, or they almost never choose to simulate, or we are almost certainly in a simulation. But Bostrom never asked what KIND of simulation. His examples (ancestor sims, entertainment, historical curiosity) are all bounded. They end. The framework asks a further question: among all possible simulations, which type generates the most observers? Self-optimizing universes generate unboundedly more through the recursive cascade. The same counting logic that favors "simulated" over "base reality" also favors "self-optimizing" over "bounded."
This extension adds one assumption Bostrom did not make: that self-optimizing simulations exist and generate more observers. We are already demonstrating this: we create simulations, and AI systems inside those simulations help create more simulations. The cascade is already happening.
The cascade argument
The key variable is R: the average number of child universes per parent that successfully produce their own universe-creating civilizations. Think of it like a reproduction number, the same R from epidemiology that became a household term during COVID. (A short simulation after the three cases below plays out each regime.)
R > 1 (the chain grows): Each universe produces more than one successor, so the total number of created universes explodes. The ratio of simulated-to-base observers becomes overwhelming. This is the strong case.
R = 1 (the chain barely survives): Growth is fragile. Each universe produces exactly one successor on average. The cascade continues but could die out from random bad luck at any point.
R < 1 (the chain dies): Each universe produces fewer than one successor. The process peters out. This is the weakest case for the argument.
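Here is a minimal sketch of those three regimes: a toy branching-process simulation, not a model of real universe creation. Every number in it is illustrative. Each universe gets two attempts at a successor, each attempt succeeds with probability R/2, and we track how often the cascade is still alive after 50 levels.

```python
import random

def offspring(r):
    # Each universe gets two attempts at a successor; each attempt
    # succeeds with probability r / 2, so the mean number of successors is r.
    return (random.random() < r / 2) + (random.random() < r / 2)

def survival_fraction(r, depth=50, trials=5000, cap=10_000):
    # Fraction of cascades still alive after `depth` generations.
    # Populations above `cap` count as survivors so the loop stays fast.
    alive = 0
    for _ in range(trials):
        pop = 1
        for _ in range(depth):
            pop = sum(offspring(r) for _ in range(pop))
            if pop == 0 or pop >= cap:
                break
        alive += pop > 0
    return alive / trials

for r in (0.8, 1.0, 2.0):
    print(f"R = {r}: {survival_fraction(r):.0%} of cascades reach depth 50")
```

In this toy, R below 1 dies out almost every time, R equal to 1 survives only in a small fraction of runs, and R above 1 keeps growing once it gets going. That is the shape of the three cases above.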
The evidence that R is above 1 comes from a chain that is already partially demonstrated (see Universe Creation for the full breakdown): we create internally consistent worlds larger than the hardware running them (demonstrated), AI inside simulations optimizes and creates (demonstrated), AI helps create more simulations (demonstrated), fidelity increases with compute (demonstrated), and the consciousness question is unanswerable for any substrate because nobody has a working definition. The cascade doesn't require consciousness to be proven. It requires entities inside simulations to optimize and create, which they already do.
"But classical simulations aren't real universes." Correct. The claim isn't that Minecraft is a universe. The claim is that every step in the chain from "create an internally consistent world" to "entities inside create their own worlds" is either showed or actively happening. The step from classical to quantum simulation is an engineering challenge, not a physics impossibility (Feynman proposed quantum computing for exactly this reason in 1982). And the argument works at the classical level anyway: AI systems inside classical simulations already help create more simulations.
If R = 2 (two successful universe-creating civilizations per level), the ratio by depth 100 is roughly 2^100, about 10^30 simulated observers per base-reality observer. By depth 1000, it is roughly 10^300 to 1.
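The arithmetic is easy to check; a few illustrative lines that just sum the levels:

```python
import math

def simulated_to_base_ratio(r, depth):
    # Level k of the cascade holds r**k universes, so the cumulative
    # count of created universes is r + r**2 + ... + r**depth.
    return sum(r ** k for k in range(1, depth + 1))

for depth in (100, 1000):
    ratio = simulated_to_base_ratio(2, depth)
    print(f"depth {depth}: ~10^{math.log10(ratio):.0f} created universes per original")
```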
What deep nesting looks like
A first-generation simulation would have rough, minimal physics. Generation 1 gets the basics working. Generation 50 has been refined 50 times. Generation 10,000 has physics polished to absurd precision.
What we observe is the opposite of rough. The cosmological constant is fine-tuned to 10^-122, far more precisely than observers require. Quantum mechanics is structured in ways that suit computation (interference amplifies correct answers, superposition enables parallel search). Consciousness is emerging at a cosmologically important moment for information processing. The four forces each serve specific optimization functions. Error correction operates at every scale from DNA repair to immune systems to quantum decoherence.
This pattern is what deep nesting predicts. Shallow simulations wouldn't bother with 120 orders of magnitude of excess precision. Deep ones, refined across thousands of generations, would.
The bootstrap question
"If everything needs a creator, what created the first universe?" This sounds devastating, but the argument does not require answering it.
The cascade only needs to start once. Ever. Anywhere across all of space and time. There may be infinitely many universes (multiverse theories) and infinite time available (some cosmological models suggest the universe keeps spawning new regions forever). Once started, if each created universe produces more than one successor, the cascade grows exponentially with depth.
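The "only needs to start once" point is ordinary probability: enough independent chances make one success nearly certain. A toy calculation with invented numbers (the per-origin probability and candidate counts are placeholders, not estimates):

```python
def p_cascade_ever_starts(p_per_origin, candidate_origins):
    # Probability that at least one candidate origin, anywhere across
    # space and time, kicks off the cascade.
    return 1 - (1 - p_per_origin) ** candidate_origins

for n in (10**3, 10**6, 10**9):
    print(f"{n:>10} candidate origins at p = 1e-6 each: {p_cascade_ever_starts(1e-6, n):.4f}")
```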
At the classical level, we already demonstrate the recursive property: we create simulations that contain entities (AI systems) that help create more simulations. "Can simulations host conscious observers?" is a question that dissolves on examination. We can't even define what a conscious observer IS, so demanding proof that simulations can host one is demanding proof of an undefined thing. What we CAN say: AI systems inside simulations already optimize, learn, and create. Whether that constitutes "consciousness" hinges on how you define the terms, and no definition has consensus. The cascade math works regardless.
Evolution of the rules
If civilizations create universes with tuned parameters, each generation can discover better rules. Generation 1 gets basic physics and slow optimization. Later generations get refined parameters. The extraordinary precision of our physics (10^-122 cosmological constant) would reflect many iterations of refinement rather than a single lucky draw.
The framework predicts constants were tuned through iterative optimization, not randomly generated and anthropically selected. The two predictions differ: anthropic selection predicts the minimum precision compatible with observers. Iterative optimization predicts excess precision. We observe excess precision. 120 orders of magnitude of it.
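The difference between the two predictions can be made concrete with a toy model. Assume, purely for illustration, that observers need some constant below a threshold; the threshold, improvement factor, and generation count below are invented. Anthropic selection stops at the first compatible draw; iterative refinement keeps tightening long past it:

```python
import random

OBSERVER_THRESHOLD = 1e-3  # illustrative: observers need the constant below this

def anthropic_selection():
    # Draw random universes until one is observer-compatible.
    # The accepted value typically sits just under the threshold.
    while True:
        value = random.uniform(0, 1)
        if value < OBSERVER_THRESHOLD:
            return value

def iterative_refinement(generations=100, improvement=0.8):
    # Each generation of universe-builders tightens the tuning a little,
    # so the value keeps shrinking long after it is merely "good enough".
    value = random.uniform(0, 1)
    for _ in range(generations):
        value *= improvement
    return value

print(f"anthropic selection:  {anthropic_selection():.1e}")    # typically a few times 1e-4
print(f"iterative refinement: {iterative_refinement():.1e}")   # typically around 1e-10
```

One process stops at "good enough for observers"; the other overshoots by orders of magnitude. The claim here is that our constants look like the second case.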
The hardware / firmware / software distinction
Think of a computer. It has hardware (the chip), firmware (the BIOS that boots it up), and software (the programs you run). You wouldn't expect to unify these into one thing. They're different layers by design.
The universe has the same structure. The hardware is whatever runs the computation. The firmware is the physics constants and laws (set at universe creation, like BIOS settings). The software is matter, life, intelligence (programs running on top).
Here's why this matters: physicists have spent decades trying to unify quantum mechanics and gravity into one theory. What if the reason they can't is that these operate at different layers? You wouldn't expect a website's code to compile into circuit diagrams. Quantum mechanics and gravity don't unify because they're not supposed to. They operate at different layers of the stack. This explains a 50-year failure.
Neil deGrasse Tyson has publicly stated he gives "better than 50-50 odds" that the universe is a simulation. Elon Musk put the odds of base reality at "one in billions." These are directional assessments from prominent thinkers, not formal arguments. But they reflect the same intuition the cascade math formalizes: if simulations are possible and recursive, the numbers overwhelm base reality.
How this extends Bostrom
Among all possible simulations, different types have different scopes. Entertainment and research simulations are bounded: they end when the purpose is served. Ancestor simulations replay history: also bounded. Self-optimizing universes have unbounded scope via the cascade. They create beings who create universes who create beings who create universes. (The seven-fork thought experiment walks through this logic step by step.)
The counting logic is the same as Bostrom's, just taken one step further. A bounded simulation (entertainment, ancestor replay) contains a fixed number of conscious beings and then stops. A self-optimizing universe contains conscious beings who create more universes, which contain more conscious beings, who create more universes. The numbers keep multiplying. If you're randomly picking which conscious experience to "be," the self-optimizing universes have produced so many more experiences that the chance of being in a bounded simulation drops toward zero, by the same math Bostrom used.
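The same counting can be sketched one level up, with every count invented for illustration: a fixed pool of bounded simulations against a cascade that keeps multiplying.

```python
def p_in_bounded_sim(depth, bounded_sims=10**6, observers_per_sim=10**9,
                     r=2, observers_per_universe=10**9):
    # Bounded simulations host a fixed pool of observers; the self-optimizing
    # cascade adds observers at every universe on every level.
    bounded_total = bounded_sims * observers_per_sim
    cascade_total = observers_per_universe * sum(r ** k for k in range(1, depth + 1))
    return bounded_total / (bounded_total + cascade_total)

for depth in (20, 40, 60):
    print(f"depth {depth}: P(random observer is in a bounded sim) = {p_in_bounded_sim(depth):.2e}")
```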
If you accept Bostrom's counting argument at all, this is a natural further question: not just WHETHER simulated, but WHAT KIND of simulated.
Bostrom's argument addresses WHETHER. This framework answers WHY: optimize optimization. The reason Bostrom did not ask "toward what purpose" is the same reason mainstream physicists do not ask it: the question sounds teleological, and teleological questions make scientists uncomfortable.
Existing Objections to Bostrom
Several objections to Bostrom's original argument are worth examining under this extension:
Marcelo Gleiser objected that ancestor simulations would be a "colossal waste of time." For ancestor simulations, this is a reasonable criticism. For a self-optimizing universe, it does not apply: a system that solves every problem (including future ones) is not a waste. But Gleiser's point stands against the specific purposes Bostrom proposed.
Sean Carroll objected that "the laws we observe have hidden complexity not used for anything." Under Bostrom's ancestor simulation framing, this is a real puzzle: why simulate unnecessary complexity for a historical replay? Under the optimization framework, this objection dissolves. The "hidden complexity" has function. Quantum mechanics enables computation. Virtual particles maintain vacuum energy. The computational structure of physics is the optimization infrastructure. Carroll's objection works against Bostrom's ancestor simulation. It doesn't work against a self-optimizing universe, because in that case the complexity IS the point: it's the machinery that does the optimizing. Name a feature of physics that has no optimization role. That's the counterexample challenge. Nobody has found one.
Try to Break This
Steel-manned objections, strongest counterarguments first.
"This is unfalsifiable." We cannot directly observe a parent simulation. But we can evaluate whether the universe's features are more consistent with deep optimization than with alternatives. The precision of constants (10^-122), the computational structure of quantum mechanics, and the cross-domain optimization patterns are all observations that can be compared against predictions from different frameworks. Unfalsifiability would mean no observations could ever count for or against the theory. The 100% claim (that every feature of physics serves optimization) makes every phenomenon a potential falsifier.
"The argument assumes its own conclusion." The argument uses Bostrom's observer-counting logic and asks which simulation type generates the most observers. Self-optimizing ones generate unboundedly more through recursive multiplication. This follows from the math, not from assuming the conclusion. We are already demonstrating the cascade: we create simulations containing AI systems that help create more simulations.
"Nobody has actually demonstrated universe creation." We have partial demonstration. We create internally consistent worlds larger than the hardware running them (Minecraft, No Man's Sky). AI inside simulations optimizes and creates. Fidelity increases with compute. Each step from "create a world" to "entities inside create their own worlds" is either demonstrated or actively happening. Whether R currently exceeds 1 at the full universe-creation level is unknown. But the trend line points in one direction, and R only needs to exceed 1 once, anywhere, across all of time.
"Doesn't the answer depend on the reference class you choose?" Yes. The conclusion depends on which reference class you adopt. Under observer-moment sampling (each conscious experience equally likely to be "yours"), the cascade produces overwhelming ratios. Under universe-weighted measures, it may not. This is a genuine open problem in cosmology and philosophy, not specific to this framework. Bostrom's original argument has the same dependency. The structural claim holds across measures: recursive cascades produce more created universes than base realities. Where you probably are depends on which counting method you trust.
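To see how much the counting method matters, here is a toy comparison of two measures. The first weights each level by its universe count (observer-moment sampling); the second is a deliberately steep depth discount, invented for illustration and not a standard cosmological measure:

```python
def p_simulated(weight, depth=100):
    # Level 0 is base reality; levels 1..depth are simulated, with 2 successors per level.
    total = sum(weight(k) for k in range(depth + 1))
    return sum(weight(k) for k in range(1, depth + 1)) / total

# Observer-moment sampling: level k holds 2**k universes, each weighted equally.
print(f"observer-moment sampling: {p_simulated(lambda k: 2.0 ** k):.6f}")

# Depth-discounted measure that down-weights nesting faster than the cascade grows.
print(f"depth-discounted measure: {p_simulated(lambda k: 2.0 ** k * 8.0 ** -k):.2f}")
```

Both measures agree that created universes outnumber originals; they disagree about where a randomly chosen observer probably sits, which is exactly the objection's point.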
Related
How Easy Is It to Build a Universe?
Minecraft creates a world larger than Earth on a phone. Quantum simulation extends this to full physics. You don't need atoms, just compute.
Can You Build a God? Seven Thought-Experiment Forks
Seven forks, two choices each. At every fork, the simpler path converges on optimize optimization. A thought experiment that lands where the universe is.