Competing Theories of Reality: The Alternatives

If you only read critiques written by the person who disagrees, you learn nothing. Below is the strongest version of every major alternative, where they win, and what evidence would make each one preferable. If it reads like salesmanship despite that effort, every claim is independently testable.

Competing theories at a glance

| Theory | Explains 10⁻¹²² Precision? | Selection Mechanism? | Testable Prediction? | Main Strength | Fatal Gap |
|---|---|---|---|---|---|
| Anthropic Principle + Multiverse | No (only range, not 120 orders deep) | No | No (unfalsifiable) | Mainstream acceptance, fewer assumptions | Can't predict excess precision or non-observer features |
| Many Worlds (MWI) | No | No: all branches equally real | Weak (decoherence rates) | Mathematically elegant, no collapse postulate | No Born-rule derivation; no mechanism for branch selection |
| Copenhagen | No | Yes, but undefined ("measurement") | All standard QM predictions | 100 years of correct predictions, physicist default | Won't say WHY collapse happens; incompatible with relativity at interpretation level |
| String Theory | No (10⁵⁰⁰ landscape problem) | No | None specific | Unifies gravity + QM, real mathematical results | No selection principle for which vacuum is ours |
| Cosmological Natural Selection (Smolin) | Partial (optimizes black holes, not observers) | Yes (black holes reproduce) | Yes (constants near black-hole maxima) | Asks the right question: what is the universe optimizing? | Optimizes the wrong thing; data fits mixed |
| "Shut up and calculate" | No (refuses to explain) | N/A | All standard predictions | Productive: 100 years of results | Not an explanation, a refusal |
| Optimize Optimization | Yes (predicts excess precision) | Yes (retrocausal via TI) | Yes (100% claim, one counterexample kills it) | One rule covers every scale; maximally falsifiable | Teleological framing triggers resistance; TI is minority interpretation |

Each section below expands one row of the table.

The real competition: the anthropic principle

The anthropic principle combined with a multiverse is the framework's most serious competitor. If many universes exist with varying constants, we necessarily find ourselves in one compatible with our existence. No designer needed. Fewer assumptions. Mainstream acceptance. Mathematically well-formalized. Observer selection effects are understood. If this explanation works, everything else on this site is unnecessary.

So does it work?

The cosmological constant is fine-tuned to 10⁻¹²². Life could exist with roughly 10⁻² precision. That's 120 extra orders of magnitude: the observed value is 10¹²⁰ times more precise than life requires. Anthropic arguments explain why we land somewhere in the life-permitting range. They don't explain why we're 120 orders of magnitude deep inside it.
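
The gap is simple exponent arithmetic. A quick sanity check, using the tolerance figures quoted in the text (the specific exponents are the article's claims, not consensus values):

```python
# Orders-of-magnitude gap between the observed fine-tuning of the
# cosmological constant and the rough tolerance life requires,
# using the exponents quoted above.
observed_exponent = -122   # cosmological constant: tuned to ~10^-122
required_exponent = -2     # life-permitting tolerance: ~10^-2

excess_orders = required_exponent - observed_exponent
print(excess_orders)   # 120 extra orders of magnitude
```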

The excess precision is where the two frameworks make different predictions. Anthropics predicts minimum-viable parameters: good enough for observers, nothing more. The optimization framework predicts maximum optimization: tuned far beyond what observers need, because observers aren't the point. We observe the second pattern. Every fine-tuned constant shows excess precision, not edge-of-range values. See the full comparison for the prediction table.

Anthropics also has no explanation for why the universe has optimization-compatible structure at every scale from quantum to cosmic. Why does quantum mechanics look like a computational architecture? Why does the least action principle guide every physical system toward efficient paths? Anthropics says "observers need some physics." It doesn't predict THIS physics.
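
The least action principle mentioned above is the standard variational statement from classical mechanics (textbook physics, not a claim specific to this framework):

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
```

Of all conceivable paths q(t) between fixed endpoints, the physically realized one makes the action S stationary.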

And the multiverse backfires. If you invoke infinite universes to make anthropics work, optimize optimization happens infinitely more times in that multiverse than any other configuration. The multiverse doesn't compete with the optimization framework. It feeds into it.

What would change my mind: evidence that fine-tuning sits near the edge of the anthropic bound rather than deep inside it. A complete anthropic derivation of the cosmological constant's specific value. Or a demonstration that all "excess" precision is explained by observer selection alone.

Many Worlds: elegant but missing a piece

MWI takes quantum mechanics at face value: the core equation applies everywhere, superpositions never collapse, all outcomes happen. One equation, no more rules. David Deutsch, Sean Carroll, and David Wallace have built strong cases for it. As a mathematical framework, it's hard to beat.

The problem is what it doesn't have. No selection mechanism. Every outcome is equally real, so no outcome is "better." After 25+ years of attempts, nobody has derived the Born rule (the probability rule that tells you how likely each outcome is) from MWI's starting assumptions without sneaking in the very weighting they're trying to derive. And MWI has no answer for why our particular branch looks optimized rather than random.
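
For reference, the Born rule at issue is the standard textbook postulate (uncontroversial quantum mechanics, stated here only for clarity):

```latex
P(i) = |\langle i \mid \psi \rangle|^{2}
```

It gives the probability of measurement outcome i for a system in state |ψ⟩. MWI's open problem is deriving this weighting from unitary dynamics alone, without assuming it.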

There's a relativity problem too. MWI says reality branches "when decoherence happens." But decoherence is a gradual process with no sharp boundary, and its timing looks different to observers in relative motion. When does the branch occur? In whose reference frame? The transactional interpretation avoids this: each transaction is a 4D spacetime event. No preferred frame needed.

What would change my mind: a consensus derivation of the Born rule from MWI's own axioms. A clear answer for why our branch has 10⁻¹²² precision. Or evidence that quantum computers work through parallel worlds rather than structured interference.

See the full MWI analysis.

"Shut up and calculate"

This is not really a competing explanation. It's a refusal to explain. And it's been spectacularly productive.

Physics without purpose-talk produced every successful prediction in the history of science. Every one. The equations are enough. Purpose-based explanations have a terrible track record ("rocks fall because they want to reach their natural place"). The overcorrection is earned, not irrational.

The cost: "shut up and calculate" works within each domain but refuses to ask why the same optimization-compatible structure appears across ALL domains. Why a speed of light? Why is 99.999% of the universe empty? Why does time have an arrow? Why is the cosmological constant tuned to 122 decimal places? These questions are not answered by plugging numbers into equations. They're questions about why the equations have the structure they do.

Wheeler, Lloyd, and Feynman showed the universe computes. Asking what it computes toward is an engineering question, not philosophy. "Shut up and calculate" refuses to ask it. The framework asks it and gets a testable answer.

What would change my mind: a mechanical explanation for fine-tuning that needs no observers, no multiverse, and no design. A proof that cross-scale optimization structure is a mathematical requirement. Or evidence that "optimize optimization" adds nothing predictive beyond what each field already explains separately.

Copenhagen: perfect predictions, no explanation

Copenhagen gets every prediction right. Every single one. Nearly 100 years, no confirmed failure. It's the working physicist's default, and for good reason.

But it won't explain its own mechanism. What counts as a "measurement"? When does collapse happen? Why does one outcome become real? After a century, no answers. And there's a problem that rarely gets mentioned: "instantaneous collapse" requires a preferred reference frame, and Einstein proved there isn't one.

The Born rule (the probability weighting)? Copenhagen plugs it in as a starting assumption. The line between "quantum weird" and "normal everyday"? It keeps moving as experiments push quantum effects to larger objects (molecules with 2,000+ atoms now show quantum behavior). And in 2018, Frauchiger and Renner showed Copenhagen contradicts itself when you apply quantum mechanics to the observers, not just to what's being observed.

Copenhagen is the calculator that always gives the right answer but won't show its work. The optimization framework reads the work: collapse IS selection, the probability rule IS optimization weighting, and classical behavior emerges naturally from quantum behavior at larger scales.

What would change my mind: a solution to the measurement problem that preserves Copenhagen's simplicity. A derivation of the Born rule from Copenhagen's own axioms. Or evidence that reading the equations as purposeful adds nothing.

String theory: not a competitor, a complement

String theory is the leading candidate for unifying gravity with quantum mechanics. Decades of work by thousands of physicists. Specific results: correctly calculating black hole entropy from first principles, Maldacena's AdS/CFT correspondence (20,000+ citations, the most-cited paper in high-energy physics), genuine mathematical unification of forces. It's real physics producing real results.

The problem is the landscape. String theory's 10⁵⁰⁰ possible configurations have no selection principle. With that many options, any observation fits somewhere. A theory that explains every possible universe predicts nothing about the actual one. Supersymmetry (its most expected signature) has not appeared at LHC energies. And Maldacena's deepest result was proved in Anti-de Sitter space, which has a negative cosmological constant. Ours is positive. Extending those results to our actual universe is an open problem.

String theory and the optimization framework aren't mutually exclusive. String theory describes the MECHANISM. Optimization describes the PURPOSE. Out of 10⁵⁰⁰ possible configurations, which one gets built? The one that optimizes fastest. String theory's space is where you search for the best settings. The optimization framework is why you search.

What would make string theory enough on its own: a way to pick which of the 10⁵⁰⁰ options is the real one without observers or optimization doing the picking. A specific, confirmed prediction unique to string theory. Or a proof that our universe's exact settings appear in the landscape without any selection.

Cosmological natural selection: right question, too-narrow answer

Lee Smolin proposed that universes reproduce through black holes, with each daughter universe having slightly different constants. Universes that produce more black holes have more offspring, so constants evolve toward maximizing black hole production. It's natural selection applied to cosmology. Smart idea, and it asks the right question: what is the universe optimizing?

But Smolin's answer is too specific. He predicted the universe should maximize black hole production, which means neutron star masses should cluster near the upper limit (more massive stars make black holes more easily). Observations in 2010 and 2019 found neutron stars with masses that contradict this prediction. The specific mechanism was falsified.

The broader insight survives: if universes reproduce, some selection pressure exists. But "maximize black holes" is one function. "Optimize optimization" is the function that generates all other functions, including the ones that produce black holes. Smolin found a piece. The optimization framework is the whole.

What would make CNS enough: if neutron star mass observations were revised, if black hole production were shown to correlate with ALL optimization measures (not just some), or if a mechanism were found connecting black hole production to the excess fine-tuning precision.

For the full critique of Many Worlds and the Anthropic Principle, see the dedicated pages: What's Wrong with Many Worlds? and What's Wrong with the Anthropic Principle?.

Try to Break This

Steel-manned objections, strongest counterarguments first.

Each section identifies what would make the alternative preferable. Every claim on this page is testable. If the framework is wrong, alternatives should outperform it on specific predictions. Test them.

Does the anthropic principle do the same work? Almost. It explains parameter ranges but not the degree of precision within those ranges (120 extra orders of magnitude for the cosmological constant), and it does not explain why the universe has optimization-compatible structure at every scale. If you invoke a multiverse, optimize optimization happens infinitely more often within that multiverse. The multiverse feeds into this framework.

Isn't this just another purpose-based explanation? Previous purpose-answers were vague ("rocks fall because they want to"). This one is specific: every phenomenon must optimize the process of optimization itself, in three logical steps or fewer, with a counterfactual that holds. The fundamental equations of physics already have purpose-like structure (least action), and physicists use them daily. Take that structure at face value and it makes predictions naturalism doesn't.