
Hassabis, Levin, Sutskever: Who Else Is Seeing This


Demis Hassabis said it directly in his Lex Fridman interview: "I think information is primary. Information is the most fundamental unit of the universe, more fundamental than energy and matter." A Nobel laureate, honored for his AI work, saying the universe is built out of information. Nobody asked him to support an optimization framework. He was talking about what AlphaFold's success tells us about reality.

Three scientists from three different fields, none of them collaborating, each arriving at a piece of the same picture. Hassabis (AI) showed reality has learnable structure. Michael Levin (biology) documented intelligence operating at every biological scale. Ilya Sutskever (AI safety) converged on embodied identity as the key to alignment. They were solving their own problems and kept running into the same walls.

Demis Hassabis: the universe is learnable

Co-founder of DeepMind, Nobel laureate (Chemistry 2024) for AlphaFold. In his Lex Fridman Podcast appearance (#475, Nov 2025), Hassabis laid out a position that should have gotten more attention.

His core claim: information is more fundamental than energy and matter. Not a metaphor. Literally primary. He said it directly:

"I think information is primary. Information is the most fundamental unit of the universe, more fundamental than energy and matter."

— Demis Hassabis, Lex Fridman Podcast #475, November 2025

What does AlphaFold's success actually tell us about reality? Proteins can fold into an astronomical number of possible configurations, yet AlphaFold predicts their structures by learning the patterns in known examples. That only works if the patterns are actually there to learn. Random structures can't be predicted by any AI, no matter how powerful.
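
The point generalizes beyond proteins, and a toy sketch makes it concrete (this is an illustration of the principle, not anything like AlphaFold): fit the same learner to a structured target and to pure noise, and only the structured one is predictable on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, size=(n, 5))

# Structured target: a fixed rule (unknown to the learner) maps inputs to outputs.
w_true = np.array([2.0, -1.0, 0.5, 3.0, -2.5])
y_structured = x @ w_true

# Unstructured target: pure noise, no rule to recover.
y_random = rng.normal(size=n)

def holdout_r2(x, y, split=150):
    """Fit least squares on a training split, score R^2 on held-out data."""
    w, *_ = np.linalg.lstsq(x[:split], y[:split], rcond=None)
    pred = x[split:] @ w
    ss_res = np.sum((y[split:] - pred) ** 2)
    ss_tot = np.sum((y[split:] - y[split:].mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_structured = holdout_r2(x, y_structured)
r2_random = holdout_r2(x, y_random)
print(f"structured R^2: {r2_structured:.3f}")  # near 1.0: the pattern is there to learn
print(f"random R^2:     {r2_random:.3f}")      # near or below 0: nothing to learn
```

The learner is identical in both runs; the only difference is whether the data contains structure. That asymmetry is the whole argument in miniature.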

Hassabis noticed something bigger: AI systems trained on completely different datasets tend to converge on similar representations of physical reality. Any lawful universe would produce some convergence, but the degree of it is hard to explain away. If reality lacked consistent, deeply compressible structure, different AI systems would learn less compatible representations. Instead, they keep finding the same patterns, even across different domains and architectures.

He calls it "survival of the stablest": every pattern we observe in nature survived some selection process. He conjectures that any natural pattern can be efficiently discovered by a classical learning algorithm, and proposes a new mathematical category (Learnable Natural Systems) to formalize this idea. He is currently working on the Virtual Cell project, aiming to simulate a complete yeast cell from first principles.

Michael Levin: intelligence is everywhere

Professor at Tufts, pioneer in studying how cells use electrical signals to communicate and how organisms grow into the right shape (Lex Fridman Podcast, 2025).

Levin's experiments are the kind that make you rethink basic assumptions. Cut a flatworm into pieces and each piece regenerates into a complete organism. The pieces "know" what they should become. The pattern information exists beyond the physical tissue. His lab assembled "xenobots" from frog skin cells that developed behaviors never seen in nature and never programmed: self-replication, wound healing, collective behavior.

His theoretical framework matches what his experiments show. His "Cognitive Light Cone" idea says every system has a horizon: the scale of the biggest goal it can work toward. A rock's horizon is molecular (crystal formation). A cell's horizon is chemical (finding food). A brain's horizon is behavioral (planning next week). A civilization's horizon is planetary (climate policy, space travel). His broader argument: some form of intelligence exists at all scales, not just in brains. Every system with memory, goals, and the ability to adapt has some degree of mind.

The connection to the optimization framework: Levin keeps finding goal-directed behavior in places most biologists wouldn't look for it. Individual cells making decisions. Organs self-organizing. Organisms navigating environments with no central controller. This is exactly what "optimize optimization" predicts: optimization at every scale, not just in brains. Levin's experiments are the biological evidence for what the framework claims about every domain.

Ilya Sutskever: alignment requires identity

Co-founder of OpenAI, founder of Safe Superintelligence Inc. (SSI). Dwarkesh Patel Interview, November 26, 2025.

Sutskever's interview revealed a researcher hitting walls that conventional AI thinking can't get past. His core frustration: current AI models are really good at interpolating (filling in gaps between things they've already seen) but bad at genuinely understanding new situations the way humans do. They're pattern-matchers, not thinkers. Something fundamental is missing, and he knows it.

Why would an AI researcher care about emotions? His take is revealing.

"Some kind of value function thing... hardcoded by evolution in some very non-obvious way."

— Ilya Sutskever, Dwarkesh Patel interview, November 26, 2025

That's an AI researcher recognizing that emotions aren't bugs in human cognition. They're optimization signals, built in by evolution for reasons we don't fully understand. He also talked about great researchers having an innate quality he couldn't define, a sense for what's important.

"Detecting beauty and simplicity: ugliness, there's no room for ugliness."

— Ilya Sutskever, on what separates top researchers

He said the current approach to training AI will end. Something radically different is coming.

Sutskever is pointing in the same direction from a different angle: current AI is missing something about how values and understanding actually work. He hasn't explicitly endorsed this framework. But his recognition that emotions are evolved optimization signals, that something beyond pattern-matching is needed, and that external training will end, leads to the same conclusion the framework reaches: alignment has to come from the inside out. Embodied identity, not external constraints.

Why this matters

So what happens when you put these three together? Three researchers from three different fields, none of them collaborating. Hassabis explicitly argued that information is more fundamental than matter and that reality has deeply learnable structure. Levin's experiments show optimization and goal-directed behavior at biological scales where nobody expected to find it. Sutskever sees current AI hitting walls that external training alone can't solve. Each, from their own direction, is pointing at the same thing. The optimization framework unifies what they're seeing.

These three were chosen because their work fits the framework, and that selection is worth admitting. Even so, a Nobel laureate honored for AI, a pioneer in biological intelligence, and the co-founder of OpenAI all pointing the same direction without coordinating is a pattern worth taking seriously.

Researchers who disagree

Not everyone points this direction. Some prominent physicists explicitly argue against purpose in physics:

Sean Carroll argues the universe has no purpose and that teleological reasoning is a cognitive bias, not a feature of reality. His position: physics is complete without purpose, and adding it is unnecessary. The framework's response: Carroll uses the principle of least action daily, which has purpose-like mathematical structure. He treats the purpose-like form as a mathematical convenience. The framework takes it at face value. Carroll's experimental results don't contradict the optimization framework. His philosophical interpretation does.
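
The "purpose-like mathematical structure" in question can be stated in one line. In Lagrangian mechanics, the path a system actually follows is the one that makes the action stationary among all candidate paths, which is what gives the formalism its teleological flavor:

```latex
S[q] = \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\,dt,
\qquad \delta S = 0
```

Whether that variational form is a deep fact about reality or merely a computational convenience is exactly the interpretive dispute at issue here.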

Lawrence Krauss argues the universe came from nothing and requires no designer or purpose. His position: quantum fluctuations can produce universes without any external cause. The framework's response: quantum fluctuations producing universes IS the mechanism. The question is what selects which fluctuations become stable, long-lived universes. "Random nothing" doesn't explain the precision. Krauss's physics is compatible with the framework. His interpretation isn't.

Sabine Hossenfelder argues that naturalness and fine-tuning arguments are misleading heuristics, not evidence for anything. Her position: the constants might just be what they are, with no deeper explanation needed. The framework's response: "no explanation needed" is a philosophical position, not an evidential argument. The precision is real regardless of whether you think it needs explaining.

These disagreements are about interpretation, not data. None of these researchers has findings inconsistent with optimization. They have philosophical positions that reject teleological framing. The framework predicts this resistance (see The Forbidden Question): the primary barrier to engagement is teleophobia, not evidence.

Karl Friston and the free energy principle

Karl Friston's Free Energy Principle is the closest mainstream academic framework to this thesis. His work proposes that all self-organizing systems minimize surprise by building increasingly accurate models of their environment. The math leads to recursive composition at increasing scales and what Friston calls "circular causality," where systems and their environments co-create each other. His 2019 paper "A Free Energy Principle for a Particular Physics" describes something that looks remarkably like optimize optimization expressed in Bayesian mathematics. The difference: Friston frames it as inference. This framework frames it as engineering.
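
Friston's central quantity can be made concrete. As a sketch (notation varies across his papers): for sensory data $s$ and a recognition density $q(\psi)$ over hidden states, the variational free energy decomposes as

```latex
F \;=\; \underbrace{\mathbb{E}_{q(\psi)}\!\left[-\ln p(s \mid \psi)\right]}_{\text{inaccuracy}}
\;+\; \underbrace{D_{\mathrm{KL}}\!\left[q(\psi)\,\middle\|\,p(\psi)\right]}_{\text{complexity}}
\;\geq\; -\ln p(s)
```

Since $F$ upper-bounds the surprise $-\ln p(s)$, any system that minimizes $F$ implicitly minimizes surprise by building an accurate yet simple model of its environment, which is the "minimize surprise" claim above in equation form.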

Try to Break This

Steel-manned objections, strongest counterarguments first.

See the "Researchers Who Disagree" section above. Carroll, Krauss, and Hossenfelder are explicitly engaged. Their experimental findings don't contradict the framework. Their philosophical interpretations do. The distinction matters: the framework claims to explain the same data better, not that other physicists are doing bad experiments.

Convergent evidence from independent sources is the strongest evidence science has. If three people using different instruments all measure the same thing, the measurement is probably real. If each observation stands on its own merits AND they point the same direction, coincidence gets harder to defend.