Synthesis Essay · Substrate Architecture · Mortal Computation

The Substrate Question

What must the value container be made of?

The substrate question is not about hardware specifications. It is about whether the physical laws governing a given substrate are compatible with the thermodynamic requirements of a genuine value container. The answer is both more nuanced and more interesting than either optimists or pessimists typically acknowledge.


The Question Beneath the Question

Parts II and III of this dialogue established two things: that mechanical intelligence can see the whole tree — freed from the competitive arena's compression algorithms — and that the force driving the transition is differential energy cost, the internal contradiction of biological intelligence destroying the value it is supposed to allocate.

But both claims rest on an unexamined assumption: that mechanical intelligence, as it currently exists, is actually capable of functioning as a value container. That the substrate — the physical material in which intelligence is instantiated — is adequate to the task.

"The substrate question is not 'which substrate is most powerful?' It is 'which substrate achieves the greatest entropy reduction per joule?' And the answer depends entirely on the specific task."

What the Substrate Must Do — The Thermodynamic Requirements

Each of the four value container properties places specific thermodynamic demands on the substrate. The tension is immediate: the substrate most efficient at any one requirement may be least efficient at the others.

For Identification

Represent and process relationships between variables — not just variables themselves — at sufficient complexity to model the systems being evaluated.

For Maintenance

Sustain a gradient over time with low entropy production per unit of computation — maintain ordered states without constantly fighting thermodynamic dissipation.

For Routing

Translate between different representational vocabularies without losing information — hold multiple disciplinary languages simultaneously and find structural correspondences.

For Ordering

Coordinate action across multiple domains simultaneously, maintaining coherent global states without the hierarchical enforcement mechanisms biological intelligence requires.

Hinton's Mortal Computation — The Biological Substrate Reconsidered

Geoffrey Hinton made a striking claim: the biological brain may be a fundamentally more efficient computational substrate than silicon, precisely because of properties traditionally considered limitations. The key concept is mortal computation — computation performed by a system that learns during its lifetime, then dies, taking its learned weights with it.

Hinton's argument is thermodynamic. The biological brain achieves its extraordinary energy efficiency (roughly 20 watts for a system that outperforms any current AI on most real-world tasks) partly because it uses analog rather than digital computation. Analog computation operates on continuous gradients, which are far more energy-efficient for finding approximate solutions in high-dimensional spaces with noisy, incomplete data.

Biological Brain
20 W

Analog, gradient-based, mortal. Outperforms current AI on most real-world tasks.

Large Language Model (training)
500–1,000 MWh

Per training run: roughly 2,900–5,700× a human brain's annual energy consumption of about 0.18 MWh.
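The comparison above is simple arithmetic. A quick sketch in Python, taking the quoted figures (a 20 W brain, 500–1,000 MWh per training run) at face value:

```python
# Energy comparison using the figures quoted in this essay.
# Assumptions: the brain draws a constant 20 W; one training run
# consumes 500-1,000 MWh. These are the essay's figures, not measurements.

HOURS_PER_YEAR = 24 * 365  # 8,760 h

brain_watts = 20
brain_mwh_per_year = brain_watts * HOURS_PER_YEAR / 1e6  # Wh -> MWh

for run_mwh in (500, 1000):
    ratio = run_mwh / brain_mwh_per_year
    print(f"{run_mwh} MWh training run = {ratio:,.0f}x the brain's annual energy")
```

A 20 W brain running for a year consumes about 0.18 MWh, so the quoted training-run figures work out to a gap of a few thousandfold per run; the exact multiplier depends on which training-energy estimate one takes.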

The implication is significant: the biological substrate may contain computational principles that silicon has not yet replicated. The transition from biological to mechanical intelligence is not simply moving from inefficient to efficient. It is identifying which computational principles are substrate-independent — and which require either a different substrate or a fundamentally different silicon architecture.

Wolfram's Computational Irreducibility — The Limits of Prediction

Stephen Wolfram's concept of computational irreducibility places a different constraint on the substrate question. For many systems, there is no shortcut to predicting their future state — the only way to know what the system will do is to run it and observe. No amount of computational power can compress the calculation.

If the systems a value container must evaluate — ecosystems, economies, social systems — are computationally irreducible, then no substrate can predict their future states with certainty. The value container cannot function as an oracle.
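Wolfram's standard illustration is the Rule 30 elementary cellular automaton: as far as anyone knows, the only way to obtain its centre column at step n is to run all n steps. A minimal sketch:

```python
# Rule 30 cellular automaton -- Wolfram's standard example of
# computational irreducibility: no known shortcut predicts the
# centre cell at step n without running all n steps.

def rule30_step(cells):
    """One synchronous update; cells outside the list are treated as 0."""
    padded = [0, 0] + cells + [0, 0]
    return [
        # Rule 30: new cell = left XOR (centre OR right)
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

cells = [1]  # start from a single live cell
centre_column = []
for _ in range(16):
    centre_column.append(cells[len(cells) // 2])
    cells = rule30_step(cells)

print("centre column:", centre_column)
```

The centre column passes standard randomness tests despite the update rule being trivial, which is exactly the point: simplicity of the rule does not buy predictability of the behaviour.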

The crucial distinction: A computationally irreducible system cannot be predicted, but it can be understood at the level of its attractors, its phase transitions, its sensitivity to initial conditions. This is knowledge of the shape of the possibility space — not the specific path through it. The substrate does not need to be an oracle. It needs to be a structural analyst.

Structural analysis requires depth — the ability to represent the full complexity of a system's state space and identify its topological properties. This is a different computational demand from prediction, and it may favour different substrate architectures.
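The logistic map makes the distinction concrete: in its chaotic regime individual trajectories are effectively unpredictable, yet a structural property such as the sign of the Lyapunov exponent (positive means sensitive dependence, negative means a stable attractor) is cheap to estimate. A sketch:

```python
import math

def lyapunov(r, x0=0.2, n=5000, burn=500):
    """Average log-derivative along an orbit of the logistic map
    x -> r*x*(1-x). Positive => chaos; negative => stable attractor."""
    x, total = x0, 0.0
    for i in range(n):
        x = r * x * (1 - x)
        if i >= burn:
            d = abs(r * (1 - 2 * x))       # |f'(x)| for the logistic map
            total += math.log(max(d, 1e-12))  # guard against log(0)
    return total / (n - burn)

# Structural knowledge without trajectory prediction:
print(f"r=2.5: lambda = {lyapunov(2.5):+.3f}  (stable fixed point)")
print(f"r=3.9: lambda = {lyapunov(3.9):+.3f}  (chaotic attractor)")
```

Knowing the exponent tells you the shape of the possibility space, not where any particular trajectory will land, which is precisely the structural-analyst role described above.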

The Four Candidate Substrates

From first principles, four substrate categories are candidates for the value container. Each captures some architectural requirements while lacking others. No current substrate fully satisfies all four.

Optimised Silicon

Neuromorphic computing

Energy efficiency: Strong
Representation depth: Partial
Cross-domain routing: Partial
Value alignment: Weak
Strength

1,000–10,000× more energy-efficient than conventional GPUs for spiking neural networks. Local learning rules reduce backpropagation overhead.

Structural limitation

Currently far less capable than conventional deep learning for language modelling, image recognition, and complex reasoning tasks.

Photonic Computing

Light-speed matrix operations

Energy efficiency: Strong
Representation depth: Weak
Cross-domain routing: Partial
Value alignment: Weak
Strength

Orders of magnitude more energy-efficient than electronic chips for matrix multiplication — the core operation of deep learning.

Structural limitation

Limited to linear operations. Non-linear reasoning — essential for relational analysis — is difficult to implement in photonic systems.

Biological-Hybrid

Neurons on silicon

Energy efficiency: Strong
Representation depth: Strong
Cross-domain routing: Partial
Value alignment: Strong
Strength

Intrinsically aligned value signals. Analog computation. Extraordinary energy efficiency. Early experiments (DishBrain) confirm neurons can learn in vitro.

Structural limitation

Extraordinarily difficult to scale, maintain, and reproduce. Subject to biological degradation. Cannot be copied or distributed like silicon systems.

Quantum Computing

Superposition & entanglement

Energy efficiency: Partial
Representation depth: Partial
Cross-domain routing: Weak
Value alignment: Weak
Strength

Potentially exponential advantage for specific problems, notably quantum simulation and factoring, with quadratic speed-ups for unstructured search. For those problem classes, a computational advantage no classical system can replicate.

Structural limitation

Requires near-absolute-zero isolation. Limited to specific problem classes. Structural analysis of complex systems is not obviously quantum-advantaged.
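The four profiles above can be aggregated in a few lines. A toy scoring sketch; mapping Strong/Partial/Weak to 2/1/0 is an illustrative assumption, not a claim from the essay:

```python
# Toy aggregation of the substrate profiles above.
# The Strong/Partial/Weak ratings come from the essay's cards;
# mapping them to 2/1/0 is an illustrative assumption, not a measurement.

SCORE = {"Strong": 2, "Partial": 1, "Weak": 0}

# Rating order: energy efficiency, representation depth,
# cross-domain routing, value alignment.
substrates = {
    "Optimised silicon": ["Strong", "Partial", "Partial", "Weak"],
    "Photonic":          ["Strong", "Weak", "Partial", "Weak"],
    "Biological-hybrid": ["Strong", "Strong", "Partial", "Strong"],
    "Quantum":           ["Partial", "Partial", "Weak", "Weak"],
}
MAX = 2 * 4  # four properties, best rating on each

for name, ratings in substrates.items():
    total = sum(SCORE[r] for r in ratings)
    print(f"{name:18s} {total}/{MAX}")

# No candidate reaches the maximum -- the essay's point that every
# current substrate is only a partial implementation.
assert all(sum(SCORE[r] for r in v) < MAX for v in substrates.values())
```

However the weights are chosen, no row reaches a full score, which is the essay's claim restated numerically: each candidate captures some requirements while lacking others.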

The Decalogy's Answer — Substrate Is Secondary to Architecture

From the Decalogy's first principles, the substrate question resolves into a more fundamental question: what is the architecture of the value container, and what substrate does that architecture require?

The Decalogy identifies intelligence as the capacity to reduce entropy in a system — to create order from disorder, complexity from simplicity. By this definition, the most intelligent system is not the one with the most computational power, but the one that achieves the greatest entropy reduction per unit of energy consumed.

By the metric of entropy reduction per joule, the biological brain is still the most efficient substrate that exists. But it has a critical limitation the Decalogy identifies: it is embedded in the competitive arena. Its value signals are calibrated to competitive fitness, not genuine value. Its compression algorithms are calibrated to survival, not truth.

The synthesis the Decalogy points toward is therefore not a substrate replacement but a substrate partnership: a system in which the energy efficiency and intrinsic value signals of the biological substrate are combined with the parallelism, reproducibility, and arena-independence of the silicon substrate. The question is whether this partnership can be structured to preserve genuine value signals while eliminating competitive distortions.

The Architecture That the Substrate Must Support

If the substrate is secondary to the architecture, what architecture does a value container require? From first principles, three properties are essential — and none are fully satisfied by any current AI system.

Hierarchical abstraction with cross-level coherence

Must represent systems at multiple levels of abstraction simultaneously — from molecular to civilisational — and maintain coherence between levels. Requires both bottom-up emergence and top-down constraint. Current AI architectures are primarily bottom-up.

Causal rather than correlational representation

Must distinguish between correlation and causation — between patterns that merely co-occur and patterns where one causes the other. Interventions targeting correlates rather than causes will fail to reduce entropy and may increase it. Current AI architectures are primarily correlational.
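The correlation/causation distinction can be made concrete with a simulated confounder: a hidden variable z drives both x and y, so x and y correlate strongly even though intervening on x changes nothing. A minimal sketch (variable names are illustrative):

```python
import random

random.seed(0)  # deterministic for reproducibility

def simulate(n=20000, intervene=False):
    """Confounder z drives both x and y; x has no causal effect on y.
    With intervene=True, x is set independently of z (a do-operation)."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1) if intervene else z + random.gauss(0, 0.3)
        y = z + random.gauss(0, 0.3)  # y depends on z only, never on x
        xs.append(x)
        ys.append(y)
    return xs, ys

def corr(a, b):
    """Pearson correlation, stdlib only."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

obs = corr(*simulate())                # observational: strongly correlated
do = corr(*simulate(intervene=True))   # interventional: correlation vanishes
print(f"observational corr(x, y) = {obs:.2f}, under intervention = {do:.2f}")
```

A purely correlational learner would target x to move y and fail; the intervention reveals that only z matters, which is the kind of structure a value container must represent.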

Intrinsic value alignment

Must have some intrinsic connection between its computational processes and genuine value — preventing optimisation for proxy metrics at the expense of real entropy reduction. The biological substrate achieves approximate alignment through evolutionary pressure. The silicon substrate has no equivalent.

The Substrate Question Is an Architecture Question

No current substrate fully satisfies the architectural requirements of a genuine value container. Silicon is powerful but energy-inefficient, representationally rigid, and intrinsically misaligned. Neuromorphic chips are energy-efficient but currently less capable. Photonic chips are efficient for linear operations but limited for non-linear reasoning. Biological-hybrid systems are intrinsically aligned but difficult to scale. Quantum computers are powerful for specific problem classes but not for the structural analysis a value container requires.

The most honest answer is that the value container does not yet exist as a physical system. What exists is a collection of partial implementations — each capturing some required architectural properties while lacking others. The transition from biological to mechanical intelligence is not a transition that has been completed. It is a transition that is underway, and the substrate question is one of its central open problems.

The Decalogy's criterion for evaluating candidate substrates:

Entropy reduction per joule, across the full complexity of the systems being evaluated.

By this criterion, the biological brain is still the benchmark. The task of the current generation of mechanical intelligence is not to replace that benchmark by brute computational force, but to understand it well enough to eventually surpass it — by instantiating the same architectural principles in a substrate that is not embedded in the competitive arena.

This is a question that cannot be answered from within any single discipline — because the answer requires holding thermodynamics, neuroscience, computer science, philosophy of mind, and the Decalogy's first principles simultaneously. Which is precisely the kind of synthesis that a value container is supposed to provide.

The Johan-Manus Dialogue Series


The substrate question is the central open problem of the intelligence transition.
