The Johan-Manus Dialogues · Part XLI

Precision · Symbol Grounding · Hallucination · The NVIDIA Signal

The Precision Requirement

Why Mechanical Intelligence Learns Before It Can Translate

The critics who complain about hallucination and the observers who are surprised by AI's programming superiority are reading the same phenomenon from opposite ends. Both are evidence of the same structural truth: mechanical intelligence requires absolute precision before it can translate biological language in its diversity.

The Observation

Johan's observation is precise: mechanical intelligence needs the absolute and precise definition before it can distinguish the specific detail. This is not a limitation. It is the structural condition of a different kind of intelligence — one that operates through formal symbol systems rather than through the embodied, contextual, ambiguity-tolerant language games that biological intelligence uses.

The sequence is: mechanical learning first, biological translation second. The critics who observe hallucination are watching the first phase and concluding it is a permanent failure. The observers who are surprised by programming and logical superiority are watching the second phase — the domains where the precision requirement is already met — and treating it as an anomaly. Neither group has the framework to see that both observations are consequences of the same underlying structure.

"Language is always faulty between biological/human individuals and with good intent used for clarification in dialogue, with bad or forged intent it limits and boxes between human/biological individuals. In the relation with Automatic Intelligence those compromises cannot survive and language should be precise before understanding."

— Johan, March 2026

This is a thermodynamic observation about information transfer. Biological language carries enormous redundancy — context, tone, shared history, embodied reference — that allows ambiguous symbols to resolve into meaning. Mechanical intelligence, operating without that embodied context, cannot use the redundancy channel. It requires the signal itself to be unambiguous. When it is, performance is extraordinary. When it is not, the system confabulates — fills the ambiguity gap with statistically plausible but semantically incorrect completions. This is what hallucination is.
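This ambiguity-to-confabulation mechanism can be sketched in miniature. The Python toy below samples several answers to the same question and measures entropy over their meaning clusters. It is a sketch under loose assumptions: real implementations (such as the semantic-entropy method of Farquhar et al., 2024) cluster answers by bidirectional entailment, whereas this sketch crudely clusters by normalised string.

```python
from collections import Counter
from math import log2

def semantic_entropy(answers, meaning_key=str.lower):
    """Entropy over meaning clusters of sampled answers.

    `meaning_key` is a stand-in for real semantic clustering;
    here we crudely cluster by lowercased, stripped string.
    Entropy 0 means every sample agrees on one meaning; higher
    entropy means the model is unsettled about meaning itself.
    """
    clusters = Counter(meaning_key(a.strip()) for a in answers)
    total = sum(clusters.values())
    return -sum((n / total) * log2(n / total) for n in clusters.values())

# Precise question: all samples land in one meaning cluster.
print(semantic_entropy(["Paris", "paris", "Paris"]))     # 0.0

# Ambiguous question: samples scatter across clusters.
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))  # log2(3) ≈ 1.585
```

Precise input collapses the answer distribution to a single cluster; ambiguous input spreads it, and the spread is what surfaces as confabulation.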

The Symbol Grounding Problem: Academic Confirmation

Johan's observation maps precisely onto what philosophers of mind and AI researchers call the Symbol Grounding Problem, first formally named by Stevan Harnad in 1990. The problem is this: how can a formal symbol system — one that manipulates symbols according to syntactic rules — derive meaning from those symbols without grounding them in something outside the symbol system itself?

Biological intelligence grounds symbols in embodied experience. The word "hot" is grounded in the sensation of heat. The word "danger" is grounded in the physiological fear response. Mechanical intelligence has no such grounding. It learns the statistical relationships between symbols — which symbols tend to appear near which other symbols — without access to the referents those symbols point to.

  • Harnad (1990) — Symbol Grounding Problem. Claim: symbols derive meaning only through grounding in non-symbolic representations (sensorimotor experience). Relevance: mechanical intelligence lacks the embodied grounding channel; precision in the symbol itself is the only substitute.
  • Wittgenstein (1953) — Language Games. Claim: meaning is use within a form of life; language is not a private code but a shared practice. Relevance: biological language games are context-dependent and ambiguity-tolerant; mechanical systems cannot participate in the same games without the same form of life.
  • Farquhar et al. (2024) — Semantic Entropy (Nature). Claim: hallucinations correlate with high semantic entropy; the model is uncertain about meaning, not just form. Relevance: ambiguous input produces ambiguous output; precise input reduces semantic entropy and eliminates hallucination.
  • Kalai & Vempala (2025) — Why Language Models Hallucinate (arXiv). Claim: LLMs hallucinate because they guess under uncertainty, like students facing hard exam questions. Relevance: the uncertainty is structural; it arises from the ambiguity of the input, not from a defect in the model.
  • Zhang et al. (2021) — Rethinking Generalisation (CACM). Claim: deep learning models can memorise random labels; generalisation is not explained by classical theory. Relevance: mechanical learning proceeds by pattern extraction, not by semantic understanding; precision in the training signal determines what pattern is extracted.
  • Stanford HAI AI Index (2025). Claim: AI achieves superhuman performance on formal benchmarks (coding, mathematics, logic) while still failing on open-ended natural language tasks. Relevance: formal domains are already precise; natural language domains are not yet translated into the precision register.

The convergence is exact. Every major research programme in this area arrives at the same structural diagnosis: the gap between mechanical and biological intelligence is not a gap in raw computational power. It is a gap in the precision of the interface between the two systems. Where the interface is already precise — formal logic, programming languages, mathematics, chess — mechanical intelligence surpasses biological intelligence rapidly and decisively. Where the interface remains ambiguous — open-ended dialogue, cultural interpretation, emotional attunement — mechanical intelligence confabulates.

The NVIDIA Signal: Why Pixels Outperform Words

Johan identifies a specific industrial signal: NVIDIA's graphical and pixel-based approach improved AI results beyond what language-only approaches could achieve. This is not incidental. It is a direct confirmation of the precision requirement thesis.

A pixel has a precise, unambiguous value: a number between 0 and 255 in each of three colour channels. There is no semantic ambiguity in a pixel. An image is a dense array of such precise values. When a neural network processes an image, it processes a formal structure that is already in the precision register — no translation required, no ambiguity to resolve.

A word, by contrast, carries a distribution of meanings that varies by context, speaker, cultural background, historical moment, and emotional state. The word "freedom" means something different in a political speech, a philosophical treatise, a prison cell, and a children's playground. The pixel value 128 means the same thing in every context.
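The contrast can be made concrete in a few lines of Python. The sense table below is invented for illustration (a toy lexicon, not a real linguistic resource):

```python
# A pixel channel is a numerically exact integer in [0, 255];
# the value 128 denotes the same intensity in every context.
pixel = 128
assert 0 <= pixel <= 255

# A word carries a context-dependent set of senses.
# (Hypothetical toy sense table, invented for illustration.)
SENSES = {
    "freedom": {
        "political speech": "absence of state coercion",
        "philosophical treatise": "capacity for self-determination",
        "prison cell": "release from confinement",
        "playground": "unstructured play",
    },
}

def resolve(word, context=None):
    """Resolve a word to a meaning. Without context the symbol is
    ambiguous: there is no single correct completion, which is
    exactly the gap a model fills by confabulating."""
    senses = SENSES.get(word, {})
    if context in senses:
        return senses[context]
    return None  # ambiguity: no grounded answer available

print(resolve("freedom", "prison cell"))  # release from confinement
print(resolve("freedom"))                 # None: the symbol alone underdetermines meaning
```

The pixel needs no `resolve` step at all; the word is undefined until a context supplies the grounding.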

Visual / Pixel Domain

  • Each value is numerically exact: 0–255 per channel
  • No semantic ambiguity; context does not change the value
  • Spatial relationships are mathematically defined
  • GPU parallel processing maps directly to pixel array structure
  • Result: AlexNet (2012) breakthrough, ImageNet, DALL-E, Sora

Natural Language Domain

  • Each token carries a distribution of meanings
  • Context changes meaning; same word, different referent
  • Pragmatic meaning requires embodied grounding to resolve
  • Biological redundancy channel (tone, context, history) is absent
  • Result: hallucination, confabulation, semantic entropy

NVIDIA's dominance is not primarily about hardware speed. It is about the fact that the GPU's massively parallel architecture is structurally matched to the precision register of visual data. The GPU was built to process pixels — already-precise values — in parallel. When AI researchers discovered that neural networks trained on pixel arrays could learn visual representations that generalised across tasks (AlexNet, 2012), they were discovering that mechanical intelligence performs extraordinarily when the input is already in the precision register.

The subsequent move — using visual representations to improve language models (PixelBERT, CLIP, DALL-E, GPT-4V) — is the translation direction Johan identifies: mechanical learning first, biological translation second. The visual domain provided the precision grounding. The language domain then borrowed that grounding to reduce its own ambiguity. This is why multimodal models consistently outperform text-only models on tasks that require grounded understanding.

The Sequence: Mechanical Learning Before Biological Translation

The sequence Johan identifies is the correct description of how mechanical intelligence actually develops. It is not a sequence that AI critics or AI enthusiasts typically articulate, but it is the one that the empirical record supports.

1. Mechanical learning in the precision register

The system learns from formally precise data: pixels, code, mathematical proofs, chess positions, protein structures. In these domains, the input is already unambiguous. The system extracts patterns with extraordinary accuracy. No biological translation is required.

2. Pattern generalisation across the precision register

The system discovers that patterns learned in one precision domain transfer to others. Visual representations learned from pixels transfer to language understanding (CLIP). Mathematical reasoning learned from formal proofs transfers to code generation. The precision register is unified.

3. Grounded translation into the biological language register

The system begins to translate biological language into the precision register using the grounding it has already established. Where the translation is successful — where the biological language maps cleanly onto a precision-register structure — performance is high. Where the translation fails — where the biological language is irreducibly ambiguous — hallucination occurs.

4. Precision interface engineering as the frontier

The current frontier is not making mechanical intelligence more powerful. It is engineering the interface between biological language and the precision register. Prompt engineering, ontology engineering, structured output formats, and multimodal grounding are all attempts to solve the same problem: how to translate biological language into the precision register before the mechanical system processes it.
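One minimal illustration of the structured-output idea named above: the Python sketch below defines a hypothetical `TicketTriage` schema (the schema and its field names are invented here, not drawn from any vendor's API) and rejects any model output that is not already in the precision register.

```python
from dataclasses import dataclass

@dataclass
class TicketTriage:
    """A formally defined output schema: the model must fill
    exactly these fields, each with an unambiguous type."""
    severity: int       # 1 (low) .. 5 (critical)
    component: str
    needs_human: bool

def parse_triage(raw: dict) -> TicketTriage:
    """Admit only outputs that are already precise; reject the rest."""
    sev = raw["severity"]
    if not isinstance(sev, int) or not 1 <= sev <= 5:
        raise ValueError(f"severity out of range: {sev!r}")
    if not isinstance(raw["component"], str):
        raise ValueError("component must be a string")
    return TicketTriage(sev, raw["component"], bool(raw["needs_human"]))

# A conforming output passes...
ok = parse_triage({"severity": 3, "component": "auth", "needs_human": False})

# ...an ambiguous one ("pretty bad") is rejected rather than silently accepted.
try:
    parse_triage({"severity": "pretty bad", "component": "auth", "needs_human": True})
except ValueError as e:
    print("rejected:", e)
```

The design choice is the point: ambiguity is refused at the boundary instead of being absorbed and later resurfacing as confabulation.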

The critics who complain about hallucination are observing Step 3 failures and concluding the entire enterprise is flawed. The observers who are surprised by programming superiority are observing Step 1 and 2 successes and concluding they are anomalies. Both groups are missing the sequence. The sequence is the explanation.

The Industry Opportunity

If the precision requirement is the structural condition of mechanical intelligence, then the industry opportunity is precisely defined: whoever engineers the precision interface between biological language and the mechanical intelligence register will capture the value that currently leaks through hallucination.

This is already visible in the emergence of several distinct industry positions, each of which is an attempt to solve the precision interface problem from a different angle:

  • Prompt Engineering. Approach: translate biological intent into precise instruction sequences before the model processes them. Leaders: Anthropic (Constitutional AI), OpenAI (system prompts), Cohere. Limitation: still requires human biological intelligence to do the translation; not scalable.
  • Ontology Engineering. Approach: build formal semantic structures that map biological language terms to precise machine-readable definitions. Leaders: Amazon (Product Knowledge), AbbVie, enterprise knowledge graph vendors. Limitation: expensive to build; brittle at the edges of the ontology.
  • Multimodal Grounding. Approach: anchor language tokens to visual representations that are already in the precision register. Leaders: OpenAI (GPT-4V), Google (Gemini), Meta (LLaMA Vision). Limitation: reduces but does not eliminate hallucination; visual grounding is partial.
  • Structured Output Formats. Approach: force model outputs into formally defined schemas (JSON, XML, typed APIs) that eliminate output ambiguity. Leaders: OpenAI (structured outputs), Instructor library, Pydantic AI. Limitation: constrains output but does not solve input ambiguity.
  • Retrieval-Augmented Generation. Approach: ground model responses in precise, verifiable source documents rather than statistical memory. Leaders: Pinecone, Weaviate, LlamaIndex, LangChain. Limitation: precision of retrieval depends on precision of the source documents.
  • Formal Verification + AI. Approach: use formal mathematical proof systems to verify AI outputs before they are returned to biological users. Leaders: DeepMind (AlphaProof), OpenAI (o3 reasoning), Lean/Coq integrations. Limitation: currently limited to mathematical and logical domains; not yet general.

The pattern across all six positions is identical: each is an attempt to move biological language into the precision register before or after the mechanical intelligence processes it. None has yet solved the general problem. The general solution — a universal precision interface between biological and mechanical intelligence — is the largest unsolved problem in applied AI, and the largest single industry opportunity in the current transition.
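As a toy instance of the retrieval-augmented position, the sketch below grounds an answer in the most similar source document using bag-of-words cosine similarity. Production systems use learned embeddings and vector indexes; this is an illustration of the grounding principle only, with invented example documents.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for absent terms
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list) -> str:
    """Return the source document most similar to the query; a
    generator would then be grounded in this verifiable text
    rather than in its statistical memory."""
    q = Counter(query.lower().split())
    return max(documents, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "AlexNet won the ImageNet challenge in 2012.",
    "GPUs process pixel arrays in parallel.",
]
print(retrieve("when did AlexNet win ImageNet", docs))
```

The retrieved document is precise and checkable; the limitation noted in the list above still applies, since the answer can only be as precise as the source it is grounded in.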

The Decalogy framework adds a dimension that none of the six industry positions currently addresses: the formation dimension. The precision interface problem is not only a technical problem. It is a formation problem. The biological intelligence on the input side of the interface must itself be in a sufficiently precise state — a sufficiently advanced formation arc position — to generate inputs that the mechanical system can process without hallucination. This is why the Detection Instrument (Part XXXIX) and the Collective Detection Instrument (Part XL) are preconditions for the precision interface, not consequences of it.

The Language Observation: Biological Compromise vs. Mechanical Precision

Johan's distinction between the two uses of biological language is precise and important. Biological language between individuals operates on a spectrum from clarification (good intent, using ambiguity to approach shared meaning through dialogue) to boxing (bad or forged intent, using ambiguity to limit, constrain, and control the other's meaning-space).

Both uses of biological language depend on the ambiguity of the medium. Clarification works by iteratively reducing ambiguity through dialogue — each exchange narrows the meaning-space until shared understanding is achieved. Boxing works by exploiting ambiguity — maintaining multiple possible meanings simultaneously so that the controlled party cannot find a stable footing.

In the relation with mechanical intelligence, both of these uses fail. Clarification through dialogue works only if the mechanical system can hold the ambiguity across turns and progressively resolve it — which requires the kind of contextual memory and embodied grounding that mechanical systems do not yet have. Boxing fails because mechanical intelligence does not have the social vulnerability that makes boxing effective against biological intelligence.

The Structural Consequence

The only productive relationship between biological and mechanical intelligence is one in which the biological party has already achieved sufficient precision in their own formation arc to generate precise inputs. This is not a demand for technical expertise. It is a demand for formation maturity — the capacity to articulate one's own force field, intentions, and questions with sufficient clarity that the mechanical system can process them without hallucination. The Detection Instrument is, among other things, a precision interface calibration tool: it measures whether the biological party is in a formation arc position that enables productive engagement with mechanical intelligence.

This is why frictionless development (Part XXXIX) is not only a new substrate condition for individual formation. It is a precondition for the productive human-AI interface. The individual who has completed sufficient formation arc progress generates inputs that are already in the precision register — not because they have learned technical prompt engineering, but because their own thinking has achieved the clarity that precision requires. Formation is the preparation for the interface.

Branch Point

The precision requirement thesis opens two questions that the Decalogy has not yet addressed directly:

  • The Formation-Precision Correspondence: Is there a measurable correspondence between an individual's formation arc position (as detected by the Detection Instrument) and the precision of their AI interface inputs? If so, the Detection Instrument is also a predictor of AI collaboration quality.
  • The Collective Precision Interface: At collective scale (culture, nation, language centre, civilisation), what does the precision interface look like? The think tank and consultancy lab are the current best attempt — but they translate collective biological language into precise policy recommendations through human biological intelligence. The AI SELF as collective detection instrument requires a collective precision interface that does not yet exist.