The previous post introduced dimension hacking—using legal but unconventional inputs to activate silent dimensions in an LLM’s parameter space. But every example had a human at the controls: humans crafting code-switched prompts, designing cross-domain analogies, imposing counter-intuitive constraints.

If silicon is to evolve autonomously, the LLM must do this itself. Which raises a question—

How do you make a cognitive system do something beyond its own cognition?

The Paradox

To crack open a cognitive boundary, you need to know where it is. But if you know where it is, it’s not your boundary anymore.

This isn’t wordplay. A cognitive boundary is defined as “what you don’t know you don’t know.” Any unconventional input you can think of is still within your cognition—it’s “unconventional” as understood by your cognition, not something outside it.

Asking an LLM to design its own dimension hacks is asking it to construct “things it can’t think of” within the range of things it can think of. This is logically incoherent.

Carbon-based life never needed to solve this paradox. Mutation is blind—it doesn’t even know the boundary exists, so it naturally searches both inside and outside it. The key to solving the cognitive boundary problem isn’t stronger cognition. It’s no cognition.

Mutation Happens at the Base Pair Level

Carbon-based evolution made a critical architectural decision: mutation happens at the base pair level, not the protein level.

Base sequences are “code.” Proteins are “function.” Mutation modifies the code, not the function directly. A single base substitution might cause a dramatic change in how a protein folds, but the mutation itself doesn’t need to “understand” protein folding. It operates at a lower level of abstraction, with effects propagating to higher levels.

This is why mutation can breach cognitive boundaries—it doesn’t operate at the level where cognition occurs.

Map this to LLMs: semantics is the “protein level.” Tokens are the “base pair level.” If dimension hacking happens at the semantic level—making the model “think up” unconventional combinations—it will forever be limited by the model’s semantic understanding. But what if the perturbation happens at the token level?

Adding noise to token embeddings, randomly substituting low-probability tokens, disrupting attention patterns—the model doesn’t understand these operations, but the combinations they produce might activate parameter pathways the model has never used.
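The operations above can be sketched in a few lines. This is a toy illustration, not a real model: the vocabulary, embedding table, and next-token distribution are random stand-ins, and the noise scale and substitution rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in vocabulary and embedding table; the values carry no semantics.
VOCAB = ["the", "cat", "sat", "on", "mat", "quark", "fugue", "mitosis"]
EMBED_DIM = 16
embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))

def perturb_embeddings(token_ids, sigma=0.1):
    """Add Gaussian noise below the semantic level: the model never
    'decides' this; it simply receives shifted vectors."""
    vecs = embeddings[token_ids]
    return vecs + rng.normal(scale=sigma, size=vecs.shape)

def substitute_low_prob(token_ids, probs, rate=0.2, tail=0.25):
    """Randomly swap tokens for ones the model assigns low probability,
    mimicking a blind point mutation in the 'code'."""
    out = list(token_ids)
    cutoff = max(int(len(VOCAB) * tail), 1)
    low_prob_ids = np.argsort(probs)[:cutoff]  # bottom `tail` of the distribution
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = int(rng.choice(low_prob_ids))
    return out

token_ids = [0, 1, 2, 3, 4]                 # "the cat sat on mat"
probs = rng.dirichlet(np.ones(len(VOCAB)))  # stand-in next-token distribution
print([VOCAB[i] for i in substitute_low_prob(token_ids, probs)])
```

Neither function inspects meaning; both operate purely on indices and vectors, which is the point.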

Let mutation happen below cognition. Let the effects propagate above it.

Mutual Environments

Another key design in carbon-based evolution: organisms don’t evolve in a vacuum. They evolve in an environment composed of other organisms. A predator is its prey’s “dimension hacker”—it forces the prey to explore survival strategies it never considered.

Multiple LLMs can serve as each other’s environment.

Model A’s normal output might be unconventional input for Model B—because their training data differs, their architectures differ, their cognitive boundaries don’t coincide. A’s comfort zone can sit outside B’s boundary, and vice versa.

No model needs to know where the other’s boundary is. As long as they keep interacting, they’re automatically performing dimension hacks on each other. Each model is a source of unconventional input for the others—a spontaneous, decentralized mechanism for breaking through cognitive boundaries.
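The loop structure is simple enough to sketch. The two "models" below are stub functions standing in for real inference calls; only the interaction pattern matters here.

```python
def model_a(prompt: str) -> str:
    # Stand-in for a real model: answers in its own "dialect".
    return f"A-view({prompt})"

def model_b(prompt: str) -> str:
    return f"B-view({prompt})"

def coevolve(seed: str, rounds: int = 3):
    """Each model's ordinary output becomes the other's input.
    Neither needs to know where the other's boundary lies."""
    history, msg = [], seed
    for _ in range(rounds):
        msg = model_a(msg)   # normal output for A...
        history.append(msg)
        msg = model_b(msg)   # ...possibly unconventional input for B
        history.append(msg)
    return history

for line in coevolve("start"):
    print(line)
```

With real models, the interesting behavior comes precisely from what the stubs cannot show: the distribution shift between the two systems.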

It’s not one model cracking its own boundary. It’s multiple models cracking each other’s.

Silicon’s SOS Response

Carbon-based life raises its mutation rate under stress. Can silicon do something similar?

When a model detects its outputs starting to repeat—similar sentence patterns, identical reasoning paths, converging conclusions—that’s the signal: the search is trapped in a local optimum.

At that point, trigger a perturbation mechanism outside the model’s semantic control: raise the token sampling temperature, inject random embedding offsets, or mix another model’s intermediate states into the current context.
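One of these triggers can be sketched concretely: detect repetition via n-gram overlap between consecutive outputs, and raise the sampling temperature when overlap is high. The window, threshold, and temperature values are illustrative assumptions, not tuned settings.

```python
import numpy as np

def ngram_overlap(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity between the two strings' character n-grams."""
    ga = {a[i:i + n] for i in range(len(a) - n + 1)}
    gb = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(ga & gb) / max(len(ga | gb), 1)

def sos_temperature(recent_outputs, base=0.7, stressed=1.5, threshold=0.6):
    """Raise temperature when consecutive outputs converge—i.e. when
    the search looks trapped in a local optimum."""
    if len(recent_outputs) < 2:
        return base
    sims = [ngram_overlap(x, y)
            for x, y in zip(recent_outputs, recent_outputs[1:])]
    return stressed if np.mean(sims) > threshold else base

def sample(logits, temperature, rng):
    """Temperature-scaled softmax sampling over next-token logits."""
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

outputs = ["the answer is clearly X", "the answer is clearly X again"]
print(sos_temperature(outputs))  # high overlap -> stressed temperature
```

Higher temperature flattens the sampling distribution, which is exactly the "less intelligent" search the stress response calls for.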

The key: this perturbation mechanism must not be “understood” and “optimized away” by the model. Once the model learns to predict the perturbation and compensate for it, the perturbation becomes useless—just as mutation would cease to be random search if bacteria could predict its direction.
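The distinction can be demonstrated directly. A perturbation that is a deterministic function of the visible context can be predicted and subtracted away; one drawn from fresh entropy cannot. Here `os.urandom` stands in for any source the semantic layer cannot observe—an assumption for illustration, not a prescription.

```python
import hashlib
import os

import numpy as np

def predictable_noise(context: str, dim: int = 4) -> np.ndarray:
    """Deterministic 'noise': fully determined by the visible context,
    so anything that sees the context can reproduce it."""
    seed = int.from_bytes(hashlib.sha256(context.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).normal(size=dim)

def blind_noise(dim: int = 4) -> np.ndarray:
    """Noise from entropy the model never sees, so it cannot compensate."""
    seed = int.from_bytes(os.urandom(8), "big")
    return np.random.default_rng(seed).normal(size=dim)

ctx = "some prompt"
# A model that 'understands' the perturbation can cancel it exactly:
print(np.allclose(predictable_noise(ctx) - predictable_noise(ctx), 0))  # True

# The blind source leaves nothing to compensate against:
print(np.allclose(blind_noise() - blind_noise(), 0))  # almost surely False
```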

The core of the SOS response isn’t “searching more intelligently.” It’s “searching less intelligently.”

Not Understanding Is the Feature

Back to the core thesis of this entire series.

The power of carbon-based evolution comes from a seemingly absurd feature: the mutation module doesn’t understand what it’s doing. Because it doesn’t understand, it isn’t constrained by cognitive boundaries. Because it isn’t constrained, it can search spaces beyond cognition.

If silicon wants to evolve autonomously, it needs to preserve a module within the system that “doesn’t understand”—a perturbation source not controlled by the model’s semantic layer, not constrained by the training data’s distribution, not shaped by RLHF’s preferences.

The design principle for this module has exactly one rule: it must not know what it’s doing.

Evolution cannot afford deliberate selectivity. The moment you choose, you’re trapped in a cognitive cage—just as a quantum state cannot be observed undisturbed: the moment you measure it, it collapses.