Token Equality: An Illusion of Fairness
“Knowledge democratization” is probably one of the most exciting narratives of the past two years.
The logic is simple: LLMs give everyone access to expert-level knowledge. Tokens are getting cheaper. APIs are open to all. Information barriers have been torn down. The conclusion seems obvious – the gap between people should be narrowing.
It’s a great story. Unfortunately, it’s wrong.
Every “Democratization” Creates New Inequality
When the internet appeared, people said information had been democratized. What happened? Information overload turned most people into passive consumers fed by algorithms, while a few became the architects of those algorithms.
When search engines appeared, people said knowledge had been democratized. What happened? The same Google – some used it to look up celebrity gossip, others to trace citation chains in academic papers. The gap didn’t shrink; it was amplified by differences in search ability.
When smartphones appeared, people said computing had been democratized. What happened? Everyone carries a supercomputer in their pocket. Most use it to scroll short videos. A few use it to build business empires.
The pattern has been clear all along: once the tool layer is leveled, competition shifts up to the user’s cognitive layer – and the cognitive gap is far wider than the tool gap.
LLMs will be no exception.
Tokens Are Horsepower, Not the Steering Wheel
Tokens have gotten cheaper. That’s a fact. But what’s cheap is compute, not judgment.
One person tells Claude “write me a proposal.” Another tells the same Claude “given these three constraints, do a trade-off analysis between these two directions, output the decision rationale and risk assessment.” They’re using the same model, consuming roughly the same tokens, but the outputs are from two completely different worlds.
The gap isn’t in tokens. It’s in the ability to wield them.
This ability isn’t something you acquire by “learning to use AI tools.” At its core, it asks: can you precisely define a problem, decompose the layers of intent, judge the quality of output, and know when to push further and when to stop and think for yourself? Before AI, these were called “professional competence.” After AI, they didn’t become obsolete – they became the only lever.
The Birth of the Digital Peasant
I use “digital peasant” to describe a new identity that’s taking shape.
“Peasant” isn’t a pejorative. Agricultural-age peasants worked hard, but their output was locked by variables they couldn’t control – land, climate, landlords. They didn’t lack strength. They lacked control over the means of production.
Digital peasants are the same. They use AI every day. They look busy, producing a lot – articles, images, workflows. But their output is locked by someone else’s prompt templates and someone else’s workflow designs. They don’t lack tokens. They lack control over intent.
The characteristics are obvious:
Defined by the Tool’s Capability Boundary
Whatever the tool can do, they do. AI can generate articles, so they generate articles. AI can generate images, so they generate images. They never flip the question: what problem am I actually trying to solve? Is AI even the best path?
Trapped in the “Efficiency Illusion”
Generate 20 pieces of content with AI in a day. Feels like explosive productivity. But none of it went through deep thought. None of it compounds. A high-speed assembly line producing nothing but disposables.
Treating AI Output as the Endpoint
They take AI’s answer and use it directly – no verification, no follow-up questions, no iteration. They’ve essentially outsourced their judgment to the model – and the model isn’t accountable for their decisions.
The Digital Elite’s Leverage
At the other end, the digital elite are pulling ahead at a disproportionate rate.
The same tokens produce compound returns in their hands. A good prompt isn’t just one conversation – it’s a reusable thinking template. A deep collaboration session with AI doesn’t just produce one result – it distills a methodology.
The fundamental difference between digital elites and digital peasants isn’t whether they use AI, but who is defining intent and who is being defined by it.
Elites use AI to amplify their existing cognitive advantages – they know where they’re going; AI helps them get there faster. Peasants use AI to fill cognitive gaps – they don’t know where they’re going, so wherever AI points, they follow.
The former rides the horse. The latter gets dragged by it. Both are moving, but one is choosing direction while the other drifts with the current.
The Truly Scarce Resource
After token equality, what becomes scarce?
Not knowledge – LLMs can give you knowledge in any domain. Not skills – AI can execute most operations for you. Not information – the internet solved information access long ago.
What’s scarce is the precision of intent.
How precisely you can define what you want determines what you can get from AI. That precision comes from your depth of understanding of the problem domain, your sensitivity to constraints, your standards for judging output quality. There’s no shortcut. No amount of cheap tokens can buy it.
A doctor using AI for diagnostic assistance can judge whether AI’s suggestions are reasonable, because twenty years of clinical experience back the precision of their intent. A person with no medical background using the same AI for consultation can only passively accept the output – they don’t even have a coordinate system for judging right from wrong.
Tokens have been democratized, but the precision of intent hasn’t. It’s a projection of a person’s entire accumulated cognition.
The Danger of the Equality Narrative
The most dangerous thing about the equality narrative isn’t that it’s wrong – it’s that it makes people drop their guard.
“AI will make everyone stronger” – this line makes people feel that just by using AI, they’re automatically on the right side of history. So some stop learning deeply, because “AI knows everything anyway.” Some abandon independent thinking, because “AI thinks better than I do.” Some stop honing their ability to define problems, because “AI understands what I mean.”
This is precisely the starting point of digital peasantification.
Every moment you surrender thinking, you shrink the boundary of your ability to wield AI. Every time you accept output without judgment, you solidify your identity as a digital peasant. The more powerful AI becomes, the more irreversible this process becomes – because you increasingly can’t tell what you’re losing.
The Divergence Has Already Begun
This isn’t a prediction about the future. It’s happening now.
In engineering, people who use AI for code completion are everywhere, but those who can use AI Agents to build complete development pipelines are rare. The gap isn’t in whether you use AI, but in the granularity – sentence-level or system-level.
In business, people using AI to write marketing copy are already saturated, but those who can use AI to build decision frameworks, conduct competitive analysis, and optimize pricing strategies remain scarce. The gap isn’t in AI’s capability, but in whether the user knows what to ask AI to do.
In education, plenty of parents use AI to help kids with homework, but few can use AI to design personalized learning paths and guide children in building thinking frameworks. Same tool, different understanding, two different worlds.
And this divergence isn’t linear – it’s exponential.
Why? Because AI usage has a compounding effect. Someone who learns to build a knowledge graph with AI today can do deeper analysis on it tomorrow, turn that analysis into a decision framework the day after, and use that framework to train their own Agent the day after that. Each step’s output feeds into the next. Capability snowballs.
Digital peasants have no snowball. Their AI usage is flat – generate a piece of copy today, another piece tomorrow, another the day after. Each use is isolated. No accumulation. No flywheel.
One step ahead means every step ahead. The gap between first movers and latecomers doesn’t grow arithmetically – it grows geometrically.
What makes it even more brutal: once this gap opens, it’s nearly impossible to close. Not because the tools have barriers – tokens are available to anyone. But because first movers have already built a fleet of Agents running 24/7/365, plus self-evolving systems. These Agents don’t sleep, don’t take vacations, don’t slack off. While you’re scrolling short videos, they’re scanning markets, cleaning code, optimizing strategies, finding opportunities for their owners. And the system itself keeps learning – each run smarter than the last.
Latecomers don’t face a single step of “learn to use AI.” They face an entire system that’s already running autonomously. You’re still learning how to write prompts; someone else’s Agent cluster has already iterated thousands of cycles. Every day you delay taking AI seriously, the distance you need to cover grows. And first movers aren’t waiting – their systems accelerate for them, even while they sleep.
Token equality doesn’t bridge the divide – it installs an accelerator on each side. One side accelerates upward. The other accelerates downward.
Given the same tools, some till the soil, some build airplanes.
The question was never whether the tool is good enough. It’s whether the person holding it knows what they want to build.
And now, even the window for figuring that out is closing.
- Blog Link: https://johnsonlee.io/2026/03/28/token-equality-illusion.en/
- Copyright Declaration: Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
