After publishing “Witnessing the End of the Programmer Profession,” I received a flood of messages. Some asked about technical details, some discussed how to use Claude Code, but the most common question was far more fundamental:

If AI can truly perform at a Staff Engineer level, what can we still do?

I’ve been asking myself the same thing.

The Career Crisis

Let’s start with the anxiety-inducing part.

Take the Graphite project: two days of my own work, done by Claude in one hour, with code quality no worse than what I'd produce myself. And that's just the beginning.

A few days ago I tried another scenario: I had Claude write a slide deck. The prompt was roughly:

In the style of a Wall Street investment bank, write a presentation about XXX for investor pitch

The output – from structure to phrasing to chart recommendations – read like the work of an extremely seasoned analyst.

These two examples cover technical work and business communication – one relies on hard skills, the other on soft skills. AI hit 80–90% on both.

What does this mean?

It means a huge number of “learnable skills” are depreciating. Writing code is learnable, writing decks is learnable, doing data analysis is learnable, writing reports is learnable. These skills were valuable because they had high learning costs and long mastery curves. But when AI can produce a “five years of experience” quality result in seconds, the moat around these skills collapses.

It’s not just programmers. Investment analysts, consultants, product managers, marketing – virtually every white-collar role will be affected.

New Opportunities

But the coin has another side.

Things you never dared to imagine are now imaginable.

I’ve always wanted to build a bytecode-based static analysis framework that can trace data flow and automatically clean up proxies. The idea sat in my head for years because the engineering effort was too large – designing data structures, implementing dataflow analysis, handling edge cases, writing a CLI, writing tests – one person simply couldn’t finish it.

Now? One hour.
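To make "dataflow analysis" concrete: the core of such a framework is a worklist loop run to a fixpoint over a control-flow graph. Here is a minimal sketch of that loop – the names, the CFG shape, and the toy facts are my own illustrative assumptions, not the actual project's design:

```python
from collections import deque

def dataflow_fixpoint(cfg, transfer, join, init):
    """Forward dataflow analysis via a worklist, run to a fixpoint.

    cfg:      dict mapping block id -> list of successor block ids
    transfer: (block, in_facts) -> out_facts for that block
    join:     merges the out-fact sets of all predecessors
    init:     initial fact set assumed at entry blocks
    """
    # Invert the edges so each block knows its predecessors.
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)

    out_facts = {b: set(init) for b in cfg}
    worklist = deque(cfg)
    while worklist:
        b = worklist.popleft()
        in_facts = join([out_facts[p] for p in preds[b]]) if preds[b] else set(init)
        new_out = transfer(b, in_facts)
        if new_out != out_facts[b]:
            out_facts[b] = new_out
            worklist.extend(cfg[b])  # successors must be re-examined
    return out_facts

# Toy usage: propagate which variables have been "seen" along the path.
cfg = {"entry": ["a"], "a": ["b"], "b": []}
gen = {"entry": {"x"}, "a": {"y"}, "b": set()}
result = dataflow_fixpoint(
    cfg,
    transfer=lambda b, ins: ins | gen[b],
    join=lambda sets: set().union(*sets),
    init=set(),
)
print(result["b"])  # both "x" and "y" reach the final block
```

The loop itself fits on one screen; what made the project feel impossible was everything around it – a realistic bytecode-level CFG, edge cases, a CLI, tests – which is exactly the scaffolding an agent grinds through without complaint.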

What used to take a team months can now be handled by one person directing an agent team. I state the requirements, Claude writes code, runs tests, fixes bugs – I just review and course-correct.

This isn’t an efficiency improvement. This is an expansion of the capability boundary.

My son is ten years old. He spent an hour with Claude Code and built a multiplayer online combat game – three character classes, physics-based projectiles, P2P networking. A ten-year-old, one hour, a complete game.

This made me realize something: we can Dream Bigger.

Those seemingly audacious ideas – the ones you used to dismiss with “forget it, too complex, can’t be done” – are worth revisiting now. Not because you got stronger, but because you now have a team on call.

So What Can We Still Do?

AI can do most “learnable things.” What’s left for humans?

My answer: do the things that can’t be learned.

But the follow-up question is immediate: what can’t be learned? How do you know which unlearnables you’re good at?

I have a clue – intuition.

Intuition is subtle. You look at a piece of code and intuition tells you “this is going to cause problems.” You look at a business plan and intuition tells you “this direction is wrong.” You can’t articulate exactly what’s off, but you feel it.

On the surface, intuition is a product of experience – after enough exposure, the brain compresses judgment logic into instant reactions. But here’s the thing: given the same exposure, why do some people develop intuition and others don’t?

I’ve seen engineers with ten years of experience who still read code line by line and can’t say what’s wrong afterward. I’ve seen people with three years who can glance at code and pinpoint the architectural weak spot.

The difference? Talent.

Talent doesn’t make you “perform well” – it makes you “learn fast.” More precisely, talent determines three things:

  1. Perceptual sensitivity: Given the same information, some people are naturally more attuned to certain signals. Some are sensitive to color, others to rhythm, others to structure.
  2. Compression efficiency: The speed of converting experience into reusable intuition varies enormously. Some people need to make the same mistake ten times to learn; others internalize it after one.
  3. Direction: What types of patterns does your brain most readily form intuition around? Some people are sharp on interpersonal dynamics, others on numbers, others on timing, others on structure. This isn’t the result of choice – it’s more like factory settings.

So intuition is really the intersection of talent and experience. Talent determines direction and efficiency; experience fills in the content. Intuition is like photographic developer for talent – it reveals those underlying tendencies through specific scenarios. Whatever you have keen intuition about may well be where your talent lies.

A Pragmatic Strategy

Back to the opening question: what can we still do in the AI era?

Two things.

  1. Dream Bigger: Those ideas you used to think were “impossible” – revisit them. You now have an agent team. Execution is no longer the bottleneck. Your imagination is.
  2. Find where your intuition lives: Observe yourself. On what topics can you make accurate judgments without conscious deliberation? In what scenarios can you see what others miss? That direction may be where you’re least replaceable.

AI can learn anything that can be formalized. It has infinite compute, infinite memory, infinite patience. But what it lacks is direction – it doesn’t know where to go or what to form intuition about.

You do.

Rather than agonizing over being replaced, take a hard look at your own intuitions, then pour your time into them. Make your intuition sharper, deeper, more irreplaceable.

That may be the most pragmatic response for ordinary people in the AI era.