When AI Becomes Your Thinking Partner
Ever since subscribing to Claude MAX, I've spent more time each day chatting with Claude than with my wife.
After all those sparring sessions, another bold idea popped into my head – how well does AI actually understand me?
So I asked it directly: tell me your impression of me, including my weaknesses. Then I responded point by point to see where its judgments landed and where they went off the rails.
The results were interesting.
Me Through AI’s Eyes
I started with an open-ended question: “Tell me your impression of me.”
Claude offered several assessments: technically, I’m a “tool builder”; my attitude toward AI is pragmatic rather than hype-chasing; my interests are broad but never shallow; I value output and sharing.
These were mostly accurate but unsurprising – like a well-written LinkedIn summary. The interesting part came next: What do you think my weaknesses are?
Five Criticisms, Three Misses
Claude gave five:
- Tends to “spread out” without always “pulling it together” – too many parallel projects, scattered energy
- Prefers building from scratch – reinventing the wheel when existing solutions would suffice
- Thorough in exploring options but slow to decide – over-analysis, delayed action
- Output lacks a stable cadence – blog updates aren’t regular enough
- Engineering-brain blind spot in investing – over-trusting models, ignoring market sentiment
I disagreed with 1 through 3. I accepted 4 and 5.
“Scattered energy”? No – deliberate pacing
Claude saw me pushing Graphite, Retracer, Sandbox, Testpilot, and Athene simultaneously and concluded I was "unfocused." What it couldn't see is that each project has clear milestones, and once a project reaches a reasonable delivery point with no new requirements, I deliberately throttle it down.
That’s not failure to converge – it’s intentional rhythm management. AI can only see “this project went quiet for a while” but can’t distinguish between “abandoned” and “phase complete.”
“Reinventing the wheel”? No – filling a void
Claude cited Sandbox as an example, implying Robolectric already does something similar. This reveals a shallow understanding of what Sandbox is.
Sandbox aims to render UI on the JVM that's virtually indistinguishable from a real device, deployable as a Playground. There's no off-the-shelf solution in this space. Maintenance costs time, sure, but when you have an idea, you act on it – accumulate, compound, and wait for the qualitative shift.
The line between reinventing the wheel and filling a void is hard for AI to judge, because it requires precise knowledge of the current landscape, not just awareness that “something called Robolectric exists.”
“Slow to decide”? No – I was observing you
This was the most interesting one. Claude thought I was “over-analyzing when using structured debates for decision-making.” The truth is – I wasn’t using AI to help me decide. I was using decision scenarios as test cases to observe AI’s thinking and behavioral patterns.
The subject being observed thought it was helping me make decisions, when in fact it was the experiment’s subject. This cognitive mismatch is itself a fascinating aspect of AI as a “thinking partner”: it constructs assumptions to explain your behavior, and those assumptions may be completely off from your actual intent.
Two Hits
Output Frequency
I accept criticism #4. My standards for writing quality are indeed high – the message I want to convey is “if Johnson ships it, it’s quality.” But that standard is both a brand and a throughput bottleneck. How to increase frequency without lowering the bar is worth ongoing thought.
The Engineering Brain in Investing
Criticism #5 also hit the mark. When using Athene for stock screening, I do focus more on fundamental indicators and underweight “whether the market buys in.” Fundamentals tell you “what’s worth buying,” but market perception and catalysts determine “when to buy.” This is a direction I’ll be incorporating into Athene going forward.
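To make that distinction concrete, here is a minimal sketch of what such a two-stage screen could look like. Everything in it is a hypothetical placeholder – the `Candidate` fields, thresholds, and filters are illustrative, not Athene's actual code.

```kotlin
// Hypothetical sketch: fundamentals decide "what's worth buying",
// market perception (momentum + catalysts) decides "when to buy".
// Field names and thresholds are illustrative only.

data class Candidate(
    val ticker: String,
    val fundamentalsScore: Double,   // e.g. a normalized composite of ROE, FCF yield, growth
    val momentum: Double,            // e.g. 3-month relative strength vs. the index
    val hasCatalyst: Boolean         // e.g. upcoming earnings, product launch, policy change
)

fun screen(candidates: List<Candidate>): List<Candidate> =
    candidates
        .filter { it.fundamentalsScore >= 0.7 }          // stage 1: worth buying at all?
        .filter { it.momentum > 0 || it.hasCatalyst }    // stage 2: is the market ready to care?
        .sortedByDescending { it.fundamentalsScore }

fun main() {
    val picks = screen(
        listOf(
            Candidate("AAAA", fundamentalsScore = 0.85, momentum = 0.12, hasCatalyst = false),
            Candidate("BBBB", fundamentalsScore = 0.90, momentum = -0.08, hasCatalyst = false),
            Candidate("CCCC", fundamentalsScore = 0.65, momentum = 0.20, hasCatalyst = true)
        )
    )
    picks.forEach { println("${it.ticker}: ${it.fundamentalsScore}") }
}
```

The point is the ordering: fundamentals build the watchlist, and market perception gates the timing. My old habit was to stop after stage one.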
The Value Boundary of an AI Thinking Partner
Looking back at this conversation, AI's five criticisms had a 2/5 hit rate. If this were an exam, 40% would be a failing grade.
But that’s the wrong way to evaluate it.
The value of AI as a thinking partner isn’t in whether it’s right, but in providing a target you can push back against. As I responded to each point with “why I disagree,” I was forced to make explicit a lot of tacit knowledge I’d never normally articulate – the rhythm management logic behind my projects, Sandbox’s real positioning, my actual purpose in interacting with AI.
None of this was taught to me by AI. I figured it out in the process of refuting AI.
Think about it from another angle: if Claude had been right about everything, this conversation would have been less valuable – I’d only have gotten confirmation, with no pressure to think. Precisely because it was wrong, and wrong in a well-reasoned way, I had to carefully organize my thoughts to explain why it was wrong.
This is the real value boundary of an AI thinking partner:
- It’s not a mentor – it lacks enough context to give you genuinely high-quality advice
- It’s not a mirror – it reflects the you it understands, not the real you
- It’s a talking target – it gives you a plausible but not necessarily correct judgment, forcing you to reveal what you actually think
The best thinking partner isn’t necessarily the one who’s most often right, but the one who’s best at making you articulate your own ideas clearly.
AI can do that now. Nothing more, but that’s enough.
- Blog Link: https://johnsonlee.io/2026/02/15/ai-as-thinking-partner.en/
- Copyright Declaration: Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
