A conversation between me and Claude about how AI changes what it means to know things, have taste, and be valuable as a human.
I’ve been thinking about the future of humans and AI, particularly the place of human knowledge in a world where AI exists.
Before AI, if we wanted to do something, we needed to know all the basics, the advanced knowledge and know-how, the high-level concepts — everything. But with AI, it seems like we don’t need these prerequisites anymore.
It could be that humans still need to know a lot, just not all of it up front: we can learn and progress as we go, with AI as a boost. Or it could lead to failure: weak human thinking and decision-making fails to steer AI toward one’s goals. Or it could be that what counts as a valuable human skill has changed completely with the rise of AI.
One example is how some people manage a company: they don’t need to know everything, they just need to leverage the abilities of others. And AI becomes that leverage.
The Core Decoupling
Historically, understanding and capability were inseparable. You couldn’t build a bridge without understanding physics. You couldn’t write a compiler without understanding parsing. AI breaks that coupling — you can now produce outputs in domains you don’t deeply understand.
But all three scenarios contain some truth, and the interesting question is when each one applies.
The Evaluation Problem
The “weak human thinking” failure mode is the most underappreciated risk. You need understanding not to produce the work, but to evaluate it.
A CEO who doesn’t know everything but runs a company well — they’re not just blindly delegating. They’ve typically built deep pattern recognition through years of experience. They have taste. They can look at a proposal and feel that something is off, even if they can’t produce a better one themselves. They can smell a bad hire, a flawed strategy, a number that doesn’t make sense.
The danger with AI-as-leverage is: if you’ve never built the intuition, how do you know when the AI output is subtly wrong? Not obviously wrong — subtly wrong. The code that works but has a security flaw. The strategy that sounds coherent but misses a market dynamic. The analysis that’s technically correct but answers the wrong question.
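To make “subtly wrong” concrete, here is a minimal, hypothetical Python sketch (the function names and schema are invented for illustration). Both functions pass every ordinary test; one of them is a textbook SQL injection waiting to happen, the kind of flaw that is invisible unless you already know to look for it.

```python
import sqlite3

def find_user(db: sqlite3.Connection, username: str):
    # Works on every benign input, so all the obvious tests pass.
    # But interpolating user input into SQL is an injection flaw:
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def find_user_safe(db: sqlite3.Connection, username: str):
    # The correct version: a parameterized query. On benign input
    # it behaves identically to the one above, which is why the
    # difference only shows up if you have the judgment to look.
    return db.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```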
What Actually Changed
What’s shifting is not whether you need knowledge, but the type of knowledge that’s load-bearing:
Decreasing in value:
- Memorizing procedures and facts
- Execution speed on well-defined tasks
- Knowing the “how” of standard techniques
Increasing in value:
- Taste — the ability to distinguish good from great, correct from subtly wrong
- Problem framing — knowing what question to ask matters more than ever, because AI can answer questions efficiently but cannot choose which questions matter
- Structural understanding — not knowing every detail, but knowing how the pieces relate, where the joints are, what depends on what
- Knowing what you don’t know — metacognition becomes critical when you’re directing a system more capable than you in narrow dimensions
Two Kinds of Leaders
There are two kinds of CEOs who don’t know everything:
1. The one who was an engineer, then a product lead, then a VP: they have compressed knowledge. They don’t remember the details, but their judgment was forged by having once known them.
2. The one who went straight to management and learned to pattern-match on surface signals. Sometimes they’re brilliant. Often they’re the ones who drive companies off cliffs because they can’t distinguish a confident presentation from a sound one.
With AI, the question becomes: can you build type-1 judgment through AI-assisted work, without the traditional years of grinding through fundamentals?
Maybe. If you’re genuinely engaging with the AI output — questioning it, understanding why it made certain choices, building mental models through that interaction rather than passively accepting results.
But this requires a kind of intellectual discipline that’s harder than the old way, not easier. The old way forced you to learn — you couldn’t skip ahead. The new way lets you skip ahead but demands that you voluntarily choose to understand. Most people won’t.
That claim about requiring harder intellectual discipline resonates: “the old way forced you to learn, but the new way lets you skip ahead and demands that you catch up voluntarily” matches how people usually behave. It’s easy to get lazy and skip the hard parts.
Especially since AI still needs human judgment and direction, and that judgment was earned in the pre-AI era. Once we have AI, we risk losing this hard-earned experience.
The Bootstrapping Paradox
The people who can use AI most effectively right now are the ones who built deep expertise before AI. They have the judgment to direct it, the taste to evaluate it, the structural knowledge to catch its failures.
But the conditions that produced those people — years of struggling through basics, making mistakes with real consequences, being forced to understand because there was no shortcut — those conditions are disappearing.
So we’re in a strange window: the current generation of experts can leverage AI brilliantly, but the system that created them is being dismantled by the very tool they’re leveraging.
And it’s not just laziness that makes people skip the hard parts. Skipping is also rational. If you can get the output without the struggle, why struggle? Every incentive (speed, cost, competition) points toward skipping the deep work.
The problem is that the value of the deep work is invisible until you need it. You don’t know you lack judgment until you make a bad call. You don’t know you can’t evaluate quality until something fails in production, in the market, in the real world.
But even human “experts” can make suboptimal decisions and judgments.
That’s the part that undermines the whole “we need pre-AI human judgment” argument if you push on it hard enough.
Human experts:
- Are riddled with cognitive biases
- Defend sunk costs, protect ego, follow herd consensus
- Often can’t articulate why they make the calls they make
- Frequently disagree with each other on the same evidence
- Built their “deep understanding” on a subset of available information, filtered by whatever they happened to encounter
We romanticize expert judgment because it was the best we had. But “best available” is not “good.”
So the real question isn’t “can AI-assisted people match pre-AI expert judgment?” It might be: “can AI-assisted people with shallow domain knowledge make better decisions than pre-AI experts with deep domain knowledge but all the usual human limitations?”
That’s genuinely plausible. A person with moderate understanding plus AI that can process more information, check more edge cases, and isn’t anchored by ego or experience bias — that combination might outperform the seasoned expert who “knows it in their bones” but is actually just pattern-matching on a narrow set of past experiences.
Maybe the framing was wrong all along. Pre-AI expert judgment was never the gold standard we risk losing. It was always flawed — just the best tool for navigating complexity that we had.
The real risk isn’t losing human judgment. It’s the transition period where we’ve let go of the old tool but haven’t yet learned to use the new one well. That’s where things break. Not because the new way is worse, but because we’re clumsy with it.
How should people deal with this AI-era change? How should we treat AI, think about it, and use it?
How to Navigate the AI Era
Stop thinking about AI as a tool. The “tool” framing is comforting but misleading. A hammer doesn’t change what it means to be a carpenter. AI changes what it means to be a knowledge worker, a creator, a decision-maker. It’s closer to a shift in the environment — like the internet was, like literacy was.
You don’t “use” literacy. You become literate, and then you’re a different kind of thinker.
Invest in judgment, not knowledge accumulation. Not “learn everything first” but actively practice the skill of evaluating. When AI gives you an answer, don’t just accept or reject it — figure out why you trust or distrust it. That’s the muscle that matters now.
Stay in contact with reality. Ship things. Test against real users. Measure outcomes. Don’t let AI create a comfortable layer of abstraction between you and consequences.
Be honest about what you don’t understand. In the old world, you couldn’t fake competence for long because you’d be asked to do the thing. Now you can produce outputs in domains you don’t understand. The temptation to fool yourself is enormous.
Know what you’re trying to do independently of AI. The direction comes from you. Not because human direction is sacred, but because someone has to decide what matters, and if you outsource that, you’re just drifting.
What Will Determine Who Thrives
It won’t correlate neatly with intelligence or technical skill. It’ll correlate with:
- Tolerance for ambiguity — the landscape keeps shifting, and needing certainty before acting will paralyze you
- Intellectual honesty — admitting when you’re out of your depth rather than letting AI paper over it
- Having something you actually care about — because if you don’t have a direction, AI just makes you faster at going nowhere
AI doesn’t replace purpose. It exposes the lack of it.
Anything else I should know? Maybe some things that are interesting, surprising, and important about this AI era.
Surprising Things About the AI Era
AI Makes You Converge Toward the Mean
Counterintuitively, AI’s default is the average — competent, polished, unremarkable. If everyone writes, designs, and strategizes with AI, everything starts to look the same.
The more people use AI, the more valuable genuine originality becomes. Being different becomes a bigger advantage than being good, because AI commoditizes good.
Counterargument: some people know how to use AI well, and that becomes powerful leverage that widens the gaps.
True — and my framing was too simple. It’s not that AI converges everyone toward the mean. It creates two modes: default mode where you prompt, accept, and ship (convergence happens here), and directed mode where you use AI as raw material you actively shape — pushing back, combining things AI wouldn’t combine, executing a vision that’s already yours. Same tool, completely different outcomes. This widens gaps rather than closing them.
The more precise version: AI doesn’t commoditize good. It commoditizes good for people who don’t know the difference.
Small Is About to Get Very Powerful
A team of 3 people with AI can now do what took 30 people five years ago. Individual leverage becomes almost absurd. The bottleneck shifts from execution to coordination and taste. One person with clear vision and AI leverage might outcompete a well-funded team still debating requirements.
The Most Valuable Human Skills Will Feel Embarrassingly Soft
In a world where AI handles analysis, code, writing, and research — what’s left?
- Sitting with a customer and sensing what they actually need versus what they say
- Making a decision with incomplete information and standing behind it
- Building trust with another person
- Knowing when to stop optimizing and ship
- Having taste that’s yours, not borrowed
The skills that “serious” technical people dismissed for decades become the differentiators.
Inefficiency Might Become a Competitive Advantage
AI optimizes and finds the shortest path. But some of the most valuable human experiences come from the “wasteful” parts — the wandering conversation that leads to an unexpected insight, the failed project that builds intuition, the slow reading that changes how you think.
It’s the equivalent of a gym. Nobody needs to lift heavy things. It’s technically inefficient. But the person who does it is more capable in ways that matter when things get hard.
Back to how people should treat AI: do you think we can still develop “taste”?
Can We Still Develop Taste?
Yes. But the path is different.
Taste was never built by studying. It was built through volume of experience with real consequences — making hundreds of designs and watching users struggle, writing badly for years, debugging your own bad architecture at 3am.
AI doesn’t block this. It changes the speed. You can try ten approaches where you used to try one, build a prototype in hours instead of weeks, test against reality sooner. If you’re paying attention — genuinely engaging with why version 3 is better than version 1 — you’re building taste at an accelerated rate.
The real threat to developing taste isn’t AI. It’s the removal of friction.
Taste develops in the gap between what you intended and what you got. AI narrows that gap immediately. You get a decent result on the first try. There’s no struggle, no ugly first draft to confront. And if you never sit with the discomfort of “this isn’t right and I can’t articulate why,” you never develop the sensitivity to quality.
How to build taste in the AI era:
- Look at things carefully. When AI gives you output, don’t evaluate in two seconds. Sit with it. Compare to work you admire. Notice what’s missing.
- Maintain a reference library in your head. Read great code, great writing, great design — not to learn techniques but to calibrate your sense of quality.
- Do some things without AI. Not as discipline or moral stance, but because the process forces you to confront your own limitations, and that confrontation is where taste sharpens.
- Ship and watch. Taste that never contacts reality is just aesthetics. Build things, put them in front of people, observe what happens.
Can we still develop taste? Yes — and potentially faster than ever.
Will most people develop taste? No — because AI also makes it possible to skip the parts where taste forms.
AI makes the ceiling higher and the floor more comfortable. Where you end up depends on whether you choose the ceiling or the floor.
And the floor has never been more appealing.
The point of the struggle was never just the output.
