The Exceptionalism Trap
Or, why everyone thinks their job is special
I came across a podcast recently where Marc Andreessen, the venture capitalist, offered a prediction so laughable I nearly spit out my coffee. When AI automates everything else, he confidently proclaimed, identifying promising startups might be “one of the last remaining fields that people are still doing.” His reasoning? Venture capital requires psychological analysis—understanding how founders react under pressure, keeping them from falling apart. “You end up being a psychologist half the time,” he explained. He waxed poetic about how VC skills were “intangible,” even “quite literally timeless.”

The irony was almost too much to handle. Here’s someone funding the very AI companies that will automate other people’s jobs, blithely explaining why his own work is different, special, immune.
I have to admit this is the first and maybe the last time I’ve felt a human connection to Marc Andreessen, because what he was expressing feels like a basic psychological response to encroaching automation. Once you start looking, you see a version of what I like to call the “exceptionalism trap” everywhere.
Doctors declare that “patients aren’t standardized question stems” and require the kind of nuanced observations only physicians will be able to make. A financial blog claims that accountants are safe from AI job replacement because “the qualitative aspects of human insight—intuition, empathy, contextualization—cannot be replicated by an AI.” An attorney wrote that AI cannot substitute for “the wisdom, sound judgment, experience, maturity, and impartiality” that human judges bring to the bench. The President of the Writers Guild of America West emphasized, “Writing, like any art form, is based on a lived human experience, an emotion. Our whole job is to react in a human way…AI can’t bring lived experience.” Every profession has identified its supposedly untouchable human core, the thing AI could never master.
From a psychological perspective, I get why the exceptionalism trap is so widely deployed. If you’ve spent years or decades building expertise in radiology or coding or writing, or even something as saintly as venture capital, admitting a machine can do it might feel like admitting you’ve wasted your life, or that you weren’t as talented as you thought. That’s an unbearable identity threat in meritocratic cultures. The culture tells you that if you work hard and get good at something, that should be a major source of your sense of personal value. When a machine makes that value plummet, both economically and socially, that’s not just a threat to your livelihood; it’s an attack on your identity. And identity attacks activate very strong defense mechanisms.
What I worry about is that the retreat into human exceptionalism overshadows other, more useful conversations. I’m not arguing that professions don’t have distinctive skills—of course they do. Venture capital does require astute psychological insight. Medicine does require complex judgment. I’m arguing that whether these capabilities remain beyond AI’s reach may not be the right question to ask. AI will keep improving, and things that seem impossible for machines today won’t be tomorrow. We need another way to talk about AI besides appealing to human uniqueness in the face of advancing technological prowess.
I’d suggest a different approach: what I call the “human needs lens.” Not “what is unique about humans that AI can’t do?” but “what human needs are served by humans doing this work, regardless of whether AI can do it?”
Take music. Let’s say AI could generate beautiful melodies, maybe even “better” ones by some standard. That would miss the point entirely. Playing music with friends on a Friday night isn’t valuable because of the sound waves produced. Making music is valuable because of the shared experience, the vulnerability of performance, the social bonding, the personal expression, the way it feels to create something together. For professionals, AI will drive down wages and make it harder to earn a living from something humans intrinsically enjoy: making music. But the value of human music-making has nothing to do with the quality of AI-generated music.

Or take teaching. Imagine a world where schools didn’t exist because every student had an AI tutor—one that was genuinely better at everything we currently measure. Perfect personalization, infinite patience, test scores through the roof. Students learn faster, retain more, excel on every metric typically used to evaluate academic performance.
What’s lost? Teachers lose meaningful work—the joy of watching understanding dawn, of being the adult who believes in someone when no one else does, of being a social pillar in their community. Students lose the teacher who notices they’re struggling not with fractions but with family chaos. Communities lose schools as gathering places, sources of local identity, spaces where parents connect and kids learn to form connections with people unlike themselves. Sure, test scores might be higher. But is that the only thing we value about education?

By applying a human needs lens to AI, you can begin to draw boundaries that don’t depend on evolving technical capabilities. Maybe AI diagnoses cancer better, but do you want a human in the room when you hear the results? Maybe AI can identify therapy patterns, but does that replace talking to someone who’s actually felt despair? The exceptionalism trap, all that frantic insistence that AI can’t do our jobs, actually prevents us from articulating why human involvement matters beyond measurable outputs. We’re so busy defending our turf that we never explain why the turf is worth defending in the first place.
To be clear, I’m not arguing we smash all the AI servers. It’s about articulating values more clearly. I see very little value in AI-generated music, for example, and a lot of harm. In contrast, I am already seeing how AI is helping doctors, and I see a lot of potential there. In teaching, it’s mixed. The educational system is far from perfect, and AI could either exacerbate or ease those imperfections. At places like the Alpha school, students use AI tutors for two hours a day on core skills, then spend the rest of the day on collaborative projects, socializing, and so on. That strikes me as a promising middle ground. You may not agree with the Alpha school’s vision, but finding any middle ground between human exceptionalism and full automation requires clarity about what’s worth preserving, even when AI can technically do it.
But that’s exactly what the exceptionalism trap prevents. Instead of facing reality together—“automation is being deployed to devalue all of our work” or “I don’t want to live in a world where machines do X, even if they can”—people retreat into individual claims of uniqueness. Short-story writers insist they’re nothing like technical writers (“well, their work was always more mechanical”). Oncologists insist they’re nothing like radiologists (“we treat people, they just analyze images”). Everyone’s drawing lines to stay on the “safe” side, while the safe side shrinks for everyone. What feels like solidarity—professionals defending human work—may actually act as a splintering force. Each profession stands alone when automation arrives. And unlike Marc Andreessen and his fellow VCs, most of us won’t have enough capital to cushion the fall.
This moral clarity about the value of labor helps explain why the Luddites are having such a revival right now, from tech journalists to college students, even to Peter Thiel musing about whether they were right. The Luddites weren’t effective because of sophisticated arguments about how weaving was uniquely human. They were effective because they had clarity about what they valued: craft quality over mass production, good livelihoods for skilled workers over cheap goods for many, community and autonomy over efficiency at scale. (Well, and all the smashing.) Their fight wasn’t about whether machines could weave; it was about whether the way of life mechanization would bring was worth what would be destroyed. The Luddite question isn’t “can machines do this?” It’s “do we want to live in a world where all of this is automated away?”