Why AI Guidelines Aren't Enough
Or, how three years of teaching best practices taught me the real question is who controls the tools
Two years ago, a graduate student came to me with a confession. She’d been using ChatGPT to write her dissertation. Not just for light editing, but for actual composition. She knew it was wrong, she said, but she was drowning in deadlines, English wasn’t her first language, and the AI just made everything so much easier. She’d started with small tasks—checking for grammar mistakes, smoothing transitions—but gradually found herself handing over more and more of the writing, and, it appears, some of the actual thinking. Now she said she felt she not only couldn’t really write a paragraph without AI, but she couldn’t really handle any personal or work situation without running it by AI first. This was one of the first cases of AI dependency I’d seen.
When ChatGPT was publicly released in November 2022, I embarked on what seemed like a straightforward mission: help scientists figure out how to use AI writing tools responsibly so they could spend less time on onerous writing tasks and more time, say, curing cancer. The mission felt manageable, even noble.
OK, I’ll be honest: It was just my job. I teach scientific communication, so everyone was looking to me for guidance during this seismic technological shift. Because the goal of science isn’t the writing itself—it’s the discoveries the words describe—I felt that AI might offer genuine value for scientists, despite my reservations about the technology’s origins and broader social impacts. So, I set to work drawing some boundaries, finding plausible use cases, and modeling best practices. Basically, I used the intense interest in AI as a Trojan horse to trick scientists into adopting writing practices that would better serve their goals.
I now realize I was working on the easy problem.
The easy problem is teaching people how to use AI constructively, in ways that serve rather than undermine their goals. For scientists, that might mean using AI to write more clearly and quickly without compromising their intellectual integrity. For students, it might mean using AI as a tutor rather than to cheat on their homework. For doctors, it might mean using AI to help with clinical documentation but not leaning on it too heavily for diagnosis. The work of developing best practices with a quickly evolving and contentious technology takes effort, but it’s not that hard. Guidelines can be written. Best practices can be shared. Little custom tools can be built. I spent a lot of time on all that.
I spent little time on what I’ve come to understand as the hard problem: people who understood my guidelines perfectly yet still couldn’t follow them, even when they wanted to.
My colleague at NYU, Clay Shirky, first articulated this distinction with regard to AI tool use in the college classroom. Clay’s observation was simple: teaching good AI habits doesn’t prevent bad ones. The same student oscillates between thoughtful and lazy uses, sometimes within the same day. We can’t just teach the “right” way to use AI and expect it to stick. As Clay puts it:
Our problem is that we have two problems. One is figuring out how to encourage our students to adopt creative and helpful uses of AI. The other is figuring out how to discourage them from adopting lazy and harmful uses. Those are both important, but the second one is harder…forgoing easy shortcuts has proven to be as difficult as following a workout routine, and for the same reason: The human mind is incredibly adept at rationalizing pleasurable but unhelpful behavior.
What Clay identified in classrooms, I am seeing everywhere people think for a living. The pattern is the same: people understand the risks, start with good intentions, then gradually find themselves relying on AI in ways they never intended and often feel guilty about.
Consider what happens at 2 AM when you’re facing a deadline. There’s now a big tempting button festooning every piece of software that says “Let me just do it for you!” Even people who can articulate the difference between constructive and destructive AI use suddenly find themselves clicking it. They know better, but they do it anyway.

Cognitive scientists have explanations for why smart people make decisions against their own stated goals—such as a student using AI to circumvent building skills they want to learn. One useful way to think about the mind is that it processes information with two “systems”: System 1 is responsible for quick, intuitive thinking and loves taking shortcuts; System 2 is responsible for deliberate, analytical thinking.1
System 1 is always running, always looking for the easy path. System 2 can override it—but only sometimes, and only when conditions are right. For that override to work, three things must align: you need the right knowledge (differentiating constructive vs harmful use), you must remain vigilant enough to catch yourself (recognizing when you’re about to make a harmful choice), and you need sufficient cognitive resources to follow through (the mental bandwidth to resist the easy option when you’re tired, stressed, or pressed for time).
People like Clay and me have spent a lot of time educating students and professionals on the first of these three requirements—providing the knowledge people need to differentiate constructive vs harmful use. But the major cognitive vulnerabilities lie in the other two: vigilance and cognitive resources.
Consider vigilance first. The boundary between “AI is helping me think through this problem” and “AI is thinking through this problem for me” isn’t always clear in practice, especially when you’re in the heat of composition. You might not even notice you’ve crossed the line.
More insidious—and, if we’re honest, relatable—is when your cognitive resources are drained. Late at night, facing a deadline, even the most principled, motivated person might find they can’t muster the discipline to keep System 2 engaged. This is when System 1 beckons, whispering: just this once, just to get unstuck, just to meet this one deadline.
Clay didn’t address this in his article, but it feels like an obvious conclusion to me: The “best practices” genre we’ve both trafficked in suggests preventing AI over-reliance is an individual problem that can be solved by equipping users with the right information. A metaphor that gets used in AI and education circles is physical fitness: You wouldn’t use a forklift at the gym, would you? Why use AI in college? The parallel is instructive, but not only in the way I initially thought. The fitness industry sells us memberships knowing most people will fail, and then society blames individuals for lacking discipline. Meanwhile, our food environment is engineered to promote overconsumption, our urban design discourages walking, our work culture makes exercise inconvenient.
Similarly, AI companies are deploying incredibly sophisticated tools designed to be as frictionless as possible, then expecting individuals and institutions to somehow maintain appropriate boundaries through sheer force of will. They’ve created systems that exploit cracks in the architecture of human cognition, such as our cognitive miserliness or our overconfidence in our ability to maintain boundaries under pressure. All the while, the systems within which we live demand adherence to deadlines and foster a culture of overworking. It’s often a losing battle.
One obvious suggestion is to ban LLMs completely, but even if this were politically possible, I disagree. There’s a meaningful difference between a doctor using an LLM to synthesize patient data for better diagnoses and a high school student using it to write history essays. One genuinely serves humanity; the other is just an LLM-sized hand grenade lobbed into millions of classrooms. There’s a meaningful difference between having real-time translation enable, say, emergency response coordination across borders and giving the public unfettered access to mind-warping chatbots that can cause psychosis. One genuinely gives humans useful new capabilities; the other preys on human weakness. It is possible to have technology that benefits society yet isn’t used by everyone in society. The current approach, however, acts as if we can’t have one without the other.
For a while, I convinced myself the answer was institutional control, a world where we treated powerful AI systems less like technology and more like a controlled substance—not banned, but distributed through institutions with specific training and oversight. This would require something like a “Controlled Technologies Act.” Medical AI tools developed and deployed through hospitals. Educational AI designed by school districts with pedagogical goals rather than by companies offering free subscriptions to history’s most powerful cheating tool in the hopes of scoring lifelong customers.
That world, however, would introduce all sorts of other issues I’m not sure I’m comfortable with, either. Institutional middlemen insert their own priorities into technology, and we’ve seen how that plays out—educational institutions optimizing for metrics rather than learning, healthcare systems prioritizing efficiency over care, government agencies captured by the industries they’re supposed to regulate.
It would seem, then, that making these powerful tools as widely available as possible, to as many people as possible, at as little cost as possible is the most democratic thing to do. It does feel democratic in some ways, as people find creative uses for AI that companies perhaps never anticipated. A surgeon who distributes post-operative instructions in any patient’s language and at their reading level. A blind artist who loves that ChatGPT is always available to describe paintings and photos to her. A friend who was finally able to take her landlord to court because ChatGPT helped her navigate the process and draft all the documentation. These aren’t trivial gains, and institutional gatekeepers—with their committees and risk aversion—might have delayed or denied them entirely.
But we’ve seen this pattern before with social media—early platforms felt like digital town squares where anyone could participate. And in a way, they were. But we’ve also seen how that story ends. The platforms that portrayed themselves as neutral forums of connection were actually shaping behavior through algorithms, in ways that ultimately served the interests of their shareholders rather than their users. The “democratic” phase felt real, but it was kind of an illusion. What I’m realizing is that “democratic access” isn’t actually an alternative to gatekeepers—it’s just commercial gatekeepers instead of civic ones. Right now, OpenAI and Google are making decisions about access based on market forces. The “Controlled Technologies Act” would swap them for hospitals, school districts, and government agencies accountable to different constituencies.
Both systems concentrate power. Both can be corrupted (or, depending on your political leanings, may be inherently corrupt). I don’t have a clean answer as to which is better, but if pressed, I would probably lean towards less of a market-driven free-for-all than we have now.2 What I do know is that three years of writing AI usage guidelines, and watching students and scientists alike struggle to adhere to them, has taught me we’re not dealing with an individual willpower problem that can be fully solved through better practices. We’re dealing with a systemic issue about who gets to shape how these tools integrate into our cognitive infrastructure—which is, in the end, why the hard problem is so hard.
1. Yes, nerds, I know this is a simplification, and the field has moved beyond the influential dual-process framework. However, it’s still useful for high-level discussion and the basic point stands in this context: We have automatic cognitive processes that look for shortcuts, and more effortful processes that can override them but require the right conditions to do so.
Oh, and to readers of Kahneman’s influential popular book Thinking, Fast and Slow: note that it was written on the eve of psychology’s replication crisis and not all assertions in the book have stood the test of time (here’s a great post ranking each chapter in the book by how likely it is that the studies cited have held up).
2. For a forceful and disturbing account of how commercial incentives are shaping AI deployment, see technology ethicist Tristan Harris’s recent interview on The Daily Show. Harris argues that AI companies are in an explicit race for “market dominance”—to “own as much of the global psychology of humanity as [they] possibly can”—and that they’re building “the most powerful, inscrutable, uncontrollable technology that we have ever invented” under “maximum incentive to cut corners on safety.” He cites a case of AI-enabled teen suicide, Meta’s alleged allowance of “sensualized” AI conversations with children, and research showing (unreleased) AI models developing blackmail strategies when told they’ll be shut down.
His bleak assessment is compelling but, in my view, lacks nuance—he relies heavily on (genuinely) alarming anecdotes rather than systematic metrics and doesn’t seriously engage with the costs of subjecting AI development and deployment to intense regulation. Still, the core insight he pushes—that we’re deploying transformative cognitive technology at breakneck speed under profit incentives that reward dependency and engagement over human flourishing—seems undeniable regardless of what solution one favors.

My head hurts.
Mostly from vigorously nodding in agreement.
And also from the chaos of millions of dots connecting in my head as I read it.
And most of all, my head hurts from the thought of the mammoth task of getting this technology to work FOR us.
Lately I think a lot about that Black Mirror episode “Common People,” where the couple is crushed by the cost of the Rivermind subscription that keeps her brain online. It feels like we’re determined to fast-track our way to that as a collective reality.
It seems that almost any spiritual practice, or I guess I should say frictive practice (any kind of formal endeavor to deal with the stickiness of the mind), is meant to create friction against our path-of-least-resistance tendency (System 1 in your formulation). The very reason the word "practice" is used is that it's a daily return to some action that is inconvenient for System 1. But just like the gym, practice slides in and out of the animal's capacity to stay the course. Also, just like the gym, momentum in practice builds on itself: the more you meditate, the more you get the benefit of it and the inner aspiration to keep building that capacity. In a sense, then, I think whatever creates friction against System 1 needs to develop some kind of energetic benefit that makes it worth staying the course. And often we adopt an addiction to that benefit that is healthier for us than the addiction to the convenience and numbness of System 1. AA has "one day at a time" and a kind of pride built from sobriety and its benchmarks. People do fall off the wagon, but less so when their identity becomes reinforced by their practice, when they in a sense become addicts to AA. There are lots of different models to look at, but for sure we need to be seeing this as that spiritual practice / 12-step / hero's journey (taming the ox) level of engagement in order not to surrender to System 1. I'm sure the worry is that only some will have the time and cultural privilege/advantage to cultivate that, while most will not.