A eulogy for OpenAI's Sora
Plus, I was partly wrong about the AI detection company Pangram

Welcome back to The Third Hemisphere, where I try to make sense of how AI is reshaping work, thinking, and creativity, often by watching my own assumptions get upended.
Today’s newsletter includes a little about a Longreads essay I’ve been working on for months and am very excited to share with you, as well as an exercise in intellectual humility.
If you were forwarded this and want to subscribe, click below. If you want to support a real human writing about AI, upgrade to paid.
Sora the memory maker
For some fifteen years now, my wife and I have been having the same argument about a yoga class. She can reconstruct very specific details: the French acrobat teacher with rainbow dreads, where we sat in relation to the door, the teal mats beneath us. I maintain I have never taken a yoga class in my life, and if anything, I’d be more likely to remember something I’d done only once. We both have our evidence; neither of us will budge. I’ve long made peace with the fact that our conflicting memories are probably irreconcilable, and that it doesn’t really matter anyway.
The reason I even bring this up is that technology is now messing with memories in ways I didn’t expect it could. I was having a casual conversation with a neighbor, Andrew Deutsch, who told me that OpenAI’s Sora app—which let users create deepfakes of themselves—was muddling his autobiographical memory:
He created an AI-generated video of himself scaling Mount Rushmore and watched it several times. Then, a few weeks later, he was getting his dog ready for a walk. He felt a flicker of recollection, of that time he’d climbed Mount Rushmore. “I felt just this twitch of confusion about it. It felt like a memory, very faintly.”
Not a full memory, exactly. But not not a memory either.
I found other users, called up memory experts, and tried to think about what it means when an app can mess around with the brain’s core processes for sorting reality from fantasy. Sora is now dead, but I doubt it will be the last self-deepfake technology, and I think our brains are going to be particularly vulnerable to it. I liked the way David Pillemer, a psychologist, explained it to me for the essay:
When a memory includes a visual image, he told me, the person remembering it is more likely to believe it actually happened. Seeing yourself in the scene is a hallmark of vivid memories. There’s an evolutionary logic to this, he explained. “If your life was in danger 5,000 years ago and you were at the water hole and the tiger came up, if you have a visual image of what happened, it’s good to not only hold that image, but believe the image, trust it. You’ll avoid that water hole.”
The visual doesn’t just record experience; it confers credibility. I thought about the yoga teacher—the French acrobat with dreads, the studio, the spot where my wife says we sat. Her evidence was a lifelike mental image. Mine was an argument. Pillemer had just told me which one the brain trusts. And that ancient trust, calibrated over thousands of generations to actual water holes and actual predators, doesn’t have a mechanism to determine whether the image was rendered on a server farm.
The full piece is up at Longreads: https://longreads.com/2026/04/09/openai-sora-deepfake-memories/
Please share widely!
OK, so Pangram is actually better than I thought
Speaking of maybe being wrong about what’s in your own head: Earlier this week I wrote a skeptical piece about the AI detection company Pangram, which has been at the center of a spate of AI-authorship scandals. My basic thesis was: be wary when the same company authors the research, riles up the mob, and sells the solution. I argued that AI detection is reliable for population-level analyses (what proportion of newspaper articles contain AI?) but not for individual determinations (is this specific article generated by AI?).
People on social media flagged that I was being too harsh on Pangram and I got into a back-and-forth with the company’s CEO. Rather than double down, I looked into it. After all, the only thing worse than being wrong is staying wrong.
So, I read several technical papers, chatted with Pangram’s CEO offline, and called up a few independent computer scientists with expertise in this area. I concluded that for fully AI-generated text from major AI models, Pangram is actually pretty damn accurate, even on short pieces of writing. This is a real step forward, and I stand corrected on my claim that AI detection can only be trusted for population-level analysis. However, the computer scientists emphasized that AI detection does fail in unexpected ways in the real world, so the risk of false positives and false negatives is almost certainly higher than the companies insist. Using AI detector results to prove individual accusations of AI use on short pieces of text may be possible, but the certainty depends on the details of the situation. Technical specs aside, I still take issue with the CEO leaning into call-out culture to the advantage of his product, and I generally think this dynamic is toxic for media and publishing, as I argued in another Substack post, Detect and Punish.
So, in the interest of intellectual humility (I am not afraid to change my mind when the data calls for it!), I’ve appended a short update to my original skeptical Substack post and will be publishing a full essay in Slate on the matter, but I wanted readers of The Third Hemisphere to know first.