Silicon Valley’s Mythology of Human Amplification
Or, the difference between traveling and being traveled

Welcome to The Third Hemisphere, where I try to make sense of how AI is reshaping work, thinking, and creativity, often by watching my own assumptions get upended.
Today’s essay started with an argument I couldn’t win. A friend asked me what’s actually wrong with using Google Maps anyway, instead of a paper map. I had a strong intuition but not really a strong answer. So I went looking for one, and this essay is the result. Hope you enjoy…
If you were forwarded this and want to subscribe, click below. If you want to support a real human writing about AI, upgrade to paid.
In 1981, a young Steve Jobs—bearded, bespectacled, brown corduroy blazer over an open-collared shirt—sat in front of an Apple II and explained what he thought a personal computer was for. He’d read an article in Scientific American that compared the efficiency of locomotion across species. The condor, he said, came out on top. Humans ranked about a third of the way down, “not too proud a showing for the crown of creation.” But then someone had the insight to test a human on a bicycle, and the cyclist blew the condor away.
“What it really illustrated,” Jobs said, “was man’s ability as a tool maker to fashion a tool that can amplify an inherent ability that he has. And that’s exactly what we think we’re doing.” The computer, he said, was “a 21st century bicycle” for the mind.

In the age of AI, Jobs’s quaint bicycle has received an update from Silicon Valley. With the launch of ChatGPT, gushed Microsoft CEO Satya Nadella in early 2023, “We went from the bicycle to the steam engine.” Reid Hoffman, LinkedIn’s co-founder, routinely calls AI a “steam engine of the mind” that will usher in a “cognitive industrial revolution.” In a Time magazine article modestly entitled “A Roadmap to AI Utopia,” venture capitalist Vinod Khosla writes that “AI amplifies and multiplies the human brain, much like steam engines once amplified muscle power.”
I find the shift from bicycle to steam engine instructive for the current AI moment.1 In reaching for the steam engine, Jobs’s heirs seem to have forgotten something about the bicycle. Like another 19th-century invention, the steam locomotive, the bicycle was a technological revolution. But a train passenger sat back and enjoyed the ride, while a cyclist still had to put in the effort. With a bicycle, “you are traveling,” wrote a cycling enthusiast in 1878, “not being traveled.”
I think about this distinction a lot: between traveling and being traveled. Bicycles and trains are both technologies that move us from place to place. In that sense, the sense of their outward function, it’s fine to lump them together. But the comparison falls apart when you consider their effects on the traveler. In terms of effort, a steam engine doesn’t “amplify an inherent ability.” It replaces it. You sit back and the coal does the work. You arrive, but you’ve been traveled. So one way to look at a technology is to ask how powerful it is, what it enables humans to do. But an equally important question is what happens to humans when they use it.
I’ve started calling this the mythology of amplification—the assumption, buried so deep in Silicon Valley’s rhetoric that it goes unexamined, that its tools merely add capability without subtracting anything meaningful. Some tools really do work this way, or close enough to it. Perhaps the personal computer, in its early days, was one such tool. A true, literal “computer,” a machine that computes. But general-purpose AI, at least as the tech titans envision it, is not like that. And the difference has nothing to do with how powerful the tool is or how impressive its outputs are. It has to do with what the tool asks of you. Whether you travel with it or are being traveled by it.
The rifle and the GPS
I was recently reading an ethnographic study by Claudio Aporta and Eric Higgs,2 about GPS adoption among Inuit hunters in Igloolik, a small community in the Canadian Arctic. The Inuit had been navigating that landscape for millennia using methods that took years to learn: reading wind-shaped snowdrifts, tracking animal movements, interpreting tidal patterns, memorizing thousands of place-names passed down through generations. Elders would wake children early and send them outside to report on wind and sky conditions, a practice called anijaaq. The knowledge was hard-won, but it was “perfectly reliable,” as the ethnographers put it. The Inuit may have several words for snow,3 but they had none for being lost, a concept which, the researchers noted, was “without basis in experience, language, or understanding.”

When GPS units arrived in the mid-1990s, the Inuit did what they’d done with other new technology: they assessed it, experimented with it, and adapted. This was a culture that had already adopted rifles and snowmobiles. These technologies weren’t neutral: Rifles made hunting more solitary, more distant; snowmobiles were loud and fast, replacing the slow dog-sled travel that had been an ideal context for teaching younger hunters to read the land. With rifles and snowmobiles, the texture of Inuit culture changed. But neither struck at something that the elders felt was fundamental to cultural knowledge and identity. You could hunt differently, travel faster, but you still needed to deeply understand the natural world.
GPS was different. The ethnographers put it bluntly: “For the first time in history the navigator can completely rely on technology and travel successfully knowing nothing about navigation and very little about the environment.” The device didn’t amplify the Inuit’s famous wayfinding skills but bypassed them entirely. This skill erosion had consequences: During a search for an overdue traveler in a blizzard, an elder realized the GPS was leading the party directly toward pressure ridges and dangerous ice. He took over and guided the party using traditional knowledge of the wind and snowdrifts. The GPS had given the correct answer to “where is my destination?” while removing any need to understand the journey.
Most of us will never navigate a blizzard in the Canadian Arctic. But you’ve probably experienced a milder version of this when navigating a new city. If you unfold a paper map, you study the streets, trace a route, convert the bird’s-eye abstraction into the first-person view of actually walking, and by the time you arrive, you have a nascent mental model of how the city fits together. Or you can fire up Google Maps: a blue dot, an optimal line from A to B, a reassuring robotic voice telling you when to turn. You follow, you arrive, and you have no idea, really, where you are. A paper map demands something from you, and that demand leaves you with knowledge. GPS demands nothing, and leaves you with nothing. They are tools with the same purpose but opposite cognitive consequences.
Neuroscience suggests a mechanism. In one study, participants had their brains scanned while they virtually navigated London streets they’d previously walked. When participants worked out routes themselves, hippocampal activity crescendoed with the complexity of each intersection. When they followed GPS directions, the hippocampus went quiet; there was nothing for it to compute. Neuroscientists Louisa Dahmani and Véronique Bohbot then connected the neurological changes to real-world GPS habits: tracking drivers over three years, they found that heavier GPS use predicted steeper subsequent declines in spatial memory. A 2024 meta-analysis of 23 studies confirmed the pattern: GPS use is negatively associated with both environmental knowledge and sense of direction.
With GPS, the stakes are relatively low for most people. But AI tools are now mediating the same trade-off across far more consequential domains: reasoning, analysis, writing, clinical judgment, scientific thinking. I thought about this after Anthropic recently released a small study suggesting how offloading affects skill formation. Researchers had software developers complete coding tasks using a new Python library—some with AI assistance, some without. The AI group finished slightly faster. But when tested afterward on their understanding of the library, they scored 17 percent lower than the control group. The gap was largest on debugging questions: the very skill required to catch AI errors.
Psychologists Elizabeth and Robert Bjork explain why. They distinguish retrieval strength (how easily you access something now) from storage strength (how durably it’s encoded in the brain). Struggling to generate an answer builds storage strength. Being handed the answer builds nothing. As the Bjorks write: “Any time that you, as a learner, look up an answer or have somebody tell or show you something that you could, drawing on current cues and your past knowledge, generate instead, you rob yourself of a powerful learning opportunity.” Just as the Bjorks would have predicted, the AI group in the Anthropic study bypassed the struggle and thus bypassed the learning that came with it.4
One Inuit elder, Alianakuluk, put what was happening in his community this way: “The wisdom and knowledge of the Inuit are being diminished with these gadgets.” It might be easy for a Western reader to admire this sentiment from a comfortable distance—to see the Inuit relationship to their landscape as something rare and culturally specific, an interesting testament to human ingenuity but ultimately like studying astronomy with an astrolabe. The Anthropic study should make that distance harder to maintain. Software developers lost measurable comprehension in an hour. The knowledge at stake wasn’t an esoteric art of extreme orienteering. It was a Python library. The mechanism was identical. A tool that bypasses cognitive effort doesn’t care what culture produced the knowledge it’s replacing.5
What Silicon Valley is blind to

So the Inuit elders could see what Silicon Valley leaders seem unable to grasp: that some tools extend human capability while demanding skill, and others replace the skill entirely. But this raises a question. The elders weren’t uniformly suspicious of technology; they integrated the rifle and the snowmobile. Why could they see a difference that Nadella and Hoffman and Khosla cannot?
Part of the answer is glaringly obvious: tech executives have products to sell. But I don’t think that fully explains it. Return to Steve Jobs for a moment. “Not too proud a showing for the crown of creation.” That phrase slipped out so casually because there’s a deep tradition in the West of treating human limitation as a problem technology exists to solve—you can trace it from Prometheus to Francis Bacon to the singularitarians if you want.6 But I’m not sure you need intellectual history to explain what’s happening here. I think it comes down to something simpler: what you believe effort is for.
The Inuit elders and the Silicon Valley executives are looking at the same phenomenon and seeing different things because they are asking different questions, rooted in different values. The executives seem to be asking: what kind of output does this human produce? The elders seem to be asking: what kind of person does using this tool produce?
If output is your only metric, then the steam engine really is just a better bicycle. Both get you from A to B. One gets you there faster with less effort. Case closed. The fact that you arrive having done nothing, learned nothing, built nothing—that’s not a bug, that’s the point. Effort is a cost to be minimized, not a value to be preserved.7
But embedded in that worldview is the assumption that the journey is merely instrumental. That only arrival matters. That it makes no difference whether you travel or are traveled. The Inuit elders operated on a different premise. Arrival, of course, mattered; these were hunters who needed to find caribou and get home alive. But only through the journey could you acquire deep knowledge of the terrain. You couldn’t separate arriving at the destination from what you learned on the way there.
So when Nadella is rhapsodizing away about steam engines, he’s telling you what he values. He values arrival. The faster and more effortless, the better. The passenger who shows up rested and refreshed is, in his framework, strictly better off than the cyclist who shows up tired and sweaty. What he doesn’t seem to care about—what the output-only frame makes invisible—is what happens to the cyclist along the way. The increase in muscle strength, in cardiovascular fitness. The knowledge of the landscape. And, crucially, the capacity to do it again tomorrow, alone, if necessary.
“To choose Inuit wayfinding,” the ethnographers conclude, “becomes increasingly heroic in the face of wayfinding that depends on an advanced technological system.”
“Heroic.” It’s an odd thing to say about basic competence, about knowing where you are and how to get home. But when learning a new skill, or even just maintaining an old one, requires you to opt out of convenience, to refuse assistance, to insist on doing things the slow way, that choice starts to feel like stubbornness or eccentricity or affectation. Try telling someone in the passenger seat that you’d rather not use GPS to find your way around a new town. The social pressure is already there: just use your phone, we’ll get there faster, why are you making this harder than it needs to be.
It may be a losing battle to demand that basic acts of competence not require heroism. But I still think it’s worth noticing which assumptions Silicon Valley’s steam-engine metaphor is asking us to accept. When bicycles and steam engines get talked about as though they’re the same kind of thing, a concession has already been made: output is all that matters. The cyclist’s legs and lungs and mind become incidental, and the only interesting question is how fast you can get from A to B.
If you look closely, the Scientific American chart also includes jet fighters, helicopters, and automobiles, all of which cover far more ground than a bicycle. Jobs could have compared the computer to any of them. He chose the one in which human effort is amplified in constructive, athletic, freeing ways. His heirs chose the machine that makes human effort nearly irrelevant. When Nadella says “we went from the bicycle to the steam engine,” it’s not so much an upgrade to the metaphor as a change of metric: no longer the efficiency of human movement, but simply the distance a machine can travel.
I think I first read about this study in Nicholas Carr’s excellent book, The Glass Cage.
The researchers identified six distinct patterns of AI use. Three preserved learning: asking conceptual questions, requesting explanations alongside generated code, or using AI to check understanding after attempting the work. Three undermined it: delegating the task entirely, progressively offloading more work to the AI, or repeatedly asking the AI to debug without trying to understand why. I think this has promising implications for AI tool design, but whether people would actually choose to use tools designed for cognitive engagement is another question.
A reasonable response is that this is how technology has always worked. Every wave of automation fiddles with what humans need to know. Factory workers moved from manual labor to supervising machines. Accountants moved from calculation to strategy. The skills change, but things work out. Maybe so. But “worked out” depends entirely on what you’re measuring.
Historian David Noble’s The Religion of Technology describes this theological substrate in detail, arguing that the Western technological project has been “literally and historically” suffused with the conviction that advancing knowledge recovers what was lost in Adam and Eve’s Fall.
A related assumption is that AI will automate drudgery and free us up for "what matters." I examined this claim in a previous essay on the substitution myth, and found that equilibrium shifts in joint cognitive systems don't work that way. When dishwashers reduced time spent washing dishes, total work didn't decrease but was redistributed in response to economic pressures and social norms. This redistribution had many benefits and some drawbacks, but the point is the freed-up time went where the system's incentives sent it, not necessarily where individuals hoped.



