Is AI like the slide rule or is it like the Industrial Revolution?
I've used this analogy before in other contexts, but the way we respond to or think about artificial intelligence reminds me of the old parable – first recorded in a Buddhist text from about 500 BCE – about the blind men who encountered an elephant for the first time. The man near the trunk thought it was a kind of snake, and the one near its legs thought it was a kind of tree; the man near the tusks thought it was a kind of spear, and so on. Casual users of OpenAI's ChatGPT or Anthropic's Claude or Google's Gemini probably have one vision of what AI is or can do – it can answer simple questions. University students probably see it as a way to write term papers more quickly. Others who use tools like OpenAI's Sora video-generation engine have a different idea: they might see it as a tool much like Adobe's Photoshop or Apple's iMovie, but one that can create things that don't exist. And those using AI tools in the lab might see it as a kind of supercomputer that can detect cancers or fold complex proteins.
All of these things are applications of current AI engines – so-called large language models, or generative pre-trained transformers (which is what the GPT in ChatGPT stands for). In a sense, they are just tools, like a slide rule, a personal computer, or a steam-powered locomotive. But in the aggregate, all of these tools – and newer ones that are still being developed – look a lot like a tidal wave of disruption that could sweep through virtually every industry. In other words, AI looks a lot more like the Industrial Revolution, in which mechanical processes took over a host of different industries, leading to the extinction of some jobs and the creation of others – new tasks that no one had even thought of before the machines came. In economic terms, it's a classic example of what Joseph Schumpeter called "creative destruction," in which new innovations replace and make obsolete older ones. So electricity replaces fire, cars replace horse-drawn carriages, and refrigeration replaces ice harvesting.
In their book AI Snake Oil, Princeton computer scientist Arvind Narayanan and PhD student Sayash Kapoor argue (among other things) that the blanket term AI covers a wide range of different technologies and methods, some reliable and others not. It's as if we had no terms for different forms of transportation, they write – only the word "vehicle." In such a world, they say, there would be "furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks." Someone who only uses ChatGPT to generate grocery lists or recipes probably thinks all the talk of an AI-powered apocalypse is nonsensical, because they may be unaware that modern LLMs have been shown to lie about both their behavior and their motivations, especially when given an incentive to do so – something we know in part because Anthropic continues to do research to try to understand why Claude does what it does.
In a recent speech, author Neal Stephenson said that he sees parallels between AI and the Manhattan Project – not because it produced weapons that obliterated hundreds of thousands of people without warning, but because the bombs dropped on Hiroshima and Nagasaki came as a complete surprise to the average person, even though the neutron had been discovered in 1932 and work had been underway for decades to learn more about nuclear reactions and their potential applications. The arms race that followed, he says, in which the US and the USSR spent billions trying to outdo each other by blowing up South Pacific islands and other things, made it seem as if this was the only thing that people could do with nuclear science. But of course this wasn't the case:
Looking at those black-and-white images of mushroom clouds over the Pacific, ordinary people might have reflected that some of the earlier and more mundane applications, such as radium watch dials that enabled you to tell what time it was in the dark, x-rays for diagnosing ailments without surgery, and treatments for cancer, seemed a lot more useful and a lot less threatening. Likewise today a graphic artist who is faced with the prospect of his or her career being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Augmentation and amputation

I don't really have an opinion about the likelihood of AI doom, to be quite honest. But I do think there's a good chance that the development of artificial intelligence – including all the various engines and varieties, and newer ones that are in the process of being developed – is not just an accumulation of new tools, like the personal computer or the slide rule, but something potentially much more significant, on the scale of the Industrial Revolution, or possibly the discovery of fire or the development of written language. On that note, it's worth pointing out that even writing itself was once seen by some, back when it was still a novelty, as an inherently suspicious activity (much as the printing press and books would later be seen), a pursuit that might even ruin society by making people less intelligent, by encouraging them to use their thinking skills a lot less. Here's an excerpt from a piece that Kwame Appiah wrote recently for The Atlantic on the economic and social risks of a wave of AI-powered "de-skilling":
In the Phaedrus, which dates to the fourth century BCE, Socrates recounts a myth in which the Egyptian god Thoth offers King Thamus the gift of writing — “a recipe for memory and for wisdom.” Thamus is unmoved. Writing, he warns, will do the opposite: It will breed forgetfulness, letting people trade the labor of recollection for marks on papyrus, mistaking the appearance of understanding for the thing itself. Socrates sides with Thamus. Written words, he complains, never answer your particular questions; reply to everyone the same way, sage and fool alike; and are helpless when they’re misunderstood.
As Appiah notes, the term "de-skilling" is pretty vague. He calls it "a catchall term for losses of very different kinds: some costly, some trivial, some oddly generative. To grasp what’s at stake, we have to look closely at the ways that skill frays, fades, or mutates when new technologies arrive." He points out that when sailors began using sextants to find their way at sea, they left behind the ability to read the stars the way that their ancestors did, and satellite navigation did the same thing to sextant skills. Owning a car once meant that you had to know how to patch tubes, how to set the ignition timing, how to replace fan belts and what the right fuel-oxygen mix was. "The slide rule, starting in the 17th century, reduced the need for expertise at mental math," Appiah writes, and then centuries later, the pocket calculator also made people uneasy. As Stephenson notes, media theorist Marshall McLuhan said that every augmentation is also an amputation, meaning that every new invention leaves something else behind.
So what will AI augment and what will it leave behind? If you believe some of the scare-mongering about ChatGPT use, it could wind up leaving behind thinking as we know it. And there is no question that widespread use of AI could allow some specific skills to atrophy. But Appiah and others argue that it could also sharpen some skills, or elevate them to a different level. For example, if a lot of low-level programming gets taken over by AI tools using what the kids like to call "vibe coding," some would argue that younger programmers won't learn the fundamentals of the trade, and won't be able to spot errors or do the work themselves if AI isn't available – like the staff at the stock exchange who were left adrift when the computers went down, because no one remembered how to write out buy and sell orders by hand the old way. But others would say that handing over low-level programming to AI tools will give human programmers time to work on higher-level problems. Here's Appiah again:
When people talk about de-skilling, they usually picture an individual who’s lost a knack for something — the pilot whose hand-flying gets rusty, the doctor who misses tumors without an AI assist. But most modern work is collaborative, and the arrival of AI hasn’t changed that. Most forms of de-skilling, if you take the long view, are benign. Some skills became obsolete because the infrastructure that sustained them also had. Telegraphy required fluency in dots and dashes; linotype, a deft hand at a molten-metal keyboard; flatbed film editing, the touch of a grease pencil and splicing tape, plus a mental map of where scenes lived across reels and soundtracks. When the telegraph lines, hot-metal presses, and celluloid reels disappeared, so did the crafts they supported.
The decline of ice harvesting

Obviously, if you were a linotype operator or an ice harvester, the arrival of commercial printing and industrial refrigeration wasn't a great thing. But this is the way that society has always progressed, and there are plenty of examples of retraining leading to different jobs once a specific type of technology is no longer useful. Harvard economist David Deming wrote recently about how the introduction of mechanical switching for telephone networks in the early part of the last century put thousands of human operators – primarily women – out of work. But at least some of those operators, and younger people who might once have considered that line of work, became typists and secretaries, driven by a new technology: the typewriter. "Over a period where the typewriter was rapidly adopted, the occupation of typists and secretaries grew sixfold," he writes. Appiah also points out that de-skilling can be democratizing:
For scientists who struggle with English, chatbots can smooth the drafting of institutional-review-board statements, clearing a linguistic hurdle that has little to do with the quality of their research. De-skilling here broadens access. Or think of Sennett’s bakery and the Greek men who used to staff the kitchen floor. The ovens burned their arms, the old-fashioned dough beaters pulled their muscles, and heavy trays of loaves strained their back. By the ’90s, when the system ran on a Windows controller, the workforce looked different: A multiethnic mix of men and women stood at the screens, tapping icons. The craft had shrunk; the eligible workforce had grown. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now.
Laura Veldkamp, a finance professor at Columbia Business School, says her research predicts a 5-percent decline in the labor share of income due to AI, which is close to the kind of historical shift brought about by the Industrial Revolution. But interestingly, in the context of AI, this doesn't always result in a comparable loss of jobs. "There’s been a lot of talk of AI replacing labor. That didn’t happen in finance," says Veldkamp. "In fact, there was more hiring. Firms that are adopting AI are hiring people with AI skills." The Fraser Institute compares it to the introduction of electricity: electric light enabled factories to operate over longer hours, allowed miners to work faster because it reduced the risk of explosions caused by gas lamps, and made surgery safer by providing consistent illumination. Many of these developments required new skills and created more (and arguably better) jobs.
One thing that is likely to be different in an AI-driven economic upheaval is the speed with which it may arrive: more than 50 per cent of the US population had used ChatGPT within 10 months of its launch, while the internet took almost two decades to reach the same level. The steam engine, by comparison, took almost a century: the first one was installed in the United States at the beginning of the 19th century, and adoption didn't peak until around the turn of the next. Georgetown professor Harry Holzer, a former chief economist for the US Department of Labor, notes that the arrival of the automobile put horse-and-buggy drivers out of business, but "the number of jobs that opened up in the auto industry — especially once Henry Ford created the assembly line and the Model T dropped in price, creating enormous demand in ways that couldn’t have been conceptualized 10 or 20 years earlier — produced not just new categories of jobs, but enormous new numbers of jobs.” Here's Deming again:
Generative AI commodifies the manipulation of digital information. It may take half a century, but I believe it will eventually lead to the extinction of office and administrative support jobs like administrative assistants and financial clerks. The entire purpose of these jobs is to lower the cost of transmitting and storing information. In the long-run, AI will drive the cost of “routine” information processing down to nearly zero, eventually eliminating the need for most human labor in those jobs. There is a clear analogy here to the impact of mechanization on farm labor. For most of human history, the bottleneck to increasing food production was physical power. Steam and electricity eventually relaxed that constraint, and farm work mostly disappeared.
So if AI is the computer-powered equivalent of the invention of the steam engine or the mechanical loom, and removes the need for information storage, retrieval and transmission-related jobs – the guy who pulls together the insurance report, or the finance staffer who does some research on lending practices, or the legal intern who goes through law books to find a precedent – what other kinds of jobs might it create? If it extinguishes or amputates something from the past, to use McLuhan's phrase, what new skills or technologies or abilities will it augment? If I knew, I would be investing in them 😄
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.