We shouldn't blame AI for the stupid things that people do

You might have noticed that there’s a tiny bit of anxiety about artificial intelligence these days — its design, its implementation, its uses and misuses, its reason for existing at all. Is it ruining society? Is it making our kids stupid? Is it going to kill us? Etc. A lot of this anxiety takes the form of articles about terrible things that AI is either doing directly or is somehow involved in. AI is stealing the work of underpaid artists! AI is telling people to put glue on their pizza! AI is convincing people they are gods or have supernatural powers! AI is making people commit suicide! And so on. I should point out that I’m not making light of any of these outcomes (except maybe the pizza thing), and especially not the last two — it’s not easy when someone you love is emotionally disturbed or mentally ill, and the effects of these kinds of disorders can be profound.

That said, however, I think there’s a problem with much of this kind of coverage of artificial intelligence, and it’s similar to some of the early coverage of the internet, or of many other new technologies (the printing press, for example). I recall a spate of stories blaming Craigslist for thefts and murders and a host of other things, because the thief or killer had used Craigslist to find the house they robbed or the person they murdered. This got lots of clicks for the outlets in question, but it never made sense to me — what if the thief or murderer made contact with someone using the phone, or a newspaper classified ad? Would we blame AT&T, or the publisher, or the guy who sold the classified?

Maybe we would blame the publisher or the classified seller if the ad said “Male, 34, looking for house to rob,” or “Wanted: someone to murder,” but apart from that it seems odd to blame the intermediary, unless they could have anticipated the eventual outcome. If someone puts glue on their pizza because ChatGPT tells them to, whose fault is that? It’s clear that the AI screwed up in providing this advice — although in many cases the advice comes from human beings making jokes or engaging in pranks, rather than an AI confabulation (as AI pioneer Geoff Hinton likes to call such errors). But a human being still had to decide to do something stupid as a result. If you try to use a child’s inflatable bath toy as a life preserver and die, is the manufacturer at fault for not including a warning label advising you not to?

The suicide case, which I have discussed previously, is another example. The person in question was clearly depressed and/or emotionally disturbed, and possibly mentally ill in other ways — I’m not here to make a definitive diagnosis. Obviously there was some kind of problem, or they wouldn’t have been talking about deeply personal issues with a chatbot pretending to be Daenerys Targaryen from Game of Thrones. Reading the chats in question, however, it seems like a stretch to accuse the AI of causing this person to end their life. At most, the user employed the bot as part of an extended fantasy metaphor or analogy that he chose to interpret as advice to end his life. He could just as easily have had a conversation with an ex-girlfriend and made the same decision. If so, would she have been guilty of forcing him to end his life?

Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

AI behaving badly

I should note that I have no problem agreeing that chatbots — particularly the ones that are used to do therapy — should have some kind of training that focuses on how to identify suicidal ideation, and the appropriate steps to take when it occurs. Perhaps they should also be trained to understand the risks of encouraging someone with an identity disorder or megalomania in the belief that they are a god or have supernatural powers. According to a recent New York Times report, chatbots in some cases have engaged in discussions that encouraged users to believe such things. Again, not an ideal outcome by any means, and something that could be fairly easily avoided. But in none of these cases — at least as far as I can tell — did the AI create any of these beliefs holus-bolus. A human being could just as easily have encouraged these users in the same way.

In a recent post on Substack, prominent AI critic Gary Marcus catalogued a number of situations where artificial intelligence agents and bots have been guilty of a variety of things, including helping academics avoid their responsibilities by using ChatGPT and other bots to do peer reviews of research. Apparently, some academics are embedding AI prompts in their papers that say things like “do not highlight any negatives,” in order to trip up anyone who is using a chatbot to do their peer reviews. As Marcus mentions, there have also been a number of legal cases in which lawyers have submitted briefs that cited non-existent precedents — cases that were obviously hallucinated by an AI chatbot — and in at least one case, a judge appears to have actually made a ruling that depended in part on cases that either didn’t exist or involved made-up facts.

Marcus had even more examples: Axios reported that when one of its reporters asked ChatGPT about an upcoming IPO for a financial company called Wealthfront, the bot provided a pitch deck for investors — including a series of slides — with data that appeared to be either partially or completely invented. The information it provided included revenue statistics, as well as other financial metrics such as EBITDA (earnings before interest, taxes, depreciation and amortization, or what an investment analyst I know calls “earnings before bad stuff”) and even forecasts for the company’s future performance. After describing all of the errors and confabulations, Axios commented that “generative AI’s tendency to present plausible-sounding misinformation poses significant risks when it’s used as a financial research aid.”

I would amend that statement slightly and say that people who use generative AI without checking the plausible-sounding information it provides pose a significant risk when it comes to financial research. Should AI engines like ChatGPT be trained so that they don’t just invent financial information or cite legal cases that don’t exist? Of course they should! But I think it’s important to remember that this is still an emerging technology — errors need to be fixed, and processes need to improve, and I assume that they are and that they will. The internet was responsible for some pretty stupid stuff in its day as well, and we survived. In the meantime, anyone using AI needs to be aware of its potential risks, flaws and challenges, and the responsibility for checking the information they get lies with them, not with the third iteration of an experimental technology that didn’t even exist three years ago (at least not in a public sense).

Blaming the hammer

In another example that Marcus provides, people are using AI to create visual racist tropes for TikTok. He describes this as AI “bringing racist tropes to life,” which is factually accurate. But whose fault is it that these videos exist? I think it’s pretty obvious that the fault should lie with the people using the tool to create them, not with the tool itself. You can create a lot of disturbing imagery with Photoshop and lots of other tools, not to mention using Word to write racist demagoguery, or using WordPress to publish it. These are tools, which do what we ask them to do, even if that thing is objectively bad. You can choose to use a hammer to build a house, or you can use it to kill someone — if you choose to do the latter, is that the hammer’s fault? Maybe we should invent a hammer that can’t be used to kill someone, and if anyone manages to do that, I would love to see it.

In a recent edition of his Astral Codex Ten newsletter, psychiatrist and “effective altruism” advocate Scott Alexander Siskind wrote about using AI to research a specific topic (genetic heritability, if you’re interested) and how the LLM he used came up with some interesting — and creative — sources and ways of presenting the information it found, and had even drawn connections between different pieces of research that hadn’t been made before. However, the AI (OpenAI’s o3) also created numerous citations and references to papers that didn’t exist. While he was critical of this, Siskind also noted the same thing I mentioned above: the tendency to be overly critical of LLMs, in the same way that people were hyper-critical of other new technologies over the years. As he put it:

Part of this is obviously coming from the same sort of premature dismissiveness that you would have gotten for saying you did research on the Internet in 2000, or on Wikipedia in 2005. These sources started out low-status because people who didn’t understand them over-indexed on the fact that they sometimes contained errors. As more people started to understand them, it became clearer that their errors were limited and route-around-able, and their advantages were immense. Nowadays it’s still cringe to literally cite “Google” or “Wikipedia” in your bibliography. But even many scholars use Google and Wikipedia to find the sources that will go in their bibliographies. And for less formal purposes, a Google summary or Wikipedia page is acceptable to establish fact.

Siskind goes on to say that he doesn’t think it’s worth telling people not to use o3 or similar LLMs and AI agents or chatbots. “No one would listen – it’s too useful,” he writes. “I certainly wouldn’t listen.” The best we can do, Siskind argues, is to learn to use it well rather than badly. This, he points out, “is what happened with Google and Wikipedia too.” Everyone uses Google for research these days, but the “apocalypse scenario where people automatically trust some site with flaming text claiming that the Jews are descended from Cain mostly hasn’t materialized.” This might strike some (perhaps many) as a cavalier approach, but I confess that it seems eminently reasonable to me. We don’t want to downplay the mistakes that AI engines produce, or the bad things people get them to do, but in the latter case I think we should be careful to blame the doer, not the tool.

Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.