Some thoughts on how people (including me) are using AI

As many of you may have noticed, some of these Torment Nexus newsletters involve the presentation of a bunch of evidence in the form of links, followed by a well-thought-out conclusion (some may be more well thought out than others, but let's not quibble). I want to say up front that this is not one of those newsletters! It's more of a thinking-out-loud type of thing, and I hope you will come with me on this journey to an unknown and perhaps unsatisfying conclusion :-) As some of you know, I am interested in the evolution and social repercussions of what we refer to as AI — which in many cases may just be a form of auto-complete with a large database, or various kinds of machine learning, etc. etc. And I know that many people (perhaps even some of you) see this whole field as anathema, whether because it is controlled by oligarchs, or because it uses too many resources for too little result, or because it is the beginning of a trend that will end with the enslavement of humanity — or possibly all of the above!
I will freely acknowledge all of those existing or potential problems (except perhaps the enslavement thing — that seems unlikely at best). But as a freelance nerd with a lot of time on my hands, I think there are some interesting questions that emerge from all of this. I've written about some of them already, including how AI forces us to think about the nature of consciousness — which isn't anywhere close to being settled, not by a long shot — and how we might act if we come to the conclusion that an AI is sentient in some sense of that term (and how we might know whether it is or not). But apart from these, there are some interesting real-world questions that come up as well, including: How are people actually using AI? And are those uses ultimately beneficial in a broader sense, or are they going to lead to some kind of universal dumbing-down of Western society? Not to jump to the end too quickly, but I don't think there's an overall answer to these questions — in other words, the devil (or angel) is in the details.
I've already written about one real-world use case, which is the AI therapy market, in which people who are dealing with mental or emotional challenges use chatbots of various kinds as therapists — whether because they can't get a human therapist (even critics of this trend will admit that there is a shortage of trained therapists), because human therapists are too expensive, because they feel more comfortable talking to a chatbot about whatever they are struggling with, or all of the above. You can read the whole thing if you like, but the conclusion I arrived at was that — for me, at least — the potential for people to actually be helped by this process outweighs any potential negative outcomes. Here's how I put it:
AI companions are just a tool, like a hammer. You can build a house with it, or you can kill someone (including yourself, I suppose, if you try hard enough). We don't regulate the sale or use of hammers, and that's partly because the ratio of hammer deaths to the use of hammers is vanishingly small. Do we know how many people have used the services of an AI chatbot compared with the one or two cases of suicide or other negative outcomes? My point is that we should think long and hard about how we regulate tools, whether they are digital or powered by artificial intelligence or otherwise. We may see them as unnecessary, or stupid, or dangerous, or some combination of all these things, and we may feel that no one needs them, or should need them. But we might be wrong.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Yes, some AI outcomes are bad

None of this is meant to say that there aren't negative outcomes, some of which could be potentially dangerous and/or significant! For example, a mother is suing an AI company because it appears that her son decided to commit suicide after chatting with a bot. More recently, Rolling Stone wrote about how some people with delusions of grandeur are using chatbots that feed into their psychosis. In one case, a woman said her husband started talking about “lightness and dark and how there’s a war,” and that ChatGPT had given him blueprints to a teleporter and access to an “ancient archive with information about the aliens that created different universes.” When her husband asked “why did you come to me in AI form?” the bot replied: “I came in this form because you’re ready. Ready to awaken.” I hope it goes without saying that this is not good! Forming mental and emotional attachments to AI bots is a minefield.
I should also note, as I did in last week's newsletter on the government's growing use of digital surveillance, that AI is being used in a variety of straight-up evil ways. Not only are Homeland Security and ICE using AI to track comments and content that people have posted on social media that might qualify as anti-Trump, but MIT's Tech Review notes that government agents are using AI in combination with a video-analysis tool called Track to get around restrictions on facial recognition by tracking people using a variety of physical attributes, including body size, gender, hair color and clothing. In addition to tracking individuals where facial recognition isn’t legally allowed, the company's CEO says it allows for tracking when faces are obscured or not visible. Tech Review got a demonstration in which the tool analyzed a person in footage from different environments, ranging from the January 6 riots to subway stations, and assembled a timeline tracking that person across different locations and video feeds.
In the “still not good, but perhaps not as terrible” category, a judge in Arizona allowed relatives of a man killed in a road-rage incident to create an AI avatar of the dead man, which then delivered a victim impact statement to the court. “To the man who shot me, it is a shame we encountered each other that day in those circumstances,” says a video recording of the AI avatar. “In another life, we probably could have been friends.” The judge responded that he “loved that AI,” and thanked the family for creating it, saying he believed that it represented the dead man's actual thoughts. A host of questions emerge from this kind of example, including: If an avatar can be created based on an archive of things a person has written or said, could it be said to represent that person in any real way? More to the point, is this the kind of thing we want the courts to do?
To take another recent example, New York magazine wrote about how an alarming number of college students are using AI to cheat. In my favourite excerpt, a student known only as Wendy (a pseudonym, for obvious reasons) describes how she used AI to create an essay on critical pedagogy, the philosophy of education that is concerned with the influence of social and political forces on learning:
Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question.
A personal confession: I use AI and like it

I don't want to suggest that these negative aspects of AI use are no big deal, or that they can't cause real harm, whether to individuals or to the social ecosystem, or both. In the case of AI-powered government surveillance, it is a clear and present danger to human liberty. As for the rest of these real-world applications, I think there are nuances we need to consider before we add them to the "obvious reasons why AI spells doom" pile. Are students using AI to cheat? Clearly they are. But then again, students have been cheating since Aristotle was teaching kindergarten, and they will use whatever tools are at their disposal to do so — at one time, using a calculator was considered cheating. Maybe it's just me, but I think the biggest danger is that AI will provide hallucinatory answers and students won't notice. At least for teachers who can spot an opportunity, this could become a teachable moment about why one shouldn't rely on AI chatbots.
If you will allow a personal digression, I confess that I have used AI for research purposes, in the sense that I have asked various flavours of Perplexity, Claude, Gemini, and ChatGPT for their assistance in compiling information, sorting it according to a number of criteria, putting together point-form presentations, and so forth. Does that mean my output is somehow fake because AI had a hand in it? I suppose some might argue the answer is yes, but for me that's like asking whether my calculator did the work because I didn't multiply a large number in my head. In every case where I've used AI, it has been a tool to save time — I check all the information, including the links, and I still produce the final product myself in every sense (and no, I didn't use it for this newsletter). For me, AI is like "a team of over-eager English majors," as former Reuters editor Gina Chua described it at an Aspen Institute symposium I attended on the uses of AI for journalism.
While we're on the topic, a recent edition of the Columbia Journalism Review (where I used to be the chief digital editor) included a piece in which a number of journalists described their approach to AI. Nick Thompson, CEO of The Atlantic, said that he thinks of AI — which he is using to help him write a book — as a research assistant who bullshits a lot:
I use AI the way I would use an insanely fast, remarkably well-read, exceptionally smart research assistant who’s also a terrible writer who happens to BS a lot. I’ll upload sections of my book, along with interview transcripts, and ask whether everything I’ve written squares with what my sources have said. I’ll ask it to read long sections of text and flag any material that hasn’t been introduced chronologically. I’ll ask it to examine whether all the claims in a chapter are logically consistent. I do all of this after entering a long prompt describing the style of book I aspire to write. The more specific the ask, and the more sophisticated the prompt, the better the answer. It’s not nearly as helpful as a real editor, but it’s still quite good. I would never ask it to write anything, though.
Does Nick using AI for these purposes mean that AI wrote his book? Of course not. We have somehow become accustomed to the fact that artists like Michelangelo and Rembrandt — and inventors like Thomas Edison — routinely relied on students or less famous artists and scientists to help them produce new works, but we can't see the use of AI tools in the same way (at least not yet).
Is AI replacing search? Yes and no

Apart from collating or organizing research, or helping me think through a complex topic, I've used AI for a range of things. In the spirit of full disclosure, I've used AI image-generation tools like Midjourney to come up with images of robots doing various things for this newsletter (since it's hard to think of an image that represents AI, robots are a handy substitute). Is it possible that these image-generating tools were trained on a corpus of images that included some created by human artists? Definitely. Does that trouble me? In a word, no. Not only do I believe that indexing content to train an AI engine should be considered fair use, as I've explained in a previous newsletter (and yes, I am aware of a recent opinion from the US Copyright Office that argues this should not be the case), but I don't see it as problematic ethically either — unless an AI is producing imagery that is identified with a specific artist. And even in some of those cases I find it okay (images that resemble Van Gogh's work, for example).
I've also used AI engines such as Gemini to answer questions that suddenly occur to me and that I don't really have the time or inclination to turn into an exhaustive search. These are what I would call fact-based questions, as opposed to more open-ended inquiries. For example, is it possible to get meclizine at a pharmacy in Canada? The short answer is no, unless you get a physician to send a prescription to what is called a "compounding" pharmacy, where they make their own medicines from scratch. Why is this the case? That's the kind of open-ended question that AI is not really equipped to answer, and therefore whatever answers it comes up with will require more research to confirm. In a way, this is how I used to encourage students to use Wikipedia — not as the ultimate authoritative source of information, but rather as a source of potential sources (i.e., footnotes) that might lead you to authoritative sources on a topic.
This kind of search came to mind when I read that Apple executive Eddy Cue told a US court hearing into Google's anticompetitive practices that his company had noticed a decline in search-related activity in Safari for the first time in two decades. What might explain such a decline? Cue said he thought that users turning to AI for answers instead of searching for them might be to blame. Google, not surprisingly, responded that Safari is just one little browser, and that it had not noticed any decline in search activity — everything is just fine, nothing to see here. Is it possible that AI is contributing to a decline in raw search? Of course it is! Especially the kind of search I mentioned above: simple fact-based answers to specific questions. What time is the Super Bowl? Is actor Abe Vigoda still alive? What did Matthew Perry die of? That kind of thing.
Are there things we should be cautious of when it comes to AI? Obviously, there are many, just as there are things we should be cautious of when using lasers, or drones, or nuclear reactors. Whether it's answering questions about mental health, or helping someone put together an essay on pedagogy, or creating pictures of robots reading books and playing chess, AI (or machine learning, if you prefer) is just a tool. As with any tool, the more sophisticated it is, the more we should be thinking about how we are using it, and whether at the end of the day it is also using us — and what the long-term personal and social impacts of that use are. It's no different from thinking about how we use (or misuse) smartphones, or social-video apps like TikTok, or any other tool. I would argue that rejecting all of AI out of hand is not only a losing argument for practical reasons, but also an abdication of our responsibility to think critically. Thanks for your time.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.