Using AI to write isn't always wrong and other heresies

I'm a writer, as some (hopefully most) of you know. I've been a writer and journalist for more than 40 years now. It's one of the few things I really know how to do, and it's about the only way I have ever managed to make any money, so not surprisingly, I am pretty attached to it. As a writer, I think many people assume that I would belong to the "AI writing over my dead body" group in terms of the current debate over artificial intelligence and writing-slash-journalism. These are the kinds of folks behind a recent campaign to convince publishers not to deal with those who use AI: it asks writers not only to renounce AI and promise they will never use it, but also to refuse to support or do business with writers who do use AI. There's been a dramatic increase in that kind of sentiment recently, which isn't surprising, since there seems to have been a pretty dramatic increase in the number of writers and journalists who are happy to use AI. I think the important question is: What are they using AI for? And is that defensible?

If you believe that everything AI is involved with is worthless "slop," you should probably stop reading. As with most things (apart from a few exceptions) I think there is a place for most tools when it comes to doing the work, and to me AI is just another tool, much like the printing press or the typewriter or the internet. I'm old enough to remember when people were pretty upset about the internet and the impact it was going to have on creative pursuits or the world in general (no, I don't remember the arrival of the printing press, contrary to what my kids might think). As one of the first staffers at the Globe and Mail's live news site in 2000, I wrote an inaugural column about how great the internet was for writers like me — the ability to have our work read (and commented on) by large numbers of people with little or no friction. Did I regret some of those words after a decade or so online, especially the comment part? Sure I did. But on balance I still think it was and is mostly good. After all, it makes it easy for me to send you this!

I realize that artificial intelligence and everything it involves — the training on data that AI companies don't have the rights to, for example, or the fact that it sometimes encourages people to believe that they should kill themselves — makes it somewhat different from the printing press or even the internet (although I would argue not as much as some seem to think). Then there's the whole "will AI kill everyone" question, which I'm not really equipped to answer. But in terms of a tool that can help with writing, or pretty much any other task, I think it makes perfect sense — in certain contexts. Is it going to pollute the internet with slop? Of course. But so have countless human beings over the past few decades. So that's a difference of magnitude, rather than a difference in kind. Is it going to take some people's jobs? Of course — just as countless other technologies have, from the automated loom to the colour printer or the electronic calculator. But it could also create new jobs along the way. Will they be as good? I have no idea.

I've felt this way for some time now, but my interest in writing about it was sparked by a spate of articles and comments from and about journalists and other writers who either use or are horrified by AI. Megan McArdle, a writer with the Washington Post, mentioned on X that she uses AI to help her write, and later wrote about how angry a number of people got about it. She said she rarely reads the summaries that AI engines come up with, and she never lets it touch her writing directly, but that she finds it "enormously helpful as a super search engine, data downloader and interlocutor to steelman opposing views," and that it also works as a supplementary fact-checker. However, McArdle said that the negative reaction to her comments suggests that "many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine" and are horrified that a journalist would do so.

One example of the tone of this reaction can be seen in a newsletter from Rusty Foster, the author of Today In Tabs, who devoted an entire issue to the question of who is using AI and who isn't — and what we (allegedly) know about them as a result. Here's an excerpt of what he wrote about McArdle and others:

After twenty-five years of dumb opinions clumsily expressed, is it any wonder that Ms. M is happy to turn over that labor to a device? AI couldn’t be worse at her job than she is, and anyway, being incompetent has never proven any hindrance in her grimly illustrious career filling the endowed Libertarian chair at a range of publications that wanted to help their other conservatives appear serious and thoughtful, if only by comparison. Kind, good, happy, secure people never go AI. They may be the hard-working columnist, the former blogger, the independent media entrepreneur, or the virtuosic book critic—you’ll never make sloppers out of them. But the bored pseudo-intellectual, the rich and scared speculator, the fearful ink cannon, the fellow who has achieved success by smelling out the wind of success—they would all go AI in a crisis.

Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. Thanks for reading! You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

Like an AI rewrite desk

As a former hard-working columnist and blogger who occasionally uses AI tools to research or summarize information as I'm writing, I take exception to Rusty's description :-) And I'm sure McArdle would as well, for obvious reasons. It's a pretty ad hominem argument! I have been a big fan of Today In Tabs since its earliest incarnation, but I think Foster's description of McArdle and others who might use AI tools is over the top in an unnecessary way. Kevin Roose, the New York Times writer, has "turned flack for the machines," Foster says. And why not, he asks? After all, he is apparently "not known or valued for his elegance of expression." It's an example of how polarizing a topic the use of AI for writing is: if you soil yourself with it, you are unclean, and all your work is suspect. And more than that, it calls into question your entire career, and suggests that you were never worthy of it to begin with. You are a slopper — a pseudo-intellectual and a thoughtless ink cannon (which I have to admit is a great phrase).

Here's how McArdle describes her approach to using AI:

If you’ve used Google, you’ve allowed a complex algorithm to shape what you know and how you think. We’re not arguing about whether machines can ever touch our work. We’re arguing about where to draw the line. My line is that I outsource tedious tasks such as “searching the web” or “finding data buried in the footnotes” or “clicking through janky websites.” I use AI judiciously to play roles that other humans have always played for writers, such as sounding board or fact-checker, but never involve it in outlining or writing a column or editorial. I draw the line there because my answer to the question “What is writing for?” is that writing — my kind of writing, at least — is a way that humans learn together.

I think what McArdle is describing is defensible! If that makes me a slopper, or someone who has become a flack for the machines, then so be it. I think the important question is not whether every aspect of AI is bad per se, but which uses help the craft of writing or journalism, and which do not. I'm sure there were those in the past who felt that books and other manuscripts should only be produced by a quill pen, and that printing presses were an abomination (in fact, the church did its best to push this message for as long as it could). Hemingway used to write standing up with a pencil, and thought it was a good day if he wrote a single sentence. But we use the internet and other tools all the time now, and they have arguably expanded the supply of good writing (and bad), and made it a lot easier for people to reach potential audiences. On balance, I think that makes them good. Is something lost in the process? Undoubtedly.

A recent Wired article described how a number of journalists use AI, including Alex Heath, who recently left The Verge and went independent on Substack. He says he records ideas with a microphone and then uses Claude to transcribe them and write a first draft. He has a Ten Commandments-style document that tells Claude exactly how to write in his style, including instructions on how he wants his pieces to be structured. I'm sure Rusty and others would be horrified at this, and it may be farther than I would go with my own writing, but I'm not here to condemn it. Wired describes how a number of journalists compared AI writing tools to the old newspaper "rewrite desk," which would take calls from foreign correspondents and put their words into shape for editing (I did this for a short time on the night shift at the Globe and Mail in the mid-1980s).

In the New York Times, Jasmine Sun describes how she uses AI:

I fed the chatbot Claude an archive of my past writing, along with notes about what worked and didn’t about each piece. I used this to create a custom editing rubric based on my voice. Some criteria are generic, and others are personalized. I dumped this guidance into a Claude project along with a reminder of its role: “You are not a co-writer. You cannot perceive. Your role is to help Jasmine write like the best version of herself.” This AI editor has become a valuable part of my process. Like any reader, it’s not always right. I am careful not to let it trap me into one narrow stylistic lane. But Claude pushes me to iterate and improve faster than I could alone, pointing out where my execution failed to meet the standards of my own taste. “Stop trying to write the ending as a thesis and write it as a scene,” it told me while editing a recent post.

Clickbait has always been with us

This all sounds fine to me, I should note — although I'm sure Rusty and others would disagree. But that said, there are some obvious places where I think AI and writing don't necessarily mix well, and where AI is probably doing too much of the work. The Wall Street Journal had a great example in a recent piece about a Fortune writer named Nick Lichtenberg, who according to the paper "produced more stories in six months than any of his colleagues at Fortune delivered in a year." Since last July, he has apparently written more than 600 stories for the magazine, most of them short and based on a single fact or quote. One Wednesday in February, he cranked out seven. "While many journalists hit the phones and cultivate source relationships," the Journal writes, "when news breaks Lichtenberg often uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly." His articles accounted for 20 percent of Fortune's traffic in the second half of last year.

Does this surprise me? Not at all. As Max Read mentioned in his analysis of this phenomenon, the traffic-accumulation game (or "ink cannon" game, if you will) has been going on almost as long as the consumer internet has been around. I remember writing at Gigaom about Demand Media, which used search algorithms as prompts for stories, or about SEO tactics like "What Time Is The Superbowl?" or when Gawker hired a guy named Neetzan Zimmerman because he was able to use search analytics to figure out what people were interested in and get millions of clicks. Any one of these people would have used AI in a heartbeat, and all of their publications would have as well. When I was writing for Fortune, an editor told me (not long before it was sold to a Thai billionaire) that we should forget about our beats and "just write about what's trending on Google" so that we could meet our overly aggressive advertising metrics.

Is this great for writing or journalism, or for society in general? Clearly not. But these things have a way of working themselves out. Whenever someone is grasping for easy traffic, it is almost always a death knell. In the early days of newspapering, the New York Sun wrote a feature about the bat-winged people who live on the moon (the paper later claimed that it was satire). Did it help the Sun? Sure it did. Is the Sun still around? It is not. The same goes for the newspaper where legendary truth-teller Benjamin Franklin planted fake stories about how the British were paying indigenous people to kill American settlers. I'm not even going to mention William Randolph Hearst and his yellow journalism. My point is that "journalism" has always been subject to this kind of craven endeavour, so it should surprise no one that AI would get dragged into it somehow.

Similar kinds of shenanigans have always gone on in the book world (fake books, etc). The use of AI just makes it that much easier. Romance writing, for example, is in many ways as formulaic as computer code but with more adjectives, and so it seems that many writers are already using AI to enable them to write dozens more books than they would otherwise have been able to. Is anyone harmed by this? Perhaps, but not any more than they would have been before AI. I came across an "author" on Amazon who has written more than 300 books, all of them about sports figures, all with suspiciously similar titles and book cover photos, and probably language as well. Did someone just tell AI to rewrite the Wikipedia entries of these athletes? No doubt. Is this a good thing? Definitely not. But again, I wonder who is being harmed. If someone wants to pay $3 for one of this person's books and enjoys it, I say more power to them both.

Should everyone who uses AI in any way add a disclaimer? I'm a little torn on this, to be honest. Obviously it's a good thing to disclose if your piece was written either entirely or largely by AI. But what if you just used AI for transcribing an interview, or for some research that you then rewrote? Should we disclose that? No one discloses when they use Google search algorithms to decide what angle to take. Becca Rothfeld wrote that "fact-checking, generating ideas, and shaping questions for interviewees are not grunt work, they are at the heart of the journalistic process, and if you cannot do them yourself, you should leave your coveted full-time writing position to someone who can." What if I get a fellow editor to help me with that? Is that okay? What about a transcription service? I wish it were as binary as AI critics seem to think it is, but it just isn't. It's a tool, and we as a society are still working out how and when to use it. So it goes.

Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.