Does fact-checking even work?

At the risk of making things too personal, the recent election of Donald Trump to a second term as president triggered some pretty severe negative flashbacks for me, and I'm sure that I'm not the only one. I remember waking up in 2016 after he was elected and saying to someone (perhaps myself) that it felt like journalism had failed. For months, newspapers and TV networks had been reporting the details of Trump's various indiscretions and even outright crimes: the tape in which he bragged about getting away with sexual assault, the fraud, the payments to a former porn star to keep quiet about an affair, and so on.

Every day, it seemed as though dozens of lies were being fact-checked rigorously by journalists, including during TV debates. All of that effort went into setting the record straight, into showing how Trump lied not just for specific political purposes but flagrantly and enthusiastically, for no reason at all. There were multiple stories proving that he was a philanderer and a terrible businessman who lied about his net worth, someone who talked about Christian values but had been accused of sexual misconduct by twenty-seven women and found liable by a jury in a civil sexual-abuse case. How could so many people have voted for him anyway?

In a recent edition of this newsletter, I wrote about how it's tempting to blame social media for the outcome of the election, to see Facebook and Twitter and YouTube and TikTok as the source of the problem:

It's tempting to blame what happened on Tuesday night on social media in one form or another. Maybe you think that Musk used Twitter to platform white supremacists and swing voters to Trump, or that Facebook promoted Russian troll accounts posting AI-generated deepfakes of Kamala Harris eating cats and dogs, or that TikTok polarized voters using a combination of soft-core porn and Chinese-style indoctrination videos to change minds — and so on. In the end, that is too simple an explanation, just as blaming the New York Times' coverage of the race is too simple, or accusing more than half of the American electorate of being too stupid to see Trump for what he really is. They saw it, and they voted for him anyway. That's the reality.

Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

King Canute and the tides

As I wrote in that earlier piece, it's become accepted wisdom that social media somehow convinces people that the world is flat or that birds aren't real or that people are selling babies and shipping them inside pieces of Wayfair furniture (I am not making this up). That's why we see articles about how terrible it is that social platforms have "given up" on fact-checking misinformation on their networks. But is there any proof that social media either convinces people to believe things that aren't true, or increases the levels of polarization around political or social issues? The short answer to both of those questions is no. In that sense, social media is more of a symptom than it is a cause.

In a similar way, it's tempting to think that if we had fact-checked Trump's statements, and those of his henchmen — Elon Musk, JD Vance, etc. — a little more vigorously, or in different ways, or if we had just gotten people to share our fact-checks more frequently on the right networks, we could have stemmed this tide. King Canute ordered the ocean to recede in the 11th century and was less than successful (this story is often told in a way that suggests Canute was an idiot, but he was probably trying to show his courtiers that God's power was greater than that of a mere mortal). Fact-checkers had about as much success convincing people that Trump was a liar and a fraud and therefore should probably not be the leader of one of the world's most powerful nations.

Not for the first time, this outcome has prompted questions like the one I asked in the headline of this issue: Does fact-checking even work? There was a lot of wailing and gnashing of teeth from journalists over Meta's recent announcement that it was shutting down its global fact-checking program, which had been running since 2016. But one of the most interesting responses I read was from Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI and a former technologist with the Associated Press. Stray said that Meta may have shut down the program for political reasons, but we shouldn't lament this too much, because it wasn't really working — and in fact likely never would have worked in any kind of meaningful way:

Despite the excellent work of many individual fact checkers, the fact checking program as a whole was struggling in important ways. For such a program to work, it has to accomplish at least three things: Accurately and impartially label harmful falsehoods, maintain audience trust, and be big enough and fast enough to make a difference. Fact checkers can check only a handful of posts per day. Meta never released any data on the operation of the program, but a Columbia Journalism Review article mentions that all US fact checkers combined completed 70 fact checks in five days—or 14 items per day. And if fact checks take days to complete, then most people will view viral falsehoods before any label is applied.

Fact-checking's inescapable problem

In his announcement about the end of the fact-checking program, Zuckerberg said that Meta would be adopting the same approach as Twitter's (now X's) Community Notes system, which relies on crowdsourcing (YouTube has a similar program). But as Stray and others point out, the Twitter approach — originally known as Birdwatch — suffers from a similar problem: its fact-checks take a long time to get posted, so most people don't see them until it's arguably too late for them to have any effect. A meta-analysis in 2019 found that the effects of fact-checking on beliefs are "quite weak," and even negligible in many real-world scenarios. While fact-checking can be used to strengthen preexisting convictions, the authors wrote that "its credentials as a method to correct misinformation are significantly limited." Researchers have also shown that the impact of fact-checking dissipates fairly quickly.

Apart from the fact that most people will never see a fact check, fact-checking has always suffered from one other inescapable problem: it requires everyone to agree on what the facts are. If both sides don't agree, then a fact-check will not only fail to correct the problem, but in many cases will be used as evidence that the person doing the check is either deluded or has some kind of hidden agenda — in other words, it could actually make things worse. In 2017, Wall Street Journal editor James Taranto said that fact checking is "opinion journalism pretending to be some sort of heightened objectivity," and in a more recent look at the phenomenon, the New Yorker (famous for its fact-checking) wrote that "the provision of facts does not, in itself, engender trust."

Nicholas Carr, a journalist and author who writes a blog called New Cartographies, argued in a recent edition of his newsletter that the truth doesn't scale. Sometimes fact-checking is about the facts — you get a date wrong or you garble a quotation. But in many cases "it's fuzzier. It's about interpretation. Are you pushing the facts too far? Are you skewing the evidence? Are you drawing a clear enough line between opinion and fact? In summarizing some event or concept, are you distorting it? There are no clear-cut answers." Nate Silver, who started a statistics-based political blog called FiveThirtyEight in 2008, made what I think is a fair point in a recent piece, arguing that the creation of a separate practice known as fact-checking seems a little odd:

The notion of “fact-checking” as a separate subfield within journalism has always been strange. Fact-checking has long been an essential part of every journalist's job to the point where it doesn’t really need a name. There’s also the question of what claims are deemed as requiring a “fact check” or scrutinized for containing “misinformation” instead of being handled in the ordinary course of journalistic business. I suspect these are often precisely those claims that are either unresolved or unresolvable. Matters of opinion more than facts qua facts.

If a claim were easily refuted through regular journalistic methods, it would be. What filters through to the fact-checkers, who are rarely the journalists on the front lines of a story, are often the edge cases: half-truths and political hyperbole, or claims for which there’s no evidence either way, but a particular null hypothesis is privileged. Labeling these claims as dangerous misinformation or otherwise cordoning them off as out of bounds is essentially a bluff.

Fact-checking and weaponized uncertainty

In some ways, I think the rise of the fact-checking industry happened as a reaction to what Jay Rosen has called the "view from nowhere" approach to political journalism, which treats each side of even significant ethical questions (do transgender people have rights, for example) as equally valid, and covers politics like a horse race, focusing on winners and losers rather than on who is right. But even if you think there are more facts and alleged facts requiring checking now than during similar periods in our political history, it's still worth noting that fact-checking as a business (funded until recently by Meta) is a relatively recent phenomenon, and that to some extent it helps support an industry that Joe Bernstein once called "Big Disinfo."

In that piece, Bernstein compared the argument that disinformation changes people's perceptions or beliefs to the pitch from ad executives — or platforms like Meta — that advertising can somehow manipulate minds. Both are highly questionable at best, with little evidence that either proposition is true. I wrote recently for CJR about how misinformation works, and how most sociologists believe that these tactics don't change people's minds at all. Carl Miller, the research director at a UK think tank, told me that instead of spreading fake images or videos to get people to change their minds, most disinformation or influence campaigns simply agree with people's existing worldviews, and then "flatter them, confirm them, and then try to harness that."

Is there a lot of misinformation (incorrect facts) and disinformation (deliberate misrepresentation) out there? Definitely. Is there more than at any other time in history? Unlikely. I'm not saying we shouldn't correct mistakes or point out lies, but part of the problem is that a term like "disinformation" is almost impossible to define. It sounds more scientific than it really is, as though there were some kind of consensus on what is "true." Can we agree that the sky is blue? On most days, yes. But can we all agree that COVID was a natural phenomenon and didn't escape from a lab? Not even close. At one point, Facebook would have banned your account for suggesting the latter, but it changed that policy because reputable scientists disagree (I wrote about this, and about what some call "weaponized uncertainty," for the Columbia Journalism Review).

Obviously, one way to approach this problem is to just forge ahead, checking facts even if no one ever sees the corrections — or worse, sees them as a perverse kind of endorsement of their favorite hoax or conspiracy theory. We can't stop fact-checking lies and pointing out hypocrisy when we find it. But we also can't continue to behave as though fact-checking is going to magically change the world. Tom Rosenstiel of the University of Maryland noted in 2017 that misinformation is "not like plumbing, a problem you fix. It is a social condition, like crime." If religion has taught us anything, it is that people will believe whatever they want, regardless of what their eyes or brain or science tells them. Talking to someone like that about facts is like talking to a deaf person about the sound of music.

Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.