It's not that we lack free speech; it's that there's too much of it

The current battle over regulating speech online seems new, in part because the Trump administration and its allies are going to extreme lengths to police speech about the murder of Charlie Kirk (a man who trumpeted the benefits of free speech, but who is now the reason people are losing their jobs for saying negative things about him). This is a twisted situation, to be sure, a kind of right-wing pretzel logic that many Trump acolytes seem able to internalize without even noticing that their position contradicts itself in multiple ways. But it is only the latest bizarre spin on a struggle that has been going on almost since the internet was invented: How should we – internet users, as well as technological entities like social-media platforms and political entities like governments – behave in a world where the problem is not how to protect free speech, but how to cope with an excess of it, especially an excess driven by algorithms whose internal workings we have little knowledge of and even less control over?
I'm bringing all this up not just because of the Charlie Kirk situation, which is quite obviously a case of "Free speech for me but not for thee," but because YouTube just announced that it is going to reinstate the accounts it banned for spreading disinformation, whether about the dangers of COVID-19, the results of the 2020 election, or the January 6 riot. One of the banned accounts, of course, belonged to the current president of the United States, whose account was blocked because he posted comments that were perceived as inciting violence (it was restored in 2023). YouTube's decision also comes as the company fights a lawsuit launched by Trump over the ban on his account, and it has no doubt been following other lawsuits in which Meta, X, ABC and Paramount all paid huge sums of money to settle, even though there was arguably no legal basis whatsoever for Trump's claims.
The announcement of YouTube's reinstatement of the accounts came via a letter to Rep. Jim Jordan, the chairman of the House Judiciary Committee, who has been holding hearings into the alleged "censorship" he says was practiced by platforms like Google and Facebook during COVID and in the aftermath of the 2020 election, among other things. Here's how Google described what happened:
The COVID-19 pandemic was an unprecedented time in which online platforms had to reach decisions about how best to balance freedom of expression with responsibility, including responsibility with respect to the moderation of user-generated content that could result in real world harm. Senior Biden Administration officials conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies.
While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content. As online platforms, including Alphabet, grappled with these decisions, the Administration’s officials created a political atmosphere that sought to influence the actions of platforms based on their concerns regarding misinformation. It is unacceptable and wrong when any government, including the Biden Administration, attempts to dictate how the Company moderates content, and the Company has consistently fought against those efforts on First Amendment grounds.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
The devil is in the details

If you've been following this battle for a while now, you will know that what Google is talking about was the subject of a lawsuit involving what political types refer to as "jawboning" – a term used when the government tries to persuade companies to do its bidding by suggesting or hinting at the kind of conduct it wants to see, or even by threatening repercussions if certain actions aren't taken. There's an ocean of legal commentary and debate about what jawboning is and how to define it, but for me, a perfect example would be when Brendan Carr, the current head of the Federal Communications Commission, warned ABC about what might happen if it didn't remove late-night host Jimmy Kimmel for his remarks about Charlie Kirk's murder. As Carr put it in an interview on a right-wing podcast: "We can do this the easy way or we can do it the hard way."
Was that a threat? Possibly. But a threat of what, exactly, is unclear. That's classic jawboning. "That's a nice network you got there – be a shame if anything happened to it," to use the kind of language preferred by the mob, or anyone else who runs a protection racket. In its letter, Google tries to put its behavior in a broader context – describing how even the advice from recognized medical authorities changed over the course of the COVID pandemic, as did the consensus over whether the virus was the result of natural events or escaped from a research lab. As I've written before, the nature of truth with a capital T can change during periods of scientific upheaval, and what was once considered disinformation suddenly becomes fact.
Not surprisingly, the loudest responses to YouTube's announcement didn't take into account any of the nuances of this kind of process – or the nuances of anything, for that matter. For the most part, conservative voices saw the letter to Jordan as a flat-out admission that Google and YouTube caved to jawboning and threats from the Biden administration (although the letter doesn't say exactly what those threats were). Its publication set off an orgy of piling on by conservative and right-wing commentators on X, including Mike Cernovich and Elon Musk, all of whom crowed that they knew all along that YouTube was guilty, that Biden and his administration were engaged in widespread censorship, and that the company's admission means Google effectively helped rig the outcome of the 2020 election.
If we step back from the partisan political questions, the broader issue is how to reconcile the American principle of unfettered free speech – not just the First Amendment, which deals specifically with governmental restrictions, but the larger principle the amendment is an extension of – with the explosion of speech enabled by the internet and social platforms, and the algorithmic amplification of that speech by platforms like Facebook, X, YouTube and others. As Columbia Law professor Tim Wu wrote in 2017, the First Amendment was designed for a world in which speech was difficult and expensive, and subject to being stifled by governments, but that world and the one we live in now are polar opposites. The problem now is too much speech, not a lack of it. And the First Amendment – not to mention all the jurisprudence around it – has nothing at all to say about that. Here's Wu:
The First Amendment first came to life in the early twentieth century, when the main threat to the nation’s political speech environment was state suppression of dissidents. The jurisprudence of the First Amendment was shaped by that era. It presupposes an information-poor world, and it focuses exclusively on the protection of speakers from government, as if they were rare and delicate butterflies threatened by one terrible monster.
But today, speakers are more like moths—their supply is apparently endless. The massive decline in barriers to publishing makes information abundant, especially when speakers congregate on brightly lit matters of public controversy. The low costs of speaking have, paradoxically, made it easier to weaponize speech as a tool of speech control. The unfortunate truth is that cheap speech may be used to attack, harass, and silence as much as it is used to illuminate or debate. And the use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment to deal with.
Speech vs. algorithms

Simplistic takes on this topic – and there are many – argue that all speech should be free, and that there should be no restrictions at all on what is said, or on who may say it. Many people love to say that the only restriction should be that "you can't shout 'fire!' in a crowded theater," which is both factually and legally incorrect, as my friend Mike Masnick at TechDirt loves to point out (the case the statement refers to was an objectively terrible instance of government intrusion, and was later overturned; more here). Others believe the only thing that should be prevented is "hate speech," but they are incapable of coming up with a coherent definition of what should qualify as hate speech. Elon Musk and others like to say speech should be free, and make offers to finance the legal challenges of people whose tweets get them fired, but then fall silent when people are fired because of something they said about Charlie Kirk. Here's Wu again:
Consider three main assumptions that the law grew up with. The first is an underlying premise of informational scarcity. For years, it was taken for granted that few people would be willing to invest in speaking publicly. Relatedly, it was assumed that with respect to any given issue—say, the war—only a limited number of important speakers could compete in the “marketplace of ideas.” The second notable assumption arises from the first: listeners are assumed not to be overwhelmed with information, but rather to have abundant time and interest to be influenced by publicly presented views.
Finally, the government is assumed to be the main threat to the “marketplace of ideas” through its use of criminal law or other coercive instruments to target speakers (as opposed to listeners) with punishment or bans on publication. Without government intervention, this assumption goes, the marketplace of ideas operates well by itself. Each of these assumptions has, one way or another, become obsolete in the twenty-first century, due to the rise in importance of attention markets and changes in communications technologies.
Even if we accept that all speech should be free, that doesn't help us determine how to behave in a world of social platforms and algorithmic filtering. "All speech should be free" is a great slogan when the worst thing you have to worry about is a guy standing up at Speaker's Corner in London, or someone nailing pamphlets to a church door, but it is no help at all in trying to understand the repercussions of a tweet or an Instagram post or a TikTok video being algorithmically forwarded to billions of people around the world, devoid of any context. We might all agree that I should be allowed to say that people who belong to the Rohingya minority in Myanmar are vermin and ought to be exterminated, but what happens when Facebook promotes that view via its algorithm and thousands of innocent people are persecuted? Is that my fault or Facebook's?
With its letter to Jim Jordan, which is either an admission of defeat or a cynical play for Trump's favor (or possibly both), YouTube is essentially saying that there is nothing it can do – or at least nothing it chooses to do – about the hate or disinformation on its platform, even if its algorithm is promoting that content, and potentially even monetizing it. Meta has said fundamentally the same thing, and X was a lost cause long ago. It's one thing to say that the Biden administration shouldn't have put pressure on the platforms over COVID misinformation – something I might even agree with (except that we were in the middle of a global pandemic) – but it's another for the platforms to wash their hands of the problem completely. I'm not saying I have a solution; even Wu couldn't come up with much of one. But it would be nice to face this problem head-on instead of sweeping it under the carpet.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.