The UK's well-meaning online safety law is a dumpster fire

Let's all agree, for the sake of argument, that there's a lot of bad stuff on the old interwebs, okay? I mean, there's some really bad stuff – and not just porn, but violence and death and that sort of thing. I would rather not go into detail about what's out there, but suffice it to say that a lot of it is not great. All the worst aspects of humanity on display, in other words. Things you wouldn't want your friends or family to see, let alone a child. So a government law like the UK's Online Safety Act must be a good thing, right? After all, one of its stated aims is to help protect children from being exposed to harmful or disturbing content online. Unfortunately, well-meaning as the British law may be, it has turned into a massive trainwreck – or dumpster fire, depending on your favorite metaphor for complete and utter chaos. And the same fate awaits similar laws elsewhere, including a few that members of Congress in the US have been working on for some time.
Just to get everyone up to speed, the Online Safety Act, which had been in the works for almost a decade, was passed by the UK parliament in 2023. It gives the government wide latitude to designate and suppress any online content that is deemed illegal or harmful to children. It also creates a so-called "duty of care" for online platforms, requiring them to take action against any content – legal or otherwise – that might be harmful to children, if children are likely to access it (this includes content related to bullying and potentially harmful physical stunts). Any platform found to have failed in this duty can be fined up to 10 percent of its annual revenue, which in the case of Facebook would be about $18 billion. The law also requires any platform, including those whose messaging apps are end-to-end encrypted (such as Apple's and Facebook's), to scan for child sexual abuse material, although the British government has said it may hold off on this requirement until it becomes "technically feasible," which it currently isn't.
So if this law was passed in 2023, why are people up in arms about it now? The short answer is that the "age gate" requirement of the law just went into effect, and it requires online services of all kinds to implement some way of determining who is a child – and therefore deserving of the Online Safety Act's protection from otherwise legal content – and who isn't. This, as some of you are probably already aware, is a lot easier said than done. And making this kind of law even halfway enforceable requires a massive invasion of privacy, not just for children but for anyone who uses services such as Facebook, YouTube, X, and Reddit – even if they use those services for completely legal purposes. How do you verify someone's age online? After all, the internet was designed in such a way that no one knows if you are a dog, let alone a human child. The only way to do it reliably is to use government ID, facial scanning, and other similarly invasive methods. Here's the Electronic Frontier Foundation:
Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate that the age of the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website. Once this information is shared to verify a user's age, there’s no way for people to know how it's going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement.
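To make that concrete, here is a minimal sketch, in Python with entirely hypothetical function and service names (not any real vendor's API), of what an age gate built this way has to do. The structural point is that the sensitive data leaves the user's device before any access decision is made, for every visitor, adult or child alike.

```python
# A minimal sketch of the age-gate flow the EFF describes. All names here
# are hypothetical; no real vendor SDK is shown. The point is structural:
# every visitor, adult or child, must hand over sensitive data before
# seeing gated content.

from dataclasses import dataclass

@dataclass
class AgeCheckResult:
    estimated_age: float   # face-scan estimators return an estimate, not an exact age
    method: str            # e.g. "gov_id" or "face_scan"

def verify_with_third_party(selfie_bytes: bytes) -> AgeCheckResult:
    """Stand-in for a call to an external estimator such as Yoti or Persona.

    In a real deployment this uploads the user's biometric data to a third
    party, which is exactly the retention and sharing risk the EFF flags:
    the platform cannot control what happens to the data afterward."""
    # Hypothetical canned response; a real service returns an age estimate.
    return AgeCheckResult(estimated_age=24.0, method="face_scan")

def gate_content(selfie_bytes: bytes, min_age: int = 18) -> bool:
    """Returns True if the visitor may see the gated content."""
    result = verify_with_third_party(selfie_bytes)
    # Note what has already happened by this line: the biometric data has
    # left the user's device regardless of whether access is granted.
    return result.estimated_age >= min_age

allowed = gate_content(b"...fake image bytes...")  # every visitor pays the privacy cost
```

Once that upload happens, the user has no way to audit retention, sharing, or resale – which is the EFF's core objection.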
A massive invasion of privacy

All of the above risks and concerns would apply even if the UK law and its age-verification requirements worked as they are supposed to, which at this point they most definitely do not. Ryan Broderick, who writes a tech and culture newsletter called Garbage Day, described the rollout of the verification rules as "an unmitigated disaster." Users of X (formerly known as Twitter) in the UK were blocked from seeing footage of a protest that turned violent after law enforcement attacked a mostly peaceful pro-Palestine demonstration in Leeds. Moderators on the Discord chat app were reportedly kicked off their own servers for failing to verify their identities, and on the more ridiculous end of the spectrum, the subreddit dedicated to drinking cider started requiring users to upload photo ID, according to a screenshot that one user posted on X (Discord and Reddit users can also apparently bypass the identity verification by using the photo mode from a popular game called Death Stranding).
The latter example shows just how problematic facial recognition is, and why skeptics believe that identity-verification systems shouldn't rely on it. These kinds of face-scanning systems, which are also used by a number of law enforcement and other agencies, routinely suffer from mistakes and misidentification – which in some cases has had serious consequences. As the EFF points out, “just last year, a legal challenge was launched against the Metropolitan Police in London after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system.” For age-verification purposes, the technology reportedly has an error range of more than a year, which means users can be locked out of content even though they are old enough to access it. Even if it were flawless, the EFF notes, “it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content.”
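A toy calculation shows why an error range of a year or more matters at a hard cutoff of 18. The ±1-year margin below is an assumption for illustration, not any vendor's published figure.

```python
# Toy illustration of the error-margin problem: if a face-scan estimator
# can be off by a year or more (the figure reported above), users near the
# threshold get misclassified in both directions. The 1.0-year margin is
# assumed for illustration only.

THRESHOLD = 18
ERROR_MARGIN = 1.0  # years; assumed, not a published spec

def could_be_misclassified(true_age: float) -> bool:
    """True if an estimate within the error margin can land on the
    wrong side of the threshold."""
    return abs(true_age - THRESHOLD) < ERROR_MARGIN

for age in (16.5, 17.5, 18.5, 19.5):
    print(age, could_be_misclassified(age))
# 17.5 and 18.5 both print True: a 17.5-year-old can be waved through,
# while an 18.5-year-old adult can be wrongly locked out of legal content.
```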
Mike Masnick of Techdirt, who has been following the British law for some time, also wrote about the slow-motion trainwreck that the Online Safety Act has become. The “age assurance” part of the act, he said, has “turned out to be exactly the privacy-invading, freedom-crushing, technically unworkable disaster that everyone with half a brain predicted it would be.” In addition to the cider subreddit, Masnick says, Reddit users had to submit government ID or some other form of identification to access communities about such dangerous and harmful topics as quitting smoking, as well as threads about menstruation, support groups for victims of sexual assault, and threads documenting violence committed by state actors in Syria and elsewhere. Users have also been forced to upload ID to access some Spotify streams whose content is rated 18 and older, including rap and hip hop (which, as more than one person has noted, is played on radio and TV with no age limit).
Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities. People struggling with addiction must undergo facial recognition scans to find help quitting drinking or smoking. The UK government has somehow concluded that access to basic health information and peer support networks poses such a grave threat to minors that it justifies creating a comprehensive surveillance infrastructure around it.
As both Broderick and Masnick point out, and as the EFF notes in the piece quoted above, the bugs and quirks involved in verifying someone’s age or identity are just the beginning of the problems with the UK act. Even if all of that went flawlessly, and it were possible to verify someone’s identity and age quickly and accurately using facial recognition or government ID documents, that information would still represent a huge and potentially ongoing invasion of privacy. Why ongoing? Because once the data is collected, users have no control over how it is stored or protected. As Masnick notes, “the facial recognition systems are so poorly implemented that people are easily fooling them with screenshots from video games. This reveals the fundamental security flaw at the heart of the entire system. If these verification methods can’t distinguish between a real person and a video game character, what confidence should we have in their ability to protect the sensitive biometric data they’re collecting?” And that data can be hacked and leaked, as the personal ID info from a US dating-safety app called Tea recently was, violating the privacy of tens of thousands of users.
A vast engine of censorship

As bad as these two problems are – the flaws and quirks of age verification, and the massive, ongoing invasion of privacy that all of this government-mandated surveillance and data collection represents – neither is the most important problem with laws like the Online Safety Act. The biggest issue is that under the guise of protecting children from an over-reaching and vaguely defined collection of potential online harms, the UK government is engaging in censorship on a truly staggering scale. And the structure of the law, with its “duty of care” requirements for platforms and the significant fines attached to them, incentivizes those platforms to censor as much content as they possibly can. If your company faces a potential multibillion-dollar fine for allowing even a single child to access something that might be seen as harmful, you’re likely to remove not just that content but anything even remotely in the same category – all of which, let’s remember, may be completely legal.
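The incentive problem is easy to see with back-of-the-envelope numbers. Everything below is invented for illustration; only the asymmetry between the two costs matters.

```python
# Back-of-the-envelope sketch of the incentive the duty-of-care structure
# creates. All numbers are made up for illustration.

annual_revenue = 100_000_000_000       # a hypothetical $100B platform
max_fine = 0.10 * annual_revenue       # up to 10% of revenue under the Act

# Expected cost of leaving borderline-but-legal content up:
p_enforcement = 0.01                   # even a 1% perceived risk...
expected_cost_of_hosting = p_enforcement * max_fine  # ...is $100 million

# Expected cost of over-blocking that same legal content:
expected_cost_of_removal = 0           # no fine for taking legal speech down

print(expected_cost_of_hosting > expected_cost_of_removal)  # True
# The rational move for every platform is to block anything remotely close
# to the line, which is exactly the over-removal described above.
```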
The British government did remove the bill's potentially most censorship-inducing provision, which would have forced platforms like Facebook and Twitter to block even adults from accessing what it defined as “legal but harmful” content, including content related to self-harm, as well as misogynistic and otherwise unpleasant material. But even after that provision was removed, the platforms seem to be acting as though it were still in force, just to be on the safe side. And even if that weren't the case, the Online Safety Act still allows the British government to engage in widespread censorship in the name of protecting children – even when the material they are being "protected" from is something they are seeking out to help with emotional or physical issues. Here's the EFF again:
Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government is deciding what speech young people have access to, and are forcing platforms to remove any content considered harmful. We know that the scope for so-called “harmful content” is subjective and arbitrary, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children.
As Taylor Lorenz pointed out in her coverage of what she called the UK’s “censorship catastrophe,” the practical impact of the legislation is that anyone under 18 in the UK won’t be able to access porn and other harmful content, but also won’t be able to access independent reporting and community analysis of breaking news from around the world. Instead, they will get “sanitized, mainstream or government-approved narratives.” It's just as crucial for young people to learn about the world and get access to information outside of government propaganda and the mainstream media as it is for adults, Lorenz argues. Such laws “limit young people's opportunities for critical thinking and civic understanding and they isolate young people from global perspectives. They also prevent them from connecting with other young people in foreign countries.” And because of the way the law is enforced, and because of the duty of care, “platforms are forced to err on the side of caution by over-blocking vague categories of potentially disturbing content” in case a child might see it.
I think Broderick put it well when he said that proving an internet user is underage means that “you also have to prove that everyone else isn’t. Monitoring one kind of user means monitoring everyone else.” So how do we make sure that our children aren’t being exposed to unpleasant or harmful content online without engaging in widespread, indiscriminate censorship? There is no simple solution. Instead, Broderick says, there are two choices: “Fight for the chaotic, open internet that allows anonymity — and all of the good and bad that comes with it. Or continue to slide into an internet that feels safer, but surveils our every move and will inevitably censor what we see and do, supported by massive databases of our most embarrassing and sensitive data.” If we choose the latter, “we shouldn’t pretend to be shocked when it all blows up in our face.”
Got any thoughts or comments? Feel free to leave them here, post them on Substack or my website, or reach me on Twitter, Threads, Bluesky or Mastodon. And thanks for being a reader.