What did Mark Zuckerberg know and when did he know it?
My last Torment Nexus piece was about how weak the FTC's antitrust case against Meta was, weak enough that it was thrown out by a federal court judge. But don't take that argument as evidence that I am a Meta fan — far from it. The company may not be a monopoly, but that doesn't mean it isn't harmful, and in some cases actually dangerous. The example I know best is Myanmar, where Facebook ignored the signs that its platform was being used to promote violence against the Rohingya population, and ignored them for so long that a United Nations panel concluded the company had enabled a genocide that killed thousands and left thousands more maimed and homeless. Did Facebook do this deliberately? Of course not. They're not monsters (or at least not that specific kind of monster). Instead, they simply overlooked the evidence in front of them, or more likely decided it wasn't important enough to get in the way of the platform's growth and engagement goals.
Whenever something like this happens — not just with Facebook, but with plenty of other tech companies — the response has become a kind of ritualized theater, a stylized exercise in going through the motions without any real outcome or change. In Meta's case, it involves Mark Zuckerberg or some other functionary from Facebook or Instagram commenting in the press about something hateful or dangerous that the platform enabled, and then in some cases appearing before Congress, shamefaced and sometimes truculent about the wrongdoing in question. Zuckerberg or his stand-in will say that they are sorry, and that they had no idea that (insert hateful or dangerous conduct here) was being enabled by the platform. At some point, months or even years later, it will be revealed that Facebook or Instagram knew exactly what was happening and chose to do nothing about it, or at least nothing substantive.
One of the examples I'm most familiar with came in 2021, when former Facebook staffer Frances Haugen blew the whistle on the company's behavior toward young and mostly female users of Instagram. According to the thousands of pages of internal documents that Haugen took with her when she left the company — which were shared with the Wall Street Journal and other outlets, as well as with members of Congress — Meta's senior executives knew from their internal research that Instagram was increasingly linked to emotional distress and body-image issues among young women. As Haugen described in an interview with me at the Mesh conference in 2023, she and a number of other staffers worked on ways of trying to reduce or even eliminate this problem, but time and again their work was ignored, because implementing their fixes might have decreased engagement or interfered with Meta's growth and revenue targets. So did Meta know? Yes. Did they care? No. Or at least not enough to do anything about it.
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues. “Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.” Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed. More than 40% of Instagram’s users are 22 years old and younger, and about 22 million teens log onto Instagram in the U.S. each day.
I should note here that I am on record as being skeptical of the more overwrought analysis of social media's impact on the psychological well-being of teens, which I think has elements of moral panic, something I've written about before for Torment Nexus. Psychologist and author Jonathan Haidt has written many times about what he says is an epidemic of emotional harm driven by smartphones and social tools like Instagram, but social scientists say there is little or no tangible evidence of the kind of widespread harm he describes. I also think (as I wrote in a separate Torment Nexus piece) that banning teens from social media, as Australia has tried to do, is a mistake and will likely backfire. All that said, I believe that when evidence of harm is produced, it is incumbent on the company to try to mitigate those effects where possible, instead of turning a blind eye because it is focused on the bottom line.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Strike 17 and you're out

All of this was triggered by some recent revelations from documents filed in relation to a lawsuit against Meta and several other social-media companies (including TikTok and Snapchat), a case that aggregates thousands of separate lawsuits launched by US school districts, dating back to 2023, alleging that the social-media apps are damaging the mental health of their students. To take just one of the allegations contained in the brief, Instagram’s former head of safety Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” In other words, a user could incur 16 violations for prostitution and sexual solicitation, and only after the 17th violation would their account be suspended.
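To make the mechanics of a "17x" policy concrete, here is a minimal sketch in Python of what a strike-based enforcement rule with that threshold could look like. The names and structure are hypothetical illustrations based on the brief's description, not Meta's actual code:

```python
# Illustrative sketch of a hypothetical strike-based enforcement rule
# with a 17-strike suspension threshold; not Meta's actual implementation.
SUSPENSION_THRESHOLD = 17  # the account survives 16 recorded violations

def record_violation(strike_counts: dict, account_id: str) -> str:
    """Record one confirmed violation and return the enforcement action."""
    strike_counts[account_id] = strike_counts.get(account_id, 0) + 1
    if strike_counts[account_id] >= SUSPENSION_THRESHOLD:
        return "suspend"  # only the 17th strike triggers suspension
    return "warn"         # strikes 1 through 16 leave the account active

counts = {}
for _ in range(16):
    action = record_violation(counts, "account_123")
print(action)                                    # -> "warn"
print(record_violation(counts, "account_123"))   # -> "suspend"
```

Under a rule like this, the first 16 confirmed violations produce nothing stronger than a warning, which is the pattern Jayakumar testified she found shocking.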
As Time describes, the brief was filed by plaintiffs in the Northern District of California, and "alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users." It says that — among other things — Meta was aware that millions of adult strangers were contacting minors via its sites and social apps; that its products potentially exacerbated mental-health issues in teen users; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected by the company's internal controls, but was rarely removed. The brief alleges that the company failed to disclose these harms to the public or to Congress, and refused to implement fixes that could have protected users. From Time:
“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.”
The brief is based on what the legal team behind the case says are sworn depositions by current and former Meta executives, internal communications, and company research and presentations obtained during the lawsuit's discovery process. It includes quotes and excerpts from thousands of pages of testimony and internal company documents. According to Time, the plaintiffs claim that since 2017, Meta has "aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids." Although Meta employees proposed multiple ways to mitigate these harms, the proposed changes were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth (just as Frances Haugen argued her team's proposals were).
A Meta spokesperson told Time that the company "strongly disagrees" with the allegations, which "rely on cherry-picked quotes and misinformed opinions to paint a misleading picture." The company also points out that it has implemented new safety features for younger users, including the introduction of a new category of Instagram Teen Accounts, which put any user between 13 and 18 years of age in a special class of accounts that are private by default, limit sensitive content, and don't allow messages from adults who aren't already connected to the user. However, these features weren't added until last year. The testimony included in the brief quotes a former Meta vice-president of partnerships as saying: “My feeling then and my feeling now is that they don’t meaningfully care about user safety. It’s not something that they spend a lot of time on. It’s not something they think about. And I really think they don’t care.”
Hook them young

According to the brief, in late 2019 Meta did a "deactivation study," which looked at users who stopped using Facebook and Instagram for a week, and found that they showed lower rates of anxiety, depression, and loneliness. The company didn't publicly disclose the results, saying the research was unsound, in part because it was biased by the "existing media narratives around the company." And in 2020, when Facebook was asked to appear before the Senate Judiciary Committee, the panel asked whether the company could determine whether increased use of its platform among teenage girls had any correlation with increased signs of depression or anxiety; the company said no, it could not.
Why did the company not pursue this line of inquiry? The brief in the current lawsuit states that by 2020, the growth team had concluded that making the accounts of younger users private by default would mean losing about 1.5 million monthly active teens every year on Instagram. Over the next several months, staffers on the policy, legal, and well-being teams all recommended that the company make teen accounts private by default. But Meta did not. Subsequently, according to the brief, inappropriate interactions between adults and kids on Instagram rose to 38 times the level of problematic interactions on Facebook Messenger, and the launch of Instagram Reels allegedly compounded the problem, because it allowed young teenagers to broadcast short videos to a wide audience, including adult strangers.
Meta's policy states that users under 13 are not allowed on its platforms, and yet the brief notes that it is common knowledge that millions of children under that age regularly use Facebook and Instagram. Time points out that internal research cited in the brief suggests there were 4 million users under 13 on Instagram in 2015, and that by 2018 roughly 40% of children aged 9 to 12 said they used Instagram daily. In part, that's because Meta deliberately targeted younger users, while ignoring both the evidence of harm and its own policies around those under 13. The brief describes a coordinated effort exploring new products designed for "users as young as 5-10," and says that some employees internally expressed disgust at the attempt. "Oh good, we’re going after <13 year olds now?" one wrote. "Targeting 11 year olds feels like tobacco companies... like we’re seriously saying ‘we have to hook them young.’"
While Meta developed AI tools to monitor the platforms for harmful content, the company didn’t automatically delete that content even when it determined with 100% confidence that it violated Meta’s policies against child sexual-abuse material or eating-disorder content. Meta’s AI classifiers did not automatically delete posts that glorified self-harm unless they were 94% certain they violated platform policy, according to the plaintiffs’ brief. As a result, most of that content remained on the platform, where teenage users often discovered it. In a 2021 internal company survey cited by plaintiffs, more than 8% of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.
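To see why a 94% bar matters, here is a hypothetical sketch in Python of a confidence-gated auto-removal rule. The threshold reflects the figure cited in the brief, but the function and its names are illustrative assumptions, not Meta's actual system:

```python
# Hypothetical illustration of a confidence-gated auto-removal rule.
# The 0.94 threshold reflects the figure cited in the brief; the code
# itself is an assumption for illustration, not Meta's actual system.
AUTO_DELETE_THRESHOLD = 0.94

def moderate(post_id: str, classifier_score: float) -> str:
    """Decide what happens to a flagged post, given a classifier confidence score."""
    if classifier_score >= AUTO_DELETE_THRESHOLD:
        return "auto_delete"  # removed without human review
    return "leave_up"         # below the bar, the post stays visible

# A post the classifier is 90% sure glorifies self-harm stays up,
# even though the model considers it very likely to be violating.
print(moderate("post_001", 0.90))  # -> "leave_up"
print(moderate("post_002", 0.97))  # -> "auto_delete"
```

The practical effect of such a high bar is that most likely-but-not-certainly violating content is never auto-removed, which matches the brief's claim that most of this material stayed on the platform.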
As mentioned above, this isn't a problem just with Meta — many other tech companies have faced similar issues. Casey Newton described in a recent edition of his Platformer newsletter how Roblox, which offers kids tools to build online virtual worlds, has been routinely criticized for the fact that its service enables problematic behavior on a large scale, and yet has done little or nothing to prevent harms to the young users it courts — primarily because limiting that kind of interaction would harm its growth. OpenAI has also had to decide whether to implement controls on its AI tools or to keep features that are problematic but drive greater growth and engagement. So do these companies know about the problems their products create? In most cases, yes. Do they care? No — or not enough to do anything meaningful.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.