When the worst person you know makes a good point


In case you aren't terminally online, as the kids say, there's a popular meme that uses a photo of a balding man with a steely gaze and the caption "Heartbreaking: The worst person you know just made a great point" (apparently the man's name is Josep Maria García, and he is from Spain; the picture was taken in 2014 during a trip to Barcelona, during which he helped his photographer brother-in-law set up a photoshoot). I was reminded of this meme again while reading all of the coverage of Elon Musk's lawsuit against OpenAI, which just went to trial in federal court in California. Some of you may remember that I wrote about this for The Torment Nexus in November of 2024, and in that piece I argued that despite having a ton of terrible opinions about a wide range of things, Musk has a number of points in his OpenAI lawsuit that I think are worth considering. Believe me, I don't like being in this position, but just because he is a difficult or terrible person doesn't mean he doesn't make some good points.

To recap, Musk originally sued OpenAI two years ago, accusing the company of breaching a contract by putting profits ahead of its original goal of developing artificial intelligence in the public interest. In particular, Musk alleged that the multibillion-dollar deal between OpenAI and Microsoft — which at the time gave the software company a stake in anything developed by OpenAI up until the achievement of what it called "artificial general intelligence," or human-like abilities — contravened the company's pledge to develop AI safely and to make the technology publicly available. The lawsuit came just a few months after OpenAI cofounder Sam Altman survived a boardroom coup in which a number of board members (all of whom have now left the company) tried to have him removed. Here's how the New York Times described the Musk lawsuit:

Mr. Musk’s lawsuit said he became involved with OpenAI because it was created as a nonprofit to develop artificial intelligence for the “benefit of humanity.” A key component of that, the lawsuit said, was to make its technology open source, meaning that it would share the underlying software code with the world. Instead, the company created a for-profit business unit and restricted access to its technology. The lawsuit, which seeks a jury trial, accused OpenAI and Mr. Altman of being in breach of contract and violating fiduciary duty, as well as unfair business practices. Mr. Musk is asking that OpenAI be required to open up its technology to others and that Mr. Altman and others pay back Mr. Musk the money that Mr. Musk gave to the organization.

A few months after filing the lawsuit, Musk withdrew it, and then later refiled it, having strengthened the claims about Altman's behavior and that of OpenAI president Greg Brockman. In particular, he argued that the company had broken federal laws against racketeering by conspiring to defraud him. The suit claimed that Altman and Brockman knowingly misled Musk when they partnered with him to create OpenAI in 2015. “Elon Musk’s case against Sam Altman and OpenAI is a textbook tale of altruism versus greed,” the suit said. “Altman, in concert with other defendants, intentionally courted and deceived Musk, preying on Musk’s humanitarian concern about the existential dangers posed by A.I.” It may be difficult to imagine a man like Musk — someone who ran a bogus government department and slashed billions of dollars from USAID and other programs, and who sought a trillion-dollar payout from his own company — being described as altruistic in anything but a sarcastic way, but let's leave that for now.

Musk is asking for more than $150 billion in damages from OpenAI and Microsoft, and also wants the court to remove Altman from OpenAI’s board, and to unravel a shift the company recently made to operate as a for-profit company (albeit one that is defined as a "public benefit corporation"). In its original form, the company's operating arm was controlled by a nonprofit foundation, and the amount of money it could make was capped, which Altman said made it difficult to raise the hundreds of billions of dollars required to build out its AI engine. The nonprofit still controls the for-profit arm, but there is no longer a cap, and staff now have equity in the for-profit entity. It is therefore free to pursue a public stock-market listing or IPO, which some believe could value the company as high as $1.2 trillion (although the company recently missed its revenue targets, which has some analysts skeptical of its chances on the open market).

Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. Thanks for reading! You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

For all mankind

As the New York Times noted in an overview of OpenAI's creation, the concept of an open-source AI company pursuing research for the good of humanity emerged after Musk had a discussion with Google founder Larry Page about where artificial intelligence was going, and the potential risks. The talk reportedly took place during a party for Musk's 44th birthday, and Page argued that even if AI eventually exterminated humanity, it was still worth pursuing. Musk disagreed, and not long afterward he and Altman started talking about creating a nonprofit AI research organization that could try to develop AI safely, and share the results of its research with everyone. OpenAI was launched, and the bulk of the initial funding came from Musk. But by 2017, there was a debate inside the company about whether an open-source nonprofit was the best way to go — some argued that open-sourcing powerful AI might actually be more dangerous, and that a nonprofit would be unable to raise the money it needed. The preamble to Musk's lawsuit describes what he argues happened afterwards:

"Never before has a corporation gone from tax-exempt charity to a $157 billion for-profit, market-paralyzing gorgon — and in just eight years. Never before has it happened, because doing so violates almost every principle of law governing economic activity. It requires lying to donors, lying to members, lying to markets, lying to regulators, and lying to the public. No amount of clever drafting nor surfeit of creative dealmaking can obscure what is happening here. OpenAI, Inc., co-founded by Musk as an independent charity committed to safety and transparency... is, at the direction of Altman, Brockman, and Microsoft, fast becoming a fully for-profit subsidiary of Microsoft."

Musk's lawsuit, and his repeated claims that "Scam Altman" and Brockman "stole a charity" etc., make it sound like he is a selfless altruist, and the current operators of OpenAI are craven capitalists and thieves, but of course we know that the reality is more complex than that. As Altman noted in a statement of defense, Musk himself was an advocate of a for-profit structure in the interests of being able to finance the company's growth, and in fact (according to Altman) wanted to take control of the company by merging it with Tesla, taking majority ownership, and making himself CEO. Doesn't that sound altruistic? But as I put it when I wrote about the suit back in 2024, even if Musk's criticisms are driven by naked self-interest, he still makes a fair point: What happened to those early commitments to make OpenAI actually open? To share knowledge about the foundations of its model, so that others could learn and so that outsiders could keep track of any concerning developments as it approached AGI? Didn't it raise funding based on that vision, and therefore isn't its current form a betrayal of that goal?

It was fairly easy to believe that Musk's lawsuit was driven by naked self-interest when he was suing for more than $150 billion and the assumption was that it would all go into his pocket, but he recently amended the complaint to state that any proceeds in the way of fines or settlements should be paid to the nonprofit entity inside OpenAI's new structure. This also helps to derail any lingering suspicions that Musk just wants to cripple OpenAI so that his AI efforts might be spared the competition, something OpenAI and Altman have argued in their past responses to the lawsuit (xAI has lost virtually all of its staff and Musk has said he wants to start again on a different path; what was left of xAI has been absorbed into SpaceX). Of course, when you are already almost a trillionaire, adding a few more piles of bills to your Scrooge McDuck-style swimming pool of money probably isn't much of an incentive to do anything, let alone pursue a lawsuit.

Elon Musk testified Tuesday he’s suing OpenAI because the startup’s pivot from a charity to a for-profit business is wrong and sets a concerning precedent for other philanthropic efforts. “It is not okay to steal a charity, that’s my view,” Musk told jurors at the outset of a trial in federal court in Oakland, California. Musk said the consequences of the legal fight go far beyond the people involved, and that if Altman and Brockman’s conduct isn’t deemed improper, “this case will become case law and become precedent to looting every charity in America.” Musk’s lawyer, Steven Molo, told the jury the trial will show that Altman and Brockman took advantage of Musk’s money, reputation and guidance to get OpenAI off the ground — and then decided to abandon its public-focused principles and capitalize on the project for their own benefit. Microsoft stood by as they made an “absolute mockery of OpenAI’s charitable mission,” Molo told the jury.

Sociopaths are us

Musk and his acolytes on X haven't just been lobbing smears at Altman in the runup to the trial, they've also been sharing and promoting a recent piece on Altman in The New Yorker, which says that "new interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI." But for anyone who has been following Altman's career both at Y Combinator and after starting OpenAI, there is very little that is new — or even that shocking — in the piece. It goes into detail about the maneuvering behind the scenes in the attempt to oust Altman, and how OpenAI staffers like Ilya Sutskever (who left to start his own AI safety firm) and Dario Amodei (who left to start Anthropic with his sister Daniela) didn't trust Altman because he allegedly "exhibited a consistent pattern of... lying." Others, including hacker and freedom of information activist Aaron Swartz, are quoted describing Altman as a sociopath.

We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. Some people defended Altman’s business acumen and dismissed his rivals as failed aspirants to his throne. Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

I don't want to belittle the accusations of lying and deception, but are there any billionaires in a position of power in Silicon Valley who aren't sociopaths? Peter Thiel? Marc Andreessen? Mark Zuckerberg? I could go on. And Elon himself is definitely on this list, of course. I mean, everything we know about the man contributes to that impression, including the fact that he has at least fourteen children — that we know of — with at least four women, and had two partners (the musician known as Grimes and Shivon Zilis, an exec at his brain implant company) in the same hospital having his children at the same time without telling either one of them. So we'll have to call that one a draw between Musk and Altman. I'm not saying this to excuse any of their behavior, just that we can't really pick sides based on which of them is the bigger sociopath. Is it possible that Musk just wants to F with Altman by filing this lawsuit? Of course.

That said, however, I think it's worth noting that Musk isn't the only one accusing Altman of diverging from the initial vision of OpenAI: last year, a group of ex-OpenAI staffers filed an amicus brief in support of Musk and in opposition to the company’s conversion to a for-profit corporation. The brief was filed by Harvard law professor and Creative Commons founder Lawrence Lessig, and names 12 former OpenAI employees. If profit was the controlling motive for the company, they said, it would “fundamentally violate its mission.” Several of the ex-staffers have spoken out against OpenAI’s practices publicly before, warning that OpenAI is in a “reckless” race for AI dominance and that OpenAI “should not be trusted when it promises to do the right thing later.”

According to the brief, OpenAI’s original structure — a nonprofit controlling a group of other subsidiaries — was a “crucial part” of its overall strategy and “critical” to the organization’s mission. Restructuring that would “breach the trust of employees, donors, and other stakeholders who joined and supported the organization based on these commitments,” the brief said. OpenAI committed to several key principles in setting up its initial mission, according to the signers of the brief. “These commitments were taken extremely seriously within the company and were repeatedly communicated and treated internally as being binding,” it reads. “The court should recognize that maintaining the nonprofit’s governance is essential to preserving OpenAI’s unique structure, which was designed to ensure that artificial general intelligence benefits humanity rather than serving narrow financial interests.” Couldn't have said it better myself.

Got any thoughts or comments? Feel free to leave them here, post them on Substack or on my website, or reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.