Elon Musk makes a good point
Before I go any further, let me be clear about what I mean by saying Elon Musk makes a good point — or rather, let me be clear about what I *don't* mean. I don't mean that Elon Musk makes a good point when he says the economy will fail without Donald Trump, or that democracy is at risk unless we give Trump whatever he wants. I don't mean that he makes a good point when he says that free speech is an absolute virtue (unless someone uses the term "cis" on Twitter, of course, which he defines as hate speech). And I don't mean that he makes a good point when he says that we all need to have at least a dozen babies with as many different people as possible (he hasn't actually said that, but it's pretty obvious from his behavior that he thinks this is the optimal thing to do).
In fact, there aren't a whole lot of areas where I think Musk has made a good point. But there is one, in my opinion, and it's in the lawsuit he filed against OpenAI and co-founder and current CEO Sam Altman. The suit originally named OpenAI and Altman, as well as OpenAI co-founder and president Greg Brockman — who left the company after the board tried to oust Altman, and then returned after Altman emerged victorious from the board's maneuvering (more on that below). The Musk lawsuit was withdrawn in July, but an amended version has been filed that adds Shivon Zilis as a plaintiff — she is an employee at Musk-owned brain-implant company Neuralink and also the mother of three of his 12 children, a dozen that includes one named Techno Mechanicus (I am not making this up, although I wish I was). The amended claim also added Microsoft as a defendant.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter — and why I chose to call it that — in this post.
Zilis was added because she was formerly on the board of OpenAI and had some interactions with Altman that Musk clearly feels might help buttress his case. The reason for adding Microsoft as a defendant is that Musk claims the close relationship between the two companies has made it harder for OpenAI to form partnerships with other firms, including Musk's own xAI. The claim argues that OpenAI is "actively trying to eliminate competitors" like xAI by "extracting promises from investors not to fund them." In effect, it says, OpenAI's relationship with Microsoft has become a "de facto merger."
None of those things are the point that I've grudgingly referred to above as good, although they are definitely related to it. The best encapsulation of my point comes from the preamble in the lawsuit, which describes its premise in this way:
"Never before has a corporation gone from tax-exempt charity to a $157 billion for-profit, market-paralyzing gorgon — and in just eight years. Never before has it happened, because doing so violates almost every principle of law governing economic activity. It requires lying to donors, lying to members, lying to markets, lying to regulators, and lying to the public. No amount of clever drafting nor surfeit of creative dealmaking can obscure what is happening here. OpenAI, Inc., co-founded by Musk as an independent charity committed to safety and transparency... is, at the direction of Altman, Brockman, and Microsoft, fast becoming a fully for-profit subsidiary of Microsoft."
For the benefit of humanity
"Market-paralyzing gorgon" might be a little over-the-top. Has OpenAI turned anyone to stone by having snakes for hair? If so, it isn't mentioned in the claim (side note: everyone knows about Medusa, but she had two sisters who were also gorgons, Stheno and Euryale). In any case, Musk's point is that when Sam Altman and his cofounders Brockman and Ilya Sutskever (who has since left to start his own company dedicated to AI safety) came to Musk with the idea for OpenAI in 2015, it was supposed to be exactly what its name implies: Open. In other words, it was supposed to be a public-oriented, nonprofit entity that would investigate artificial intelligence, large-langage models, etc. and build those tools with the prospect of potentially reaching AGI — artificial general intelligence, which is roughly defined as human-like reasoning ability — while making its work public so that others could learn from it for the betterment of society.
Is that what Musk actually thought they were building? I obviously have no idea, and the legal documents filed in support of his lawsuit don't really provide a whole lot of supporting evidence, although the suit does say that Altman and his co-founders deceived him, sold him on a nonprofit that would "decentralize its technology by making it open source," and assured him that OpenAI's structure guaranteed neutrality and a focus on safety and openness "for the benefit of humanity, not shareholder value or individual enrichment." It's possible that Musk's concerns about openness only emerged later, and that what he really wanted was to gain control over the company and use it for his own purposes. In fact, OpenAI argued in its initial defense against Musk's claims that this is exactly what happened — that Musk proposed merging OpenAI with Tesla and that he also "wanted majority equity, initial board control, and to be CEO."
But even if Musk's criticisms are driven by naked self-interest, he still makes a good point: What happened to those early commitments to make OpenAI actually open? To share knowledge about the foundations of its model, so that others could learn and so that outsiders could keep track of any concerning developments as it approached AGI? As Musk notes in the lawsuit, those commitments undoubtedly helped OpenAI attract some or all of its early financing. In its defense against Musk's suit, the company argues that it had to adopt a hybrid nonprofit/for-profit structure because building a large-language model takes a massive amount of computing power, which in turn requires a massive amount of spending power — more than a nonprofit could realistically hope to raise. As OpenAI puts it:
We spent a lot of time trying to envision a plausible path to AGI. In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission—billions of dollars per year, which was far more than any of us, especially Elon, thought we’d be able to raise as the non-profit.
A nonprofit wrapped in a for-profit
That led to the arcane structure OpenAI adopted in 2019, which had a nonprofit at the center, doing the actual research and development, surrounded by a for-profit entity that could sell shares and raise the billions of dollars necessary to build a competitive LLM. The for-profit entity had what OpenAI described at the time as a "capped profit" structure, which the company said was designed to "increase our ability to raise capital while still serving our mission." Investors and employees could see a return on their investment, but the amount was limited — any returns beyond that cap would be retained by the nonprofit. The company said this would ensure that the aim of developing AGI for the benefit of humanity would never take a backseat to profits.
We’ve designed OpenAI LP to put our overall mission—ensuring the creation and adoption of safe and beneficial AGI—ahead of generating returns for investors. The mission comes first even with respect to OpenAI LP’s structure. While we are hopeful that what we describe below will work until our mission is complete, we may update our implementation as the world changes. Regardless of how the world evolves, we are committed—legally and personally—to our mission. OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.
This all sounds very laudable and civic-minded. But it might as well have been written in disappearing ink, because OpenAI now is — as Musk's lawsuit points out — a very different beast (although perhaps not an actual gorgon). OpenAI is planning to restructure itself so that while the nonprofit arm will continue to exist, it will no longer control the affairs of the for-profit entity, which will be free to raise funds and generate as much profit as possible. As a result of its latest funding round last month, OpenAI has a valuation of $157 billion, as Musk mentioned in his lawsuit, and Sam Altman is a billionaire. Is this what Altman had in mind all along? Was the whole nonprofit "we're doing this for the good of humanity" thing just an act, or a smokescreen? It definitely played a role in the board intrigue that led to Altman's ouster.
Not consistently candid, you say?
Some of the emails released as part of Musk's lawsuit make it clear that senior executives at OpenAI had concerns about Altman's motivations as far back as 2017. The emails cover a period when OpenAI was trying to decide who should run the company, and whether or not it should become a for-profit entity. One email signed by both Brockman and Sutskever said that they "haven't been able to fully trust [Altman's] judgements throughout this process." According to Shivon Zilis, who was advising OpenAI at the time, Altman told her that he had "lost a lot of trust with Greg and Ilya" as a result of their email. Things festered before coming to a head last November: Zilis and LinkedIn founder Reid Hoffman had resigned from the OpenAI board of directors earlier that year, allowing the remaining board members to vote Altman out. One of the reasons they gave was that Altman was not "consistently candid in his communications" with the rest of the board.
I won't go into all the details of what happened next, because most of it is moot at this point and this piece is already getting a little long. Suffice it to say that when Altman was pushed out, everyone lost their minds — investors like Khosla Ventures, partners like Microsoft, and even OpenAI employees, who wrote a letter saying they would all quit unless Altman came back (even Ilya Sutskever signed the letter, saying he regretted his role in ousting Altman). Microsoft offered to hire Altman and any staff who wanted to follow him, but within a matter of days the OpenAI board had given up and announced that Altman would be returning — presumably because everyone concluded that if they continued on that course, there wouldn't be much of a company left at the end. At that point the writing was clearly on the wall: whatever Sam Altman had in mind, including turning the company into a for-profit entity, was going to become a reality. Open was truly off the table.
Obviously this is good for lots of people, including early investors and employees, many of whom will become millionaires or possibly even billionaires. But is it good for everyone else? We already have multiple trillion-dollar companies devoted to deploying AI wherever possible, injecting it into every facet of our lives whether we want it or not, and racing towards an unknown AGI future with only the profit motive driving them. Wouldn't it have been nice to have a nonprofit, open-source entity whose primary concern was the betterment of mankind and the sharing of research? Clearly Sam Altman thought there was a benefit to this, or at least pretended he did, and raised a lot of money — and hopes — based on that sales pitch. But it turns out he also wanted to race headlong towards AGI while generating as much money as possible. Not really that surprising, perhaps, but still unfortunate.
Got any thoughts or comments? Feel free to leave them here, post them on Substack or on my website, or reach me on Twitter, Threads, Bluesky or Mastodon. And thanks for being a reader.