‘Didn’t read the fine print’: Musk’s courtroom admission on OpenAI pivot

The high-stakes legal showdown between Elon Musk and OpenAI has moved from social media jabs to a California courtroom—and the implications could reshape the future of artificial intelligence.

At the heart of the dispute lies a fundamental question: Was OpenAI built to serve humanity, or shareholders?

From Nonprofit Vision to AI Giant

When OpenAI was founded in 2015, it positioned itself as a nonprofit research lab committed to developing AI safely and openly. Musk, one of its early backers, claims he contributed $38 million and lent his influence based on assurances that the organization would remain mission-driven.

However, OpenAI’s evolution into a capped-profit model—and its deep partnership with Microsoft—has become the central point of contention.

Today, OpenAI is reportedly valued at hundreds of billions of dollars, with ambitions that include raising massive capital to fuel its computing infrastructure and possibly pursuing a future IPO.

Musk’s Claim: “A Promise Broken”

Musk’s lawsuit accuses OpenAI, CEO Sam Altman, and President Greg Brockman of breach of trust.

According to Musk:

  • He was reassured that OpenAI would remain a nonprofit
  • The shift to a for-profit structure diverted value away from the original mission
  • The current structure disproportionately benefits private stakeholders

In court, Musk stated bluntly:

“The for-profit has taken the super majority of the value of the nonprofit.”

He is now seeking $150 billion in damages and wants OpenAI to revert to its nonprofit roots, with leadership changes at the top.

The “I Didn’t Read the Fine Print” Moment

One of the most striking moments from the trial came during cross-examination. When questioned about a 2017 term sheet discussing a for-profit transition, Musk admitted:

“I didn’t read the fine print, just the headline.”

This statement could prove pivotal. OpenAI’s legal team, led by attorney William Savitt, is attempting to show that Musk was aware—or should have been aware—of the company’s evolving structure.

Emails presented in court suggest early discussions among founders about monetization and even making certain technologies closed-source.

OpenAI Fires Back

OpenAI isn’t holding back. The company argues that Musk’s lawsuit is less about principle and more about control.

Their key counterpoints:

  • Musk left OpenAI’s board in 2018 and is now “bitter” about its success
  • He is attempting to slow down a competitor to his own AI venture, xAI
  • Transitioning to a for-profit model was necessary to raise capital, hire talent, and scale infrastructure

They also claim Musk himself did not consistently prioritize safety during his time with the organization—undermining his current stance.

Tensions Spill Into the Courtroom

The trial has been anything but calm. Musk showed visible frustration during questioning, at one point snapping:

“Few answers are going to be complete, especially when you cut me off all the time.”

Even the presiding judge, Yvonne Gonzalez Rogers, intervened, admonishing the lawyer for interrupting, though she also dismissed Musk’s complaints about leading questions.

In another twist, Musk acknowledged that his company xAI has used OpenAI outputs to train its own models, calling it “standard practice”—a statement that adds an ironic layer to the rivalry.

Bigger Than a Lawsuit: The Future of AI Governance

This case goes far beyond personal rivalry. It raises critical questions:

  • Can a nonprofit evolve into a profit-driven entity without violating its founding principles?
  • Who should control powerful AI systems—public-interest organizations or private capital?
  • And how do we balance innovation, safety, and profit in a rapidly advancing field?

Musk’s legal team even attempted to introduce arguments about AI posing an existential threat to humanity, though the judge ruled that the trial is not about AI safety risks.

Trial in Elon Musk's lawsuit over OpenAI's for-profit conversion at a federal courthouse in Oakland (Credits: Reuters)

What Happens Next?

The trial, expected to last several weeks, will feature testimony from key figures including Greg Brockman and AI safety expert Stuart Russell.

With $150 billion at stake and the governance of one of the world’s most influential AI companies under scrutiny, the outcome could set a precedent for how future tech organizations are structured—and held accountable.

One thing is clear: this isn’t just a corporate dispute. It’s a battle over the very philosophy guiding the AI revolution.
