Inside the OpenAI Trial: Former Board Members Testify Altman Lied, Resisted Oversight
Week two of Elon Musk’s $134 billion lawsuit against OpenAI and Sam Altman concluded with explosive testimony: video depositions from former board members Helen Toner and Tasha McCauley, live testimony from ex-CTO Mira Murati, and the account of a former safety researcher who said Microsoft deployed GPT-4 in India without clearing OpenAI’s internal safety review board. OpenAI’s defense countered by showing that Musk himself had tried to poach Altman for a Tesla AI role.
Elon Musk’s $134 billion lawsuit against OpenAI, CEO Sam Altman, President Greg Brockman, and Microsoft entered its second week in Oakland federal court with a torrent of testimony that put the internal dysfunction of one of the world’s most consequential technology companies on the public record. Video depositions from former OpenAI board members Helen Toner and Tasha McCauley, live testimony from former CTO Mira Murati, and accounts from a former safety researcher painted a picture of a company whose leadership had systematically bypassed the oversight structures it had built to justify public trust.
The jury of nine, seated April 27, must decide a central legal question: did the communications between Altman, Brockman, and Musk in OpenAI’s founding years create a formal charitable trust — and did the company’s restructuring from a nonprofit-controlled entity into a public benefit corporation breach that trust? Musk is seeking to unwind the restructuring and remove Altman and Brockman from their roles, in addition to the $134 billion in damages.
What the Former Board Members Said
The most damaging testimony for Altman came from the former board members who voted to remove him in November 2023, a firing that was reversed after four days when Microsoft’s intervention and a staff revolt forced his reinstatement.
Helen Toner, an AI safety researcher who served on OpenAI’s board from 2021 until 2023, gave her account in a video deposition played for jurors. She described the board’s reasoning for the firing in specific terms: “a pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes.”
Toner also testified to a failure of information flow to the board that would be remarkable at any company of OpenAI’s significance: the board learned about the public launch of ChatGPT, the product that transformed OpenAI from a research lab into a global phenomenon, not from company leadership but from social media. An OpenAI employee had asked another board member whether the board was even aware of the launch.
Tasha McCauley, another board member who voted to oust Altman, corroborated the picture in her own video deposition. She described what she called “a toxic culture of lying that was kind of leading to these crisis events,” and testified that the board had developed “buckets of concerns” about Altman: his resistance to oversight, his dishonesty, concerns expressed by senior members of OpenAI’s own leadership, and “repeated crisis events” that she attributed directly to his behavior.
Mira Murati: Chaos From the Top
Perhaps the most striking live testimony came from Mira Murati, OpenAI’s former Chief Technology Officer, who served briefly as interim CEO during the four days of Altman’s ouster.
Murati, who left OpenAI in September 2025 to start her own AI company, testified that Altman routinely told different people opposite things — creating organizational dysfunction that she characterized as deliberate chaos. “My concern was about Sam saying one thing to one person and completely the opposite to another person,” she told the court.
Murati said Altman was “creating chaos” at the executive level, and that she had personally experienced what she characterized as deception in her interactions with him. At the same time, she acknowledged a telling contradiction in her own position during the 2023 crisis: despite her concerns about Altman’s behavior, she had wanted him reinstated as CEO out of fear that the company would collapse without him. That admission — that the person she found most problematic was also the person she believed was indispensable — captures the bind that OpenAI’s leadership found itself in during the November 2023 episode.
The Safety Board Bypass
One of the trial’s most significant factual claims came from Rosie Campbell, a former member of OpenAI’s safety team, who testified about a specific incident involving Microsoft’s deployment of GPT-4.
Campbell testified that Microsoft had deployed a version of GPT-4 in India without going through OpenAI’s Deployment Safety Board, the internal review process designed to evaluate new deployments for safety risks before launch. That board, which OpenAI had publicly positioned as a key element of its responsible AI deployment framework, was apparently circumvented in a commercially significant rollout by the company’s largest investor and strategic partner.
Campbell also described a broader pattern she characterized as a slow erosion of OpenAI’s safety infrastructure and culture over time — an erosion that she said eventually prompted her to leave the company. Her testimony aligned directly with Musk’s central legal argument: that OpenAI’s nominal commitment to safety and its nonprofit mission had progressively given way to commercial and financial pressures.
OpenAI Fires Back: Musk’s Competitive Motives
OpenAI’s defense made its own mark during the second week through the testimony of Shivon Zilis, a Neuralink executive and former OpenAI board member who is also the mother of four of Musk’s children. Her ties to both Musk and OpenAI made her an unusually complicated witness.
Under examination by OpenAI’s lawyers, Zilis testified that Musk had offered Altman a seat on Tesla’s board as part of an effort to recruit him away from OpenAI to lead a new AI research effort inside Tesla. The lawyers also pressed her on whether she had served as a conduit for Musk’s attempts to draw other OpenAI co-founders to the proposed Tesla lab.
The implication of this testimony for OpenAI’s defense is significant: if Musk was simultaneously trying to poach OpenAI’s leadership to work on a competing AI project at Tesla, his lawsuit’s framing as a principled stand for the company’s charitable mission becomes harder to sustain. Musk’s own legal team has maintained that his motives are purely about OpenAI’s fidelity to its founding charter — a position the Zilis testimony directly complicates.
Greg Brockman, OpenAI’s president and co-founder, also testified during the week, rebutting Musk’s account of OpenAI’s early history and revealing that he had at one point feared that a confrontation with Musk would turn physical. “I actually thought he was going to hit me,” Brockman said of one particularly tense exchange.
The Charitable Trust Question
Beneath the dramatic testimony about management dysfunction, the legal core of the trial remains a dry but consequential question of contract and trust law.
Musk’s legal team must demonstrate that his early communications with Altman and Brockman, including promises about OpenAI’s mission, its structure, and its commitment to developing AI for humanity’s benefit, created a legally binding charitable trust rather than merely informal aspirational statements. If those communications created a trust, then OpenAI’s restructuring, which transferred effective control from the nonprofit board to a for-profit subsidiary now structured as a public benefit corporation, would constitute a breach, potentially entitling Musk to the relief he is seeking.
OpenAI’s defense argues that no such trust was formed, that Musk’s characterization of early conversations as legally binding commitments misrepresents their nature, and that Musk’s own competing commercial interests in AI (through xAI, his AI company) undermine his standing as a disinterested party seeking to protect a charitable mission.
The $134 billion damages figure also encompasses Musk’s claims against Microsoft, which invested $13 billion in OpenAI and whose deployment behavior — including the India GPT-4 incident — has now been put into the trial record.
What Comes Next
The liability phase of the trial is expected to conclude around May 21, leaving roughly two more weeks of testimony and argument before the jury is asked to render a verdict on the legal questions.
The outcome will have implications that extend well beyond Musk and OpenAI. A ruling that charitable trust obligations can attach to informal founding communications in a technology company would create significant precedent for how courts treat the founding documents and early conversations of AI labs — many of which used language about benefiting humanity to attract talent, capital, and credibility, while subsequently pursuing commercial paths that look considerably more conventional.
For OpenAI specifically, the trial has already produced a public record of internal governance failures — a board kept in the dark about the company’s most significant product launch, safety review processes bypassed by its largest investor, and executive leadership characterized by its own senior team as systematically deceptive — that will outlast whatever the jury ultimately decides.
The trial resumes Monday, May 11.