
The line to enter the courthouse Thursday for courtroom seating in the trial of Elon Musk's lawsuit against Sam Altman and OpenAI was the shortest since the trial began eight days ago, but inside, the jury heard key testimony.
The jury heard from three witnesses — a former OpenAI employee working on safety issues, a board member who voted to fire Altman, and an expert on nonprofit governance — each suggesting that the company was not properly fulfilling its mission.
The suit is based on Musk’s claim that under the leadership of CEO Altman and President Greg Brockman, OpenAI has abandoned its mission as a charitable corporation in favor of enriching its insiders.

Rosie Campbell, former AI safety employee
Musk’s legal team called Rosie Campbell to testify about her experience as an OpenAI employee working on AI safety. She said she’d been attracted to OpenAI’s mission to benefit humanity.
In 2024, Campbell was part of OpenAI's “AGI Readiness Team,” charged with thinking about “the risks of AGI (artificial general intelligence) and how to mitigate them.” The team was particularly concerned about “concentrations of power,” in which a small number of companies control disproportionate amounts of AI capability.
She said that, at the time, OpenAI also had a “Superalignment Team” focused on ensuring that AI is developed in alignment with human values and remains under human control.
In the years she worked at OpenAI, she observed changes in the company as it became less focused on research and more focused on commercial products.
Campbell was at OpenAI in November 2023 when OpenAI’s board fired and then re-hired Altman as CEO. After Altman’s reinstatement, two independent board members resigned and were replaced. She said she was concerned that the new board members did not have “the same kind of safety experience” as the previous members.
At the end of 2024, Campbell was told that the AGI Readiness Team was going to be dissolved and that the Superalignment Team would also be disbanded. Members of those teams were given the opportunity to find other jobs within OpenAI, but Campbell didn’t see opportunities to focus on the work she wanted to do. She opted to leave the company.
“It would be difficult to achieve the mission without that kind of work being done,” Campbell said.
Under cross-examination, Campbell was asked if she understood that the development of AI required massive amounts of “compute” — such as the computing power supplied to OpenAI through deals with Microsoft. Computing power was needed for developing commercial AI products, she acknowledged, but it was “not necessarily needed to achieve the mission.”
Natasha McCauley, former OpenAI board member
After Campbell testified, segments of Natasha McCauley’s video deposition were played for the jury. McCauley, a tech entrepreneur and AI educator, joined the nonprofit OpenAI’s board of directors in November 2018. She resigned in November 2023 after Altman was reinstated as CEO.
McCauley appeared nervous and at times her testimony was halting.
At the time she’d joined the board, she said she’d been “cautiously optimistic” about OpenAI’s nonprofit structure. The organization, she believed, was “oriented around its mission” and had been “structured to resist pressure from investors.”
That structure, though, depended on the organization's leadership keeping the board informed.
She said that the board had “buckets of concerns” about Altman's leadership: resistance to oversight, dishonesty, and concerns raised by OpenAI's senior leadership. There were “repeated crisis events,” she said, “stemming from Sam's behavior.”
“We had real doubts that we could trust what the CEO was telling us,” McCauley said.

She said that the board was hearing about a “toxic culture” in which others were “copying” that dishonest behavior.
After the board fired Altman, McCauley and other board members heard reports that Altman and co-founder Greg Brockman were making calls to OpenAI employees saying, “There’s been an evil coup.”
McCauley testified that there were growing doubts about whether OpenAI’s nonprofit board was exercising meaningful oversight over the company’s for-profit arm at a time when, she said, “the stakes [for AI safety] were going to get a lot higher.”
David Schizer, nonprofit governance expert
Among the day's attractions was the scheduled testimony of David Schizer, former dean of Columbia Law School and an expert witness engaged by Musk's lawyers to testify on nonprofit governance. Schizer focused on the customary practices of nonprofit corporations, particularly situations where a nonprofit has a for-profit subsidiary.
Schizer’s testimony touched on different aspects of nonprofit governance, including contracting, relationships with affiliates, independence of directors and board-CEO relationships.
Schizer's academic work has included nonprofit corporate governance, and he brings personal experience as well: he served for three years as CEO of a nonprofit organization with more than a thousand employees.

He said he was paid $1,500 an hour for his work, the same rate he charges for his consulting work. He did not have a precise answer to the question of how many hours he had worked on this case, though he estimated he'd been paid between $300,000 and $375,000 so far. He said he had never testified before as an expert.
He earned both his undergraduate and law degrees at Yale and held a prestigious clerkship with U.S. Supreme Court Justice Ruth Bader Ginsburg.
Genial and straightforward, he explained to the jury during direct testimony the difference between for-profit corporations and nonprofits. The former pursues profits, the latter a mission.
He walked through the role of a nonprofit corporation's board of directors, explaining that directors have a responsibility to serve the nonprofit with care, loyalty and fidelity to its mission. Unlike a for-profit corporation, whose fundamental purpose is to generate profits for its owners, a nonprofit exists to pursue its mission; indeed, nonprofit corporations have no owners.
He said that the board doesn’t manage the nonprofit but rather oversees the officers and employees to ensure they are fulfilling the nonprofit’s mission. One particularly important responsibility, he said, was to select and evaluate the work of the CEO. Schizer said that if a board determines that a CEO is not advancing the mission, it is appropriate to discharge the CEO.

Schizer answered a series of questions from Musk's lawyer Steven Molo, each based on specific circumstances involving OpenAI that Molo believes will be proved during the trial. Schizer's role as an expert is not to determine factual matters but to give his opinion on what the facts mean, if they are proven true.
He said — assuming that Molo proves the facts he used — that OpenAI has not followed the customary practices of nonprofit corporations in a number of key areas.
For instance, Molo contends that Altman released the GPT-4 Turbo model without a safety review and falsely told others that no review was required. Schizer said a board cannot accept such conduct, even if the model later turned out to be safe, because AI safety is a core part of OpenAI's mission. Similarly, he added, Altman's failure to provide complete and reliable information to the board was unacceptable.
Much of Molo's examination was devoted to Schizer's opinion on the impact of a 2023 transaction in which Microsoft invested $10 billion in OpenAI and obtained an enhanced financial stake in the company and its future revenue. Schizer also discussed a “recapitalization” of OpenAI in 2025 that restructured Microsoft's economic stake and expanded its claim to future revenue if AGI is achieved.
Schizer testified that in both cases the nonprofit had given up significant value to other stakeholders without doing an adequate evaluation of the value of what it received in return. He questioned whether the transactions were fair to the nonprofit.
On cross-examination, Randall Jackson, one of Microsoft's lawyers, sought to downplay Schizer's credentials and expertise on the grounds that he knew nothing about AI technology and was not able to value OpenAI's intellectual property. Schizer said he did not need to know about artificial intelligence to evaluate a nonprofit board's responsibilities.
Jackson’s main point was that the nonprofit’s stake in the for-profit is currently estimated to be worth $200 billion, a staggering amount of money for a nonprofit.
He asked Schizer whether there is “any aspect of its mission that OpenAI cannot pursue because it doesn't have enough money.”
Schizer said that if the board had fulfilled its responsibility, OpenAI could have had more money.


