(AI illustration by Joe Dworetzky/Bay City News via ChatGPT)

AT THE FEDERAL COURTHOUSE in Oakland Wednesday — the seventh day in the trial of Elon Musk’s lawsuit against Sam Altman and OpenAI — the jury heard the testimony of Shivon Zilis, a former OpenAI board member with a close and complicated relationship with Musk.

Zilis provided testimony about her service as a board member of OpenAI and her role during a critical six-week period when the founders who created OpenAI as a nonprofit corporation considered a variety of alternative structures to raise money.

READ MORE

For a deeper dive into the origins of the Musk v. Altman case, see Joe Dworetzky’s four-part report on how OpenAI’s founders went from tech allies to bitter courtroom enemies.

‘Before the Bell Rings’

Part 1 | Part 2 | Part 3 | Part 4

Musk’s suit contends that Altman and Greg Brockman breached their fiduciary duties to devote the charitable assets of OpenAI, a nonprofit corporation, solely to the nonprofit’s mission of developing artificial intelligence for the benefit of humanity.

Zilis is a key witness for both sides because she was involved with OpenAI in a number of different capacities. She was a senior counselor to Musk and had a close personal relationship with him; the two share four children, the oldest of whom are twins born in 2021. She also served on the board of OpenAI from 2020 to 2023 and voted to approve a 2023 contract with Microsoft in which Microsoft invested $10 billion and obtained greater rights to some of OpenAI’s intellectual property. Musk contends that the contract was a key part of the defendants’ breach of the charitable trust.

Shivon Zilis’ direct testimony

Zilis walked firmly to the stand, dressed all in black. She spoke crisply and directly, answering questions without embellishment, occasionally saying “of course” rather than “yes.”

She testified that after earning an undergraduate degree from Yale, she obtained a series of positions with organizations in the artificial intelligence universe, including an AI incubator and a venture capital firm focusing on the developing AI industry.

She explained her deep interest in artificial intelligence, recounting that at 13 she read “The Age of Spiritual Machines” by Ray Kurzweil, the prominent futurist, which described a world in which machines surpass human intelligence. She said she read the book 10 to 15 times and that it shaped what she wanted to do in life. AI, she said, has been at the center of her life for the last 15 years.

In the early days of OpenAI’s corporate existence, Greg Brockman offered her a job as chief operating officer of the fledgling organization. Brockman, along with Altman, Musk and Ilya Sutskever, was one of OpenAI’s founders, and he serves as its current president. At the time, the organization had not yet developed any applications or products, so she didn’t think it made sense to be a COO. Interested in the mission, however, she offered to work 10 hours a week for free as an advisor.

She met Musk at OpenAI and later went to work for him at three of his companies: SpaceX, Neuralink and Tesla. She said she worked 80 to 100 hours a week, trying to find and fix bottlenecks in the workflow.

“It was just bananas,” she said.

Of course, it was just bananas until everyone split. (AI illustration by Joe Dworetzky/Bay City News via ChatGPT)

Zilis was deeply involved during discussions among the OpenAI founders in August and September of 2017 about strategies for raising capital for the company’s rapidly increasing needs for computing capacity. She said she often provided information to Musk and Sam Teller, another Musk employee, about conversations she had with some or all of the other OpenAI founders.

She testified that the discussions went on for weeks and considered many different strategies — some crazy — to raise the money that would likely be needed to develop artificial general intelligence, or AGI. She said the ideas were wide-ranging and that none of the concepts were ultimately agreed to by all four founders.

She said that the discussions ended in 2018 in a “weird halfway breakup” between Musk and the other three founders. Altman, Brockman and Sutskever accepted the existing nonprofit structure of OpenAI but did not abandon interest in forming a for-profit company that would be able to attract investment capital.

Musk left OpenAI’s board in February 2018, but Zilis remained involved with OpenAI, sometimes providing information to Musk and Teller about what was happening at the company. She sought advice from Musk about how to handle the “tricky” trust issues.

According to Zilis, she asked Musk, “Do you prefer I stay close and friendly to keep information flowing or begin to disassociate [from the OpenAI founders]?” — to which he said the former.

Zilis testified that thereafter, Altman invited her to serve on the OpenAI board. She said she accepted because not many people in the world were interested in pursuing AGI for the benefit of humanity. She wanted to be part of that mission. She joined the OpenAI board in 2020 and served until 2023.

Zilis’s board service was complicated by her relationship with Musk. She testified that she had a romantic relationship with him years before and that later, after she had health issues, she accepted Musk’s offer of a deposit for in vitro fertilization. Through that process, she has had four children with him, the most recent born in February 2025.

She said Musk works “maniac hours” and has a complicated life. They agreed to keep confidential that he was the father of her children, but in July 2022, after a media outlet told her it was going to report on it, she disclosed the fact to Altman and Brockman. She said the board considered whether she should remain a member and decided that she could. Once on the board, Zilis said, she did not discuss her work at OpenAI with Musk.

Although she considered Altman a friend, Zilis said that when she was on the board, there were several incidents that made her concerned about Altman’s candor.

One was a proposal that OpenAI enter a large energy supply agreement with Helion Energy, a company working to produce nuclear energy by fusion that at the time had not demonstrated it could do so at scale. Altman and Brockman had a financial interest in the company. Their interests were disclosed to the board, and the two did not vote on the approval, but Zilis thought the deal was an outlier compared with the other deals the board had considered. She said it was “super out of left field” and a “major bet on a speculative technology.” It gave her a bad feeling in the pit of her stomach, she said.

Cross-examining Zilis

Sarah Eddy, one of OpenAI’s lawyers, conducted a lengthy cross-examination of Zilis, repeatedly using excerpts from Zilis’s deposition transcript to point out places where she believed Zilis’s trial answers differed from what she had said at her deposition.

Eddy went after several main points.

Because Musk has argued that the other founders should have continued to operate OpenAI in the way it was originally structured, Eddy got Zilis to concede that she did not know of any promise to keep OpenAI as a nonprofit or not to establish a for-profit subsidiary.

To rebut Musk’s contention that the other founders had fully committed to stay with OpenAI and keep it a nonprofit, Eddy walked Zilis through the evidence that Brockman and Sutskever never agreed to Musk’s proposal that they remain for two years and not recruit employees after leaving.

Was Musk a dreamer or just a schemer? (AI illustration by Joe Dworetzky/Bay City News via ChatGPT)

A key component of Musk’s claim is that in 2023, OpenAI entered a transaction with Microsoft that, in Musk’s words, was when the other founders “stole the charity.” Eddy brought out that Zilis, as a board member of OpenAI, voted to approve the Microsoft agreement.

To support the other founders’ claim that Musk was scheming to create an AGI lab at Tesla, Eddy used her cross-examination to grill Zilis on email exchanges in which Musk considered poaching several OpenAI employees for Tesla.

To rebut the idea that Musk was in the dark about the other founders’ plan for changing the corporate structure after he left the OpenAI board, Eddy focused Zilis on documentation that showed that Musk knew of OpenAI’s creation of a for-profit subsidiary and considered making an investment. Musk ultimately declined to invest but said that he was “supportive in spirit.”

Zilis was composed throughout her cross-examination, generally accepting what the contemporaneous documents said but offering to “put them in context.”

Former board member Toner testifies by video

The trial day concluded with Helen Toner, a former board member of OpenAI’s nonprofit arm who was focused on AI safety and who voted in November 2023 to fire Altman before he was reinstated.

Toner is director of strategy at Georgetown’s Center for Security and Emerging Technology. She said she spends the greatest portion of her work for the Center on the “catastrophic risks” posed by AI. She joined the OpenAI board in late 2021.

In a video deposition offered by Musk’s lawyers and played for the jury, Toner testified that during her tenure, Altman did not provide the board “candid and complete information about the safety risks” of OpenAI’s products.

During her time on the board, she observed how the procedures of OpenAI’s Deployment Safety Board evolved, becoming “less slapdash” over time. The Deployment Safety Board was created by OpenAI and Microsoft to review potentially risky AI models before they are released to users.

Following the release of OpenAI’s GPT-4, Toner was concerned that the board “did not know about it beforehand,” and she learned that Microsoft had released a version without Deployment Safety Board approval. Reflecting on her service on the OpenAI board, Toner said, “I was used to the board not being informed about things.”

Toner testifies on a tumultuous tenure. (AI illustration by Joe Dworetzky/Bay City News via ChatGPT)

In October 2023, she co-authored a white paper on AI safety that pointed to criticism OpenAI had received over the susceptibility of its products to “jailbreaks,” in which users bypass safety protocols. After the paper’s publication, Altman asked to speak with Toner. She said he was “concerned about the content of the paper” at a time when OpenAI was being investigated by the Federal Trade Commission.

Her concerns grew when GPT-4 Turbo was released in November 2023 without Deployment Safety Board review. Altman told the board that the model did not need safety review.

In November 2023, after senior executives at OpenAI had expressed “serious concerns” about Altman, the board voted to remove him as CEO. Toner cited a lack of “honesty and candor” and “resistance to board oversight” as among the grounds for removal. Altman was notified in a video call on Friday, Nov. 17, 2023.

Toner described the “chaotic” weekend that followed, with threats from OpenAI employees to resign and an offer from Microsoft to hire Altman, Brockman, and any other OpenAI employees. The following Tuesday Altman was reinstated as CEO, and Toner and two other board members left the board.

At the time of her deposition, she continued to have concerns about the “fragility” of the Deployment Safety Board.

The trial will continue Thursday in Oakland with the completion of Toner’s video testimony and live testimony from David Schizer, a former dean of Columbia Law School with personal experience running a humanitarian nonprofit.

Schizer was engaged by Musk’s lawyers to give an expert opinion on whether OpenAI lived up to its duties as a nonprofit corporation.