Elon Musk's legal campaign to dismantle OpenAI's for-profit structure has shifted the spotlight from corporate governance to a more fundamental question: whether the company's rush to commercialize artificial intelligence has come at the expense of safety. In federal court this week, testimony from a former employee described an organization that increasingly prioritized product launches over the safeguards it originally promised to uphold.
Rosie Campbell, who led OpenAI's AGI readiness team before departing in 2024, told the court that the lab's culture changed dramatically during her tenure. "When I joined, it was very research-focused and common for people to talk about AGI and safety issues," she testified. "Over time it became more like a product-focused organization." Her statements echo concerns raised by other departed researchers who have publicly warned that competitive pressure to ship models faster has eroded internal review processes.
The testimony arrives as OpenAI faces scrutiny on multiple fronts. The Superalignment team, which was tasked with ensuring future AI systems remain controllable, was disbanded in 2024, around the same time Campbell's group was dissolved. Musk's legal team argues that these structural changes reflect a broader abandonment of the nonprofit mission that OpenAI's founders originally committed to when the organization was established.
OpenAI's defense maintains that building advanced AI requires enormous capital, and that its capped-profit subsidiary was necessary to attract the funding required to pursue artificial general intelligence responsibly. Under cross-examination, Campbell acknowledged that significant investment was likely unavoidable. However, she maintained that building increasingly powerful models without commensurate safety infrastructure did not align with the mission she had signed up for.
Why it matters
The outcome of this case could shape how AI labs balance speed against caution for years to come. If the court finds that OpenAI's governance shift materially weakened its safety commitments, regulators and investors may demand stronger structural protections before approving future model releases or partnerships.
For the broader technology industry, the trial serves as a public stress test of the voluntary safety frameworks that major AI companies have adopted. As models become more capable and more deeply embedded in critical infrastructure, the question of who decides when a system is safe enough to deploy is no longer an academic debate. It is becoming a legal and commercial battleground with consequences for every enterprise betting on AI.