Opponents Say Restructuring Will Undermine OpenAI’s Security Commitments
OpenAI’s attempt to convert to a for-profit company is facing opposition from competitors and artificial intelligence safety activists, who argue that the transition would “undermine” the tech giant’s commitment to secure AI development and deployment.
Nonprofit organization Encode on Friday asked the U.S. District Court for the Northern District of California for permission to file an amicus brief supporting Elon Musk's motion for an injunction to block OpenAI's planned transition (see: OpenAI Exits, Appointments and New Corporate Model).
Volunteer network Encode is a supporter of the AI safety bill vetoed by California Gov. Gavin Newsom and has contributed to the White House’s AI Bill of Rights and President Joe Biden’s AI executive order.
An early backer of OpenAI in its nonprofit days, Musk filed a lawsuit in November accusing the company of anticompetitive behavior and of abandoning its philanthropic mission, and asking the court to halt the transition. OpenAI labeled Musk's contention a case of sour grapes.
Encode’s proposed brief from late last week said it sought to support Musk’s injunction petition because OpenAI becoming a for-profit company would “undermine” its mission to develop and deploy “transformative technology in a way that is safe and beneficial to the public.”
“OpenAI and its CEO Sam Altman claim to be developing society-transforming technology, and those claims should be taken seriously,” it said. “If the world truly is at the cusp of a new age of artificial general intelligence, then the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit rather than an organization focused on generating financial returns for a few privileged investors.”
OpenAI was set up in 2015 as a nonprofit research lab but shifted to a hybrid structure to raise the large sums its projects required. It adopted a "capped profit" model, permitting investments from corporations including Microsoft while maintaining nonprofit oversight. The organization now intends to convert its for-profit arm into a Delaware public benefit corporation, which would issue ordinary shares of stock. The nonprofit branch would continue to exist, but OpenAI would trade its control for an ownership stake in the PBC.
Encode's lawyers argued that OpenAI's nonprofit division, which has pledged not to compete with "value-aligned, safety-conscious projects" nearing AGI, could lose its incentive to honor that commitment under the for-profit structure. Board members' authority to cancel investor equity for safety reasons would also be eliminated by the restructuring, the brief said.
“The public interest would be harmed by a safety-focused, mission-constrained nonprofit relinquishing control over something so transformative at any price to a for-profit enterprise with no enforceable commitment to safety,” it said.
Other corporations have attempted to prevent the transition as well.
Meta earlier this month reportedly wrote to California Attorney General Rob Bonta, saying that the conversion would have "seismic implications for Silicon Valley."
Several top OpenAI executives have quit, citing concerns that the company prioritizes profits over safety.
In May, the company set up a committee to make "critical" safety and security decisions for all of its projects, after disbanding its "superalignment" team dedicated to preventing AI systems from going rogue (see: OpenAI Formulates Framework to Mitigate 'Catastrophic Risks').
OpenAI co-founder Ilya Sutskever and researcher Jan Leike quit the company over disagreements with its approach to safety, as did policy researcher Gretchen Krueger. Sutskever and Leike were both part of the now-disbanded superalignment team, which worked on the long-term safety risks facing the company and the technology. Krueger said she decided to resign a few hours before her two colleagues did, as she shared their concerns.
In a social media post, Leike criticized OpenAI's lack of support for the superalignment team. "Over the past years, safety culture and processes have taken a back seat to shiny products," he said.
Policy researcher Miles Brundage, who quit the company in October, said on social media that he was concerned about OpenAI’s non-profit entity becoming a “side thing.”
If OpenAI were allowed to operate as a for-profit company, Encode said, the Sam Altman-led firm's "touted fiduciary duty to humanity would evaporate, as Delaware law is clear that the directors of a PBC owe no duty to the public at all."