
Internal divisions linger at OpenAI after November’s attempted coup


OpenAI is struggling to contain internal rows about its leadership and safety as the divisions that led to last year’s attempted coup against chief executive Sam Altman spill back into the public domain.

Six months after the aborted removal of Altman, a series of high-profile resignations point to continuing rifts inside OpenAI between those who want to develop AI rapidly and those who would prefer a more cautious approach, according to current and former employees.

Helen Toner, one of the former OpenAI board members who tried to remove Altman in November, spoke out publicly for the first time this week, saying he had misled the board “on multiple occasions” about the company’s safety processes.

“For years Sam had made it really difficult for the board to actually do [its] job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she said on the TED AI Show podcast.

The most prominent of several departures in the past few weeks has been that of OpenAI co-founder Ilya Sutskever. One person familiar with his resignation described him as being caught up in Altman’s “conflicting promises” prior to last year’s leadership upset.

In November, OpenAI’s directors — who at the time included Toner and Sutskever — pushed Altman out as chief executive in an abrupt move that shocked investors and staff. He returned days later under a new board, minus Toner and Sutskever.

“We take our role incredibly seriously as the board of a non-profit,” Toner told the Financial Times. The decision to fire Altman “took an enormous amount of time and thought”, she added.

Sutskever said at the time of his departure that he was “confident” OpenAI would build artificial general intelligence — AI that is as smart as humans — “that is both safe and beneficial” under its current leadership, including Altman.

However, the November affair does not appear to have resolved the underlying tensions inside OpenAI that contributed to Altman’s ejection.

Jan Leike, another recent departure, announced his resignation this month. Leike led OpenAI’s efforts to steer and control super-powerful AI tools and worked closely with Sutskever. He said his differences with the company’s leadership had “reached a breaking point” as “safety culture and processes have taken a back seat to shiny products”. He has since joined OpenAI rival Anthropic.

The turmoil at OpenAI — which has bubbled back to the surface despite the vast majority of employees calling for Altman’s reinstatement as CEO in November — comes as the company prepares to launch a new generation of its AI software. It is also discussing raising capital to fund its expansion, people familiar with the talks have said.

Altman’s direction of OpenAI towards shipping product rather than publishing research led to its breakthrough chatbot ChatGPT and kick-started a wave of investment in AI across Silicon Valley. After securing more than $13bn in backing from Microsoft, OpenAI’s revenue is on track to surpass $2bn this year.

Yet this focus on commercialisation has come into conflict with those inside the company who would prefer to prioritise safety, fearing OpenAI might rush into creating a “superintelligence” that it cannot properly control.

Gretchen Krueger, an AI policy researcher who also quit the company this month, listed several concerns about how OpenAI was handling a technology that could have far-reaching ramifications for business and the public.

“We [at OpenAI] need to do more to improve foundational things,” she said in a post on X, “like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.” 

Altman, responding to Leike’s departure, said his former employee was “right we have a lot more to do; we are committed to doing it”. This week, OpenAI announced a new safety and security committee to oversee its AI systems. Altman will sit on the committee alongside other board members. 

“[Even] with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives,” Toner wrote alongside Tasha McCauley, who was also on OpenAI’s board until November 2023, in an opinion article for The Economist magazine, published days before OpenAI announced its new committee.

Responding to Toner’s comments, Bret Taylor, OpenAI’s chair, said the board had worked with an external law firm to review last November’s events, concluding that “the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners”.

“Our focus remains on moving forward and pursuing OpenAI’s mission to ensure AGI benefits all of humanity,” he said.

One person familiar with the company said that since November’s tumult, OpenAI’s biggest backer, Microsoft, had put more pressure on it to prioritise commercial products. That had amplified tensions with those who would prefer to focus on scientific research.

Many inside the company still want to focus on its long-term goal of AGI, but internal divisions and an unclear strategy from OpenAI’s leadership have demotivated staff, the person said.

“We’re proud to build and release models that lead the industry in both capabilities and safety,” OpenAI said. “We work hard to maintain this balance and think it’s critical to have a robust debate as the technology advances.”

Despite the scrutiny invited by its recent internal ructions, OpenAI continues to build more advanced systems. It announced this week it had recently started training the successor to GPT-4, the large AI model that powers ChatGPT.

Anna Makanju, OpenAI’s vice-president of global affairs, said policymakers had approached her team about the recent exits to find out if the company was “serious” about safety.

She said safety was “something that is the responsibility of many teams across OpenAI”.

“It’s quite likely that [AI] will be even more transformational in the future,” she said. “Certainly, there are going to be a lot of disagreements on what exactly is the right approach to prepare society [and] how to regulate it.”
