Business
OpenAI scraps team that researched risk of ‘rogue’ AI
- OpenAI’s Superalignment team was formed in July 2023 to mitigate AI risks, like “rogue” behavior.
- OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned.
- One of the former leaders critiqued OpenAI’s focus on “shiny” products over safety in a post on X.
In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported.
OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it “going rogue.”
The team reportedly disbanded days after Sutskever and Leike announced their resignations earlier this week. Sutskever said in his post that he felt “confident that OpenAI will build AGI that is both safe and beneficial” under the current leadership.
He also added that he was “excited for what comes next,” which he described as a “project that is very personally meaningful” to him. The former OpenAI executive hasn’t elaborated on it but said he will share details in time.
Sutskever, a cofounder and former chief scientist at OpenAI, made headlines when he announced his departure. The executive played a role in the ousting of CEO Sam Altman in November. Despite later expressing regret for contributing to Altman’s removal, Sutskever’s future at OpenAI had been in question since Altman’s reinstatement.
Following Sutskever’s announcement, Leike posted on X, formerly Twitter, that he was also leaving OpenAI. The former executive published a series of posts on Friday explaining his departure, which he said came after disagreements about the company’s core priorities for “quite some time.”
Leike said his team has been “sailing against the wind” and struggling to get compute for its research. The mission of the Superalignment team involved using 20% of OpenAI’s computing power over the next four years to “build a roughly human-level automated alignment researcher,” according to OpenAI’s announcement of the team last July.
Leike added that “OpenAI must become a safety-first AGI company.” He said building generative AI is “an inherently dangerous endeavor” and that OpenAI was more concerned with releasing “shiny products” than with safety.
Jan Leike did not respond to a request for comment.
The Superalignment team’s objective was to “solve the core technical challenges of superintelligence alignment in four years,” a goal the company admitted was “incredibly ambitious.” It also acknowledged that success wasn’t guaranteed.
Some of the risks the team worked on included “misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance.” The company said in its post that the new team’s work was in addition to existing work at OpenAI aimed at improving the safety of current models, like ChatGPT.
Some of the team’s remaining members have been rolled into other OpenAI teams, Wired reported.
OpenAI didn’t respond to a request for comment.
Axel Springer, Business Insider’s parent company, has a global deal to allow OpenAI to train its models on its media brands’ reporting.