OpenAI has lost nearly half of its AGI safety team, says ex-researcher

According to Daniel Kokotajlo, one of the company’s former governance researchers, OpenAI has lost nearly half of its team working on AGI safety.

“It wasn’t a coordinated thing. I think people just gave up one by one,” Kokotajlo told Fortune in a report published Tuesday.

Kokotajlo, who left OpenAI in April 2024, said that the ChatGPT maker initially had about 30 people working on safety issues related to artificial general intelligence.

However, according to Kokotajlo, a string of departures over the course of the year has left the safety team with only about 16 members.

“People who are primarily concerned with AGI safety and preparedness are increasingly being marginalized,” Kokotajlo told the outlet.

Business Insider could not independently confirm Kokotajlo’s figures on OpenAI’s headcount. When asked for comment, an OpenAI spokesperson told Fortune that the company is “proud of our track record providing the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.”

OpenAI, the spokesperson added, will “continue to work with governments, civil society and other communities around the world” to address issues related to AI risks and safety.

Earlier this month, John Schulman, the company’s co-founder and head of alignment science, announced that he would be leaving OpenAI to join competitor Anthropic.

Schulman said in an August 5 post on X that his decision was a “personal one” and not “due to a lack of support for alignment research at OpenAI.”

Schulman’s departure came just months after another co-founder, chief scientist Ilya Sutskever, announced his resignation from OpenAI in May. Sutskever founded his own AI company, Safe Superintelligence Inc., in June.

Jan Leike, who co-led OpenAI’s Superalignment team with Sutskever, left the company in May. Like Schulman, he now works at Anthropic.

Leike and Sutskever’s team was tasked with ensuring that any superintelligent AI systems OpenAI builds remain aligned with humanity’s interests.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote in a post on X in May.

“But in recent years, safety culture and processes have taken a back seat to shiny products,” he added.

OpenAI did not immediately respond to a request for comment from Business Insider sent outside of regular business hours.