Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o

A former senior employee at OpenAI claimed that the company behind ChatGPT prioritizes “shiny products” over safety. He revealed that he quit after a disagreement over key aims reached a “breaking point.”

Jan Leike, who was OpenAI’s co-head of superalignment, focused on ensuring that powerful AI systems adhered to human values and aims. His remarks come ahead of a global AI summit in Seoul next week, where politicians, experts, and tech executives will discuss technology oversight.

Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. He is the second senior safety figure to leave OpenAI this week, following the resignation of Ilya Sutskever, OpenAI’s co-founder and fellow co-head of superalignment.

In a thread on X posted on Friday, Leike detailed the reasons for his departure, stating that safety culture had become a lower priority.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

OpenAI was founded with the goal of ensuring that artificial general intelligence, described as “AI systems that are generally smarter than humans,” benefits all of humanity. In his posts on X, Leike stated that he had been in disagreement with OpenAI’s leadership about the company’s priorities for some time, and this standoff had “finally reached a breaking point.”

Leike argued that OpenAI, which also developed the Dall-E image generator and the Sora video generator, should be investing more resources in issues such as safety, social impact, confidentiality, and security for its next generation of models.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote, adding that it was becoming “harder and harder” for his team to conduct its research.

“Building machines smarter than humans is inherently dangerous. OpenAI is taking on an enormous responsibility on behalf of all humanity,” Leike wrote, adding that OpenAI “must prioritize becoming a safety-first AGI company.”

Sam Altman, OpenAI’s chief executive, responded to Leike’s thread on X by thanking his former colleague for his contributions to the company’s safety culture.

“He’s right, we have a lot more to do; we are committed to doing it,” he wrote.

Sutskever, who was also OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI “will build AGI that is both safe and beneficial” under its current leadership. Initially, Sutskever had supported Altman’s removal as OpenAI’s CEO last November, but later backed his reinstatement after days of internal turmoil at the company.

Leike’s warning coincided with the release of an inaugural report on AI safety by a panel of international AI experts. The report noted disagreements over the likelihood of powerful AI systems evading human control. It also cautioned that regulators might struggle to keep up with rapid technological advancements, highlighting the “potential disparity between the pace of technological progress and the pace of regulatory response.”
