OpenAI begins training new artificial intelligence model as it grapples with safety concerns


OpenAI said it has begun training its next-generation artificial intelligence software, even as the startup walked back earlier suggestions that it aimed to build “superintelligent” systems smarter than humans.

The San Francisco-based company said on Tuesday that it had begun training a new artificial intelligence system “to bring us to the next level of capability” and that a new safety and security committee would oversee its development.

But even as OpenAI races to develop artificial intelligence, one of its executives appeared to back away from chief executive Sam Altman’s previous comments that the company’s goal was to eventually build a “superintelligence” far more advanced than humans.

Anna Makanju, OpenAI’s vice president of global affairs, told the Financial Times that its “mission” was to build an artificial general intelligence capable of “cognitive tasks that a human could do today.”

“Our mission is to build AGI; I wouldn’t say our mission is to build superintelligence,” Makanju said. “Superintelligence is technology that will be orders of magnitude more intelligent than human beings on Earth.”

Altman told the FT in November that he spent half his time researching “how to build superintelligence”.

Liz Bourgeois, a spokeswoman for OpenAI, said superintelligence is not “the company’s mission.”

“Our mission is AGI that benefits humanity,” she said after the FT story was first published on Tuesday. “To achieve this, we are also studying superintelligence, which we generally think of as systems even more intelligent than AGI.” She disputed any suggestion that the two were at odds.

While fending off competition from Google’s Gemini and Elon Musk’s xAI startup, OpenAI is trying to reassure policymakers that it is prioritising responsible AI development after several leading safety researchers quit this month.

The new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman and will report to the remaining three board members.

The company has not said what the successor to GPT-4, which powers its ChatGPT app and received a major upgrade two weeks ago, will be able to do, or when it will launch.

Earlier this month, OpenAI disbanded its so-called superalignment team — tasked with focusing on the safety of potentially superintelligent systems — after Ilya Sutskever, the team’s leader and a co-founder of the company, quit.

Sutskever’s departure came months after he helped lead a shock boardroom coup against Altman in November that ultimately proved unsuccessful.

The closure of the superalignment team led to the departure of several employees from the company, including Jan Leike, another leading AI safety researcher.

Makanju stressed that OpenAI is still working on the “long-term possibilities” of artificial intelligence, “even if they are theoretical”.

“AGI doesn’t exist yet,” Makanju added, saying such technology would not be released until it was safe.

Training is the primary step in how an AI model learns, drawing on the vast amounts of data and information fed into it. After processing the data and improving its performance, the model is validated and tested before being deployed in products or applications.
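The train-then-validate-then-test pipeline described above can be illustrated with a toy example. The sketch below is purely illustrative — it has no connection to OpenAI's actual systems — and uses a hypothetical one-parameter model fitted by gradient descent:

```python
# Illustrative sketch only (not OpenAI's pipeline): the three stages
# described above, shown with a toy one-parameter linear model.

def mse(w, data):
    """Mean squared error of the model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.01, epochs=200):
    """Learn the parameter w from the training split by gradient descent."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Toy dataset following y = 2x, divided into the three splits.
points = [(float(x), 2.0 * x) for x in range(12)]
train_set, val_set, test_set = points[:8], points[8:10], points[10:]

w = train(train_set)          # 1. training: the model learns from data
val_loss = mse(w, val_set)    # 2. validation: sanity-check on held-out data
test_loss = mse(w, test_set)  # 3. testing: final check before deployment
```

Real frontier-model training follows the same broad shape but at vastly greater scale, which is why, as the article notes, a new model can take many months to become a product.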

This lengthy and highly technical process means that the new OpenAI model may not become a tangible product for many months.

Additional reporting by Madhumita Murgia in London

