OpenAI calls for global watchdog focused on 'existential risk' posed by superintelligence

<movie voiceover>In a world united against one threat, one AI starts to fight bac...</movie voiceover>... Hey, who's writing this flick?

An international agency should be in charge of inspecting and auditing artificial general intelligence to ensure the technology is safe for humanity, according to top executives at GPT-4 maker OpenAI.

CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever said it's "conceivable" that within the next decade AI will obtain extraordinary abilities that exceed those of humans.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the trio wrote on Tuesday.

The cost of building such powerful technology is only decreasing as more people work towards advancing it, they argued. To keep that progress in check, development should be supervised by an international organization like the International Atomic Energy Agency (IAEA).

The IAEA was established in 1957, amid Cold War fears over the spread of nuclear weapons. The agency helps regulate nuclear power and sets safeguards to make sure nuclear energy isn't used for military purposes.

"We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc," they said.

Such a group would be in charge of tracking compute and energy use, vital resources needed to train and run large and powerful models.

"We could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year," OpenAI's top brass suggested. Companies would have to voluntarily agree to inspections, and the agency should focus on "reducing existential risk," not regulatory issues that are defined and set by a country's individual laws.

Last week, in a Senate hearing, Altman put forward the idea that companies should obtain a license to build models with advanced capabilities above a specific threshold. His suggestion was later criticized on the grounds that it could unfairly impact AI systems built by smaller companies or the open source community, which are less likely to have the resources to meet the legal requirements.

"We think it's important to allow companies and open source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)," they said.

Elon Musk in late March was one of 1,000 signatories of an open letter that called for a six-month pause in developing and training AI more powerful than GPT-4 due to the potential risks to humanity, a pause Altman confirmed in mid-April that OpenAI was, in effect, observing.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

Alphabet and Google CEO Sundar Pichai wrote a piece in the Financial Times at the weekend, saying: "I still believe AI is too important not to regulate, and too important not to regulate well". ®
