Sam Altman wants a US-led freedom coalition to fight authoritarian AI

Team America AI Police?

Sam Altman has called for a US-led coalition of nations to ensure AI remains a vehicle for freedom and democracy, and not a tool for authoritarians to keep themselves in power and dominate others. 

Altman – the billionaire off-again, on-again CEO of OpenAI – wrote in a Washington Post op-ed today that the question of "who would control AI" is "the urgent question of our time." Not climate change, which his and others' AI buddies are undoubtedly contributing to, nor political misinformation enabled by the technology.

He argues we need to ensure the Western world – led by the United States – is the one that dominates the space. Only the uncharitable would interpret Altman's call to action as him simply wanting to protect his California-based OpenAI from Chinese competition.

"There is no third option — and it's time to decide which path to take," Altman said. "The United States currently has a lead in AI development, but … authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us."

Altman believes such regimes will use AI's potential scientific, health, and educational benefits to maintain a grip on power, specifically naming Russia and China as threats. If allowed to do so, he warns, "they will force US companies and those of other nations to share user data … spy on their own citizens or create next-generation cyberweapons to use against other countries." 

(Because a democratic nation would never do such a thing – right?)

"The first chapter of AI is already written," Altman said, referring to "limited assistants" such as ChatGPT and Microsoft Copilot. "More advances will soon follow and will usher in a decisive period in the story of human society.

"If we want to ensure that the future of AI is a future built to benefit the most people possible, we need a US-led global coalition of like-minded countries and an innovative new strategy to make it happen," Altman added. 

That strategy, the CEO said, needs to involve four things: improving AI security; getting the government to build out the infrastructure needed to power the latest and greatest AI models; developing a "diplomacy policy for AI"; and establishing new norms around developing and deploying AI.

Altman said he sees a future AI freedom force playing a role akin to the International Atomic Energy Agency. Alternatively, he said, an ICANN-style body might also work. 

Naturally, Altman sees this as a job for US policymakers working in close collaboration with private-sector AI businesses – his, in all likelihood. Altman and OpenAI's record is hardly spotless, however.

Altman is no stranger to begging the government to regulate AI startups, but that call for control is frequently undercut by his other actions. He's signed an open letter, alongside other industry heavyweights, warning of apocalyptic threats triggered by rogue models, but when some of those same leaders called for a moratorium on training powerful AIs, Altman's name was conspicuously absent from the list. 

Altman's also gone before Congress to tell members how much the AI industry needs to be regulated, while at the same time lobbying other lawmakers to exclude OpenAI from stricter regulations. 

All the while, OpenAI has opted not to disclose security problems it didn't consider critical, and has been accused of being a bit authoritarian itself while seemingly violating Europe's GDPR rules by not allowing EU citizens to request corrections of their own personal data.

Former OpenAI board member Helen Toner even said in a recent interview that Altman, on multiple occasions, "gave us inaccurate information about the small number of formal safety processes that the company did have in place."

That meant "it was basically impossible for the board to know how well those safety processes were working or what might need to change," Toner said. When confronted on the matter, Altman reportedly tried to push Toner out of the super lab while continuing to keep the true state of OpenAI's product safety from the rest of the board.

Whether Altman or OpenAI should be shaping the future of international AI policy is, at the very least, an open question. We've reached out to OpenAI and Altman for comment. ®
