Google torpedoes 'no AI for weapons' rules

Will now happily unleash the bots when 'likely overall benefits substantially outweigh the foreseeable risks'

Google has published a new set of AI principles that don’t mention its previous pledge not to use the tech to develop weapons or surveillance tools that violate international norms.

The Chocolate Factory's original AI principles, outlined by CEO Sundar Pichai in mid-2018, included a section on "AI applications we will not pursue." At the top of the list was a commitment not to design or deploy AI for "technologies that cause or are likely to cause overall harm" and a promise to weigh risks so that Google would "proceed only where we believe that the benefits substantially outweigh the risks."

Other AI applications Google vowed to steer clear of that year included:

  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Those principles were published two months after some 3,000 Googlers signed a petition opposing the web giant's involvement in a Pentagon program called Project Maven that used Google's AI to analyze drone footage.

The same month Pichai published Google's AI principles, the search and ads giant decided it would not renew its Project Maven contract when it expired in 2019.

In December 2018 the Chrome maker challenged other tech firms building AI to follow its lead and develop responsible tech that "avoids abuse and harmful outcomes."

On Tuesday this week, a notice was added to Pichai's 2018 blog post advising readers that, as of February 4, 2025, Google has "made updates to our AI Principles" that can be found at AI.Google.

The Chocolate Factory's updated AI principles center on three things: bold innovation; responsible development and deployment; and collaborative progress.

The updated principles make no mention of applications Google won't work on, nor any pledge not to use AI for harmful purposes or weapons development.

They do state that Google will develop and deploy AI models and apps “where the likely overall benefits substantially outweigh the foreseeable risks."

There’s also a promise to always use "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights," plus a pledge to invest in "industry-leading approaches to advance safety and security research and benchmarks, pioneering technical solutions to address risks, and sharing our learnings with the ecosystem."

The Big G has also promised "rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias" along with “promoting privacy and security, and respecting intellectual property rights."

A section of the new principles offers the following example of how they will operate:

We identify and assess AI risks through research, external expert input, and red teaming. We then evaluate our systems against safety, privacy, and security benchmarks. Finally, we build mitigations with techniques such as safety tuning, security controls, and robust provenance solutions.

Also on Tuesday, Google published its annual Responsible AI Progress Report, which addresses the current AI arms race.

"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," said James Manyika, SVP for research, labs, technology and society, and Demis Hassabis, co-founder and CEO of Google DeepMind.

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the Google execs continued. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Google will continue to pursue "AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights," the duo added.

Google did not immediately respond to The Register's inquiries, including whether there are any AI applications it won't pursue under the updated principles, why it removed the weapons and surveillance mentions from its list of banned uses, and whether it has any specific policies or guidelines covering how its AI can be used for these previously off-limits purposes.

We will update this article if and when we hear back from the Chocolate Factory.

Meanwhile, Google's rivals happily provide machine-learning models and IT services to the United States military and government, at least. Microsoft has argued America's armed forces deserve the best tools, which in the Windows giant's mind means its own technology. OpenAI, Amazon, IBM, Oracle, and Anthropic work with Uncle Sam on various projects. Even Google these days. The internet titan is just less openly squeamish about it than it was in 2018. ®
