Good news, everyone: The US military says it will be ethically minded about how it develops AI

Whew! Defense dept will 'detect and avoid unintended consequences' – so that's fine, right?

The US Department of Defense has formally adopted a set of principles to ensure the ethical development and deployment of AI technology for military use.

"We owe it to the American people and to our men and women in uniform to adopt AI ethics principles that reflect our nation's values of a free and open society," Lieutenant General Jack Shanahan, director of the DoD's Joint Artificial Intelligence Center (JAIC), said during the briefing.

The principles, initially proposed last year, boil down to five areas of concern:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities
  • Equitable: The department will take deliberate steps to minimise unintended bias in AI capabilities
  • Traceable: The department's AI capabilities will be developed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation
  • Reliable: The department's AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles
  • Governable: The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior
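That last "Governable" principle is the only one that reads like an engineering requirement. For illustration only, here's a minimal Python sketch of what a disengage-on-anomaly wrapper around a deployed model might look like. Every name, threshold, and the anomaly signal itself are hypothetical; the DoD document prescribes no implementation.

```python
# Hypothetical sketch only: a "governable" wrapper that monitors a deployed
# model and disengages it when behaviour drifts outside expected bounds.
# The anomaly signal (sustained low confidence) and all thresholds are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class GovernedModel:
    model: callable                  # underlying predictor: x -> (label, confidence)
    confidence_floor: float = 0.5    # assumed acceptable confidence threshold
    max_low_conf_streak: int = 10    # consecutive low-confidence outputs tolerated
    engaged: bool = True
    _streak: int = field(default=0, repr=False)

    def predict(self, x):
        if not self.engaged:
            raise RuntimeError("model disengaged pending human review")
        label, confidence = self.model(x)
        # Track a simple anomaly signal: a run of low-confidence outputs.
        self._streak = self._streak + 1 if confidence < self.confidence_floor else 0
        if self._streak >= self.max_low_conf_streak:
            self.engaged = False     # deactivate rather than keep acting
            raise RuntimeError("model disengaged: sustained low-confidence output")
        return label, confidence
```

A real system would presumably alert an operator rather than simply raise an exception, but the shape (measure a signal, check a threshold, deactivate) is what the principle gestures at.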

The move comes after Michael Kratsios, chief technology officer of the United States and deputy assistant to the President at the White House Office of Science and Technology Policy, drafted a proposal (PDF) to regulate AI applications from the private sector.

What has changed?

Now it looks like the DoD supports some sort of internal regulation surrounding AI, too. So what does this mean for the US military? Probably not much, really. None of these principles are legally binding. Although the military pledges to be ethically minded when rolling out AI technology in combat and non-combat applications, it's not clear who, if anyone, will hold it accountable to those principles.

The DoD has made it clear that it hopes to open up and ink deals with companies that have the technical expertise to develop algorithms for warfare. Under Project Maven, first revealed in 2018, Google was employed to help the Pentagon build computer-vision software that would automatically analyse video footage captured by drones and flag useful information.
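Maven's actual models and tooling were never made public, but for a rough sense of the general shape of such a pipeline, here's frame-by-frame object detection over video using generic open-source parts. The detector, input file name, and confidence cutoff below are all assumptions for illustration, not Maven's stack.

```python
# A rough sketch of a drone-footage analysis pipeline built from generic
# open-source parts: decode video frame by frame, run a pretrained object
# detector, and report confident detections.

import cv2                                   # pip install opencv-python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # generic pretrained detector

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical input file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # BGR (OpenCV) -> RGB float tensor in [0, 1], as torchvision expects
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]      # dict with "boxes", "labels", "scores"
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score > 0.8:                      # arbitrary confidence cutoff
            print("object at", box.tolist(), "score", float(score))
cap.release()
```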

After the Chocolate Factory faced internal revolt and public criticism, CEO Sundar Pichai decided to can the whole thing. Maybe by adopting ethical principles, the DoD is trying to portray its technical efforts as wholesome in an attempt to woo back sceptical Silicon Valley companies. That strategy probably won't work, however. If anyone knows how to front AI principles to appear more ethical, it'll be the tech giants themselves. ®