That AI scanning your X-ray for signs of COVID-19 may just be looking at your age

Plus: DARPA wants to spend $1m on AI research studying information warfare

In brief Machines are like humans – they’re lazy. When given the chance to take the easy route to complete a task, they will.

Academics at the University of Washington found that algorithms trained to diagnose COVID-19 from chest X-rays often look at secondary features, such as a patient’s age, rather than focusing on the images themselves – something known as shortcut learning.

“A physician would generally expect a finding of COVID-19 from an X-ray to be based on specific patterns in the image that reflect disease processes,” said Alex DeGrave, a medical science student at the American university and co-author of a paper published this week in Nature Machine Intelligence.

“But, rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and, thus, infer that they are more likely to have the disease because it is more common in older patients. The shortcut is not wrong per se, but the association is unexpected and not transparent. And, that could lead to an inappropriate diagnosis.”

Shortcut learning makes models less robust and therefore less reliable, which may explain why their performance typically drops in clinical settings. The code from the research project is available on GitHub.
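The failure mode is easy to reproduce on synthetic data. The sketch below is a toy illustration only – it has no relation to the paper's actual models or data. It trains a tiny logistic regression on a made-up cohort where a hypothetical "age" feature is strongly correlated with the label during training, then evaluates on a cohort where that correlation is broken. The model leans on the shortcut, and accuracy collapses under the shift.

```python
import math
import random

random.seed(0)

def make_data(n, age_label_corr):
    """Synthetic cohort: 'signal' weakly reflects disease;
    'age' is a confound whose label correlation we control."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label + random.gauss(0, 1.5)  # weak true signal
        # with probability age_label_corr, age tracks the label
        if random.random() < age_label_corr:
            age = 70 + random.gauss(0, 5) if label else 40 + random.gauss(0, 5)
        else:
            age = 40 + random.gauss(0, 5) if label else 70 + random.gauss(0, 5)
        data.append(((signal, age / 100.0), label))
    return data

def train_logreg(data, epochs=200, lr=0.1):
    """Plain SGD logistic regression on two features."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1 / (1 + math.exp(-z))
            g = p - y
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum((w[0] * x[0] + w[1] * x[1] + b > 0) == bool(y)
               for x, y in data)
    return hits / len(data)

# Train where age almost perfectly tracks the label (biased sampling)...
model = train_logreg(make_data(2000, 0.95))
# ...then test on a cohort where the shortcut no longer holds.
acc_biased = accuracy(model, make_data(1000, 0.95))
acc_shifted = accuracy(model, make_data(1000, 0.5))
print(f"in-distribution accuracy: {acc_biased:.2f}")
print(f"shifted-cohort accuracy:  {acc_shifted:.2f}")
```

The shortcut is statistically valid on the biased training set, which is exactly why ordinary training finds it – and exactly why it breaks when the deployment population differs.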

Have autonomous killer drones really hunted their first humans in war?

You may have seen headlines saying autonomous killer robots may have attacked and possibly slain their first humans in war. Where did that come from?

A 550-odd-page UN report chronicling the end of the Second Libyan Civil War, which ran from 2014 to 2020, was published in March. Over the past few days, it has caught the attention of the machine-learning community, policy experts, and journalists. A small paragraph in the document described an attack on soldiers using a mix of remote-controlled drones and autonomous weapons.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 and other loitering munitions,” the UN’s dossier stated. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

Haftar-affiliated forces are fighters aligned with renegade military commander Khalifa Haftar, who launched an assault on Libya’s capital Tripoli in 2019, and the STM Kargu-2 is a deadly dive-bomb drone built by Turkey. This device, like other loitering munitions, can be thought of as a patient cruise missile: it stays over a set area for long periods and attacks whatever comes near that it is programmed to take out.

The report added that these pro-Haftar forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and “suffered significant casualties.”

But as The Verge's James Vincent noted, the report doesn’t specifically say that soldiers were killed by autonomous weapons running purely under their own artificial instinct.

What's more, those aforementioned loitering munitions have been around for years, so is it really the first time so-called killer robots have attacked humans? As Ulrike Franke, a senior policy fellow of the European Council on Foreign Relations, put it: "It seems to me that what's new here isn't the event, but that the UN report calls them lethal autonomous weapon systems."

In other words: more detail is needed. And rather than this being the first time autonomous robots have killed people on a battlefield, this is the UN sticking a label on loitering munitions.

DARPA wants to fund AI research studying information warfare

The Pentagon's boffinry nerve-center DARPA is looking to fund a project that will build an open-source machine-learning-based system that can measure how authoritarian regimes control their corners of the internet.

The proposed system, dubbed Measuring the Information Control Environment (MICE), “will measure how countries censor, block, or throttle specific internet-based activities [and] will also try to determine the technical capabilities that countries use to enable such repressive activities.”

Algorithms are expected to monitor changes to content shared online in the form of text, images, audio, and video, and inspect things like packet routing, IP address filtering, and domain name resolution to figure out who is controlling the information.
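DARPA has not published MICE's design, but one simple signal a system like this could compute from DNS measurements is answer divergence: comparing the IP addresses an in-country resolver returns against a trusted baseline, and flagging domains whose answers were dropped or rewritten. The sketch below is purely hypothetical – the function name, classification labels, and all domains and addresses are invented for illustration (the IPs are documentation ranges, not real services).

```python
def dns_divergence(baseline: dict, in_country: dict) -> dict:
    """Classify each domain by comparing resolver answers.

    baseline / in_country map domain -> set of answer IPs;
    an empty set means the query timed out or returned no answer."""
    report = {}
    for domain, expected in baseline.items():
        answers = in_country.get(domain, set())
        if not answers:
            report[domain] = "blocked (no answer)"
        elif answers & expected:
            report[domain] = "consistent"
        else:
            report[domain] = "diverging (possible injection)"
    return report

# Invented example measurements for three made-up domains.
baseline = {
    "example.org":  {"93.184.216.34"},
    "news.example": {"203.0.113.7"},
    "blog.example": {"198.51.100.9"},
}
in_country = {
    "example.org":  {"93.184.216.34"},   # matches the baseline
    "news.example": set(),               # query silently dropped
    "blog.example": {"192.0.2.1"},       # rewritten to a block page
}
print(dns_divergence(baseline, in_country))
```

A real measurement system would of course need vantage points inside each country, repeated queries to separate transient failures from policy, and similar comparisons for the packet-routing and IP-filtering signals the solicitation mentions.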

DARPA is looking to splash out a total of $1m for the project, according to MeriTalk, a government IT blog. The contract opportunity was posted this month, and you can find out more about it here. The deadline for submitting a MICE proposal is June 30. ®
