
AI algorithms uncannily good at spotting your race from medical scans, boffins warn

Plus: British MP wants to ban AI deepfake smut tools

In brief Neural networks can correctly guess a person’s race just by looking at their medical x-rays, and researchers have no idea how the software does it.

There are biological features that can give clues to a person’s ethnicity, like the colour of their eyes or skin. But beneath all that, it’s difficult for humans to tell race from a scan of someone’s bones or organs. That’s not the case for AI algorithms, according to a study that has not yet been peer reviewed.

A team of researchers trained five different models on x-rays of different parts of the body, including the chest and hands, with each image labelled according to the patient’s race. The machine-learning systems were then tested on how well they could predict someone’s race given just their medical scans.

They were surprisingly accurate. The worst-performing model picked the right answer 80 per cent of the time, and the best did so 99 per cent of the time, according to the paper.
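To make the setup concrete, here is a minimal sketch, in Python with PyTorch, of the kind of pipeline described. It is not the researchers’ actual code: it fine-tunes an off-the-shelf image classifier on x-rays labelled with each patient’s recorded race, then checks how often it predicts that label on held-out scans. The dataset directory and folder-per-label layout are hypothetical stand-ins.

```python
# Sketch only: fine-tune a stock classifier on labelled x-rays, then measure
# top-1 accuracy on a held-out test set. Paths and labels are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # x-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects folders like xray_data/train/<label>/*.png, one folder per label
train_set = datasets.ImageFolder("xray_data/train", transform=transform)
test_set = datasets.ImageFolder("xray_data/test", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                       # train
    model.train()
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

model.eval()                                 # evaluate: top-1 accuracy
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"accuracy: {correct / total:.1%}")
```

The unsettling part, per the paper, is not that such a pipeline can be built but that the resulting accuracy is so high and so hard to explain or suppress.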

"We demonstrate that medical AI systems can easily learn to recognise racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate," the team warns [PDF].

"We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalises to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice."

Ban AI nudity tools, says British MP

Maria Miller, a Member of Parliament for Basingstoke for the Conservative Party, reckons machine learning algorithms that generate fake nude images should be banned.

These so-called “deepfake” images have been churned out using AI software for years. Several tools on the internet allow perverts to feed the algorithms a picture of someone and get a fake naked image of them in return. The face is kept the same, but the body is made up.

Miller is known for being vocal about revenge porn. She believes that even if the computer-generated images are fake, the harm inflicted on victims is real.

“At the moment, making, taking or distributing without consent intimate sexual images online or through digital technology falls mostly outside of the law,” she told the Beeb.

"It should be a sexual offence to distribute sexual images online without consent, reflecting the severity of the impact on people's lives." Miller wants to raise the issue in a parliamentary debate, and introduce new legislation to ban deepfake-making software in the UK’s upcoming Online Safety Bill.

AI researchers at Facebook want to pry into your secret encrypted chats without decrypting them

Facebook has employed a team of AI engineers to figure out ways to analyze encrypted messages without decrypting them first.

Homomorphic encryption could help the social media biz sniff through WhatsApp chats to collect data that can be used to better target users with adverts, according to The Information [paywalled].

Facebook could, in theory, use homomorphic encryption techniques to figure out what products and services people are interested in. Adverts for those goods could then pop up whenever they logged into their social media accounts.
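For a sense of what computing on encrypted data actually means, here is a toy sketch in Python, and nothing to do with whatever Facebook may be building. It uses the Paillier cryptosystem, a classic additively homomorphic scheme: multiplying two ciphertexts produces an encryption of the sum of their plaintexts, so a server can do arithmetic on numbers it never sees. The primes are deliberately tiny and insecure, purely for illustration.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Multiplying ciphertexts adds the underlying plaintexts.
import math
import random

p, q = 293, 433                      # toy primes; real keys use ~1024-bit primes
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)         # part of the private key

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # other private key component

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Two values encrypted separately...
c1, c2 = encrypt(17), encrypt(25)
# ...can be added by a party holding only the ciphertexts and public key.
c_sum = (c1 * c2) % n_sq
assert decrypt(c_sum) == 42          # 17 + 25, recoverable only by the key holder
print(decrypt(c_sum))
```

Fully homomorphic schemes, the kind reportedly of interest here, go further and support arbitrary computation on ciphertexts, at a steep performance cost.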

The effort appears to be led by Kristin Lauter, a cryptography expert who recently left Microsoft after two decades to join Facebook as head of its West Coast AI research group. A Facebook spokesperson, however, told the publication that it is "too early for us to consider homomorphic encryption for WhatsApp at this time." ®
