Google Cloud's AI recog code 'biased' against black people – and more from ML land

Including: Yes, that nightmare smart toilet that photographs you mid... er, process

Roundup Here's your latest summary of recent machine-learning developments.

Google Cloud’s computer vision algos are accused of being biased: An experiment probing Google’s commercial image-recognition models, via its Vision API, suggested the system’s training data may be biased.

Algorithm Watch fed an image of someone with dark skin holding a temperature gun into the API, and the object was labeled as a “gun.” But when a photo of someone with fair skin holding the same object was fed into the cloud service, the temperature gun was recognized as an “electronic device.”

To verify the difference in labeling was caused by the difference in skin color, the experiment was repeated with the image of the darker-skinned person tinted using a salmon-colored overlay. Google Cloud’s Vision API said the temperature gun in the altered picture was, bizarrely, a “monocular.”

Tracy Frey, director of product strategy and operations at Google, apologized and called the results “unacceptable,” though she denied the mistake was down to “systemic bias related to skin tone.”

“Our investigation found some objects were mis-labeled as firearms and these results existed across a range of skin tones. We have adjusted the confidence scores to more accurately return labels when a firearm is in a photograph,” she told Algorithm Watch.
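For those curious how such a probe is run, here’s a minimal sketch using Google’s official Python client for the Vision API. It assumes the google-cloud-vision package is installed and credentials are configured; the image filename is a placeholder.

```python
# Minimal sketch: asking the Vision API to label an image, assuming the
# google-cloud-vision client is installed and credentials are set up.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder filename: any photo of a hand holding a temperature gun
with open("hand_with_thermometer.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each label comes with a confidence score between 0 and 1
    print(f"{label.description}: {label.score:.2f}")
```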

Intel and Georgia Tech win four-year DARPA AI contract: Research has shown that adversarial examples, maliciously crafted inputs, can fool machine-learning systems into making wrong decisions, such as mistaking toasters for bananas and vice versa. So far, adversarial examples that hoodwink one particular AI system typically fail to trick other systems, even similar ones, because each example is narrowly tailored to its target.
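To give a flavor of how such attacks are crafted, here’s a minimal sketch of the fast gradient sign method (FGSM), one well-known recipe for generating adversarial examples. The model, input image, and label are assumed to be supplied by the reader; this is an illustration of the general idea, not the specific techniques Intel or DARPA will study.

```python
# Minimal FGSM sketch in PyTorch. `model`, `image` (a tensor in [0, 1]),
# and `true_label` are assumed to exist; this illustrates the general idea
# of an adversarial example, not any specific GARD research.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```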

However, DARPA anticipates more general attacks, and set up the Guaranteeing Artificial Intelligence Robustness against Deception (GARD) program to fund defenses against adversarial examples, including future ones able to fool multiple similar machine-learning systems at once. Now, researchers at Intel and the Georgia Institute of Technology are leading that effort.

“Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning models,” it announced this week.

The eggheads will look at how to make machine-learning models more robust against adversarial attacks in realistic settings, and how to prepare for “potential future attacks.”

UCSD to analyze coronavirus lung scans with machine learning: AI researchers at the University of California, San Diego are studying chest X-rays for telltale signs of pneumonia associated with the COVID-19 coronavirus.

It’s unclear whether COVID-19 lung infections are particularly distinctive compared to other diseases, so it’s difficult to use machine learning as a diagnostic tool.

But researchers at UCSD can help combat the disease by suggesting patients with early signs of pneumonia be tested for COVID-19. “Patients may present with fever, cough, shortness of breath, or loss of smell,” Albert Hsiao, an associate professor of radiology at UCSD’s School of Medicine, told The Register.

“Depending on the criteria, they may or may not be eligible for RT-PCR testing for COVID-19. False negative rate on RT-PCR is estimated around 70 per cent in some studies, so it can be falsely reassuring.

“However, if we see signs of COVID-19 pneumonia on chest x-ray, which may be picked up by the AI algorithm, we may decide to test patients with RT-PCR who have not yet been tested, or re-test patients who have had a negative RT-PCR test already. Some patients have required 4 or more RT-PCR tests before they ultimately turn positive, even when x-ray or CT already show findings.”
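For a rough idea of what such a screening model could look like, below is a minimal sketch that fine-tunes a pretrained ResNet to flag pneumonia on chest X-rays. The architecture and preprocessing are illustrative assumptions on our part, not the UCSD team’s actual system.

```python
# Minimal sketch of a pneumonia-screening classifier for chest X-rays,
# fine-tuning an ImageNet-pretrained ResNet. Illustrative only; the UCSD
# model's architecture isn't described in this article.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: pneumonia, normal

# Chest X-rays are grayscale; repeating the channel three times fits the
# RGB input that ImageNet-pretrained backbones expect.
xray = torch.rand(1, 1, 224, 224)        # stand-in for a preprocessed scan
logits = model(xray.repeat(1, 3, 1, 1))  # shape (1, 2)
probs = logits.softmax(dim=1)            # class probabilities
```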

You can read more about that study here.

You don’t want to know what this AI smart toilet uses to identify its users: Get this, a team of researchers at Stanford University have built a smart toilet packed full of cameras and sensors to analyze your urine and stool samples with machine learning algorithms.

If that’s not bad enough, this high-tech lav is able to capture the data and link it back to the right user by identifying people from pictures of their, er... let’s just refer to the study’s abstract.

“Each user of the toilet is identified through their fingerprint and the distinctive features of their anoderm, and the data are securely stored and analysed in an encrypted cloud server.”

Sod it, if you haven’t worked out what that means, it’s a picture of your butthole. Yes, as soon as you sit down to do your business on this toilet, the data will be stored under the right user profile, since multiple people may use the same toilet in your house. All of it will be sent to a cloud server somewhere and stored so that you can monitor the health of your bowels.

“We know it seems weird, but as it turns out, your anal print is unique,” said Sanjiv Gambhir, a radiology professor at Stanford University, who led the study. “The scans — both finger and nonfinger — are used purely as a recognition system to match users to their specific data. No one, not you or your doctor, will see the scans.”
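Mechanically, a recognition system like this boils down to comparing a fresh scan’s feature vector against each enrolled user’s and picking the closest match. Here’s a minimal sketch; the embedding step that turns a scan into a vector is assumed, and the Stanford team’s actual pipeline may well differ.

```python
# Minimal sketch of matching a scan to an enrolled user via embedding
# similarity. How the embedding is computed is assumed; this is not the
# Stanford system's actual pipeline.
import numpy as np

def identify_user(scan_embedding, enrolled):
    """enrolled maps user name -> stored embedding vector."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Return the enrolled user whose stored embedding is most similar
    return max(enrolled, key=lambda name: cosine(scan_embedding, enrolled[name]))
```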

He believes that the AI toilet might help people with irritable bowel syndrome, prostate cancer, or kidney failure. Seung-Min Park, a senior research scientist at Stanford University's School of Medicine, told El Reg that they aim to sell their odd contraption for "a few hundreds of dollars".

The researchers are already working on “version 2,” which will offer more features, such as measuring glucose levels in urine or detecting blood in stool samples.

Read the shocking paper for a good giggle or two. ®
