Google's cloudy image recognition is easily blinded, say boffins

Hooray for humans! We can pick out images too obscure for Google's AI


Google's Cloud Vision API is easily blinded by the addition of a little noise to the images it analyses, say a trio of researchers from the Network Security Lab at the University of Washington, Seattle.

Authors Hossein Hosseini, Baicen Xiao and Radha Poovendran have hit arXiv with a preprint titled Google’s Cloud Vision API Is Not Robust To Noise (PDF), which says: “In essence, we found that by adding noise, we can always force the API to output wrong labels or to fail to detect any face or text within the image.”

The authors explain that adding certain types of noise to an image reliably causes the Cloud Vision API to misanalyse it. The image at the top of this story shows the false results the API returned. It doesn't take much noise, either: the authors found an average of 14.25 per cent "impulse noise" got the job done.

[Image caption: Google's Cloud Vision API getting stuff wrong once noise is added]
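For readers wondering what "impulse noise" amounts to in practice, here's a minimal sketch in Python, assuming a uint8 NumPy image array. The function name is ours, not the researchers', and the default density simply mirrors the 14.25 per cent average the paper reports.

```python
import numpy as np

def add_impulse_noise(image: np.ndarray, density: float = 0.1425,
                      seed=None) -> np.ndarray:
    """Set a `density` fraction of pixels to pure black or white
    (salt-and-pepper), the 'impulse noise' the paper describes."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    corrupt = rng.random(image.shape[:2]) < density  # which pixels to hit
    salt = rng.random(image.shape[:2]) < 0.5         # half white, half black
    noisy[corrupt & salt] = 255
    noisy[corrupt & ~salt] = 0
    return noisy
```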

At this point readers may ask why this matters: the authors suggest that deliberately adding noise to images could be an attack vector because “an adversary can easily bypass an image filtering system, by adding noise to an image with inappropriate content.”
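To illustrate the kind of filter the authors have in mind: Google's Vision API offers SafeSearch detection for flagging inappropriate content. A hypothetical before-and-after check, assuming the google-cloud-vision Python client and valid credentials, might look like this; it's an illustration, not the researchers' actual test harness.

```python
from google.cloud import vision

def safe_search_verdict(jpeg_bytes: bytes) -> str:
    """Return the API's 'adult' likelihood (VERY_UNLIKELY .. VERY_LIKELY)."""
    client = vision.ImageAnnotatorClient()  # needs Google Cloud credentials
    response = client.safe_search_detection(
        image=vision.Image(content=jpeg_bytes))
    return response.safe_search_annotation.adult.name

# If the paper's finding holds, the verdicts may disagree.
print(safe_search_verdict(open("original.jpg", "rb").read()))
print(safe_search_verdict(open("noisy.jpg", "rb").read()))
```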

This matters because bad actors could easily work out which images are subject to machine analysis. For example, The Register recently learned of a drone designed to photograph supermarket shelves so that image analysis can automatically figure out which stock needs to be re-ordered. An attack that corrupted such a trove of images, leaving shelves empty, could see customers decide to shop elsewhere.

And let's not even start to think what would happen if photos of wanted criminals were corrupted so that known villains could walk in front of CCTV cameras with impunity.

The researchers also strike a small blow for humanity: they found the images that fool Google remain easily recognised by real, live, flesh-and-blood people.

In related news, Google made its Cloud Speech API and Automatic Speech Recognition service generally available. Here's hoping it kant bee fuelled buy ad-ding some noize to speetch. ®

