Research finds data poisoning can't defeat facial recognition

Someone can just code an antidote and you're back to square one

If there were ever a reason to think data poisoning could fool facial-recognition software, a recently published paper argues that reasoning is bunk.

Data poisoning software alters images by manipulating individual pixels to trick machine-learning systems. The changes are invisible to the naked eye, but if effective they make the tweaked pictures useless to facial-recognition tools – whatever is in the image can't be matched. That could be handy for photos uploaded to the web, for example, to avoid being identified. It turns out this code may not be that effective.
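To make the idea concrete, here's a minimal Python sketch of what a pixel-level perturbation budget looks like. It's purely illustrative: the function name, the epsilon value, and the use of random noise are our own assumptions, and real tools like Fawkes and LowKey compute targeted perturbations against face-embedding models rather than sprinkling noise at random.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 2.0) -> np.ndarray:
    """Add a small, bounded change to an 8-bit RGB image.

    Illustrative only: Fawkes and LowKey optimise their perturbations
    against face-recognition models; here the change is random, just to
    show how tiny an "invisible to the eye" budget is.
    """
    noise = np.random.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image.astype(np.float64) + noise, 0, 255)
    return poisoned.astype(np.uint8)

# A change of at most ~2/255 per channel is imperceptible to a human viewer.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
cloaked = perturb_image(img)
print(np.abs(cloaked.astype(int) - img.astype(int)).max())  # no more than 2
```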

Researchers at Stanford University, Oregon State University, and Google teamed up for a paper in which they single out two particular reasons why data poisoning won't keep people safe. First, the applications written to "poison" photographs are typically freely available online and can be studied to find ways to defeat them. Second, there's no reason to assume a poisoned photo will be effective against future recognition models.

Far from protecting anyone, data poisoning to prevent facial recognition gives users a false sense of security, the paper's authors said, and could actually harm people who otherwise wouldn't have posted their photographs online.

The researchers faced off against two data poisoning programs: Fawkes and LowKey, both of which subtly alter images with pixel-level changes that, while invisible to humans, are enough to confuse facial-recognition software. Both are freely available online, and that's problem number one, the authors said.

"An adaptive model trainer with black-box access to the [poisoning method] employed by users can immediately train a robust model that resists poisoning," they said. In other words, you can train a recognition system to ignore the poisoning.

Given that both programs are freely available, the paper says it stands to reason facial-recognition companies are already aware of poisoning tools like Fawkes and LowKey. As the researchers showed, black-box access to the poisoner's code was all they needed to defeat it, and there's no reason to assume the major players haven't already done the same.

There's another problem with data poisoning, though, and that is time. 

"We find there exists an even simpler defensive strategy: model trainers can just wait for better facial recognition systems, which are no longer vulnerable to these particular poisoning attacks," the paper stated. 

In the cases the researchers examined, they didn't even have to wait that long: both Fawkes and LowKey were ineffective against versions of facial recognition software released within a year of their appearance online (the same month for LowKey).

There's no arms race between poisoning and facial recognition to be found here, the boffins said. Poisoning attacks are only effective once, can likely be countered with nothing more than black-box access to the poisoning tool, and if that fails, all the model trainer has to do is wait for an update.

There have been plenty of experiments on fooling facial recognition, with varying levels of success, and it looks as though data poisoning is yet another unsuccessful attempt at protecting online privacy.

"In light of this, we argue that users' only hope is a push for legislation that restricts the use of privacy-invasive facial recognition systems," the paper stated. ®
