LowKey cool: This web app will tweak your photos to flummox facial-recognition systems, apparently

Boffins develop improved image poisoning technique to preserve privacy

A group of computer scientists has released a privacy-focused web application to poison people's online images so they confuse commercial facial recognition systems.

The application, called LowKey, is intended to protect people from unauthorized surveillance. It's based on an adversarial attack technique developed by University of Maryland boffins Valeriia Cherepanova, Micah Goldblum, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein, and US Naval Academy researcher Harrison Foley. It alters images so facial recognition systems can't easily use the data to find the depicted person in another image.

The researchers describe their work in a paper titled, "LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition," distributed via arXiv and scheduled to be presented at the International Conference on Learning Representations (ICLR) 2021 in May.

The authors say that the facial recognition systems deployed by government agencies, contractors, and private companies depend on massive databases of images harvested from the internet.

Screenshot from the LowKey paper. First row: original images; second row: images protected with LowKey (medium); third row: images protected with LowKey (large)

"Practitioners populate their databases by hoarding publicly available images from social media outlets, and so users are forced to choose between keeping their images outside of public view or taking their chances with mass surveillance," they explain. "LowKey is the first such evasion tool that is effective against commercial facial recognition APIs."

There have been other such systems proposed, notably Fawkes and image classification attack Camera Adversaria [PDF], to say nothing of physical camouflage techniques like CV Dazzle and the new normal of coronavirus masks.

However, Cherepanova, Goldblum, and their fellow academics contend that Fawkes rests on several flawed assumptions about the way high-performance facial recognition systems are trained, the size of the dataset used for testing, and the emphasis on single-result accuracy rather than ranked lists. They also note that the Fawkes team has not yet released an app or web tool, and that most social media users are unlikely to bother running its code.

What's more, they claim LowKey performs far better than Fawkes in a test that measures whether "probe" (test) images of a person can be matched against the gallery images of that same person held in a facial recognition system's database.

Less than 1%

"We observe that LowKey is highly effective, and even in the setting of rank-50 accuracy, Rekognition can only recognize 2.4 per cent of probe images belonging to users protected with LowKey," the research paper says, where rank-50 refers to finding a true face match within the top 50 results. "In contrast, Fawkes fails, with 77.5 per cent of probe images belonging to its users recognized correctly in the rank-1 setting and 94.9 per cent of these images recognized correctly when the 50 closest matches are considered."

In the rank-1 test – where the facial recognition algorithm is asked to match an image to a single person from its database – Rekognition gets it right 93.7 per cent of the time with a clean image but only 0.6 per cent of the time with a LowKey-processed image.
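For a sense of how such rank-k figures are calculated, here is a minimal sketch of computing identification accuracy from face-embedding vectors. It is not the researchers' evaluation code: the function name, the cosine-similarity metric, and the array layout are assumptions made for illustration.

```python
import numpy as np

def rank_k_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids, k=50):
    """Fraction of probe images whose true identity appears among the k
    nearest gallery images by cosine similarity (illustrative sketch only)."""
    gallery_ids = np.asarray(gallery_ids)
    # Normalise features so dot products equal cosine similarity
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = p @ g.T                          # shape: (num_probes, num_gallery)
    hits = 0
    for i, row in enumerate(sims):
        top_k = np.argsort(row)[::-1][:k]   # indices of the k closest gallery matches
        if probe_ids[i] in gallery_ids[top_k]:
            hits += 1
    return hits / len(probe_ids)
```

With k set to 1 this corresponds to the rank-1 setting quoted above; with k set to 50, a probe counts as recognized if its owner appears anywhere in the top 50 matches.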

"witch" Effigy burns..

Sick of AI engines scraping your pics for facial recognition? Here's a way to Fawkes them right up

READ MORE

LowKey does even better against the Microsoft Azure Face Recognition API, which recognizes only 0.1 per cent of probe images from a LowKey-protected gallery. By contrast, the paper says, the Azure system can recognize more than 74 per cent of images altered with Fawkes, while managing about 90 per cent accuracy on clean images.

LowKey is effective, the authors claim, because it alters gallery images (those that end up in the facial recognition data set) so they don't match the probe (test) images. It does so by generating a perturbed image with feature vectors that differ substantially from the original, but in a way that makes such differences hard to perceive.

"Intuitively, this means that machines interpret the facial features in the original and perturbed images very differently, while humans interpret them nearly the same," explained Cherepanova and Goldblum in an email to The Register today. "Thus, humans can still recognize who is in the perturbed image, but the facial recognition system’s representation ('feature vector') lies really far away from where it should be."

Cherepanova, a doctoral student in applied mathematics at the University of Maryland, and Goldblum, a postdoctoral researcher at UMD, told The Register that they hope LowKey will be integrated into popular software, particularly social media platforms, though no such discussions have yet taken place.

"We believe that in order for such a tool to become widely used, it will need to be convenient for users," they said. "Adoption by companies like Facebook of LinkedIn would go a long way to this end."

Cherepanova and Goldblum argue that while the most sinister-sounding aspect of facial recognition is mass surveillance, the technology is already being used more narrowly by police departments to arrest protesters, and can easily be abused outside of mass-surveillance scenarios.

"Furthermore, the behavior of machine learning systems is not interpretable to humans, so facial recognition systems, despite being fast and fairly accurate, can make mistakes and can exhibit a high degree of racial discrimination unbeknownst to the organizations using them," they explain. "There are many reasons why ordinary people would not want to be exposed to these systems."

They say they hope their work and related projects will help convince people to share less information online even as tools like LowKey help protect the images people choose to share. They also acknowledge that systems such as LowKey can degrade the quality of the images and they hope further research will improve output quality. But as they point out in their paper, LowKey is not 100 per cent effective and may be defeated by specially engineered systems.

"The difficulty of fooling these machine learning systems should make people consider releasing less personal data to the public," said Cherepanova and Goldblum. ®
