If it were possible to evade facial-recognition systems using just subtle makeup, it might look something like this

Interested in poking away at machine-learning models? This academic study could be a good start

Makeup carefully applied to the forehead, cheeks, and nose may help you evade facial recognition systems, judging from these computer scientists' experimental work.

Their described method is a form of adversarial attack, which generally involves subtly tweaking input data to trick machine-learning algorithms into misidentifying things in images, text, or audio.

In this case, the goal is to prevent a facial-recognition system from identifying you. In the past we've seen stickers you can put on your face or paper glasses you can wear to fool these kinds of technologies, though they aren't very inconspicuous. Guards, operators, or anyone else nearby will probably realize something's up when you walk by with this stuff on you.

That said, perhaps you won’t stick out as much if you wear a t-shirt with an adversarial print, though, again, trained observers may be wise to your caper and stop you.

An adversarial attack described in a pre-print paper, released via arXiv this week by folks at Ben-Gurion University of the Negev in Israel and Japanese IT giant NEC, is claimed to be much more discreet. So much so, it shouldn't be obvious you're trying to hoodwink an AI facial-recognition system.

“In this paper, we propose a dodging adversarial attack that is black-box, untargeted, and based on a perturbation that is implemented using natural makeup,” the researchers wrote. “Since natural makeup is physically inconspicuousness, its use will not raise suspicion. Our method finds a natural-looking makeup, which, when added to an attacker’s face, hides his/her identity from a face recognition system.”

In their experiment – which we stress is a lab-level project at this stage – there is a black-box AI system that scans faces for a banned person and raises an alarm if they are spotted. The goal is to apply makeup to this banned person so that they evade identification by the facial-recognition model.

By black-box, we mean that the researchers have no access to or idea of the inner workings of the facial-recognition algorithm they’re trying to outwit, which is how it would be if they were attacking a real-world system. Instead, they employ a surrogate system as a substitute for the black-box AI software.

In this figure taken from the paper, natural-looking adversarial makeup makes the facial-recognition system mark a known participant as unknown. Credit: Guetta et al

First, they feed photographs of the banned person and a random person into the surrogate model to generate a heat map. The map highlights areas of the face that are most important for the substitute system in identifying a particular person’s face. Next, makeup is applied to those regions to alter their appearances to hopefully trick the surrogate model. The nose can be made to look thinner with powder or the cheekbones more pronounced with contouring, for example. This process is repeated until the surrogate is fooled.
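That heat-map-guided loop boils down to a simple search procedure. Below is a minimal, hypothetical Python sketch of the idea; the FaceEmbedder, region_heatmap, and apply_makeup helpers and the 0.4 threshold are illustrative stand-ins rather than the researchers' code, and in the actual experiment the cosmetics were applied to real faces rather than edited pixels.

```python
import numpy as np

REGIONS = ["forehead", "cheeks", "nose"]   # facial areas targeted in the study
DODGE_THRESHOLD = 0.4                      # assumed surrogate match threshold

class FaceEmbedder:
    """Placeholder for the surrogate face-recognition model (a FaceNet-style
    network in the paper) that maps a face image to an identity embedding."""
    def embed(self, image: np.ndarray) -> np.ndarray:
        return image.mean(axis=(0, 1))     # stub: swap in a real model here

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def region_heatmap(image: np.ndarray, embedder: FaceEmbedder) -> dict:
    """Stub heat map: score each region by how much masking it shifts the embedding."""
    base = embedder.embed(image)
    band = image.shape[0] // len(REGIONS)
    scores = {}
    for i, region in enumerate(REGIONS):
        masked = image.copy()
        masked[i * band:(i + 1) * band] = 0.0
        scores[region] = 1.0 - cosine(base, embedder.embed(masked))
    return scores

def apply_makeup(image: np.ndarray, region: str) -> np.ndarray:
    """Stub for the cosmetic step: subtly darken the chosen region."""
    out = image.copy()
    i = REGIONS.index(region)
    band = image.shape[0] // len(REGIONS)
    out[i * band:(i + 1) * band] *= 0.9
    return out

def dodge(attacker_img: np.ndarray, reference_img: np.ndarray,
          embedder: FaceEmbedder, max_rounds: int = 10) -> np.ndarray:
    """Reapply makeup to the most influential region until the surrogate no
    longer matches the attacker against their own reference photo."""
    ref_emb = embedder.embed(reference_img)
    img = attacker_img.astype(float)
    for _ in range(max_rounds):
        if cosine(embedder.embed(img), ref_emb) < DODGE_THRESHOLD:
            return img                     # surrogate fooled; stop here
        heat = region_heatmap(img, embedder)
        img = apply_makeup(img, max(heat, key=heat.get))
    return img
```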

The researchers are banking on the black-box facial-recognition system having similar weaknesses to the substitute one. Whatever hoodwinks the surrogate system should stump the black-box model, too, if the attack is to work. When the person with the adversarial makeup walks past a camera, its facial-recognition software should fail to match their face.
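For context, an embedding-based recognition pipeline of the kind used in the test typically flags a face only when its embedding sits close enough to a watchlist entry. The sketch below, with hypothetical names and an assumed cosine-similarity cut-off of 0.5, shows why a successful dodge means being reported as unknown: the makeup pushes every match score under the threshold.

```python
from typing import Dict
import numpy as np

def identify(probe_emb: np.ndarray, watchlist: Dict[str, np.ndarray],
             threshold: float = 0.5) -> str:
    """Return the best-matching watchlist identity, or 'unknown' if no gallery
    embedding is similar enough to the probe face's embedding."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    best_id, best_sim = "unknown", threshold
    for identity, gallery_emb in watchlist.items():
        sim = cosine(probe_emb, gallery_emb)
        if sim >= best_sim:               # dodging keeps every sim below threshold
            best_id, best_sim = identity, sim
    return best_id
```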

In the small experiment performed on ten men and ten women aged between 20 and 28, the facial-recognition cameras – powered by the LResNet100E-IR,ArcFace@ms1m-refine-v2 model – were apparently only able to correctly identify banned people wearing the adversarial makeup 1.22 per cent of the time. The surrogate used was the FaceNet model.

The paper is just a proof-of-concept, and didn't test commercially deployed applications. The results should be taken with a pinch of salt. It looks as though the facial-recognition model the researchers used in their test wasn't very accurate to begin with: it correctly identified participants wearing no makeup at all only 47.57 per cent of the time, and when cosmetics were applied randomly that figure dropped to 33.73 per cent, according to the paper.

Still, the researchers claimed the drop in performance when the adversarial makeup was applied on people “is below a reasonable threshold of a realistic operational environment.” The Register has asked the boffins for further comment and info. ®