Face masks hamper the spread of coronavirus. Know what else they hamper? Facial-recognition systems (except China's)

Uncle Sam tests AI models with pics of immigrants, travelers. Wait, what?

Face masks worn to reduce the spread of the COVID-19 coronavirus typically decrease the accuracy of commercial facial recognition algorithms by up to 50 per cent, according to an investigation by America's technical standards watchdog, NIST.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, a computer scientist at NIST who cowrote the investigation's final report. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks.”

Computer-vision algorithms learn to recognize faces by picking out features, such as the distance between the eyes or the shape of a jaw. When parts of the face are covered up by face masks, it’s not surprising that machines perform worse. The drop in accuracy depends on the algorithm – some are more robust than others.
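One-to-one matching of this kind is commonly implemented by turning each face into a numeric feature vector, or embedding, and accepting the match only if the two vectors are similar enough. The sketch below illustrates that idea in miniature; it is not any vendor's actual pipeline, and the embedding values and 0.9 threshold are made up for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for the feature vectors a real model would
# extract from face images (these numbers are invented).
unmasked = np.array([0.9, 0.1, 0.4, 0.2])
same_person_masked = np.array([0.8, 0.3, 0.5, 0.2])  # a mask perturbs some features
different_person = np.array([0.1, 0.9, 0.2, 0.7])

THRESHOLD = 0.9  # illustrative decision threshold, not NIST's

print(cosine_similarity(unmasked, same_person_masked) >= THRESHOLD)  # True
print(cosine_similarity(unmasked, different_person) >= THRESHOLD)    # False
```

When a mask hides enough of the features the model relies on, the genuine pair's similarity drops below the threshold and the system wrongly rejects the match – which is exactly the failure mode NIST measured.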

NIST probed 89 algorithms from organizations including Intel, Samsung, Acer, Panasonic, and various universities around the world. Given an image of someone wearing what NIST called a digital mask – one pasted onto the picture – each AI system was told to identify the individual by matching them with an unmasked picture of the same person in a database. The task is described as one-to-one matching in the report [PDF].


The digital masks used to fool some software ... Credit: B. Hayes/NIST

NIST generated the digital masks because it proved difficult to collect a large, high-quality dataset of people photographed both with and without masks, so the institute drew synthetic masks onto images it had already collected.
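NIST shaped and positioned its synthetic masks using detected facial landmarks; as a rough sketch of the idea only, the toy function below just paints a flat light-blue band over the bottom fraction of an image array. The coverage fractions and RGB shade here are illustrative assumptions, not the report's parameters:

```python
import numpy as np

MASK_RGB = (106, 154, 217)  # a light blue, roughly the shade in NIST's examples

def apply_synthetic_mask(img: np.ndarray, coverage: float = 0.5) -> np.ndarray:
    """Overwrite the bottom `coverage` fraction of an H x W x 3 image with a
    flat-colored 'mask'. NIST's real masks followed facial landmarks; this toy
    version just paints a horizontal band for illustration."""
    out = img.copy()
    h = img.shape[0]
    top = int(h * (1.0 - coverage))
    out[top:, :, :] = MASK_RGB
    return out

face = np.full((100, 100, 3), 255, dtype=np.uint8)  # white placeholder "photo"
half = apply_synthetic_mask(face, coverage=0.5)  # covers nose, mouth and jaw
low = apply_synthetic_mask(face, coverage=0.3)   # covers mouth and jaw only
```

Varying the coverage and fill color in this way is how a test harness can probe the shape and color effects NIST reports later in its results.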

“Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3 per cent of the time. Masked images raised even these top algorithms’ failure rate to about 5 per cent, while many otherwise competent algorithms failed between 20 per cent to 50 per cent of the time,” according to NIST.
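The failure rates quoted above correspond to what NIST's face-recognition evaluations call the false non-match rate: the fraction of genuine, same-person comparisons that an algorithm rejects. Computing one is trivial once you have match scores; the scores below are invented to mirror the pattern NIST describes, with masked pairs tending to score lower:

```python
def false_non_match_rate(scores, threshold):
    """Fraction of genuine-pair similarity scores that fail to clear the threshold."""
    misses = sum(1 for s in scores if s < threshold)
    return misses / len(scores)

# Made-up genuine-pair scores; a real evaluation uses millions of comparisons.
unmasked_scores = [0.97, 0.95, 0.99, 0.96, 0.98]
masked_scores = [0.91, 0.84, 0.95, 0.88, 0.93]

print(false_non_match_rate(unmasked_scores, threshold=0.9))  # 0.0
print(false_non_match_rate(masked_scores, threshold=0.9))    # 0.4
```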

For its tests, the institute used two datasets made up of people applying for immigration benefits or entering the US, weirdly enough – or aptly, if you consider the possible future application of this sort of tech in America:

We used these algorithms with two large datasets of photographs collected in US governmental applications that are currently in operation: unmasked application photographs from a global population of applicants for immigration benefits and digitally-masked border crossing photographs of travelers entering the United States. Both datasets were collected for authorized travel or immigration processes.

The application photos (used as reference images) have good compliance with image capture standards. The digitally-masked border crossing photos (used as probe images) are not in good compliance with image capture standards given constraints on capture duration and environment. The application photos were left unmasked, and synthetic masks were applied to the border crossing photos. This mimics an operational scenario where a person wearing a mask attempts to authenticate against a prior visa or passport photo.

Together these datasets allowed us to process a total of 6.2 million images of 1 million people through 89 algorithms.

A closer look at the results revealed that error rates depend on a number of factors, including the overall shape of the digital mask as well as its color. Machines find it harder to recognize a face when the mask is black and covers the top of the nose and the bottom half of the face than when it is light blue and blocks only the mouth and jaw.

The accuracy also varies wildly. The best-performing algorithm, from DeepGlint, a computer-vision and AI startup based in China, correctly identified faces obscured by high-coverage light-blue digital masks just over 96 per cent of the time. Many others had error rates between 20 and 50 per cent, and there were a couple of anomalies, too, where algorithms failed at or near 100 per cent of the time.

NIST said it plans to examine facial-recognition algorithms that have been specifically trained to identify images of people wearing masks later this year. It’s also considering adding in the effects of patterns or multi-colored masks in future tests, we’re told. ®

Biting the hand that feeds IT © 1998–2022