
Facial-recognition algos vary wildly, US Congress told, as politicians try to come up with new laws on advanced tech

Most-accurate algorithms showed 'little to no bias', so nothing to fear, eh?

Vid A recent US government report investigating the accuracy of facial recognition systems across different demographic groups has sparked fresh questions on how the technology should be regulated.

The House Committee on Oversight and Reform held a hearing to discuss the dossier and surrounding issues on Wednesday. “Despite the private sector’s use of the technology, it’s just not ready for prime time,” said Rep Carolyn Maloney (D-NY), who chaired the meeting.

The report [PDF], published by America's National Institute of Standards and Technology (NIST) in December, reveals how accurate, or rather inaccurate, some of the latest state-of-the-art commercial facial recognition algorithms really are.

NIST tested 189 algorithms submitted by 99 developers across four datasets comprising 18.27 million images of 8.49 million people.

“Contemporary face recognition algorithms exhibit demographic differentials of various magnitudes,” the report said. “Our main result is that false positive differentials are much larger than those related to false negatives and exist broadly, across many, but not all, algorithms tested. Across demographics, false positive rates often vary by factors of ten to beyond 100 times. False negatives tend to be more algorithm-specific.”

In other words, “different algorithms perform differently,” explained Charles Romine, director of the Information Technology Laboratory at NIST and a witness at the hearing. The rate of misidentifications, whether false positives or false negatives, depends on the application.
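For the curious, the two error rates NIST measures per demographic group can be sketched in a few lines of Python. Everything below, the group names and the trial outcomes, is invented purely for illustration; it is not NIST's data or code:

```python
# Toy illustration of per-group false positive rate (FPR) and false
# negative rate (FNR). Each trial records: the demographic group, whether
# the two photos really show the same person, and what the algorithm said.
trials = [
    ("group_a", False, True),   # different people, matched: false positive
    ("group_a", True,  True),   # same person, matched: correct
    ("group_b", False, False),  # different people, no match: correct
    ("group_b", True,  False),  # same person, missed: false negative
]

def rates_by_group(trials):
    stats = {}
    for group, same_person, said_match in trials:
        s = stats.setdefault(group, {"fp": 0, "imp": 0, "fn": 0, "gen": 0})
        if same_person:
            s["gen"] += 1                 # genuine pair seen
            s["fn"] += (not said_match)   # true match missed
        else:
            s["imp"] += 1                 # impostor pair seen
            s["fp"] += said_match         # two different people "matched"
    return {g: {"FPR": s["fp"] / s["imp"], "FNR": s["fn"] / s["gen"]}
            for g, s in stats.items()}
```

NIST's finding, in these terms, is that FPR can differ between groups by factors of ten to beyond 100 for the same algorithm, while FNR differences are more algorithm-specific.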

The riskiest applications were those where false positives occurred in what Romine described as “one to many searches,” in which an image is run against a database of many images to look for a match. “False positives of one to many search is particularly important as the applications could include false accusations,” he said.
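A one-to-many search of the kind Romine describes boils down to comparing one probe face against every face in a gallery and flagging anything that scores above a threshold. Here is a minimal sketch; the embeddings, names, and threshold are all made up for illustration:

```python
# Sketch of a one-to-many search: a probe face embedding is scored against
# every gallery embedding, and any score at or above the threshold is
# reported as a candidate match. All numbers here are invented.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def one_to_many(probe, gallery, threshold=0.8):
    """Return every gallery identity whose similarity clears the threshold."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

gallery = {"suspect_1": [0.9, 0.1, 0.2], "suspect_2": [0.1, 0.9, 0.3]}
probe = [0.88, 0.15, 0.25]  # an innocent person's face can still clear the
                            # threshold against someone in the gallery --
                            # that is the false positive Romine warns about
hits = one_to_many(probe, gallery)
```

The danger is structural: the bigger the gallery, the more chances a single probe has to falsely match someone in it.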

For example, a high-risk one-to-many search would be matching people’s faces against a database of mugshots to look for suspected criminals. “This issue was not even on my radar until the ACLU study misidentified me,” said congressman Jimmy Gomez (D-CA), one of the 28 members of Congress incorrectly matched against a database of mugshots in an experiment conducted by the American Civil Liberties Union.

“I have no doubt that it misidentified me because of my color. The technology is fundamentally flawed,” he added.

The NIST report highlighted the already well-established fact that most facial recognition systems struggle to identify women, people of color, and the elderly, compared to white men. False positive rates were between two and five times higher for women than for men, and were highest for West African, East African, and East Asian people, according to the investigation.

It should be noted, however, that the error rates for identifying East Asian people were less of a problem for facial recognition systems developed in East Asian countries like China, suggesting that the distribution of demographics in training data plays a big part in determining accuracy.

But for the most accurate algorithms, identification problems across different demographics were diminished.

AI algorithms are rapidly improving over time

“Over the past year, I’ve seen headlines suggesting that facial recognition technology is inaccurate, inequitable, and invasive,” said Daniel Castro, a witness at the hearing and the veep and director of the Center for Data Innovation at the non-profit Information Technology and Innovation Foundation. “If that was true then I would be worried too, but it isn’t.”

“There are many facial recognition systems on the market; some perform better than others across sex, gender, race, and age," he said. "Notably, the most accurate algorithms NIST has evaluated showed little to no bias. These systems continue to get measurably better every year and they can outperform the average human.”

Castro urged Congress to continue supporting computer vision research and to fund federal deployment of facial recognition systems to improve security in federal buildings. He did, however, agree that the application of the technology was particularly important, and suggested Congress consider legislation that would require law enforcement to obtain a warrant to track people’s movements using geolocation data collected from facial recognition systems.


Brenda Leong, the director of AI and ethics at the Future of Privacy Forum, also agreed that the context in which facial recognition is used is paramount. “The level of certainty acceptable for verifying an individual’s identity when unlocking a mobile device is below the standard that should be required for verifying that an individual is included on a terrorist watch list,” she said.
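Leong's point, reduced to code, is that the match threshold is a per-application policy decision, not a property of the algorithm. The threshold values below are invented for illustration only:

```python
# Hypothetical per-application match thresholds, after Leong's argument:
# a phone unlock can tolerate false positives that a watch-list check cannot.
THRESHOLDS = {
    "phone_unlock": 0.80,      # low stakes: a wrong reject is an annoyance
    "watchlist_check": 0.999,  # high stakes: a wrong match is an accusation
}

def is_match(score, application):
    """Apply the certainty bar appropriate to the given application."""
    return score >= THRESHOLDS[application]
```

The same similarity score can thus be a "match" for unlocking a phone and a "no match" for flagging someone on a watch list.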

If the most accurate algorithms keep improving across challenging demographics like gender, race, and age, would it one day be acceptable to use facial recognition systems in riskier applications, the politicians wanted to know.

Meredith Whittaker, co-founder of AI Now, a research institute studying the social impacts of algorithms, said Congress should “halt the use of facial recognition in sensitive domains for private companies.”

Accuracy simply doesn’t seem to matter in some particularly concerning use cases, she opined. For example, Whittaker pointed out that algorithms used to analyse the facial expressions of job candidates to look for certain characteristics are not backed by scientific evidence.

Instead, they could create a “bias feedback loop”, she explained, in which the people who have already been rewarded and promoted become a model for the type of people you want to hire. For example, if it’s white men in these higher positions, then an AI could develop a confirmation bias that prefers other white men.
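The feedback loop Whittaker describes can be shown in a toy simulation: a screener that learns only from past hires ends up ranking candidates by how much they resemble those hires. The group labels and counts below are entirely made up:

```python
# Toy sketch of a "bias feedback loop": the model is just the base rate of
# each group among historical hires, so the dominant group gets preferred,
# which in turn feeds more of that group into future training data.
past_hires = ["group_a"] * 9 + ["group_b"] * 1

def learned_preference(history):
    """The 'model': how often each group appears among past hires."""
    return {g: history.count(g) / len(history) for g in set(history)}

def screen(candidates, prefs):
    """Rank candidates by resemblance to previous hires."""
    return sorted(candidates, key=lambda g: prefs.get(g, 0), reverse=True)

prefs = learned_preference(past_hires)
ranking = screen(["group_b", "group_a"], prefs)  # dominant group ranked first
```

Nothing in the loop checks whether group membership predicts job performance; the model simply echoes, and then amplifies, the historical imbalance.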

Congress has been mulling over evidence in the hopes of crafting federal policies to regulate the technology for months. It held two other facial recognition hearings last year in May and July.

“We have a responsibility to not only encourage innovation, but to protect the privacy and safety of American consumers,” Maloney said. “Our committee is committed to introducing and marking up common sense facial recognition legislation in the very near future, and our hope is that we can do that in a truly bipartisan way.”

You can find a video replay of the hearing below. ®

Youtube Video
