Smile! UK cops reckon they've ironed out gremlins with real-time facial recog

Report says code has improved – but thousands could still end up falsely ID'd, argue privacy advocates

Police in the UK are preparing to reintroduce real-time facial recognition technology after a report found the latest versions of software used by law enforcement have improved accuracy and produce fewer false positives.

The report [PDF] from the National Physical Laboratory found that when face-match thresholds in NeoFace were set to 0.6 (the default setting), correct identification occurred 89 percent of the time someone walked into a recognition zone. False positive rates, per the report, were just 0.017 percent.

The NPL said the true positive identification rate showed no statistically significant deviations across gender and ethnic lines.

"This is a significant report for policing as it is the first time we have had independent scientific evidence to advise us on the accuracy and any demographic differences of our Facial Recognition Technology," said Lindsey Chiswick, the Metropolitan Police's Director of Intelligence. 

"We know that at the setting we have been using it, the performance is the same across race and gender and the chance of a false match is just 1 in 6,000 people who pass the camera," Chiswick said, adding the study was large enough that demographic differences would have been seen.
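Those two figures are consistent with each other: a false positive rate of 0.017 percent works out to roughly one false match per 6,000 people passing the camera. A quick sanity check of the arithmetic:

```python
# Sanity-check the quoted figures: a false positive rate of
# 0.017 percent corresponds to roughly 1 in 6,000 people scanned.
rate = 0.017 / 100     # 0.017 percent expressed as a fraction
one_in = 1 / rate      # people scanned per false positive
print(round(one_in))   # ~5882, which rounds to "1 in 6,000"
```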

If tweaked a bit more, say by upping the threshold to 0.64, the report said there weren't any false positives, while at 0.62 only a single false positive occurred during testing. Put a bit more slack on the line by moving it to below 0.6 and the system starts to exhibit "a statistically significant imbalance between demographics with more Black subjects having a false positive than Asian or White subjects," the NPL said.
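The threshold mechanism itself is simple: each passer-by's face is scored for similarity against a watchlist, and anyone scoring at or above the threshold triggers an alert. The sketch below illustrates the trade-off with hypothetical scores; it is not NEC's actual NeoFace API, and the function name and numbers are invented for illustration.

```python
# Minimal sketch of threshold-based face matching (hypothetical scores,
# not the NeoFace API). Raising the threshold means fewer alerts,
# and so fewer false positives -- at the risk of missing true matches.

def flag_matches(scores, threshold):
    """Return indices of similarity scores at or above the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Hypothetical similarity scores for five passers-by (1.0 = perfect match).
scores = [0.35, 0.61, 0.58, 0.72, 0.63]

print(flag_matches(scores, 0.60))  # looser setting: three alerts
print(flag_matches(scores, 0.64))  # stricter setting: one alert
```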

The report was commissioned in 2021 by the Metropolitan and South Wales Police in response to widespread concerns over use of the technology. Alongside using it to catch crooks wandering through public spaces, the software was being tested as a way to automatically debit children for the cost of school lunches, though that move was put on hold not long after it was announced.

Better, but not great

In a 2020 report [PDF] that looked at facial recognition technology used between 2016 and 2019 in the UK, the NPL reported that positive ID rates were just 72 percent, while false positives occurred 0.1 percent of the time, meaning one in 1,000 people who walked in front of a facial recognition camera would be falsely flagged as a potential criminal.

Things have definitely improved since then, but it's still not good enough, said Big Brother Watch's Legal and Policy Director Madeleine Stone.

"This report confirms that live facial recognition does have significant race and sex biases, but says that police can use settings to mitigate them. Given repeated findings of institutional racism and sexism within the police, forces should not be using such discriminatory technology at all," Stone said.

The Met's Chiswick said her force understands concerns over bias, but argued the research shows the system can be configured to avoid it. "This research means we better understand the performance of our algorithm. We understand how we can operate to ensure the performance across race and gender is equal," Chiswick said.

Racial and gender bias aside, Stone said a false positive rate of one in 6,000 is still unacceptable given how many people such systems would scan daily in large cities like London. Tens of thousands of people across the UK could be forced to prove their innocence if the technology were rolled out nationally, the Big Brother Watch campaigner said.

"Live facial recognition is not referenced in a single UK law, has never been debated in Parliament, and is one of the most privacy-intrusive tools ever used in British policing. Parliament should urgently stop police from using this dangerously authoritarian surveillance tech," Stone said. ®
