Cops told live facial recog needs oversight, rigorous trial design, protections against bias

How about only using face-scan tech if it, er, actually works and is the only option, eh?


Cops should only use facial recognition tech if it is proven to be effective at identifying people, can be used without bias and is the only method available, a UK government advisory group has said.

The Biometrics and Forensics Ethics Group – set up to shadow and advise on the Home Office's activities in those areas – has published a report into the controversial use of facial recognition tech by police.

It looked at the use of automated facial recognition tools in real-time: when live images drawn from cameras trained on specific areas of moving people – entrances or exits, for instance – are matched with a curated watchlist so police can approach that person there and then.
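In outline, such a system extracts a numeric "embedding" from each face in the live feed and compares it against precomputed embeddings of the watchlist images, alerting when the similarity crosses a threshold. A minimal sketch of that matching step — the function names, embedding format and threshold here are illustrative assumptions, not details of any police deployment:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face embeddings, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(live_embedding, watchlist, threshold=0.6):
    """Return the watchlist ID with the highest similarity above
    the threshold, or None if nobody matches.

    `watchlist` maps a person ID to a stored face embedding.
    The threshold is the key tuning knob: lowering it catches more
    genuine targets but raises the false positive rate.
    """
    best_id, best_score = None, threshold
    for person_id, stored in watchlist.items():
        score = cosine_similarity(live_embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

The threshold choice is exactly the speed-versus-accuracy trade-off the report asks forces to document.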


The overall tone was that the design of live facial recognition trials needed to be more rigorous, and the tech should be used with a large dollop of caution, plenty of independent oversight and only if it actually works. Crucially, police were also advised to take its intrusiveness seriously.

This seems to jar with what has been seen as an ad-hoc approach – and a sometimes cavalier attitude – to the use of the facial recognition technology by forces. Even London police commissioner Cressida Dick said last year she didn't think the use of facial recog would result in lots of arrests, but claimed the public "expected" cops to trial nascent technologies.

The report was commissioned by the Home Office following a series of highly scrutinised deployments by South Wales Police and London's Metropolitan Police at sports events and in shopping areas.

However, such deployment has taken place without a legal framework, leading to widespread criticism from pressure groups, MPs and watchdogs.

The police have gone on to reframe the rollouts as "trials" – but have typically been unwilling to discuss them in detail, and forces only release information at the very last minute, often just the day before the event.

The new report echoed these concerns, and said there was an "inherent ambiguity" in the use of the tech, in that the rollouts are both operational deployments and "trial-like" experiments.

"It is difficult to discern the purpose of the recent police field trials; were they police operations or experiments?" it asked, saying this raised questions about trial design, consent for participation, and whether the pilots risked undermining public confidence.

The document – drawn up by a working group of three academics, Nina Hallowell, Louise Amoore and Simon Caney, and the lead of IBM's emerging tech unit, Peter Waggett – set out nine ethical principles for the government's policymaking and the police’s use of the kit.

These said facial recognition should be based on measures of necessity, effectiveness, impartiality, proportionality, cost-effectiveness and public trust.


It emphasised that the use of the technology interferes with people's rights to conduct their lives without being monitored. As such it should only be used when less invasive techniques aren't available.

And its use "can be justified only if it is an effective tool for identifying people" – a point campaign groups are likely to seize upon, given that responses to Freedom of Information requests have revealed the Met's use of the tech had a 98 per cent false positive rate and had led to zero arrests.
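The 98 per cent figure is the share of the system's alerts that flagged the wrong person, which follows directly from the raw deployment numbers. A sketch of the arithmetic, using figures consistent with the Met's reported results (2 correct matches against a ~98 per cent false positive rate implies roughly 102 false alerts):

```python
def false_positive_rate(false_alerts: int, true_matches: int) -> float:
    """Share of all alerts that flagged the wrong person.

    This is the measure campaign groups quote: of everything the
    system flagged, how much was wrong.
    """
    total_alerts = false_alerts + true_matches
    return false_alerts / total_alerts

# 102 false alerts plus 2 correct matches = 104 alerts in total,
# of which 102 were wrong.
rate = false_positive_rate(false_alerts=102, true_matches=2)
print(f"{rate:.1%}")  # prints "98.1%"
```

Note this is distinct from the false positive rate across everyone scanned, which would be far lower – a point that tends to get lost when the two sides argue over the same deployments.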

The report also referenced work by Cardiff University (PDF) late last year that questioned the technology's abilities in low light and crowds – a major feature of most of the roll-outs of the kit to date.

The group offered a non-exhaustive list of questions that could be used to assess effectiveness, covering the training human operators receive, the trade-offs between speed and accuracy, and details of image quality, system set-up and criteria for success.

The group also urged the government to ensure its use of the kit avoids bias and "algorithmic injustice", and said it had the potential to be unjust in two ways.

First, some kinds of mis-recognition are "inherently demeaning"; and second, biased tech can result in unequal and discriminatory treatment of some individuals, with some groups more likely to be detained or asked to identify themselves.

"Automated biometric recognition systems (including data training sets) that will be used in public places should be open to scrutiny and effective oversight," it said.

The report also stressed the need for proper oversight in another principle to do with the way watchlists are drawn up, who produces them and how accurate the images included actually are. This should also be subject to oversight by an independent body, it said.


There should be clarity on who has oversight of the deployment and how it will be evaluated after the fact. In addition, the group said it was important that facial recog should not be used in ways that disproportionately target certain events but not others "without compelling justification".

In their conclusions, the authors noted the lack of independent oversight and governance of the use of live facial recognition and said that, in lieu of this, trials should comply with "the usual standards of experimental trials, including rigorous and ethical scientific design".

The ethics group said it would continue to monitor developments in this field and advise Home Office ministers "as appropriate". ®
