The Chinese government has an unlikely supporter of its facial recognition program: the head of London's Metropolitan Police union.
Ken Marsh, who represents the force's officers in the UK capital, appeared on BBC Essex on Friday to respond to a report by the University of Essex that said the Met's own use of facial recognition was highly inaccurate, flawed and likely illegal.
As you might expect, Marsh defended the Met's system and argued that facial recognition could be an extremely valuable tool in tackling crime. He also attacked the report itself, claiming that it was "not balanced correctly" and disputing its central claim that the Met's system was wrong 80 per cent of the time when it came to correctly identifying someone.
"That's absurd," he told the BBC Essex Breakfast Show [he starts talking around 2:11:30]. "I'd like to find out where they got that [figure] from. I've not heard that from anyone else." He added that if the fail rate was actually 80 per cent then it wouldn't be "fit for purpose."
But Marsh then went beyond criticizing the report and started praising facial recognition – even ending up in the bizarre situation when, unprompted, he praised the Chinese government's use of it.
"It's absolutely fantastic in recognizing what we're trying to do in catching criminals and, I have to add, terrorists as well," he said. Marsh did acknowledge there were some problems: "I do accept there are areas that we need to get correct, we need to make sure we have it spot on." And then he identified one group that was getting it "spot on" – the Chinese government.
"Although China is a very intrusive country and I don't agree with a lot of what they do, they've got it absolutely correct. They're recognizing individuals per second and they've got it spot on." He said that he hoped the Met's own system would soon be as effective as the Chinese government's.
So about that...
The Chinese government's use of facial recognition has been widely condemned for being intrusive and not respecting its citizens' privacy or human rights. It has been accused of specifically targeting and tracking the Uighurs, a largely Muslim minority, within its borders, some of whom are being held against their will and without lawyers in internment camps.
Meanwhile, the city of Shenzhen has a system that automatically fines jaywalkers: facial recognition software on its vast network of cameras identifies anyone not crossing the street in the correct place, connects the face to a government ID and mobile phone number, and sends the offender a text message informing them they have been fined. A large display at one intersection also shows the face, name and ID of anyone identified jaywalking.
The Chinese government is proud of its efforts and technological advances and has been openly boasting about its systems. Experts are far from persuaded that the system is as efficient as the government claims, however, and have highlighted the country's long and ongoing abuse of human rights, including imprisonment without trial.
It is the Chinese government's use of facial recognition technology that has, in large part, led to reviews and studies elsewhere in order to ensure that basic human rights are protected.
The University of Essex was given unprecedented access to the final six of 10 trial runs with the technology by the Metropolitan Police – running from June 2018 to February 2019 – and found that the system itself and the way it was operated by the police amounted to "arbitrary interference of rights."
One of the report's authors, Dr Daragh Murray, said that he wanted the trials to stop and not start again until compliance with human rights was baked into the system. He also called for a public debate over the "incredibly intrusive" technology.
One of the most worrying statistics that came out of their observation was that of the 42 matches that the system identified, just eight were correct – which is where the 80 per cent figure comes from.
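For those checking the report's sums, the figure is simply the proportion of the system's flags that turned out to be wrong – a quick sketch using the two numbers above (the variable names are ours, not the report's):

```python
# The two figures reported from the observed Met trials
total_flagged = 42   # faces the system flagged as matches
correct = 8          # flags later verified as genuine matches

# Share of flags that were wrong
error_rate = (total_flagged - correct) / total_flagged
print(f"{error_rate:.0%} of flags were wrong")  # prints "81% of flags were wrong"
```

Strictly, 34 wrong out of 42 is about 81 per cent, which the report rounds to 80.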
If you've done nothing wrong...
Dr Murray was also concerned that the police had a "presumption to intervene – to trust the technology." In other words, if the system flagged someone as a match, the police took that as a sufficient indicator to act and stop someone, as opposed to a possible flag that required further investigation before stopping someone.
The Met's man, Ken Marsh, also had some thoughts about people being wrongly identified and stopped. He felt everyone would be fine with it. "If we stop someone incorrectly and they've done absolutely nothing wrong and we explain to them 'so sorry we've got this one wrong,'" he told BBC Essex, "if you've done nothing wrong, I personally wouldn’t have any problem with it whatsoever because I'd like to think they're doing a great job and trying to catch criminals and terrorists and get them off our streets."
His comments strengthen the argument for a broad public discussion over facial recognition technology before it is implemented by police forces. Such technology has already been banned by San Francisco over concerns about how it would be used, and other US cities and even states are considering a similar moratorium. That approach has led to concerns that fears over facial recognition misuse are limiting its potential positive impact.
It could well be that people are fine with having their faces constantly scanned and checked while out in public, and that they are more than happy to be stopped and questioned by the police if they have been wrongly identified by systems with a lower than 20 per cent accuracy rate – all in the interests of improving the systems and tackling crime. But somehow we doubt it. ®