Chinese government has got it 'spot on' when it comes to face-recog tech, says, er, London's Met cops' top rep

Thinks British public will be fine getting stopped and searched on faulty software's say-so

The Chinese government has an unlikely supporter of its facial recognition program: the head of London's Metropolitan Police union.

Ken Marsh, who represents the force's officers in the UK capital, appeared on BBC Radio Essex on Friday to respond to a report by the University of Essex that said the Met's own use of facial recognition was highly inaccurate, flawed and likely illegal.

As you might expect, Marsh defended the Met's system and argued that facial recognition could be an extremely valuable tool in tackling crime. He also attacked the report itself, claiming that it was "not balanced correctly" and disputed a central claim that the Met's system was wrong 80 per cent of the time when it came to correctly identifying someone.

"That's absurd," he told the BBC Essex Breakfast Show [he starts talking around 2:11:30]. "I'd like to find out where they got that [figure] from. I've not heard that from anyone else." He added that if the fail rate was actually 80 per cent then it wouldn't be "fit for purpose."

But Marsh then went beyond criticizing the report and started praising facial recognition – even ending up in the bizarre situation where, unprompted, he praised the Chinese government's use of it.

"It's absolutely fantastic in recognizing what we're trying to do in catching criminals and, I have to add, terrorists as well," he said. Marsh did acknowledge there were some problems: "I do accept there are areas that we need to get correct, we need to make sure we have it spot on." And then he identified one group that was getting it "spot on" – the Chinese government.

"Although China is a very intrusive country and I don't agree with a lot of what they do, they've got it absolutely correct. They're recognizing individuals per second and they've got it spot on." He said that he hoped the Met's own system would soon be as effective as the Chinese government's.

So about that...

The Chinese government's use of facial recognition has been widely condemned for being intrusive and not respecting its citizens' privacy or human rights. It has been accused of specifically targeting and tracking the Uighurs, a largely Muslim minority, within its borders, some of whom are being held against their will and without lawyers in internment camps.

Meanwhile, Shenzhen has a system that automatically fines jaywalkers: facial recognition software running on the city's vast network of cameras identifies anyone crossing the street in the wrong place, connects the face to a name, government ID and mobile phone number, and sends a text message informing them they have been fined. A large display at one intersection also shows the face, name and ID of anyone caught jaywalking.
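The reported pipeline – detect a face, link it to an identity record, fire off a fine by SMS – can be sketched in a few lines. To be clear, this is a purely illustrative mock-up: every name, record and function here is a hypothetical stand-in, not anything from the real Shenzhen system.

```python
# Illustrative-only sketch of the jaywalking-fine pipeline described above.
# All identifiers and data below are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Citizen:
    name: str
    gov_id: str
    phone: str

# Hypothetical registry mapping a matched face key to an identity record
REGISTRY = {"face-001": Citizen("A. Person", "ID12345", "+86-555-0100")}

def handle_jaywalker(face_key: str) -> str:
    """Match a detected face, link it to an ID, and draft the fine notice."""
    citizen = REGISTRY.get(face_key)
    if citizen is None:
        return "no match"
    # In the system described, this notice would be sent to the person's phone
    return (f"SMS to {citizen.phone}: {citizen.name} ({citizen.gov_id}), "
            f"you have been fined for jaywalking.")

print(handle_jaywalker("face-001"))
```

The sketch also shows why such a system is only as trustworthy as its matching step: everything downstream – the ID lookup, the fine, the public shaming display – proceeds automatically from a single face match.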

The Chinese government is proud of its efforts and technological advances and has been openly boasting about its systems. Experts are far from persuaded that the system is as efficient as the government claims, however, and have highlighted the country's long and ongoing abuse of human rights, including imprisonment without trial.

It is the Chinese government's use of facial recognition technology that has, in large part, led to reviews and studies elsewhere in order to ensure that basic human rights are protected.

The University of Essex was given unprecedented access to the final six of 10 trial runs with the technology by the Metropolitan Police - running from June 2018 to February 2019 - and found that the system itself and the way it was operated by the police amounted to "arbitrary interference of rights."

One of the report's authors, Dr Daragh Murray, said that he wanted the trials to stop and not start again until compliance with human rights was baked into the system. He also called for a public debate over the "incredibly intrusive" technology.

One of the most worrying statistics to come out of their observation was that of the 42 matches the system identified, just eight were correct, which is where the 80 per cent figure comes from.
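For what it's worth, the arithmetic behind that headline number is straightforward – strictly it rounds to about 81 per cent, reported as 80:

```python
# The "80 per cent" figure: of 42 alerts the system raised,
# only 8 were genuine matches, so 34 were wrong.
total_alerts = 42
correct_matches = 8
wrong_alerts = total_alerts - correct_matches  # 34 false alerts

error_rate = wrong_alerts / total_alerts
print(f"{error_rate:.1%}")  # prints "81.0%"
```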

If you've done nothing wrong...

Dr Murray was also concerned that the police had a "presumption to intervene – to trust the technology." In other words, if the system flagged someone as a match, the police took that as a sufficient indicator to act and stop someone, rather than as a possible flag that required further investigation before stopping them.

The Met's man, Ken Marsh, also had some thoughts about people being wrongly identified and stopped. He felt everyone would be fine with it. "If we stop someone incorrectly and they've done absolutely nothing wrong and we explain to them 'so sorry we've got this one wrong,'" he told BBC Essex, "if you've done nothing wrong, I personally wouldn’t have any problem with it whatsoever because I'd like to think they're doing a great job and trying to catch criminals and terrorists and get them off our streets."

His comments strengthen the argument for a broad public discussion over facial recognition technology before it is implemented by police forces. Such technology has already been banned by San Francisco over concerns about how it would be used, and other US cities and even states are considering a similar moratorium. That approach has led to concerns that fears over facial recognition misuse are limiting its potential positive impact.

It could well be that people are fine with having their faces constantly scanned and checked while out in public, and that they are more than happy to be stopped and questioned by the police if they have been wrongly identified by systems with a lower than 20 per cent accuracy rate – all in the interests of improving the systems and tackling crime. But somehow we doubt it. ®


Biting the hand that feeds IT © 1998–2022