Metropolitan Police's facial recognition tech not only crap, but also of dubious legality – report

Just 8 out of 42 matches correct, say uni researchers


Facial recognition technology trialled by the Metropolitan Police is highly inaccurate and its deployment is likely to be found "unlawful" if challenged in court, an excoriating independent report has found.

Researchers from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, identified significant flaws with the way live facial recognition (LFR) technology has been trialled in London by the Metropolitan Police Service.

So far the Met has used the controversial technology on 10 separate occasions in the last three years, including twice at Notting Hill Carnival.

In May, during a test of the technology in London, the Met fined a man who had covered his face to avoid the cameras.

The report's authors, Professor Pete Fussey and Dr Daragh Murray, were granted unprecedented access to the final six trials, which ran from June 2018 to February 2019.

From those, the authors found that just eight correct matches were made out of 42 suggested in total.

LFR technology allows for the real-time biometric processing of video imagery to identify particular individuals.

The software processes the images in order to identify any faces, creates a digital signature of identified faces, and then analyses those digital signatures against a database referred to as the "watch list". An alert is then issued by the police control room and may be available on officers' portable devices.
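To make that pipeline concrete, here is a minimal sketch of the signature-and-watch-list matching step in Python. Everything in it is an illustrative assumption rather than a detail of the Met's system: the embed_face() stand-in, the 128-dimension signature, the cosine-similarity comparison and the 0.8 alert threshold are all invented for the example.

```python
import numpy as np

EMBEDDING_DIM = 128    # illustrative; real systems vary
MATCH_THRESHOLD = 0.8  # hypothetical alert threshold, not the Met's

def embed_face(face_image):
    """Stand-in for the 'digital signature' step.

    A real LFR system would run a trained face-embedding model here;
    this returns a pseudo-random unit vector, seeded from the input,
    purely so the sketch runs end to end.
    """
    rng = np.random.default_rng(abs(hash(face_image)) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)

def check_against_watch_list(face_image, watch_list):
    """Compare one detected face against every watch-list signature.

    Returns (name, similarity) for the best match above the threshold,
    or None; a hit is the point at which the control room would raise
    an alert to officers' portable devices.
    """
    signature = embed_face(face_image)
    best_name, best_score = None, MATCH_THRESHOLD
    for name, listed_signature in watch_list.items():
        score = float(signature @ listed_signature)  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name is not None else None

# Toy usage: two watch-list entries, one probe face from a video frame.
watch_list = {"suspect_a": embed_face("custody_photo_a"),
              "suspect_b": embed_face("custody_photo_b")}
print(check_against_watch_list("face_from_frame_17", watch_list))  # likely None
```

The report's headline numbers turn on exactly this matching step: with a threshold set low enough to catch faces at all, 34 of the 42 alerts the system raised were wrong.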

The accuracy of the watch-list data itself also remains a challenge. Legacy data-handling systems meant that information relevant to watch lists was spread across different databases, and each watch-list entry had to be assembled by manually extracting and merging records from each of those locations, the researchers found.
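A rough sketch of that assembly problem, assuming three invented legacy stores; the database names, fields and identifiers below are hypothetical, chosen only to show why each entry needs a manual extract-and-merge step.

```python
# Hypothetical legacy stores; names and fields are invented for illustration.
custody_db  = {"ID123": {"name": "J. Doe", "custody_image": "img_0042.jpg"}}
warrants_db = {"ID123": {"warrant": "W/2018/991", "offence": "burglary"}}
intel_db    = {"ID123": {"last_known_area": "Soho"}}

def assemble_watch_list_entry(person_id):
    """Merge records scattered across separate stores into one entry,
    mirroring the manual extract-and-merge step the report describes."""
    entry = {"person_id": person_id}
    for db in (custody_db, warrants_db, intel_db):
        entry.update(db.get(person_id, {}))
    return entry

print(assemble_watch_list_entry("ID123"))
```

Any record that is stale or mismatched in one of the source stores flows straight into the watch list, which is why the researchers flag data accuracy as a problem separate from the matching software itself.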

The report noted it is possible for LFR software to be integrated into police body-worn cameras. That could be used to create a database of individuals' movements within a city, which in turn could be automated to identify any unusual patterns.

Professor Fussey and Dr Murray are calling for all live trials of LFR to be halted until these concerns are addressed. They noted it is essential that human rights compliance is ensured before deployment, and that there is an appropriate level of public scrutiny and debate at a national level.

Murray said: "This report raises significant concerns regarding the human rights law compliance of the trials.

"The legal basis for the trials was unclear and is unlikely to satisfy the 'in accordance with the law' test established by human rights law.

"Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police's systems from the outset, and was not an integral part of the process."

In January this year, it emerged that the Met had blown more than £200,000 on facial-recognition trials with few or no arrests to show for it.

The Met's Deputy Assistant Commissioner Duncan Ball sent us a statement:

"We are extremely disappointed with the negative and unbalanced tone of this report. The MPS maintains we have a legal basis for this pilot period and have taken legal advice throughout. We will again review this once we have the outcome of the South Wales judicial review. This is new technology, and we’re testing it within a policing context. The Met’s approach has developed throughout the pilot period, and the deployments have been successful in identifying wanted offenders. We believe the public would absolutely expect us to try innovative methods of crime fighting in order to make London safer.

"We fully expect the use of this technology to be rigorously scrutinised and want to ensure the public have complete confidence in the way we police London.” ®

