Home Office is cruising for a lawsuit over police use of face recog tech

Q: When can you ignore a High Court ruling? A: When you're a police employee


The UK Home Office has been warned that its delays in addressing police use of facial recognition technology on innocent people's custody photographs risk inviting a legal challenge.

In his 122-page report (PDF), Blighty's Biometrics Commissioner stated he saw “no reason to believe that the situation [regarding the lack of regulation over police use of facial recognition technology] will quickly change”, and that he was “concerned at the absence of any substantial progress in relation to these matters.”

He warned the Home Office that “the considerable benefits that could be derived from the searching of custody images on the Police National Database (PND) may be counterbalanced by a lack of public confidence in the way in which the process is operated, by challenges to its lawfulness and by fears of ‘function creep’.”

The independent commissioner overseeing the police's retention of biometric data first raised the issue of facial recognition on innocent people's photographs in his annual report last year.

At the time, The Register noted that the commissioner's report revealed a whopping 12 million custody photographs had been uploaded to the Police National Database (PND), and were being scanned by automated facial recognition technology.
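
For context on what such one-to-many searching involves: each photograph is typically reduced to a numeric feature vector (an "embedding"), and a probe image is scored against every stored vector, with candidates above a similarity threshold returned for review. The Python sketch below illustrates the general idea only; the embedding dimension, identifiers, threshold, and data are all invented for illustration and have nothing to do with the actual PND or its software.

    # A minimal, illustrative one-to-many face search: a probe embedding is
    # scored against every embedding in a gallery, and candidates above a
    # similarity threshold are reported. All names, sizes and the threshold
    # are invented for this sketch and do not reflect the real PND system.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 128  # dimensionality of the assumed face-embedding model

    def normalise(v):
        # Scale vectors to unit length so a dot product is cosine similarity.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    # Stand-in gallery: one embedding per stored custody image.
    gallery_ids = [f"custody-{i:07d}" for i in range(12_000)]
    gallery = normalise(rng.normal(size=(len(gallery_ids), DIM)))

    # Stand-in probe; in practice this would come from running a
    # face-embedding model over the probe photograph.
    probe = normalise(rng.normal(size=DIM))

    # Plant a noisy copy of the probe so the sketch has one true match.
    gallery[4242] = normalise(probe + 0.1 * rng.normal(size=DIM))

    # A single matrix-vector product scores the probe against the gallery.
    scores = gallery @ probe

    THRESHOLD = 0.6  # arbitrary; real systems tune this against error rates
    for idx in np.argsort(scores)[::-1]:
        if scores[idx] < THRESHOLD:
            break
        print(f"{gallery_ids[idx]}: similarity {scores[idx]:.3f}")

Run as-is, the sketch returns only the planted entry. The point is that every search is a sweep across the entire gallery, which is why the size and composition of that gallery matter so much.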

Although the Protection of Freedoms Act 2012 (PoFA) states that the commissioner's role extends only to DNA and fingerprint profiles, its incumbent, Alastair MacGregor QC, has kept an eye on the field of facial biometrics, accepting the view of a Parliamentary committee that the currently unregulated area of police use of facial recognition technology required urgent action.

At the time of his last report, MacGregor stated that “hundreds of thousands” of facial images held in the PND belonged to “individuals who have never been charged with, let alone convicted of, an offence.” This prompted legal concerns from the commissioner, who noted a 2012 High Court ruling, R (RMC and FJ) v MPS, in which Lord Justice Richards offered his view that:

[T]he just and appropriate order is to declare that the [Metropolitan Police's] existing policy concerning the retention of custody photographs … is unlawful. It should be clear in the circumstances that a 'reasonable further period' for revising the policy is to be measured in months, not years.

Despite this, the government's promised strategy on dealing with biometrics – which was first due in 2013 – remains unpublished. According to Lord Bates, an additional "Review of the use and Retention of Custody Images has concluded and will be published in due course."

Last year the Science and Technology Committee stated its alarm over the lack of regulatory oversight of police facial recognition technology, as used on the enormous national database of custody photographs police forces have amassed.

98. Over two and a half years later, no revised policy has been published. However, when giving evidence, the Minister announced a new “policy review of the statutory basis for the retention of facial images” on the grounds that “the chief constable, the police and the Home Office” all accepted that “the current governance of the data being held is not sufficiently covered” by existing policy and legislation.

99. We are concerned that it has taken over two and a half years for the Government to respond to the High Court ruling that the existing policy concerning the retention of custody photographs was “unlawful”. Furthermore, we were dismayed to learn that, in the known absence of an appropriate governance framework, the police have persisted in uploading custody photographs to the Police National Database, to which, subsequently, facial recognition software has been applied.

Legal experts The Register has spoken to believe the current retention regime is indeed unlawful, and we understand solicitors are seeking clients to bring claims against the Metropolitan Police on those grounds.

In his 2015 report, MacGregor also wrote that the facial recognition mechanism, which is separately known to have been provided by CGI/Cognitech, was of “questionable efficiency”.
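
The scale is what makes “questionable efficiency” more than a technicality. As a back-of-the-envelope illustration (the false-match rate below is an assumption picked for the arithmetic, not a figure from the report), even a small per-comparison error rate produces a large crop of false candidates when each probe is scored against 12 million images:

    # Back-of-the-envelope false-match arithmetic. The rate used here is an
    # assumption for illustration only, not a figure from the report.
    gallery_size = 12_000_000   # custody images on the PND, per the commissioner
    false_match_rate = 0.0001   # assumed 0.01 per cent per comparison

    expected_false_hits = gallery_size * false_match_rate
    print(f"Expected false matches per probe: {expected_false_hits:,.0f}")  # 1,200

Each of those spurious hits would be a photograph of a real person being flagged to an operator, which is where the civil liberties concern bites.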

He added that “although a searchable police database of facial images arguably represents a much greater threat to individual privacy than searchable databases of DNA profiles or fingerprints, this new database is subject to none of the governance controls or other protections which apply as regards the DNA and fingerprint databases by virtue of PoFA.”

A Home Office spokesman told The Register: "Our Biometrics Strategy represents an important opportunity for the Home Office to set out how we will use biometrics to deliver our objectives over the next five years." ®

Bootnote

Debaleena Dasgupta, a legal officer for Liberty, has been in touch since the publication of this story to say: “The court was clear in 2012 that guidance on destruction of custody photographs was unsatisfactory to the point of breaching human rights.”

Liberty also shared with us a request made to the Metropolitan Police Service (MPS) under the Freedom of Information Act in December 2015 regarding the existing guidance they followed in relation to the retention of custody photographs.

The MPS responded:

The MPS is currently in talks with the Home Office, ACPO Criminal Records Office (ACRO) and The College of Policing to quantify the policy around the retention of custody images within the MPS, to ensure we meet the findings of the case.

By way of further context to our response, following the legal case you have quoted above, it was decided in 2012 to include facial images (custody images) in the deletion process following an application under what was then the Exceptional Case Procedure. This has been implemented as MPS policy.

Since the introduction of this procedure approximately 560 persons have had their custody image deleted.

The current I.T. system which holds MPS custody images was not designed or built to accommodate a complex retention policy, it will be replaced early in the new year by a new system and it is our intention to implement a new deletion policy using the new system.

Debaleena added: “Liberty has been keeping an eye on the situation and recently submitted a Freedom of Information request to find out what police had done to address the problem – it seems they did nothing. Almost four years later, innocent men and women are still suffering under an unlawful system. The police’s failure to act on this flagrant disregard of privacy is completely unacceptable.”

