Cops' use of biometric images 'gone far beyond custody purposes'

Lord Scriven says UK.gov response 'not worth the paper it is printed on'


The use of 20 million facial images by British police "has gone far beyond using them for custody purposes," according to the UK Biometrics Commissioner's annual report.

Concerns have previously been raised by the commissioner over the retention of hundreds of thousands of innocent individuals' images. The Register was the first to report the commissioner's concerns about the legality of uploading the images to the Police National Database.

Biometric Commissioner Paul Wiles said the current process for deleting biometric records “is not encouraging."

He said:

"Whether the limited changes proposed will be sufficient in the face of any future legal challenge may depend on the extent to which those individuals without convictions successfully make an application for deletion of their police held custody images."

“In addition, not all forces are uploading images to [the Police National Database], including the MPS who hold their own extensive collection, so 19 million is an underestimate.”

Responding to the report, Minister of State at the Home Office Baroness Williams said there ought to be a presumption that images of unconvicted individuals are deleted from police databases unless retention is necessary for a policing purpose.

“I consider this strikes a reasonable balance between privacy and public protection,” she said.

However, concerns have previously been raised that the automatic deletion of images of unconvicted individuals would be too costly, due to the complexity of police IT systems.

Liberal Democrat peer Lord Scriven told The Register that the notion of a "presumption" that police should delete images is not the same as requiring them to do so. "Why are the government desperate to have a database of millions of people's images?"

He said: "This response is not worth the two pages it is written on. It is as if the government is doing everything it can do to avoid Parliamentary scrutiny. Their response is essentially: this is going on and it is fine. We are giving a carte blanche to the police to do what they want."

Renate Samson, director of Big Brother Watch, said the response showed calls to address the serious shortcomings in biometric image retention have fallen on deaf ears.

“It is profoundly disappointing that government believes this is the right approach. We want the government to bring custody images in line with DNA and fingerprints.”

Under its FaceOff campaign, Big Brother Watch is calling for custody images and facial biometrics to be automatically deleted on proof of innocence.

The publication of the 2016 report was delayed because it was not possible to arrange a slot for laying and publication before Parliament rose for the summer recess, said Williams today.

It follows the controversial use of facial recognition technology by the Metropolitan Police at the Notting Hill Carnival this year, which resulted in 35 false matches and the wrongful arrest of one person erroneously tagged as being wanted on a warrant for a rioting offence. ®
