UK's Surveillance Commissioner warns of 'ethically fraught' facial recognition tech concerns

How about being an anonymous face in a crowd? Is that not allowed anymore?

Facial recognition technology (FRT) may need to be regulated in much the same way as some ethically sensitive medical techniques to ensure there are sufficient safeguards in place to protect people's privacy and freedoms.

That’s according to Professor Fraser Sampson, the UK Government’s Surveillance Camera Commissioner (SCC), who works with the Home Office overseeing tech-related surveillance in the UK.

He was responding to last week’s report by the Geneva-based Human Rights Council (HRC) which argued that the protection of human rights should be at the heart of the development of AI-based systems including areas such as law enforcement.

The report went on to say that unless sufficient safeguards are in place to protect human rights, there should be a moratorium on the sale of AI systems and those that fail to meet international human rights laws should be banned.

Now, the SCC has added his voice to the debate as lawmakers around the world attempt to create a workable legal framework in the face of growing calls for human rights protections.

“This is a fast-evolving area and the evidence is elusive, but it may be that the aspects currently left to self-determination present the greatest risk to communities or simply give rise to the greatest concern among citizens,” he told The Register.

“It may even be the case that some technological biometric and surveillance capabilities such as FRT are so ethically fraught that they can only be acceptably carried out under licence in the future – perhaps akin to the regulatory arrangements for human fertilisation and embryology.

“That is a matter of policy for others," he said.

“But we need as a minimum a single set of clear principles by which those using biometric and surveillance camera systems will be held to account, transparently and auditably,” Professor Sampson said.

Asked to comment further on the HRC’s report he told us: “Where biometric surveillance systems are being bought with public money and deployed in the public interest then there is surely a legitimate expectation that all parties will adopt an ethical and human rights compliant approach.”

“I agree that, if used without sufficient regard to how they affect people’s human rights, the emerging technological capabilities in the area of surveillance and biometrics can be negative and potentially catastrophic,” he added.

The use of AI and technologies such as FRT has recently been the subject of governmental scrutiny both in the UK and the US.

In 2019, London's Metropolitan Police deployed a system that was not only extremely inaccurate, but led to them arresting people based on dodgy matches anyway.

In May of that year, the Met fined a man for covering his face while officers were conducting a test of the technology in Romford, London.

In August 2020, the Court of Appeal found that the use of facial recognition technology by South Wales Police had been unlawful.

In April, the EU published its own proposals for harmonised rules on artificial intelligence (Artificial Intelligence Act) where it too recognised the benefits while acknowledging the “new risks or negative consequences for individuals or the society.” ®
