The robot at the border: UK bets big on face scanning

Home Secretary tells us where to stick our faces


Home Secretary Jacqui Smith is taking risks with public safety, whilst condemning thousands of airline passengers to long delays this winter. That is the fear expressed by a spokesperson for the Public and Commercial Services Union (PCS), in reaction to government plans to test new face recognition software at airports.

From this month, the UK Border Agency is trialling new technology at Manchester Airport (and Stansted, according to the PCS) which is claimed to balance high security with quicker times at immigration control. New facial recognition gates will use scanning equipment to compare the faces of UK and EEA passengers to their biometric passports. If successful, these gates could be rolled out across the country.

As we understand it, the approach applies only to those individuals who hold e-chipped passports – around 10 to 15 per cent of the travelling population, by our estimate. On arriving in the UK, instead of handing their passport to an immigration official for the usual baleful once-over, new arrivals will place their passport on a scanner.

This will check the ID of the passport holder against a list of passengers flagged up by e-Borders as barred from flying into the UK. It will then access photographic data held on the chip, so that when the passenger steps forward to have their face scanned, a second “visual” check may be carried out.

The entire operation will continue to be supervised by e-Borders staff, who will now glare at passengers from a slightly greater distance. They may intervene at any time, and will pull a random quota of individuals from the queue for further checking. As far as the Home Office is concerned, this is an additional layer of security – not a replacement for what is already there.
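
The sequence of checks can be sketched in a few lines of Python. Every name here is hypothetical – the Border Agency's actual software is not public – but the flow follows the description above: watch-list check first, then chip read, then the live face comparison.

```python
# Hypothetical sketch of the gate's check sequence; function and field
# names are invented for illustration, not taken from any real system.

SIMILARITY_THRESHOLD = 0.8  # assumed acceptance criterion


def process_passenger(passport, barred_ids, scan_face, match_score):
    """Return the gate's decision for one arriving passenger.

    passport    -- dict with 'id' and 'chip_photo' fields
    barred_ids  -- set of passport IDs flagged by e-Borders
    scan_face   -- callable returning the live camera image
    match_score -- callable scoring similarity of two images (0..1)
    """
    # 1. Check the passport holder against the e-Borders watch list.
    if passport["id"] in barred_ids:
        return "refer to officer"

    # 2. Read the photograph stored on the passport's chip.
    chip_photo = passport["chip_photo"]

    # 3. Scan the passenger's face and compare it with the chip photo.
    score = match_score(chip_photo, scan_face())

    # 4. Open the gate only if the similarity clears the threshold;
    #    supervising staff can still intervene at any point.
    return "gate opens" if score >= SIMILARITY_THRESHOLD else "refer to officer"
```

The interesting design decision is hidden in that single threshold constant, to which we return below.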

There are several serious obstacles to the smooth running of this plan.

First, there are concerns raised by the Biometrics Assurance Group (pdf), reported previously in El Reg, that there is still work to do on both the facial recognition standards and the format in which facial images are stored.

It is not clear that this issue has been resolved – although the Reg has been informed that the software recently had to be recalibrated, because it was rejecting too many individuals.

This brings us to the second problem: all recognition systems throw up a number of false positives and false negatives. The precise balance between the two is determined by an operational decision on where to set the acceptance threshold. A “risk-averse” policy will increase the number of false positives (people stopped for no good reason), while a “risk-tolerant” policy will increase the false negatives (dodgy individuals allowed into the country).
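
A toy example makes the trade-off concrete. The similarity scores below are invented; real systems measure these rates on large evaluation sets, but the shape of the problem is the same: raise the threshold and more innocent travellers get stopped, lower it and more impostors get waved through.

```python
# Invented similarity scores for illustration only.
genuine_scores = [0.91, 0.85, 0.78, 0.72, 0.66]   # passenger matches own passport
impostor_scores = [0.70, 0.58, 0.44, 0.31, 0.25]  # passenger does not match


def error_rates(threshold):
    # "False positive" in the article's sense: a genuine traveller stopped.
    stopped_wrongly = sum(s < threshold for s in genuine_scores)
    # "False negative": an impostor waved through.
    waved_through = sum(s >= threshold for s in impostor_scores)
    return stopped_wrongly, waved_through


print(error_rates(0.75))  # risk-averse setting:  (2, 0)
print(error_rates(0.50))  # risk-tolerant setting: (0, 2)
```

No choice of threshold makes both numbers zero at once when the two score distributions overlap, which is exactly the operational dilemma described above.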

This is a difficult call to make. One argument advanced by the Home Office is that this system will speed up passenger throughput: a few seconds saved per passenger adds up to a very large improvement when multiplied by the millions who arrive every year. It would be a stroke of wondrous serendipity to discover that the point at which this software saved all this time was also, magically, the point at which it identified as many potential terrorists and criminals as the present human method – or more.
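
As back-of-envelope arithmetic, with purely illustrative figures (neither number appears in the Home Office's claim), the throughput argument looks like this:

```python
# Illustrative figures only -- not Home Office numbers.
seconds_saved_per_passenger = 5
passengers_per_year = 20_000_000

hours_saved = seconds_saved_per_passenger * passengers_per_year / 3600
print(round(hours_saved))  # about 28,000 hours of queueing saved per year
```

The aggregate saving is real enough; the question is what it costs in detection accuracy.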

In fact, for it to do that, it would need to be detecting wrong-doers far more effectively than the present system does. According to the PCS, no such comparisons have been carried out – or at least none that the Home Office has yet shared.

Computer recognition systems can work, but most have a lot to do before they even get close to matching the results from a human operator and good old-fashioned experience and intuition.

Which brings us to difficulty number three. The trial will take place in a live setting. If the software works, no problem; if it doesn’t, then expect Manchester to become destination number one for criminals seeking to avoid detection on their way into the UK.

Again, the Home Office responds that this technology has been used in numerous locations around the world, with “no major problems” reported. However, the calibration issue is itself a risky one to resolve. Too many false positives, and the Home Office can expect letters from outraged citizens.

Even a couple of false negatives, and the next we hear of them could be when a major incident goes down somewhere in Central London.

And finally – the PCS themselves are advising staff to have nothing to do with these trials. That is unlikely to have much of an impact. Not all immigration staff are members of the PCS (many belong to an unaffiliated in-house body called the Immigration Services Union); nor will the PCS go so far as to declare official action over this issue.

But it won’t help.

One more time, we worry, as David Davis opined back in July, that Nu Labour are addicted to untried hi-tech solutions, taking risks with public safety that a more measured, less headline-obsessed approach would avoid. ®
