How do we stop facial recognition from becoming the next Facebook: ubiquitous and useful yet dangerous, impervious and misunderstood?

We talk to one CEO about why bans aren't the answer but federal regulation is

Facial recognition is having a rough time of it lately. Just six months ago, people were excited about Apple letting you unlock your phone just by looking at it. A year ago, Facebook users joyfully tagged their friends in photos. But then the tech got better, and so did the concerns.

In May, San Francisco became the first major city in the world to effectively ban facial recognition. A week later, Congress heard how it also needed to ban the tech until new rules could be drawn up to cover its safe use.

That same week, Amazon shot down a stakeholder proposal to ban the sale of its facial recognition technology to law enforcement. Then, in June, the state of Michigan started debating a statewide ban, and the FBI was slammed by the Government Accountability Office (GAO) for failing to measure the accuracy of its face-recognition technology.

The current sentiment, especially given a contentious political environment where there is an overt willingness, even determination, to target specific groups, is that facial recognition is dangerous and needs to be carefully controlled. Free market America has hit civil rights America.

It hasn't helped that China's use of the technology has created a situation previously only imagined in dystopian sci-fi movies: where a man who jaywalked across a road is identified several weeks later while walking down a different street, arrested and fined.

This turn is especially frustrating for one CEO of a facial recognition firm, Shaun Moore of Trueface, a company that until recently was based in the city that voted to ban its product, San Francisco.


Moore is keen to point out that San Francisco didn't actually ban the technology; it can still be used if the authorities get a warrant. This is true.

The decision is more of a moratorium: any local government authority that wants to use facial recognition will need to apply to do so, and be approved, before it can. That restriction will only be lifted once new rules balancing privacy and accuracy against technological capability are drawn up.

He is, unsurprisingly, not happy about his company's product being blocked by legislation. "It is not the right way to regulate," he complains, especially since it has led to a broader sense that the technology is inherently dangerous. "We risk creating a Facebook situation," he warns – where Congress feels obligated to act against a specific technology based on fears but with little or no understanding of how it works.

For one, Moore argues, he doesn't know of any law enforcement agency that wants to use the technology for real-time surveillance. They want to use it as an investigative tool after the fact by scouring footage. "It can take five to seven days off investigation time," he told us. "It is one piece of evidence that can be used to search for other evidence."

In other words, fear of what facial recognition could be used for is limiting its usefulness in current investigations. Faster, more effective investigations mean better results and more available police time to cover more crimes: a win-win.

He also argues that ubiquitous surveillance is simply not possible, at least not yet. "We don't have the processing power, we can't physically do that," he says of the fear that widespread cameras could be turned into tools of constant surveillance.

But as we dig into the concerns around facial recognition, it increasingly feels like the proposed moratoriums make a lot of sense.

One of the biggest concerns is around accuracy: how confident can we be that someone on a camera, identified as a specific individual through facial recognition, is really that person? The answer is always given as a percentage likelihood. But that raises a whole host of other questions: what level of accuracy is sufficient for someone – like a police officer – to act?
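To make the threshold question concrete, here is a minimal, hypothetical sketch of how a match score might gate action. The score scale and the 0.95 cutoff are illustrative assumptions, not taken from any real facial recognition product; where to set that cutoff is exactly the open policy question.

```python
# Hypothetical sketch: gating action on a match-confidence threshold.
# The score scale and default cutoff are assumptions for illustration.

def should_flag(match_score: float, threshold: float = 0.95) -> bool:
    """Return True only when the similarity score clears the threshold.

    match_score: similarity between a probe face and a gallery face,
                 normalized here to the range [0, 1].
    threshold:   a policy-chosen cutoff; choosing its value is the
                 unresolved question regulators face.
    """
    return match_score >= threshold

# An "80 per cent" match would not clear a 0.95 policy threshold:
print(should_flag(0.80))  # False
print(should_flag(0.97))  # True
```

The point of the sketch is that the technology only ever emits a likelihood; whether an officer may act on it is a separate, human decision that regulation would have to pin down.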

Color blind

Combine that with the well-recognized problem that the datasets used to train these systems are heavily skewed toward white-skinned men – producing more accurate results for them but less accurate results for anyone who isn't a man, or white – and you have a civil rights nightmare waiting to happen.
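The skew shows up when accuracy is broken out per demographic group rather than reported as one aggregate number. The following sketch uses invented group labels and figures purely to illustrate the measurement, not real benchmark data.

```python
# Hypothetical sketch: per-group accuracy reveals bias that an
# aggregate figure hides. Groups and numbers are invented.

def per_group_accuracy(results):
    """results: list of (group, correct) pairs from evaluating a model."""
    totals, hits = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if correct else 0)
    return {g: hits[g] / totals[g] for g in totals}

# A model trained mostly on one group tends to score like this:
sample = [("group_a", True)] * 98 + [("group_a", False)] * 2 \
       + [("group_b", True)] * 80 + [("group_b", False)] * 20
print(per_group_accuracy(sample))
# {'group_a': 0.98, 'group_b': 0.8}
```

The overall accuracy of that sample is 89 per cent, which sounds acceptable until the 18-point gap between the groups is made visible.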

Moore says that his company – and the facial recognition industry as a whole – is "absolutely" aware of that dangerous bias. While stressing that the technology itself is not racist, though there are "lots of racist people" and the data itself can cause bias, he says the industry is working hard to fix those biases.

Trueface is paying people in countries across Asia and Africa to send it photos of their faces in order to build a much larger database of faces with different features and darker skin tones, an approach that is "actively pushing the bias down."

He says that, combined with improvements in the technology, within two to three years the accuracy of facial recognition will be in the "high 90s" for all types of people – roughly the same as other forms of identification that we all accept within society, such as those used in banking.

He even argues that level of accuracy could help counteract human biases: it would be harder for a police officer to justify, say, stopping a black man because he thought he looked like a suspect if a facial recognition result put the match at only 80 per cent.

But then, of course, we delve into the complex and fraught world of what is supposed to happen versus what really happens on the street. Moore admits that if there isn't a clear picture of someone or the individual in question is wearing a hat, then it is never going to be possible to get a high-90s accuracy.

Except he frames it the way many of his clients likely will: "If someone is actively avoiding cameras, or pulls on a hat, then there's nothing we can do." We relay the recent story from London where a man was stopped, questioned and fined £90 ($115) for "disorderly behavior" because he tried to hide his face. He didn't want to be on camera; the cops immediately assumed he was up to no good.


Moore admits that facial recognition use is going to be based on a "social contract," and of the decision to stop and fine the man: "to me, that was inappropriate." It was "probably his right" to avoid the cameras, he notes, but then quickly adds that he "would like to assume that the police officers are trained to recognize behavior." And, he points out, the issue only got a "spotlight on it because facial recognition was in the same sentence."

Which is a fair point. Like any new technology, the initial sense of amazement at what has become possible is soon replaced with a fear of the new and of its possible abuses. And when abuses do come to light, they are given disproportionate weight, creating a sense of crisis that drives lawmakers to believe they need to act and pass new laws.

This technology journalist often cites the wave of newspaper headlines in the 1980s that surrounded the terror that once was "mobile phones." There were even calls to ban them entirely because they were being used by football supporters to organize fights.

Facial recognition has already proven its worth, Moore argues. One recent example: a man traveling on a false ID was identified and arrested at a Washington DC airport thanks to its facial recognition system. And, faced with the unpleasant reality of gun violence and mass shootings in the US, its use at live events could end up saving lives and keeping everyone safer. "Guns are a serious problem," he notes. "This technology is there to make better decisions."

Which gets us back to the rules and regulations, which don't exist yet. Moore feels strongly that this is one area where federal – rather than local – regulation is needed, and that it should include restrictions on use.

How then?

The question is what those rules look like, how they are applied, and around what specific issues they can be drawn. Moore says he doesn't have the answers, but he does help identify some key building blocks:

  • Government versus commercial use
  • Real-time use versus analysis of recorded footage
  • Opt-in use (where identification is used to provide access) versus recognition (where identification is used to stop, prevent or limit someone)
  • Transparency and benchmarks

The use of facial recognition is always going to be "situational," Moore argues. And, he notes, it may well be necessary for facial recognition used within the US to rely on technology created within the US, to make sure that the new rules are baked into hardware and software.

Even assuming new federal rules, a bigger question then is: how do you stop companies and/or specific police departments from abusing the technology?

Moore seeks to reassure us. "There are bad people. We have turned down multiple clients where their use of the technology was not aligned with what we wanted to do." It would be hard for companies to hide their planned use of such technology, he argues, because "we spend six months at a minimum with clients. If they were trying to deceive us, we would know it, and just shut it down."

But what was intended as a reassurance in some respects only serves to further highlight the concerns: this technology can be used in wrong and dangerous ways, and there are people out there already spending money to build systems that make the company selling the technology uncomfortable enough to walk away.

Moore is right when he says that facial recognition is an "inevitability." The big question is: is this the sort of technology that should be introduced and then scaled back – like ride-sharing or social media – or is it the sort of technology that needs to be forced to argue its case before it is introduced?

Moore thinks it's the former. His former hometown thinks the latter. ®

Biting the hand that feeds IT © 1998–2022