Did ID.me hoodwink Americans with IRS facial-recognition tech?

Senators want the FTC to investigate "evidence of deceptive statements"


Democratic senators want the FTC to investigate "evidence of deceptive statements" made by ID.me regarding the facial-recognition technology it controversially built for Uncle Sam.

ID.me made headlines this year when the IRS said US taxpayers would have to enroll in the startup's facial-recognition system to access their tax records in the future. After a public backlash, the IRS reconsidered its plans, and said taxpayers could choose non-biometric methods to verify their identity with the agency online.

Just before the IRS controversy, ID.me said it uses one-to-one face comparisons. "Our one-to-one face match is comparable to taking a selfie to unlock a smartphone. ID.me does not use one-to-many facial recognition, which is more complex and problematic. Further, privacy is core to our mission and we do not sell the personal information of our users," it said in January.

That would suggest ID.me created a system in which people provide a photo of themselves when creating an account; when they try to log in, their picture is taken again and compared against the photo on file, and if the two match, they are authenticated. It may not be a perfect solution, as some facial-recognition tech struggles with women and people of color, though it's simple enough: you're either who you say you are, or not.
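A one-to-one check of that kind can be sketched in a few lines. To be clear, this is an illustrative toy, not ID.me's actual pipeline: the embedding vectors, the `verify` helper, and the 0.8 threshold are all assumptions; a real system would derive embeddings from a neural network run on the photos.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_embedding, login_embedding, threshold=0.8):
    """One-to-one check: does the new selfie match the photo on file?"""
    return cosine_similarity(enrolled_embedding, login_embedding) >= threshold

# Toy embeddings standing in for the output of a face-recognition model.
on_file = [0.9, 0.1, 0.4]
same_person = [0.88, 0.12, 0.41]    # close to the enrolled photo
someone_else = [0.1, 0.9, 0.2]      # far from it

print(verify(on_file, same_person))   # True
print(verify(on_file, someone_else))  # False
```

The key property is that the new selfie is compared against exactly one stored image: the one belonging to the account being accessed.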

Just days later, however, CEO Blake Hall revealed ID.me does, in fact, use one-to-many facial recognition at some point. "ID.me uses a specific 'one-to-many' check on selfies tied to government programs targeted by organized crime to prevent prolific identity thieves and members of organized crime from stealing the identities of innocent victims en masse," he wrote in a LinkedIn post.

Now, Senators Ron Wyden (D-OR), Cory Booker (D-NJ), Ed Markey (D-MA), and Alex Padilla (D-CA) claim the company "likely misled consumers" through its messaging. It appears the four are unhappy that the biz went from saying: we do one-to-one matching only, to well, OK, a small amount of one-to-many, too.

"We therefore request that you investigate evidence of ID.me's deceptive public statements to determine whether they constitute deceptive and unfair business practices under Section 5 of the FTC Act," the lawmakers wrote in a letter [PDF] addressed to FTC chair Lina Khan.

One main problem with one-to-many matching is that your face is compared against a wider database of images of other people, including yourself. This means you can be mistaken for someone else, and accused of trying to defraud someone or of creating multiple fake accounts. There's also the elevated security and privacy risk of storing such a database of images.
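The false-match risk described above follows directly from the shape of a one-to-many search: the incoming selfie is scored against every enrolled face, and anyone who happens to resemble you can trigger a hit. Again, a minimal sketch under assumed names and toy embeddings, not ID.me's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_many_search(new_embedding, enrolled, threshold=0.8):
    """One-to-many check: compare a new selfie against every face on file.
    Returns the IDs of all accounts scoring above the threshold; any hit
    may flag the applicant as a duplicate or a fraudster, rightly or not."""
    return [account_id
            for account_id, emb in enrolled.items()
            if cosine_similarity(new_embedding, emb) >= threshold]

# Two existing accounts, plus a new applicant whose (toy) embedding
# happens to sit close to Alice's: the search flags a false match.
enrolled = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.9, 0.2]}
new_applicant = [0.85, 0.15, 0.45]

print(one_to_many_search(new_applicant, enrolled))  # ['alice']
```

Unlike the one-to-one case, the comparison here requires keeping every enrolled face in one searchable database, which is where the extra security and privacy exposure comes from.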

Staff at the biz were concerned by the claims that ID.me was only using one-to-one face matching when they knew internally the startup was, in fact, using Amazon's one-to-many Rekognition technology. "We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search," an engineer said in a message posted in a company Slack channel, first reported by Cyberscoop. "But it seems we can't keep doing one thing and saying another as that's bound to land us in hot water." 

In his LinkedIn post, Hall claimed ID.me uses one-to-many facial recognition only during enrollment, to prevent a person from registering multiple accounts, and that the database used for this check is internal-only and not part of a government program. This, mind you, despite the company's earlier denials and its own references to the method being "tied to surveillance applications."

That follow-up clarification by the CEO, seemingly provoked by staff displeasure, has the Dems fired up: they would prefer organizations be clear and upfront about their use of biometric data.

"According to media reports," the senators wrote, "the company's decision to correct its prior misleading statements came after mounting internal pressure from its employees ... ID.me's statements, therefore, appear deceptive, and were harmful in two ways.

"First, they likely misled consumers about how the company was using their sensitive biometric data, including that it would be stored in a database and cross-referenced using facial recognition whenever new accounts were created in the future. Second, the statements may have influenced officials at state and federal agencies as they chose an identity verification provider for government services.

"These officials had the right to know that selecting ID.me would force millions of Americans – many of them in desperate circumstances – to submit to scanning using a facial recognition technique that ID.me itself acknowledged was problematic."

A spokesperson for ID.me did not directly address our questions on how it came to find itself correcting its own statements on its tech use. The spokesperson, instead, pointed to how the company's facial-recognition technology has helped government agencies detect fraud.

"Five state workforce agencies have publicly credited ID.me with helping to prevent $238 billion in fraud," they said in a statement to The Register. "Conditions were so bad during the pandemic that the deputy assistant director of the FBI called the fraud 'an economic attack on the United States'.

"ID.me played a critical role in stopping that attack in more than 20 states where the service was rapidly adopted for its equally important ability to increase equity and verify individuals left behind by traditional options. We look forward to cooperating with all relevant government bodies to clear up any misunderstandings." ®

