Remote ID verification tech is often biased, bungling, and no good on its own
Only 2 out of 5 tested products were equitable across demographics
A study by the US General Services Administration (GSA) has revealed that five remote identity verification (RiDV) technologies are unreliable, inconsistent, and marred by bias across different demographic groups.
In a preprint version of the study, shared this month, the agency said that only two of the RiDV products it tested were equitable for all users. Two others had at least one demographic group with notably higher error and false-rejection rates, with one product rejecting Black participants and people with darker skin tones at significantly higher rates. The fifth product showed more favorable, but still inequitable, performance for Asian American and Pacific Islander participants.
According to the study, one of the products barely merits the term "functional," as it had a false negative rate of around 50 percent, and even the best performer still failed 10 percent of the time.
In short, the technology used for remote identity verification is a mess.
"This study confirms that it is necessary to evaluate products across demographic groups to fully understand the performance of remote identity verification technologies," The GSA said.
While none of the vendors were named in the study itself, the GSA published a privacy impact assessment [PDF] for the study in September 2023 that lists products from TransUnion, Socure, Jumio, LexisNexis, Incode, and Red Violet. It's not clear whether the lineup of products tested changed after that document was published; we reached out to all of the vendors listed, and most haven't responded.
LexisNexis acknowledged the study in a statement and highlighted its longstanding relationship with the GSA, particularly through its work on Login.gov. Because the experiment anonymized the products, Haywood 'Woody' Talcove, CEO of the outfit's government risk solutions business, told The Register in an interview that he's not sure whether its RiDV tech was tested.
Meet the new biased tech, same as the old biased tech
If you're wondering - hey, haven't we hashed out bias in facial recognition before? - sure, we have. Real-time facial recognition tech used in public places by law enforcement has long been shown to have racial biases and other issues that could impinge on civil liberties.
This is a different kind of biased tech, though.
- IRS doesn't completely scrap facial recognition, just makes it optional
- Rise of deepfake threats means biometric security measures won't be enough
- Can I phone a friend? How cops circumvent face recognition bans
- Deepfake attacks can easily trick live facial recognition systems online
"Prior work by NIST and others have considered the fairness of face matching systems," the researchers wrote in their report. "Our work expands upon prior work by testing full end-to-end remote identity verification systems which include the face matcher, as well as the user interface, capture process, document verification check, and liveness check."
Remote identity verification technology involves submitting a photo ID, a selfie and/or other forms of identification to verify that a person signing up for a new account, or trying to access an existing one, actually is who they claim to be.
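To make "end-to-end" concrete, here is a minimal sketch of the kind of pipeline the researchers describe - document check, liveness check, then face match. Every helper, threshold, and name below is an illustrative assumption for this article, not any vendor's actual API or the GSA's test harness.

```python
# Illustrative sketch of an end-to-end remote identity verification (RiDV) flow,
# covering the stages the GSA study evaluated: document verification, liveness
# check, and face matching. All helpers and thresholds are hypothetical
# placeholders, not a real vendor's product logic.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    passed: bool
    stage: str       # which stage produced the decision
    detail: str = ""


# --- Placeholder stage implementations (stand-ins for vendor models) --------

def check_document_authenticity(id_photo: bytes) -> bool:
    # A real system inspects security features, fonts, and tampering artifacts.
    return len(id_photo) > 0


def check_liveness(selfie: bytes) -> bool:
    # A real system screens for presentation attacks: printed photos, replays, deepfakes.
    return len(selfie) > 0


def match_faces(id_photo: bytes, selfie: bytes) -> float:
    # A real system returns a similarity score from a face-matching model.
    return 0.9  # fixed score, purely for illustration


# --- End-to-end flow ---------------------------------------------------------

def verify_identity(id_photo: bytes, selfie: bytes,
                    match_threshold: float = 0.85) -> VerificationResult:
    if not check_document_authenticity(id_photo):
        return VerificationResult(False, "document", "ID failed authenticity check")
    if not check_liveness(selfie):
        return VerificationResult(False, "liveness", "selfie failed liveness check")
    similarity = match_faces(id_photo, selfie)
    if similarity < match_threshold:
        return VerificationResult(False, "face_match",
                                  f"similarity {similarity:.2f} below threshold")
    return VerificationResult(True, "complete")


if __name__ == "__main__":
    print(verify_identity(b"fake-id-bytes", b"fake-selfie-bytes"))
```

The point of the GSA study is that a user can be falsely rejected at any of these stages - not just the face match - which is why the researchers tested the whole flow rather than the matcher in isolation.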
The GSA, which handles procurement for the US government, has previously expressed trepidation about using RiDV tech over fears of inequity. The agency announced plans to test the technology in August 2023, and in October it said it was adopting RiDV. The tech is now available on the Administration's Login.gov service.
It's not immediately clear which of the tested vendors, if any, the GSA uses for Login.gov aside from LexisNexis. We're told the vendors were anonymized and sealed in the study, making it unclear how performance maps onto vendors, or which, if any, are being used by the GSA.
"Login.gov is currently using a vendor with an algorithm that was one of the highest performers in the NIST FRVT study," a GSA spokesperson told The Register Friday. "[We] look forward to continuing to evaluate research, such as the final equity study results once complete, to assess all aspects of its performance and inform future efforts."
It's worth noting that the RiDV study only included results for volunteers who completed testing with all five vendors, which the GSA said means real-world error rates could be higher, since anyone who dropped out in frustration wasn't counted. The study also didn't assess fraud detection performance.
The GSA said it plans to release the final peer-reviewed version of the study in 2025, which will include further analysis of the causes of false negatives and inconclusive results, as well as product performance at each step of the identity verification process.
"GSA looks forward to using the results of the study to help GSA and other federal agencies advance equity in new, modern technologies and deliver services more effectively for the public," an agency spokesperson told us in an emailed statement.
Facial recognition just one part of the RiDV puzzle
LexisNexis's Talcove told us that while he's glad the government is implementing RiDV technology, he's not convinced it should be used on its own.
"The best at 10 percent and the worst at 50/50 is a real challenge - how are you just better than a coin toss," Talcove asked of the products tested.
Talcove said he thinks both the government and commercial sectors have become overly reliant on NIST's IAL2 remote identity proofing standard, leading to a focus on visual identification as the be-all and end-all of RiDV.
"Some people's licenses are in better condition, some have better cameras or internet connections than others," the LexisNexis exec said. "Folks lose weight, or change their appearances."
In other words, people change, so it makes sense that purely visual RiDV tech would fail so often.
"Any time you rely on a single tool, especially in the modern era of generative AI and deep fakes … you are going to have this problem," Talcove said. "I don't think NIST has gone far enough with this workflow."
Talcove said LexisNexis is pushing for a multi-layered approach that, instead of relying on visual identification alone, uses a variety of data points to identify an individual. Those could include things like analyzing the machine a person is using for previous fraud attempts, validating email addresses, or cross-referencing other records.
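As a rough illustration of what such a layered check could look like, here is a minimal sketch that combines several independent signals into a single confidence score. The signal names, weights, and threshold are assumptions made up for this example - they are not LexisNexis's product logic or any standard's requirement.

```python
# Rough sketch of a multi-layered identity check that triangulates several
# independent signals rather than relying on a face match alone. Signals,
# weights, and the decision threshold are illustrative assumptions only.

def layered_identity_score(signals: dict[str, bool]) -> float:
    # Each signal that checks out contributes its weight; the weights are made up.
    weights = {
        "device_clean_history": 0.25,  # device not tied to prior fraud attempts
        "email_validated":      0.15,  # email address is real and long-lived
        "records_cross_match":  0.35,  # name/address/DOB match other records
        "face_match":           0.25,  # selfie matches the submitted ID photo
    }
    return sum(weight for name, weight in weights.items() if signals.get(name, False))


if __name__ == "__main__":
    applicant = {
        "device_clean_history": True,
        "email_validated": True,
        "records_cross_match": True,
        "face_match": False,  # e.g. a poor camera, bad lighting, or a worn license
    }
    score = layered_identity_score(applicant)
    # With multiple layers, one failed check need not sink the whole verification.
    print(f"confidence score: {score:.2f}",
          "-> verified" if score >= 0.6 else "-> needs review")
```

The design point Talcove is making maps onto the sketch: a failed face match becomes one data point among several, rather than an automatic rejection.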
He doesn't even believe facial recognition should be the first-string tool, arguing there are more reliable and user-friendly methods of verifying identity online.
"You need a multi-layered approach with multiple data sources to triangulate an identity," Talcove told us. "It's pretty easy to game [pure facial recognition]."
"What this study shows is that there's a level of risk being injected into government agencies completely relying on one tool," Talcove said. "Some customers are enamored with the IAL2 workflow - which I think is great, but we've gotta go further." ®