
TSA wants to expand facial recognition to hundreds of airports within next decade

Digital rights folks, as you can imagine, want the tech grounded

America's Transportation Security Administration (TSA) intends to expand its facial-recognition program, used to screen US air travel passengers, to 430 domestic airports in under a decade.

The TSA's program, which uses Idemia's biometric technology, has come under fire from some privacy and civil-rights organizations, which argue the software amounts to large-scale surveillance that does little to stop terror in the skies. 

The government agency, unsurprisingly, has a differing opinion of its facial-recognition systems, currently being trialed at 25 airports across the country. The technology has been assessed over the past two years, and has boosted the efficiency of identity verification without infringing on travelers' privacy rights, a TSA spokesperson told us.

The technology essentially scans a passenger's face, and automates the process of checking that the person showing up to catch a flight is who they say they are, that they are the person expected to be there, and that they're not subject to any additional security checks or barred from flying entirely.

"Right now we are at six percent fully operational capacity," TSA press secretary Carter Langston said in an interview with The Register.

"We would have to expand rapidly to get to 430 — nothing in the federal government happens rapidly. It will take us years to get from six percent up to 100 percent."

When asked if "years" means more or less than a decade, Langston said it will take less than 10 years to reach 430 airports.

The agency's plan to expand its facial-recognition program, first reported by Fast Company on Friday, "is about transportation security," Langston added, noting that participation in TSA's biometric, mobile driver's license, and digital ID programs is entirely voluntary.

Those pilot programs, being conducted at 25 airports, have shown that using biometrics does four key things very well, he said.

First, it improves identity verification. "It can detect a fake ID very quickly," Langston said.

Next, it verifies that the person pictured on the identification card is the same person standing at the TSA podium. It also confirms the person is, in fact, traveling in the next 24 hours, and whether they have PreCheck, regular screening status, or are on a list to receive additional screening, Langston said.

And, assuming it doesn't make any mistakes, the tech does all of this quickly and effectively, ensuring shorter wait times and happier travelers — not to mention better airport and airplane safety — or so it's argued.

"It identifies those four very key and critical elements in identity verification, which are the lynch pin for transportation security," Langston said. "And it does so without any decrease in efficiency — at the same level of efficiency as does a manual verification process." 

The Electronic Privacy Information Center (EPIC) has a different view on the airport facial scans. EPIC has urged [PDF] Congress to suspend TSA's use of facial-recognition technology, and earlier this year supported a group of senators who called for an end to the pilot program.

Safety first? Or 'dangerous surveillance'?

"Facial recognition is an invasive and dangerous surveillance technology," wrote Jeramie Scott, director of EPIC's Project on Surveillance Oversight, in a June 30 write-up.

"When the government moves forward with pilot programs that will, if fully implemented, subject millions of people on a daily basis to the technology that should give us all pause."

Scott called the pilot "a mistake," citing not just privacy but also bias concerns — the technology has been known to misidentify women and people with darker skin — as well as a lack of regulatory oversight.

"This is a mistake — not only because of the ongoing privacy and bias issues but because of the long term implications of using our face as our ID," Scott said.

"This is because the US lacks an overarching law to regulate the use of facial recognition to ensure the necessary transparency, accountability, and oversight to protect our privacy, civil liberties, and civil rights," he added.

Langston said the TSA has addressed all of the above concerns.

In terms of data privacy: live photos and ID photos are overwritten by the next passenger's scan, we're told. They only remain in RAM and are purged when the officer logs off or turns off the machine, which happens automatically after 30 minutes of non-use. That is to say, there is no log kept of people's faces from these terminals as they go through airport security. That said, you'll still be on airport CCTV ... and if you have government-issued photo ID then the government will have what you look like on file anyway.
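The retention scheme described to us maps onto a familiar pattern: keep the images in volatile memory only, overwrite them with each new scan, and purge on logoff or after 30 minutes of inactivity. A toy model of that lifecycle, our own illustration rather than anything the TSA has published, might look like this:

```python
import time

IDLE_TIMEOUT_SECONDS = 30 * 60  # purge after 30 minutes of non-use, per the TSA's description


class VerificationTerminal:
    """Toy model of the retention behavior described above: images live in RAM only,
    are overwritten by the next scan, and are dropped on logoff or after idling."""

    def __init__(self) -> None:
        self._current_scan = None              # at most one passenger's photos at a time
        self._last_activity = time.monotonic()

    def scan_passenger(self, live_photo: bytes, id_photo: bytes) -> None:
        self._purge_if_idle()
        self._current_scan = (live_photo, id_photo)  # previous passenger's images are overwritten
        self._last_activity = time.monotonic()

    def officer_logoff(self) -> None:
        self._current_scan = None              # explicit purge on logoff

    def _purge_if_idle(self) -> None:
        if time.monotonic() - self._last_activity > IDLE_TIMEOUT_SECONDS:
            self._current_scan = None          # automatic purge after the idle window
```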

"All of those privacy concerns have been addressed in assessments, in working with privacy advocacy groups," Langston said. We should note: the assessments are not public, so The Register can't verify the findings.

"In terms of skin tone and misidentifying people: our assessment has revealed this algorithm we are using is the top of the line as published in the NIST studies," for accuracy across race, gender, and ethnicity, he added.

In June 2021, the TSA worked with Homeland Security's Science and Technology Directorate to review how well the technology identified people and, according to the agency, found no consistent, statistically significant differences in how the software performed across gender, race, and skin tone.

And, of course, if people aren't comfortable having Uncle Sam scan their face at the airport, they can always opt out. At least for now. ®
