US govt to use software to finger immigrants as potential crims? That's really dumb – boffins

Algorithms will label innocent people terrorists, DHS warned

A group of 54 computer scientists and academic researchers on Thursday asked the US Department of Homeland Security to rethink its plan for employing software algorithms to determine whether immigrants to the country should be admitted or deported.

To implement various White House executive orders to limit immigration through "extreme vetting," DHS's Immigration and Customs Enforcement (ICE) agency earlier this year outlined its intention to automate its screening process.

In its statement of objectives, ICE said it must develop a means of predicting what people will do in the future. The agency said it's looking for a system that allows it "to assess whether an applicant intends to commit criminal or terrorist acts after entering the United States."

The technical experts writing to DHS contend its extreme vetting plan is extremely unlikely to work.

"Simply put, no computational methods can provide reliable or objective assessment of the traits that ICE seeks to measure," says the group's letter to DHS Acting Secretary Elaine Duke. "In all likelihood, the proposed system would be inaccurate and biased. We urge you to reconsider your program."

The researchers note that characteristics sought by the government – e.g. whether an individual will become a "positively contributing member of society" – are ill-defined, both for policymakers trying to come up with workable rules and for programmers trying to turn that concept into code.

"Algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity," the letter states.

A separate letter sent to DHS by 56 civil liberties groups, including the Brennan Center for Justice, Georgetown Law’s Center on Privacy and Technology, and the Electronic Frontier Foundation, offers similar criticism and chides ICE for demanding a system that will "generate a minimum of 10,000 investigative leads annually" rather than reporting leads that would actually be appropriate to investigate.

In an email to The Register, Jeff Bigham, associate professor at Carnegie Mellon's Human-Computer Interaction Institute, echoed the concerns of the researchers, noting that he counts several as colleagues.

"It is dangerous to make impactful decisions based on black box machine learning models that can't be inspected or verified," said Bigham. "We now have numerous examples of such models picking up on patterns in the data on which they were trained that do not correspond to meaningful insights from past examples. For instance, such models routinely pick up and then replicate discriminatory bias exhibited by humans who have in the past made these decisions."

Bigham said he found it particularly troubling that there's no information about the data models being considered for making predictions.

"As a result, biased data will likely be used to predict outcomes of unknown value with unknown reliability," he said. "It is simply unacceptable that real human lives be dramatically affected by the unknown influences put upon an opaque algorithm." ®
