Homeland Security will test out using genAI to train US immigration officers

It's all about privacy, civil rights, civil liberties, ok?

The US Department of Homeland Security (DHS) has an AI roadmap and a trio of test projects to deploy the tech, one of which aims to train immigration officers using generative AI. What could possibly go wrong?

No AI vendors were named in the report, which claimed the use of the tech was intended to help trainees better understand and retain "crucial information," as well as to "increase the accuracy of their decision-making process."

It's said that OpenAI, Anthropic, and Meta, as well as cloud giants Microsoft, Google, and Amazon, have provided Homeland Security with AI technology and online services to experiment with, with regard to training and decision making.

"US Citizenship and Immigration Services (USCIS) will pilot using LLMs to help train Refugee, Asylum, and International Operations Officers on how to conduct interviews with applicants for lawful immigration," Uncle Sam's roadmap [PDF], released last night, explains.

Despite recent work on mitigating inaccuracies in AI models, LLMs have been known to generate inaccurate information with the type of confidence that might bamboozle a young trainee.

The flubs – referred to as "hallucinations" – make it hard to trust the output of AI chatbots, image generation and even legal assistant work, with more than one lawyer getting into trouble for citing fake cases generated out of thin air by ChatGPT.

LLMs have also been known to exhibit racial and gender bias when deployed in hiring tools and in facial recognition systems, and can even exhibit racist biases when processing words, as shown in a recent paper in which various LLMs made decisions about a person based on a series of text prompts. The researchers reported in their March paper that LLM decisions about people using African American dialect reflect racist stereotypes.

Nevertheless, DHS claims it is committed to ensuring its use of AI "is responsible and trustworthy; safeguards privacy, civil rights, and civil liberties; avoids inappropriate biases; and is transparent and explainable" to workers and folks being processed. It doesn't say what safeguards are in place, however.

The agency claims the use of generative AI will allow DHS to "enhance" immigration officer work, with an interactive application using generative AI under development to assist in officer training. The goal includes limiting the need for retraining over time.

The larger DHS report outlines the Department's plans for the tech more generally, and, according to Alejandro N Mayorkas, US Department of Homeland Security Secretary, "is the most detailed AI plan put forward by a federal agency to date."

The other two pilot projects will involve using LLM-based systems in investigations and applying generative AI to the hazard mitigation process for local governments.

History repeating

The DHS has used AI for more than a decade, including machine learning (ML) tech for identity verification. Its approach can best be described as controversial, with the agency on the receiving end of legal letters over using facial recognition technology. However, the US has pushed ahead despite disquiet from some quarters.

Indeed, the DHS cites AI as something it is using to make travel "safer and easier" – who could possibly object to having a photo taken to help navigate the security theater that is all too prevalent in airports? It is, after all, still optional.

Other examples of AI use given by the DHS include trawling through older images to identify previously unknown victims of exploitation, assessing damage after a disaster, and picking up smugglers by identifying suspicious behavior.

In its roadmap, the DHS noted the challenges that exist alongside the opportunities. AI tools are just as accessible to threat actors as they are to the authorities, and the DHS worries that larger-scale attacks are within reach of cybercriminals, as are attacks on critical infrastructure. And then there is the threat from AI-generated content.

A number of goals have been set for 2024. These include creating an AI Sandbox in which DHS users can play with the technology and hiring 50 AI experts. It also plans a HackDHS exercise in which vetted researchers will be tasked with finding vulnerabilities in its systems. ®
