Clean up this hot sticky facial-recog mess for us, Microsoft begs politicos

Redmond also insists ICE is not using its AI to snare immigrants, split families at the border

Microsoft has urged US Congress to regulate the American government's use of facial-recognition technology provided by, er, Microsoft and others.

This plea comes just weeks after the Windows giant came under heavy fire for offering facial-recognition services to Uncle Sam's controversial Immigration and Customs Enforcement (ICE) agency.

After young children were separated from their immigrant parents by ICE agents at America's borders, under President Donald Trump’s zero-tolerance immigration crackdown, several corporations, including Microsoft, Thomson Reuters, and Amazon, were slammed for working with the nation's border cops.

In January, Microsoft general manager Tom Keane said the company's Azure Government cloud platform had been hired by ICE to “process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.”

"We're proud to support this work with our mission-critical cloud," Keane said at the time. Now, Microsoft insists that, actually, ICE isn't using Azure's face-recognition services.

“We’ve since confirmed that the contract in question isn’t being used for facial recognition at all,” said Redmond president and legal chief Brad Smith on Friday. "Nor has Microsoft worked with the US government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected."

Government needs to set standards

Changing tack, Smith called on US Congress to look into how best to regulate Microsoft's technology, to prevent federal agencies and departments from using it to carry out racial profiling, invasions of privacy, and similarly sticky operations.

Microsoft admitted it is "far from perfect" at stopping its machine-learning code from falling prey to prejudice and missteps, acknowledging that its software suffers from biases in its training data and makes mistakes.

However, rather than exclusively police the use of its software and services itself, Microsoft wants laws and rules introduced to prevent any abuse by Uncle Sam. Thus, any public outcry over the misuse of AI systems conveniently becomes the US government's problem, not Redmond's nor that of any other tech giant.

“The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself,” Smith said.

“The competitive dynamics between American tech companies – let alone between companies from different countries – will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.”

Microsoft presented a list of questions US politicians should address, including whether companies should obtain people's permission before collecting their facial data, whether the technology could be used as evidence in criminal cases, and whether systems should be required to meet a minimum accuracy threshold.

“We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States," said Smith. "This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology." ®
