Microsoft doesn't want cops using Azure AI for facial recognition
Facial recognition based on body cam footage? Absolutely not ... in our cloud, says Microsoft
An update to Microsoft's Azure OpenAI Service code of conduct makes it clear who Redmond doesn't want using its hottest new tech: cops.
Microsoft's Azure OpenAI Service code of conduct was quietly updated this week to clarify restrictions on law enforcement use of the tech for facial recognition, restrictions that, while present in older versions of the document, were much less robust.
Whereas Azure OpenAI previously prohibited identification of individuals via facial recognition for everyone, "including … state or local police in the United States," the latest version takes those restrictions global.
Along with the existing ban on use by US law enforcement, Microsoft now prohibits "any real-time facial recognition … on mobile cameras used by any law enforcement globally." In other words, body-cam footage can't be processed with facial recognition technology.
The code of conduct specifically mentions "mobile cameras" and "in the wild" environments, which suggests the restriction comes with some carve-outs. As worded, it would seem law enforcement outside the US could still use stationary cameras to perform facial ID, and body camera footage recorded in controlled environments (a police station, for example) might be usable as well.
When asked whether such usage would be allowed, a Microsoft spokesperson told us that its guidelines for the responsible use of facial recognition apply to all customers, including law enforcement. According to the spokesperson, those guidelines "would prevent the technology's usage" on stationary cameras in the wild.
Microsoft declined to say why it changed the policy now, other than to note that it updates the document regularly. Despite that lack of a stated reason, the latest round of changes may have been prompted by Axon, a US company whose law enforcement tech ideas, like taser-equipped drones, have landed it in hot water before.
- TSA wants to expand facial recognition to hundreds of airports within next decade
- Facial recognition tech has outpaced US law – and don't expect the Feds to catch up
- Microsoft promises to tighten access to AI it now deems too risky for some devs
- After IBM axed its face-recog tech, the rest of the dominoes fell like a house of cards: Amazon and now Microsoft. Checkmate
Axon last week announced a new AI product that the company specifically said uses OpenAI's GPT-4 to translate audio captured by body cameras into police reports. While it might not involve video processing, the new tech definitely relies on Microsoft-backed AI to function. It's not immediately clear, however, whether Axon was using the Azure OpenAI Service.
Axon, the largest supplier of body cameras to American law enforcement, has also considered embedding facial recognition in its body cameras, but was ultimately dissuaded by its AI and Policing Technology Ethics Board, which concluded it would be best to avoid the technology.
"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras," Axon said in 2019. "Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time."
Facial recognition technology has been shown on numerous occasions to exhibit racial bias, misidentifying non-white subjects more frequently. Those biases are why some systems have been scrapped, though that hasn't stopped previously canned systems from reappearing. ®