Microsoft promises to tighten access to AI it now deems too risky for some devs

Deep-fake voices, face recognition, emotion, age and gender prediction ... A toolbox of theoretical tech tyranny

Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the use of its facial recognition and generative audio models in Azure.

The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.

This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.

"The need for this type of practical guidance is growing," Microsoft's Chief Responsible AI Officer Natasha Crampton said in a statement. "AI is becoming more and more a part of our lives, and yet, our laws are lagging behind. They have not caught up with AI's unique risks or society's needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we need to work towards ensuring AI systems are responsible by design."

And so, in a bid to steer machine-learning developers away from causing strife and misery using its technologies, Microsoft is winding down access to tools trained to classify people's gender, age, and emotions, and to analyse their smile, facial hair, hair, and makeup, via its Face API in Azure. New customers will not be able to use this API in Microsoft's cloud, and existing customers have until June 30, 2023, to migrate to other services before the software is officially retired.
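For context, this is the sort of attribute-analysis request that is going away. The following is a minimal sketch using the azure-cognitiveservices-vision-face Python SDK; the endpoint, subscription key, and image URL are placeholders, and exact attribute support varied by API version.

    # A minimal sketch of the kind of Face API call being retired.
    # Assumes: pip install azure-cognitiveservices-vision-face
    # The endpoint, key, and image URL below are placeholders.
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials

    face_client = FaceClient(
        "https://<your-resource>.cognitiveservices.azure.com/",
        CognitiveServicesCredentials("<your-subscription-key>"),
    )

    # Request the soon-to-be-retired attribute predictions: age, gender,
    # and emotion, plus smile, facial hair, hair, and makeup analysis.
    faces = face_client.face.detect_with_url(
        url="https://example.com/photo.jpg",
        return_face_attributes=[
            "age", "gender", "emotion",
            "smile", "facialHair", "hair", "makeup",
        ],
    )

    for face in faces:
        attrs = face.face_attributes
        print(f"age={attrs.age}, gender={attrs.gender}, smile={attrs.smile}")

Plain face detection and the geometry-level features remain available; it is the requests for these inferred human attributes that new customers will be refused and existing customers must migrate away from.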

Though these capabilities will not be offered via its API platform, they will still be used in other parts of Microsoft's empire. For example, the features will be integrated into Seeing AI, an app that identifies and narrates descriptions of people and objects for those with visual impairments.

Experts have long debated the accuracy and value of machine-learning models used to predict people's feelings and gender, and have raised privacy concerns besides. It is feared that, through incorrect or unfair classifications, AI-powered software will automatically make some people's lives difficult, and that use of this technology should therefore be regulated or restricted.

Access to other Microsoft tools considered risky, such as realistic-sounding audio generation (putting words in someone's mouth) and facial recognition (useful for surveillance), will also be restricted. New customers will have to apply to use the tools, and Microsoft will assess whether the applications they want to build are appropriate. Existing customers will likewise need to obtain permission to continue using these tools in their products from June 30, 2023.

Mimicking the sound of someone's voice using generative AI models is no longer allowed without the speaker's consent, and products and services built with Microsoft's Custom Neural Voice software will have to disclose that the voices are fake. Use guidelines for the company's facial recognition tools are also stricter when the technology is deployed in public spaces, and the tools cannot be used to track individuals for surveillance purposes.

In 2020, Microsoft pledged to stop selling facial-recognition software to state and local police in the US until the technology is regulated by federal law. Two years on, the company is still sticking to that promise. The federal government, however, can still use these tools, though it will have to abide by the updated use guidelines, a Microsoft spokesperson told us.

"This is a critical norm-setting period for AI," Crampton added to the New York Times. "We hope to be able to use our standard to try and contribute to the bright, necessary discussion that needs to be had about the standards that technology companies need to be held to." ®
