Microsoft picks perfect time to dump its AI ethics team
Machine-learning safety and controls now everyone's problem
Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine learning technology.
The decision to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January, which will continue rolling through the IT titan into next year.
The hit to this particular unit may remove some guardrails meant to ensure Microsoft's products that integrate machine learning features meet the mega-corp's standards for ethical use of AI. And it comes as discussion rages about the effects of controversial artificial intelligence models on society at large.
Baking AI ethics into the whole business – as something for all employees to consider – seems kinda like when Bill Gates told his engineers in 2002 to make security an organization-wide priority, which obviously went really well. You might think a dedicated team overseeing that internally would be helpful.
Platformer first reported the layoffs in the ethics and society group and cited unnamed current and former employees. The group was supposed to advise teams as Redmond accelerated the integration of AI technologies into a range of products – from Edge and Bing to Teams, Skype, and Azure cloud services.
Microsoft still has in place its Office of Responsible AI, which works with the company's Aether Committee and Responsible AI Strategy in Engineering (RAISE) to spread responsible practices across operations in day-to-day work. That said, employees told the newsletter that the ethics and society team played a crucial role in ensuring those principles were directly reflected in how products were designed.
A Microsoft spokesperson told The Register that the impression that the layoffs meant the tech goliath is cutting its investment in responsible AI is wrong. The unit was key in helping to incubate a culture of responsible innovation as Microsoft got its AI efforts underway several years ago, we were told, and now Microsoft executives have adopted that culture and seeded it throughout the company.
"That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft," the spokesperson said.
"Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes."
- OpenAI claims GPT-4 will beat 90% of you in an exam
- Microsoft and GM deal means your next car might talk, lie, gaslight and manipulate you
- Microsoft dips Teams in the metaverse vat with avatars ahead
- Here's how Microsoft hopes to inject ChatGPT into all your apps and bots via Azure
There are hundreds of people working on these issues across Microsoft "including net new, dedicated responsible AI teams that have since been established and grown significantly during this time, including the Office of Responsible AI, and a responsible AI team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service," they added.
By contrast, fewer than ten people on the ethics and society team were affected, and some were moved to other parts of the biz, including the Office of Responsible AI and the RAIL unit.
Death by many cuts
According to the Platformer report, the team had been shrunk from about 30 people to seven through a reorganization within Microsoft in October 2022.
Team members lately had been investigating potential risks involved with Microsoft's integration of OpenAI's technologies across the organization. Unnamed sources reportedly said CEO Satya Nadella and CTO Kevin Scott were anxious to get those technologies integrated into products and out to users as fast as possible.
Microsoft is investing billions of dollars into OpenAI – a startup whose products include DALL-E 2 for generating images, GPT for text (OpenAI this week introduced its latest iteration, GPT-4), and Codex for developers. Meanwhile, OpenAI's ChatGPT is a chatbot trained on mountains of data from the internet and other sources that takes in prompts from humans – "Write a two-paragraph history of the Roman Empire," for example – and spits out a written response.
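That prompt-in, text-out loop boils down to sending structured JSON to a model endpoint. A minimal sketch in Python of how such a request body is typically assembled, assuming OpenAI-style chat message formatting – the model name and field names here are illustrative, not a definitive API reference:

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-4") -> str:
    """Serialize a single-turn, chat-style completion request body.

    The payload shape (a "model" field plus a list of role-tagged
    "messages") mirrors OpenAI's chat API convention, but is shown
    here purely as an illustration of the prompt/response pattern.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

# Build the request for the example prompt from the article.
request = build_chat_request(
    "Write a two-paragraph history of the Roman Empire."
)
print(request)
```

In production this JSON would be POSTed to the service's completions endpoint with an API key; the response comes back as another JSON object containing the generated text.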
Microsoft is also integrating a new large language model into its Edge browser and Bing search engine in hopes of chipping away at Google's dominant position in search.
Since being opened up to the public in November 2022, ChatGPT has become the fastest consumer app to reach 100 million users, crossing that mark in February. However, problems with the technology – and with similar AI apps like Google's Bard – cropped up fairly quickly, ranging from wrong answers to offensive language and gaslighting.
The rapid innovation and mainstreaming of these large language model AI systems is fuelling a larger debate about their impact on society.
Redmond will shed more light on its ongoing AI strategy during an event on March 16 hosted by Nadella and titled "The Future of Work with AI," which The Register will be covering. ®