Microsoft picks perfect time to dump its AI ethics team

Machine-learning safety and controls now everyone's problem


Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine learning technology.

The decision to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January, which will continue rolling through the IT titan into next year.

The hit to this particular unit may remove some guardrails meant to ensure Microsoft's products that integrate machine learning features meet the mega-corp's standards for ethical use of AI. And it comes as discussion rages about the effects of controversial artificial intelligence models on society at large.

Baking AI ethics into the whole business – as something for all employees to consider – seems kinda like when Bill Gates told his engineers in 2002 to make security an organization-wide priority, which obviously went really well. You might think a dedicated team overseeing that internally would be helpful.

Platformer first reported the layoffs in the ethics and society group and cited unnamed current and former employees. The group was supposed to advise teams as Redmond accelerated the integration of AI technologies into a range of products – from Edge and Bing to Teams, Skype, and Azure cloud services.

Microsoft still has in place its Office of Responsible AI, which works with the company's Aether Committee and Responsible AI Strategy in Engineering (RAISE) to spread responsible practices across operations in day-to-day work. That said, employees told the newsletter that the ethics and society team played a crucial role in ensuring those principles were directly reflected in how products were designed.

A Microsoft spokesperson told The Register it would be wrong to take the layoffs as a sign that the tech goliath is cutting its investment in responsible AI. The unit was key in helping to incubate a culture of responsible innovation as Microsoft got its AI efforts underway several years ago, we were told, and that culture has since been adopted by Microsoft executives and seeded throughout the company.

"That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft," the spokesperson said.

"Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes."

There are hundreds of people working on these issues across Microsoft "including net new, dedicated responsible AI teams that have since been established and grown significantly during this time, including the Office of Responsible AI, and a responsible AI team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service," they added.

By contrast, fewer than ten people on the ethics and society team were affected by the layoffs, and some were moved to other parts of the biz, including the Office of Responsible AI and the RAIL unit.

Death by many cuts

According to the Platformer report, the team had been shrunk from about 30 people to seven through a reorganization within Microsoft in October 2022.

Team members lately had been investigating potential risks involved with Microsoft's integration of OpenAI's technologies across the organization. Unnamed sources reportedly said CEO Satya Nadella and CTO Kevin Scott were anxious to get those technologies integrated into products and out to users as fast as possible.

Microsoft is investing billions of dollars into OpenAI – a startup whose products include DALL-E 2 for generating images, GPT for text (OpenAI this week introduced its latest iteration, GPT-4), and Codex for developers. Meanwhile, OpenAI's ChatGPT is a chatbot trained on mountains of data from the internet and other sources that takes in prompts from humans – "Write a two-paragraph history of the Roman Empire," for example – and spits out a written response.

Microsoft also is integrating a new large language model into its Edge browser and Bing search engine in hopes of chipping away at Google's dominant position in search.

Since being opened up to the public in November 2022, ChatGPT has become the fastest app to reach 100 million users, crossing that mark in February. However, problems with the technology – and with similar AI apps like Google's Bard – cropped up fairly quickly, ranging from wrong answers to offensive language and gaslighting.

The rapid innovation and mainstreaming of these large language model AI systems is fuelling a larger debate about their impact on society.

Redmond will shed more light on its ongoing AI strategy during an event on March 16 hosted by Nadella and titled "The Future of Work with AI," which The Register will be covering. ®
