Tackling potty-mouth chatbots to leaky LLMs. What's life like in Microsoft's AI red team?

Group founder reveals all to The Reg

Interview Sometimes people want to abuse AI systems to leak corporate secrets. Other times they just want to force an LLM to talk like a pirate.

Either way, it's the AI red team's job to anticipate how someone may try to weaponize these systems, and then help address those the issues.

Ram Shankar Siva Kumar founded Microsoft's AI red team in 2019, long before the rise of ChatGPT. Kumar and his colleagues have encountered various AI failures, and you can hear about them and more in an interview with The Register below.

Youtube Video

"The common misconception is: AI red teams can replace all traditional red teaming," he said. "That's absolutely not the case."

That being said, there are certain new-ish concerns that are very specific to AI and go beyond more traditional security failures. This includes things like biases in ML models or triggers for hallucinations.

In finding and fixing these failures, Kumar says it's useful to assume different attacker personas, and in our discussion he gave examples of each.

"The goal of the AI red team is not just to think about what is possible today with the current state of adversaries," he said. "We also want to push the boundaries [with] attacks that have not yet manifested." ®

More about


Send us news

Other stories you might like