Microsoft enlarges its cockpit of Copilots to include security

It starts with chat bots inventing D&D campaigns and ends with AI all over your Excel and network logs

Microsoft's sprint to push generative AI into all parts of its broad portfolio is reaching the cybersecurity realm with the introduction today of Security Copilot, a GPT-4-based service that might assist security teams pushing back against modern threats.

Security Copilot is supposed to help security professionals identify current attacks and anticipate coming threats, respond more quickly, suss out who the attackers are, and detect threats that would otherwise be missed by connecting the dots across its security data, according to Microsoft executives, who introduced the new tool during the company's inaugural Microsoft Security virtual event.

It's similar in some ways to the domain-specific AI-powered Copilot initiatives that Redmond already put in place in Microsoft 365, Dynamics 365, GitHub, Bing, and Edge, though with a focus on cybersecurity.

Security Copilot is designed to ingest and make sense of huge amounts of data – including the 65 trillion security signals Microsoft pulls in every day – and essentially scale an enterprise's security capabilities to meet the rapidly increasing volume and sophistication of threats.

Since September 2021, the number of password attacks per second has risen from 579 to 1,287, according to Vasu Jakkal, corporate vice president of security, compliance, and identity at Microsoft. The median time for an attacker to access private data once a victim falls for a phishing email is one hour and 12 minutes, apparently.

While cybersecurity is about planning defenses to reduce complexity, cost, and risk, "it's also a real-time intelligence game for us," Jakkal said during the event. "It's about how we translate our products and the trillions of threat signals we see every day into one feedback cycle to improve operational security posture."

We can't help but point out that GPT-4, the non-intelligent brains behind this security service, was described two weeks ago as "flawed" and "still limited" by the CEO of OpenAI, the outfit that built the model with the Windows giant.

A change in how AI is used

Security Copilot, which runs on Azure, also marks a change in the way AI systems and machine learning are used, said John Lambert, distinguished engineer and corporate vice president for security at Microsoft.

"ML is commonplace, but it's often deep inside the tech," Lambert said. "Customers benefit from it, but they couldn't really interact with it directly. We're going from a world of task-based machine learning – good at phishing or ransomware – to generative AI based on foundation models. A world with Copilots that can simplify the complex, catch what others miss, and address the cybersecurity talent gap by bridging critical knowledge gaps. Copilots will upskill defenders everywhere."

Microsoft is investing billions of dollars in OpenAI and aggressively integrating the upstart's technology into its products. This strategy – and Microsoft's hype around it – has garnered both praise for the possibilities and criticism for incorrect answers and odd behavior, adding fuel to the ongoing debate about AI ethics.

Forrester senior analyst Allie Mellen falls on the "pro" side of Security Copilot, telling The Register "this is the first time a product is poised to deliver on true improvement to investigation and response with AI."

"With this announcement, we leave an era behind where AI was relegated to detection and enter one where AI has the potential to improve one of the most important issues in security operations: analyst experience," Mellen added.

Takes in internal and external data

We're told Security Copilot, which is in private preview and thus not available to world-plus-dog for the time being, combines OpenAI's chat-bot tech with Microsoft's internal security-specific model of skills and signals, and will link to such security products as Microsoft Defender, Sentinel, Purview, and Intune. The idea is that you can query this system just as you'd ask ChatGPT and similar services a question, and it'll answer using Redmond's custom security-specific model.

Those queries could be general ones – such as, explain this vulnerability to me – or ones specific to your environment, such as: is there evidence in my logs of exploitation of a particular flaw in Exchange? The system will drill into what it knows, and what it can find out from your IT telemetry.
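Microsoft hasn't published a programmatic interface for Security Copilot – it's still in private preview – but to make the query pattern concrete, here's a minimal Python sketch of how a natural-language question might be posed to a service like this. The endpoint URL, request fields, and ask_copilot helper are illustrative assumptions of ours, not Microsoft's actual interface:

```python
# Hypothetical sketch only: Security Copilot is in private preview and
# Microsoft has not published an API. The endpoint, fields, and auth
# scheme below are illustrative assumptions, not the real interface.
import requests

COPILOT_URL = "https://example.invalid/security-copilot/query"  # placeholder

def ask_copilot(prompt: str, token: str) -> str:
    """Send one natural-language security question; return the answer text."""
    resp = requests.post(
        COPILOT_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "prompt": prompt,
            # Grounding sources the article says the service links to:
            "sources": ["defender", "sentinel", "purview", "intune"],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["answer"]

# A general question...
print(ask_copilot("Explain this vulnerability to me: CVE-2021-34473",
                  token="<your-token>"))
# ...and one grounded in your own environment's telemetry.
print(ask_copilot("Is there evidence in my logs of exploitation of that Exchange flaw?",
                  token="<your-token>"))
```

The point of the pattern is that the same conversational front end handles both kinds of question, with the service deciding which connected telemetry to consult.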

The generative AI focus is important, Jakkal said. As with other instances of GPT or ChatGPT integration into Microsoft products, Security Copilot won't always get everything correct, she said, noting that it once referred in an answer to Windows 9, an operating system that never existed. But it will learn from feedback on such mistakes and improve over time.

The security logs and other data organizations feed into the tool for analysis will be protected by keeping that information within their walls, according to the US giant. The security-only tool works on a closed-loop learning system, so the data won't be used to train the foundational AI models, we're assured.
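Microsoft hasn't said how that closed loop works under the hood. As a rough sketch of the pattern being described – analyst feedback retained inside the customer's tenant rather than fed back into foundation-model training – here's one way it could look in Python; every name here is hypothetical:

```python
# Illustrative sketch of a closed-loop feedback pattern, assuming a
# tenant-local store; none of these names come from Microsoft's product.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class Feedback:
    prompt: str      # what the analyst asked
    answer: str      # what the model replied
    correct: bool    # analyst's verdict, e.g. flagging the "Windows 9" slip
    note: str        # optional correction supplied by the analyst
    timestamp: str   # when the verdict was recorded

TENANT_STORE = Path("tenant_feedback.jsonl")  # stays inside the customer's walls

def record_feedback(prompt: str, answer: str, correct: bool, note: str = "") -> None:
    """Append the analyst's verdict to tenant-local storage. In this
    sketch the record tunes answers for this tenant only; it is never
    shipped off to retrain the underlying foundation model."""
    entry = Feedback(prompt, answer, correct, note,
                     datetime.now(timezone.utc).isoformat())
    with TENANT_STORE.open("a") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")
```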

Prompts and pinboards

As we said, the information enterprises can draw from the tool is broad. It will respond to natural language questions to detail the security threats from the previous month, list incidents in the enterprise, summarize a vulnerability, or show alert information from other security tools.

Security Copilot includes a developing list of prompts security pros can use, as well as collaboration capabilities, such as a pinboard section to enable group work. The AI tool can create graphs and slideshows from the data it ingests, and can reverse engineer a threat or vulnerability, it is claimed.

In some ways, using generative AI in cybersecurity will enable security pros to keep up with threat groups, which are incorporating ChatGPT and similar tools into their malicious operations. ®
