Go ahead, let the unknowable security risks of Windows Copilot onto your PC fleet
Or maybe don't let Microsoft's desire to defeat Google dictate your defensive strategy
Column I am still amazed how few people – even in IT – have heard of Windows Copilot. Microsoft's deep integration of Bing Chat into Windows 11 was announced with much fanfare back in May.
Microsoft hasn't been quiet about it – indeed it can’t seem to shut up about Copilot this and Copilot that – yet it seems that the real impact of this sudden Copilotization of all the things has somehow managed to fly under the radar.
Perhaps enterprise IT managers are currently blocking the rollout of anything Copilot to keep a handle on what people use in the office. That would make sense if folks were reliably in the office in 2023.
Microsoft has rushed to get Copilot into its operating system
This isn't that world. People are working-from-anywhere. So what about that home laptop someone bought during the pandemic to sit through those endless Teams calls? Is that machine covered by enterprise policies?
Those home machines – there’s only a few hundred million of them – are gradually getting the latest Windows 11 upgrade, and many of those users are slapping a technicolor Copilot icon on the taskbar. (Wisely, it's not enabled by default – yet.) The "PRE" embossed on the bottom of the Copilot icon tells you that it's still in preview – in other words, Microsoft's public beta of a very new and still broadly untested technology.
Examining a range of machines on sale at a few electronics retailers this past weekend, I'd reckon the upgrade is about halfway complete. Microsoft aims to have every Windows 11 PC able to upgrade with a Copilot by the time the 23H2 OS update rolls into November.
Windows Copilot looks just like Bing Chat – which may be why IT folks haven't given it a second look. Bing Chat has been available in Microsoft's Edge Browser for months – no biggie.
But Windows Copilot only looks like Bing Chat. While Bing Chat runs within the isolated environment of the web browser, Copilot abandons those safeties. Copilot can touch and change Windows system settings – not all of them (at least not yet) but some of them, with more being added all the time. That means Microsoft's AI chatbot has broken loose of its hermetically sealed browser, and has the run of our PCs.
In theory that should be fine. But if we've learned anything about AI chatbots, it's that theory is a very poor guide to practice. Nice people ask AI chatbots nice questions and get nice answers. Less nice people ask pointed and dangerous questions so that AI chatbots generate nasty answers.
That's enough of a concern when an AI chatbot is running inside a browser – where it could convince someone that it's a good idea to murder a monarch. Outside the confines of the browser that's a threat capable of causing any serious infosec type to break out in a cold sweat.
Every day we learn of new prompt injection attacks – weaponizing the ambiguities of human language (and, sometimes, just the right level of noise) to override the guardrails keeping AI chatbots on the straight and narrow. Consider a prompt injection attack hidden within a Word document: Submitted to Windows Copilot for an analysis and summary, the document also injects a script that silently transmits a copy of the files in the working directory to the attacker.
That sort of potential attack means that Microsoft needs to be very careful exactly what to enable in Windows Copilot, and how to enable it. Unfortunately, the strange-loop nature of AI chatbots means that it's difficult – maybe even impossible – to game out every possible attack scenario. Human language is just too weird, and the AI chatbots themselves are still very poorly understood.
Microsoft has rushed to get Copilot into its operating system. Nadella and co. feel as though they've stumbled on the best opportunity they've ever had to checkmate Google – the boogeyman they imagine as their biggest competitor. But this year has not been a stellar one for Microsoft's security profile – nor for the way it's fronted up about those issues.
When things start to go pear-shaped with Windows Copilot, will we know? Does Microsoft really believe it can win the race against a generation of Black Hat hackers who use language as a weapon? Or will we see this feature removed after release, as Microsoft rethinks desktop security in the age of pervasive AI? ®