GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection
That means Brit spies want the ability to do exactly that, huh?
The idea that AI could generate super-potent and undetectable malware has been bandied about for years – and also already debunked. However, a report published today by the UK National Cyber Security Centre (NCSC) suggests there is a "realistic possibility" that by 2025 the tools of the most sophisticated attackers will improve markedly, thanks to AI models trained on data describing successful cyber-hits.
"AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data," the report by the GCHQ-run NCSC claimed. "There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose."
Although the most advanced use cases will likely come in 2026 or later, the most effective generative AI tools will be in the hands of the most capable attackers first – and those tools will potentially usher in many other advantages for them.
AI is set to make the discovery of vulnerable devices easier, the NCSC predicted, shrinking the window defenders have to patch those devices with the latest security fixes before attackers detect and compromise them.
Once initial access to systems has been established, AI is also expected to make the real-time analysis of data more efficient. That will mean attackers can identify the most valuable files more quickly before beginning exfiltration – potentially increasing the effectiveness of disruption, extortion, and espionage operations.
"Expertise, equipment, time, and financial resourcing are currently crucial to harness more advanced uses of AI in cyber operations," the report reads. "Only those who invest in AI, have the resources and expertise, and have access to quality data will benefit from its use in sophisticated cyber attacks to 2025. Highly capable state actors are almost certainly best placed amongst cyber threat actors to harness the potential of AI in advanced cyber operations."
Attackers with more modest skills and resources will also benefit from AI over the next two years, the report predicts.
At the lower end, cyber criminals who employ social engineering are expected to enjoy a significant boost thanks to the wide-scale uptake of consumer-grade generative AI tools such as ChatGPT, Google Bard, and Microsoft Copilot.
It's likely we'll be seeing far fewer amateur-hour phishing emails, and instead reading more polished, plausible prose tailored to the target's locale. A lack of language proficiency will become a much less obvious giveaway.
For ransomware gangs, the data analysis benefits afforded criminals post-breach could allow for more effective data extortion attempts.
Ransomware players often steal hundreds of gigabytes of data at a time – most of it ancient documents containing little of value. The NCSC predicts that with more advanced, AI-driven tools, criminals will find it easier to identify the most valuable data available to them and hold that to ransom – potentially commanding far greater payouts.
Those with the greatest ambitions may also target data that could help them train their own AI tooling and push their capabilities closer to those of the most sophisticated nation-states.
"Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing, and coding. This trend will almost certainly continue to 2025 and beyond," the report states.
"Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network accesses that cyber criminals need to carry out ransomware attacks or other cyber crime. It is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term."
All this is expected to intensify the challenges faced by UK cyber security practitioners over the coming years – and they’re already struggling with today’s threats.
Cyber attacks will "almost certainly" increase in volume and impact over the next two years, directly influenced by AI, the report concludes.
The NCSC will be keeping a watchful eye on AI. Delegates at its annual CYBERUK conference in May can expect the event to be themed around the emerging tech – exploring in greater depth the considerable threat it presents to national security.
"We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat," declared the NCSC's outbound CEO Lindy Cameron today.
"The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.
"As the NCSC does all it can to ensure AI systems are secure by design, we urge organizations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defenses and boost their resilience to cyber attacks."
Today's report comes just a few months after the inaugural AI Safety Summit was held in the UK. That summit produced the Bletchley Declaration – a multinational commitment to manage AI's risks and ensure the technology's responsible development.
It's just one of many initiatives governments have launched as they wake up to the threat AI poses to cyber security and civil society.
Another outcome of the AI Safety Summit was a plan for AI safety testing, which will see the biggest AI developers open their most powerful models to government evaluation before and after release – the idea being to catch dangerous capabilities before they spread widely.
That said, the 'plan' is just that – it's not legally binding and doesn't have the backing of the countries the West is most wary of. Which raises the obvious question of how useful it's going to be in real terms. ®