UK and US lead international efforts to raise AI security standards
17 countries agree to adopt vision for artificial intelligence security as fears mount over pace of development
The UK's National Cyber Security Centre (NCSC) and the US's Cybersecurity and Infrastructure Security Agency (CISA) have published official guidance for securing AI applications – a document the agencies hope will ensure that safety is inherent in AI's development.
The British cyber security agency, an offshoot of GCHQ, says the guidance document is the first of its kind and has been endorsed by 17 other countries.
Driving the publication is the long-running fear that security will be treated as an afterthought as providers of AI systems scramble to keep up with the pace of development.
Lindy Cameron, CEO at the NCSC, earlier this year said the tech industry has a history of leaving security as a secondary consideration when the pace of technological development is high.
Today, the Guidelines for Secure AI System Development again drew attention to this issue, adding that AI will invariably be exposed to novel vulnerabilities too.
"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," said Cameron.
"These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.
"I'm proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realize this technology's wonderful opportunities."
The guidelines adopt a secure-by-design approach, ideally helping AI developers make the most cyber-secure decisions at all stages of the development process. They'll apply to applications built from the ground up and to those built on top of existing resources.
The full list of countries that endorse the guidance, along with their respective cybersecurity agencies, is below:
- Australia – Australian Signals Directorate's Australian Cyber Security Centre (ACSC)
- Canada – Canadian Centre for Cyber Security (CCCS)
- Chile - Chile's Government CSIRT
- Czechia - Czechia's National Cyber and Information Security Agency (NUKIB)
- Estonia - Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
- France - French Cybersecurity Agency (ANSSI)
- Germany - Germany's Federal Office for Information Security (BSI)
- Israel - Israeli National Cyber Directorate (INCD)
- Italy - Italian National Cybersecurity Agency (ACN)
- Japan - Japan's National Center of Incident Readiness and Strategy for Cybersecurity (NISC); Japan's Secretariat of Science, Technology and Innovation Policy, Cabinet Office
- New Zealand - New Zealand National Cyber Security Centre
- Nigeria - Nigeria's National Information Technology Development Agency (NITDA)
- Norway - Norwegian National Cyber Security Centre (NCSC-NO)
- Poland - Poland's NASK National Research Institute (NASK)
- Republic of Korea - Republic of Korea National Intelligence Service (NIS)
- Singapore - Cyber Security Agency of Singapore (CSA)
- United Kingdom of Great Britain and Northern Ireland – National Cyber Security Centre (NCSC)
- United States of America – Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI)
The guidelines are broken down into four key focus areas, each with specific suggestions to improve every stage of the AI development cycle.
1. Secure design
As the title suggests, the guidelines state that security should be considered even before development begins. The first step is to raise awareness among staff of AI security risks and their mitigations.
Developers should then model the threats to their system, with an eye to future-proofing: accounting for the growing number of security threats that will come as the technology attracts more users, and for future technological developments such as automated attacks.
Security decisions should also be made with every functionality decision. If in the design phase a developer is aware that AI components will trigger certain actions, questions need to be asked about how best to secure this process. For example, if AI will be modifying files, then the necessary safeguards should be added to limit this capability only to the confines of the application's specific needs.
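To illustrate the idea – this sketch is ours, not taken from the guidelines, and the directory path and function name are hypothetical – a developer might route any model-triggered file write through an allowlist check, so the AI component can only touch the locations the application actually needs:

```python
from pathlib import Path

# Hypothetical allowlist: the only directory this application actually
# needs the AI component to write into.
ALLOWED_WRITE_ROOT = Path("/var/app/workspace").resolve()

def write_file_for_model(requested_path: str, content: str) -> None:
    """Carry out a file write requested by an AI component, but only
    inside the directory the application has explicitly allowed."""
    target = Path(requested_path).resolve()

    # Reject anything outside the allowlisted root, including attempts
    # to escape via "../" path traversal.
    if not target.is_relative_to(ALLOWED_WRITE_ROOT):
        raise PermissionError(f"Write outside allowed root refused: {target}")

    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
```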
2. Secure development
Securing the development stage includes guidance on supply chain security, maintaining robust documentation, protecting assets, and managing technical debt.
Supply chain security has been a particular focus point for defenders over the past few years, with a spate of high-profile attacks leading to huge numbers of victims.
Ensuring the vendors used by AI developers are verified and operate to high security standards is important, as is having plans in place for when mission-critical systems experience issues.
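For instance – a minimal sketch under our own assumptions, not a recipe from the guidance, with a placeholder file name and checksum – a team pulling in a third-party model artifact might pin the checksum recorded when the vendor was reviewed and refuse to load anything that doesn't match:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned checksum, recorded when the vendor's artifact
# was first reviewed and approved.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_artifact(path: str, expected_sha256: str = EXPECTED_SHA256) -> None:
    """Refuse to use a downloaded model file whose checksum doesn't
    match the value pinned at review time."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"Checksum mismatch for {path}: got {digest}")

# Example call (path is illustrative):
# verify_model_artifact("models/vendor-model.bin")
```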
3. Secure deployment
Secure deployment involves protecting the infrastructure used to support an AI system, including access controls for APIs, models, and data. Developers should also have incident response and remediation plans in place that assume issues will one day surface.
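As one hedged example of what an access control around a model API could look like – the framework choice, header name, and environment variable here are our own, not prescribed by the guidelines – an inference endpoint might reject any request that doesn't carry a valid key:

```python
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical: the API key is issued out-of-band and supplied via the
# environment rather than hard-coded into the service.
API_KEY = os.environ.get("MODEL_API_KEY", "")

@app.post("/v1/infer")
def infer():
    supplied = request.headers.get("X-API-Key", "")
    # Constant-time comparison avoids leaking the key via timing differences.
    if not API_KEY or not hmac.compare_digest(supplied, API_KEY):
        abort(401)
    prompt = request.get_json(force=True).get("prompt", "")
    # Placeholder for the actual model call.
    return jsonify({"output": f"(model response to: {prompt[:50]})"})

if __name__ == "__main__":
    app.run()
```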
The model's functionality and the data on which it was trained should be protected from attacks continuously, and models should be released responsibly, only once they have been subjected to thorough security assessments.
AI systems should also make it easy for users to be safe by default, where possible making the most secure option or configuration the default for all users. Transparency about how users' data is used, stored, and accessed is also key.
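A small, purely illustrative sketch of what "secure by default" can mean in configuration terms – the option names are invented for this example – is a settings object whose defaults already reflect the safest choice, so users must opt out of protections rather than opt in:

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    """Illustrative settings object: the defaults are the most secure
    choice, so a user has to opt out explicitly rather than opt in."""
    require_authentication: bool = True   # anonymous access is off by default
    tls_only: bool = True                 # plaintext transport disabled by default
    log_prompts: bool = True              # inputs are recorded for audit by default
    allow_file_writes: bool = False       # risky capabilities start disabled

config = DeploymentConfig()                          # secure out of the box
relaxed = DeploymentConfig(allow_file_writes=True)   # opting out is explicit
```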
4. Secure operation and maintenance
The final section covers how to secure AI systems after they've been deployed.
Monitoring is at the heart of much of this, whether that means monitoring the system's behavior to track changes that may impact security, or monitoring what's fed into the system. Fulfilling privacy and data protection requirements will require monitoring and logging inputs for signs of misuse.
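As a rough sketch of the input-logging side – the length limit and blocklist phrases below are placeholders of our own, not values from the guidance – a thin wrapper around the inference call can record every prompt and flag ones that look like misuse:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai.inputs")

# Hypothetical thresholds and patterns; a real system would tune these.
MAX_PROMPT_CHARS = 4000
SUSPECT_MARKERS = ("ignore previous instructions", "system prompt")

def monitored_inference(prompt: str, model_fn):
    """Log every input, flag suspicious ones, then call the real model."""
    flagged = len(prompt) > MAX_PROMPT_CHARS or any(
        marker in prompt.lower() for marker in SUSPECT_MARKERS
    )
    logger.info("prompt_chars=%d flagged=%s", len(prompt), flagged)
    if flagged:
        logger.warning("possible misuse detected; prompt retained for review")
    return model_fn(prompt)

# Example: monitored_inference("Summarise this report", lambda p: "...")
```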
Updates should also be issued automatically by default so out-of-date or vulnerable versions aren't left in use. Lastly, being an active participant in information-sharing communities can help the industry build a better understanding of AI security threats, giving defenders more time to devise mitigations and, in turn, limiting the scope for malicious exploitation. ®