Google DeepMind minds the patch with AI flaw-fixing scheme
CodeMender has been generating fixes for vulnerabilities in open source projects
Google says its AI-powered security repair tool CodeMender has been helping secure open source projects through automated patch creation, subject to human approval.
The Chocolate Factory is already convinced that its AI-driven fuzzing tool, OSS-Fuzz, can find software vulnerabilities that humans miss. CodeMender closes the loop by proposing fixes for flawed code.
CodeMender is based on the company's Gemini Deep Think model. According to Raluca Ada Popa, senior staff research scientist at Google's DeepMind, and John "Four" Flynn, VP of security at DeepMind, the AI-based agent can identify the root cause of a vulnerability, then generate and review an appropriate patch before final human sign-off.
"Over the past six months that we’ve been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code," wrote Popa and Flynn in a blog post.
Other AI bug-hunting systems have also demonstrated that they can help repair vulnerabilities when wielded by knowledgeable security practitioners. Google's AI folk argue that attackers are already using AI models to help them craft attacks, so it's necessary for defenders to arm themselves similarly.
CodeMender is described as an agent because it's not simply a large language model (e.g. Gemini). It has access to a variety of tools for tasks like static analysis, dynamic analysis, differential testing, fuzzing, and analysis with SMT solvers. These allow the agentic system to assess the underlying root cause of a vulnerability and to verify that the proposed patch doesn't introduce regressions.
Popa and Flynn say that CodeMender has proven useful not only for fixing vulnerabilities, but also for rewriting existing code to use more secure data structures as a proactive form of defense.
They point to how CodeMender was used to apply -fbounds-safety annotations to portions of an image compression library called libwebp. The annotations tell the compiler to add bounds checks to the code, which prevents the exploitation of buffer overflow or underflow conditions. Had those annotations been in place two years ago, when a heap buffer overflow in libwebp (CVE-2023-4863) was abused as part of a zero-click exploit, iOS users would not have been affected, DeepMind claims.
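For readers unfamiliar with the feature: -fbounds-safety is an experimental Clang extension that lets C code tie a pointer to the variable holding its length, so the compiler can insert runtime bounds checks on every access. A minimal sketch is below; the function name and the fallback macro are illustrative, not taken from the actual libwebp patches, and the no-op `#define` simply lets the snippet compile on compilers without the extension.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative fallback: on compilers without -fbounds-safety, define the
   annotation away so this sketch still builds as ordinary C. */
#ifndef __counted_by
#define __counted_by(len)
#endif

/* Hypothetical decoder helper. The __counted_by(len) annotation tells a
   bounds-safety-enabled compiler that `pixels` points to exactly `len`
   bytes, so every access through it gets a runtime bounds check. */
static uint8_t sum_row(const uint8_t *__counted_by(len) pixels, size_t len) {
    uint8_t total = 0;
    for (size_t i = 0; i < len; i++)
        total += pixels[i]; /* checked against len under -fbounds-safety */
    return total;
}
```

Under -fbounds-safety, an out-of-bounds access through `pixels` traps at runtime instead of silently corrupting memory, which is precisely the failure mode CVE-2023-4863 exploited.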
The DeepMinders say that while CodeMender's early results show promise, the system's patches are all being vetted by humans for the sake of reliability. They hope at some point to release CodeMender to the general public.
Google has also launched a dedicated AI Vulnerability Reward Program (VRP) that revises and clarifies the rules related to AI bugs that were issued under its Abuse VRP in 2023. AI issues reported under the Abuse VRP have led to payouts totalling more than $430,000 to date. The top award under the AI VRP is $20,000.
In addition, the search biz has updated its Secure AI Framework to SAIF 2.0, with new details on the risks posed by AI agents.
Google's SAIF 2.0 guidelines for AI agents recall sci-fi author Isaac Asimov's three laws of robotics: "agents must have well-defined human controllers, their powers must be carefully limited, and their actions and planning must be observable." Expect that advice to be ignored with the same enthusiasm that put robocars on US streets. ®