What if AI produces code not just quickly but also, dunno, securely, DARPA wonders
As 70% of boffinry nerve center's projects involve machine learning
A DARPA leader has revealed that around 70 percent of the US government agency's programs involve AI in some shape or form, and those projects could have serious ramifications for the future of jobs in software development.
Speaking at a Center for Strategic and International Studies event last week, Dr Matt Turek, deputy director of DARPA's Information Innovation Office (I2O), talked about a wide array of AI projects DARPA is working on and the overwhelming dominance of this technology within the agency currently.
"There is really broad penetration across the agency," Turek said. "From an I2O perspective we're really looking to try and advance, you know, how do we get to highly trustworthy AI – AI that we can bet our lives on – and that not be a foolish thing to do."
The I2O currently has four research thrusts: proficient AI; resilient, adaptable, and secure systems; advantage in cyber operations; and confidence in the information domain. Only one of those four thrusts directly mentions AI, but that doesn't mean it isn't involved in all of them.
"There's a lot of synergies across those thrust areas," Turek stated. "We have efforts that are blending both advancing AI and advancing the state of capability in cyber … I think it's worth saying that AI and autonomy is really being used broadly across the agency now."
While many of the AI projects at DARPA are focused on how the technology can benefit the Department of Defense, that's hardly the only focus area, nor is I2O limiting its research to staying ahead of the US's military adversaries.
"It's not just [the] US government that needs to have these capabilities. The attack surface is broad," Turek said.
Citing the importance of commercial industries like scientific research, critical infrastructure and even online commerce to national security, Turek said I2O wants to "create commercial industry in this space" through its research.
One of the key ways to do that, according to Turek, is developing artificial intelligence that can not only write code, but do it in a secure and "provably correct" manner. We all know today's LLMs have a habit of inventing bad or insecure code.
"There's really interesting use cases that our commercial industry is pursuing now around using LLMs to help with the code generation process," Turek said. "But what if we could make it so that they produce not just code more quickly, but secure code?"
"That would allow us to scale out, you know, robust, secure software development processes," Turek said, noting it's a critical concept for the Department of Defense, but a concept area, not an actual area of investment – yet.
While AI isn't writing secure code for DARPA or commercial industries yet, the agency is seeking ways to turn it toward examining existing software for vulnerabilities. That initiative, the AI Cyber Challenge, was discussed at Black Hat last year, and Turek mentioned it again last week, saying the competition is hunting for vulnerabilities in critical infrastructure software and open source projects.
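To give a flavor of the bug classes such automated hunters chase, here's a hypothetical example (ours, not an AI Cyber Challenge target) of a path traversal flaw of the sort that regularly surfaces in open source web code:

```python
from pathlib import Path

BASE_DIR = Path("/srv/static").resolve()

def read_file_insecure(requested: str) -> bytes:
    # Naive joining lets a request like "../../etc/passwd" escape
    # the directory this server is supposed to expose.
    return (BASE_DIR / requested).read_bytes()

def read_file_secure(requested: str) -> bytes:
    # Resolve the final path and confirm it still sits under BASE_DIR
    # before touching the filesystem.
    target = (BASE_DIR / requested).resolve()
    if not target.is_relative_to(BASE_DIR):  # Python 3.9+
        raise PermissionError("path escapes the served directory")
    return target.read_bytes()
```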
Developers aren't the only category of tech professionals that DARPA's AI initiatives could endanger, though. During his talk, Turek also mentioned the CASTLE program, an I2O initiative training autonomous AI agents to handle network security. At the far end of the program's ambitions, Turek said, CASTLE agents would ideally eliminate the need to rebuild networks after an advanced persistent threat (APT) compromise, which he noted often forces defenders to "start from scratch and rebuild."
"CASTLE is really focused on trying to build those sorts of automated defensive agents that, again, can preserve some level of critical network functions," Turek said.
Another program, PROVERS, seeks to use AI to steer software development toward "proof-friendly" systems – code structured so that its correctness can be formally verified.
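DARPA hasn't published example code, but "proof-friendly" loosely means software whose correctness conditions are explicit enough for a machine to check. A rough illustration in Python – real verification tooling proves these properties statically rather than asserting them at runtime:

```python
def clamp(value: int, low: int, high: int) -> int:
    """Clamp value into the inclusive range [low, high]."""
    # Precondition, stated up front where a verifier can see it.
    assert low <= high, "range must be non-empty"
    result = min(high, max(low, value))
    # Postcondition: making the specification explicit is what turns
    # ordinary code into something a proof tool can work with.
    assert low <= result <= high
    return result
```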
All of this relies on developing AI whose own decision-making can be understood and inspected – something Turek admits isn't quite there yet.
"Modern statistical machine learning approaches oftentimes are opaque and they're not introspectable," Turek said. "I still feel like there's a lot of work that needs to be done."
So don't worry about an AI taking your software development job just yet – we've seen plenty of examples of AI producing lousy code. That doesn't mean the tech won't be foisted on developers anyway, though. It's just a matter of time. ®