In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it

Behind the scenes of infosec's cat-and-mouse game

AI is still immature

Although this method is capable of beefing up security, it's still pretty crude, says David Evans, a professor of computer science at the University of Virginia who works on adversarial machine learning. Evans was not directly involved with the research.

Evans said the issue of security in AI “is essential and pressing.”

“We are deploying machine learning models in systems that have serious consequences for humans, and this trend is set to accelerate," he told The Register. "It is easy to be overconfident in these systems based on the impressive results they obtain in laboratory testing, but we need to understand how robust the systems are when adversaries try to fool them in deployment, and we need to learn methods for making AI systems that are more robust and trustworthy.

“Deep learning, in particular, is an area where the experimental results have gotten way ahead of our understanding of how things work, and this especially matters in contexts where an adversary may be motivated to make them misbehave.”

The researchers from Endgame and the University of Virginia are hoping that by integrating the malware-generating system into OpenAI’s Gym platform, more developers will help sniff out more adversarial examples to improve machine-learning virus classifiers.
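To give a flavor of what that Gym integration looks like in practice, here is a minimal sketch of the standard Gym interaction loop applied to this setting. The package name, the environment id, and the reward semantics in the comments are assumptions made for illustration, not the confirmed interface of Endgame's release.

```python
# A minimal sketch of the loop an RL agent would run against a Gym-style
# malware environment. The package "gym_malware" and the environment id
# "malware-v0" are hypothetical names used for illustration only.
import gym
import gym_malware  # hypothetical package that registers the environment

env = gym.make("malware-v0")  # hypothetical environment id

for episode in range(5):
    observation = env.reset()   # feature vector describing a malware sample
    done = False
    while not done:
        # A trained agent would pick a file mutation based on the observation;
        # random sampling is used here purely to show the loop structure.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        # A positive reward would indicate the mutated binary slipped past
        # the machine-learning classifier while remaining functional.
env.close()
```

The appeal of exposing the system this way is that any evasive samples developers turn up with agents like this can be fed back in to retrain and harden the classifier.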

Although Evans believes that Endgame's research is important, using such a method to beef up security “reflects the immaturity” of AI and infosec. “It’s mostly experimental and the effectiveness of defenses is mostly judged against particular known attacks, but doesn’t say much about whether it can work against newly discovered attacks," he said.

“Moving forward, we need more work on testing machine learning systems, reasoning about their robustness, and developing general methods for hardening classifiers that are not limited to defending against particular attacks. More broadly, we need ways to measure and build trustworthiness in AI systems.”

The research has been summarized in a paper, here, if you want to check it out in more detail, or you can see the upstart's code on GitHub. ®
