
Prepare for weaponized AI that adapts in real-time to your defenses, says prof

And machine learning that flags your CEO as suspicious

GTC Weaponized AI that goes beyond Zelenskyy deepfakes is keeping some security researchers and data scientists up at night.

"We should expect that AI will be weaponized and used to make attacks so that the attack you're confronting is itself trying to adapt in real time to the countermeasures that you're taking," said Oregon State University professor emeritus Thomas Dietterich during a panel discussion this week at Nvidia's GTC.

"That's pretty frightening," he continued. "We need to start preparing for those kinds of attacks."


Interesting point, though when these sorts of attacks – from network intrusion to viral misinformation – will start, we're not sure. Right now, miscreants are getting along fine with good old-fashioned phishing emails, fake installers, stolen or purchased credentials, social engineering, and bogus online posts.

Sophos chief scientist Joshua Saxe meanwhile pointed to Russian government-created fake social media accounts used to spread anti-Ukraine propaganda since the start of the war. "It looks very convincing," he said. "And it's almost cost free."

While this is a near-term look at weaponized AI, adding language models to the machine-learning arsenal and developing better social media bots will likely fuel the chaos that nation-states and cybercriminals can create, Saxe added. "Disinformation like that is a big issue."

However, defenders can also use data science and AI to keep intruders out of their networks. "There are some mature applications of AI in cybersecurity, and they all focus on augmenting traditional detection approaches in cybersecurity with machine learning models," Saxe said. 

Most security vendors are already doing this, augmenting signature-based detection with machine-learning models, he added. "Whether it's detecting phishing emails and detecting malicious binary programs or detecting malicious scripts. Those are mature applications. We know how to build those models."

There's still room to improve them, but that's more of an engineering challenge, he added. 
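To make the idea of bolting a model onto signature-based detection concrete, here is a minimal sketch in Python assuming scikit-learn. The signature list, the toy email corpus, and the 0.5 threshold are all invented for illustration; this is not how Sophos' products work.

```python
# Minimal sketch: augmenting a signature check with an ML text classifier.
# The signatures, training emails, and threshold below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Known-bad indicators a traditional signature engine might match on (made up)
SIGNATURES = ["evil-payload.example.com", "invoice_2931.exe"]

# Tiny labeled corpus standing in for historical phishing and benign emails
emails = [
    "Your account is locked, verify your password here immediately",
    "Re: quarterly report attached, see figures on page 3",
    "You have won a prize, click to claim your reward now",
    "Lunch on Thursday? The usual place works for me",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

def classify(email_text: str) -> str:
    # A signature hit is blocked outright, exactly as a legacy engine would do
    if any(sig in email_text for sig in SIGNATURES):
        return "block (signature match)"
    # Otherwise fall back to the statistical model for never-before-seen content
    score = model.predict_proba([email_text])[0][1]
    if score > 0.5:
        return f"block (ML score {score:.2f})"
    return "allow"

print(classify("Please verify your password to unlock your account"))
```

The signature path catches exact, known indicators; the model path is what handles content that has never been seen before, which is the gap the panelists say machine learning is filling.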

Then there are the more "cutting-edge" AI applications that Sophos and other security firms are developing. Saxe said his biz is working on feedback loops between machine-learning models and the security analysts defending Sophos' and its customers' networks.

Feedback loops

"These models are like recommender systems," he explained. "They recommend alerts to analysts, they prioritize alerts, this kind of thing. There's a bunch of areas that are testing the boundaries of what's possible."

But don't cut analysts out of this feedback loop, Dietterich added. 

Traditional supervised learning can teach machines what a phishing email looks like compared to a normal, "safe" email so they can spot the malicious one. And while machine learning algorithms can support novelty detection — these can be used to detect zero days or other novel attacks — "a classic problem is false alarms," Dietterich said.

"The false alarm rate can be really severe," he continued. "After all, you're looking for maybe one needle in 10,000 haystacks."

This is where an analyst comes into play. "We're recommending that an analyst takes a look at this process, or this file, or this part of the log and then the analyst can give us feedback and say that's a false alarm, or that's interesting. Show me more of those."

By incorporating analysts' feedback, the false alarm rate typically drops from 80 or 90 percent down to about 10 percent, Dietterich said. "It's still high, it's still expensive, but it's getting into the realm of being usable."
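As a rough illustration of that analyst-in-the-loop idea (a sketch, not Dietterich's actual system), the snippet below uses scikit-learn: an unsupervised IsolationForest raises alerts, a handful of simulated analyst verdicts train a second model, and that model suppresses the look-alike false alarms. The data, labels, and thresholds are all invented.

```python
# Sketch of the feedback loop: unsupervised detector -> analyst labels -> triage filter
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 3))   # routine activity
oddball = rng.normal(4, 1, size=(5, 3))    # rare, genuinely suspicious events
events = np.vstack([normal, oddball])

detector = IsolationForest(contamination=0.05, random_state=0).fit(events)
alerts = events[detector.predict(events) == -1]  # candidate anomalies

# Analyst feedback on a subset of alerts: 1 = "interesting", 0 = "false alarm"
reviewed = np.vstack([alerts[:15], alerts[-5:]])
analyst_labels = (reviewed.mean(axis=1) > 2).astype(int)  # stand-in for human judgment

# Train a triage filter on the analyst's verdicts and re-score every alert with it
triage = LogisticRegression().fit(reviewed, analyst_labels)
keep = triage.predict_proba(alerts)[:, 1] > 0.5
print(f"{len(alerts)} raw alerts -> {int(keep.sum())} forwarded to the analyst")
```

The raw detector flags roughly 25 events here; after the filter learns which patterns the analyst keeps dismissing, only a handful are surfaced, which is the kind of reduction Dietterich describes.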

These high false-alarm rates highlight the importance of teaching machines to separate abnormal behavior from normal behavior. 

AI to detect insider threats

Dietterich worked on a couple of Defense Advanced Research Projects Agency (DARPA) initiatives around insider threats and detecting advanced persistent threats in networks. DARPA is the US Department of Defense's agency that develops emerging technologies for the military, and in one of these projects the team monitored 5,000 government employees for abnormal behavior.

"You could have a case where 80 percent of the employees on the same day visit a website they'd never visited before," he said. "It turns out, it's an HR requirement that they have to go visit it — but you don't want to turn that into a false alarm. So you really need to be normalizing employees against each other."

However, normal behavior for a systems administrator is different from normal behavior for a research assistant, so this requires identifying "sub-communities" within an organization that behave in similar ways, and tracking these smaller groups' behaviors over time.

"By normalizing in this way you can filter out a lot of the false alarms and the noise," Dietterich added."The other thing is: when you give an alarm to an analyst, you don't dump it and say 'I think this person is suspicious.' You need to say here's why, and give some kind of an explanation."

And another helpful hint: don't allow AI-based detection to alienate the C-suite. 

"One risk of using statistical anomalies as a basis for trying to identify attacks is that if you are in a small subpopulation within a larger organization, everything you do is gonna look weird compared to the majority," Dietterich. This puts these small subpopulations at risk of becoming a major source of false alarms because "machine learning tends to penalize the people who are unusual."

In corporate environments, "the rarest users are the C-suite," he added. "And they get very unhappy if we keep cutting them off because we think they're attacking the organization because of false alarms in the machine learning." ®
