
Whoa, bot wars: As cybercrooks add more AI to their arsenal, the goodies will have to too

The future is automated, says Trend Micro bod

Infosec techies should prepare to both fend off AI attacks and welcome the technology into their armoury of tools, reckons Trend Micro's director of cybercrime research.

The security world is standing on the brink of an AI-powered arms race, claimed Rob McArdle at the firm's Cloudsec conference in London today.

Speaking on stage alongside Rik Ferguson, Trend's refreshingly British research veep, McArdle warned that "deepfake ransomware" was one potential attack vector of the near future.

Describing the technique, McArdle said an attacker could use deepfake tech to create a video with blackmail potential: the obvious use case is something involving nudity, or perhaps someone making outrageous statements. The attacker could then upload that video somewhere and send the mark a private link along with threats to publish the video widely unless large sums of money were paid immediately.

"It works against politicians and teenagers in particular," warned McArdle. "Politicians have the problem that even if it's found out to be fake, nobody fact-checks any more," he said, referring to the infamously doctored video of American politician Nancy Pelosi, which was slowed down to make her seem drunk or unwell.

As for teenagers, the attack vector is hideously obvious: adolescents in high-pressure social environments where immaturity runs rampant. "Teens are very judgy," observed McArdle. "That's exactly what happens today with sextortion scams… teens have committed suicide."

The technique is less likely to work against you or me, though: "My friends will say Bob drinks too much Guinness, his chest doesn't look like that."

No need to live on a remote mountain

Away from the misery of modern scammers targeting individuals, McArdle also waxed lyrical about how we could potentially end up in an AI-on-AI arms race. Building on a point Ferguson made earlier in their talk, about the rate at which machine learning tech improves over time, Trend's cybercrime research director said we could see techniques used to play Go repurposed to help breach corporate networks – demanding a similar escalation from human defenders.

"AI kind of brings two things," he told The Register after his on-stage talk. "It comes into play on the defensive side to deal with the scale of the attacks we're seeing today."

"Dealing with tens or hundreds of thousands of alerts per hour. Simply put, humans can't process that kind of data. On the defensive side we need AI to make smart decisions; not 'if packet bad, drop it'."

And it's not a case of buying some magical black box to make all the nasties go away: humans will still need to be in the loop, but with added information at their fingertips. Security operations centre inhabitants of the near future are "going to have to start trusting AI decisions" before they become overwhelmed.

"Even if it's triaging for them," said McArdle, "there's a point we're going to have to say we trust the algorithms to take action in these swathes of scenarios. They should be there to help those analysts [and] give them way more intelligence about the things that do bubble up."
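The triage-and-escalate approach McArdle describes can be sketched in a few lines. This is purely illustrative – not Trend Micro's code, and the scoring rules and field names are invented for the example. In a real SOC the score would come from a trained model rather than hand-written heuristics; the point is the workflow: alerts the system is confident about get actioned automatically, while the rest bubble up to analysts with added context.

```python
def score_alert(alert):
    """Toy risk score in [0, 1]; a real deployment would use a trained model."""
    score = 0.0
    if alert.get("src_reputation") == "known-bad":
        score += 0.6
    if alert.get("bytes_out", 0) > 10_000_000:  # large, exfil-like transfer
        score += 0.3
    if alert.get("off_hours"):
        score += 0.1
    return min(score, 1.0)

def triage(alerts, trust_threshold=0.8):
    """Auto-handle high-confidence alerts; enrich and escalate the rest."""
    auto_handled, escalated = [], []
    for alert in alerts:
        risk = score_alert(alert)
        if risk >= trust_threshold:
            # The point where, in McArdle's words, we "trust the algorithms
            # to take action" without waiting for a human.
            auto_handled.append({**alert, "risk": risk, "action": "blocked"})
        else:
            # Everything else goes to an analyst, with the score attached
            # as extra intelligence about what "bubbled up".
            escalated.append({**alert, "risk": risk, "action": "review"})
    return auto_handled, escalated

alerts = [
    {"id": 1, "src_reputation": "known-bad", "bytes_out": 20_000_000},
    {"id": 2, "off_hours": True},
]
handled, review = triage(alerts)
# alert 1 scores ~0.9 and is auto-blocked; alert 2 goes to a human
```

The `trust_threshold` parameter is where the organisational decision lives: raise it and humans see more; lower it and you are trusting the machine with more of the response.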


Lest we be disheartened by the idea of computers racing ahead of humans' ability to fend off deepfakes and ever more sophisticated attempts to compromise networks and extract ransoms, it's not all bad news. Deepfake tech is "still determinable" in McArdle's view: humans (and mice, as it happens) can still spot the subtle signs that a video isn't all that it seems, even if it's "harder for your ears to recognise a fake voice".

"It's very easy on the infosec side to talk about the doom and gloom but all these technologies bring with them some real positives," said McArdle after El Reg pondered whether taking up a new life as a Tibetan monk on a remote mountain top might be the ultimate defence against cybercrime. "At least in my mind it outweighs the things we use for negative purposes but those don't necessarily make as good news." ®
