
How AI is powering a new generation of cyber-attacks

The battle of the algorithms has begun

Sponsored It was 2017 and a hacker had gained access to a digital system at an organization in India. At first it seemed like just a normal intrusion - the kind that happens thousands of times each day. But this one was different.

When it examined the incident, cybersecurity company Darktrace found that the attacker had analysed the organization’s digital behaviour so that they could mimic it and stay hidden. They hadn't done this manually - they'd used machine learning software to do it for them. Darktrace spotted the attack because it knew what normal behaviour looked like for that organization; the company uses AI to learn organizations’ ‘patterns of life’ and spot the subtle deviations that indicate a cyber-threat. The incident in India pointed to a worrying trend: attackers were beginning to use AI too.

Changing the balance of power

AI is a game changer in the ‘cat and mouse’ conflict between defenders of critical digital environments and the hackers looking to attack those systems. Traditional attacks have relied on human hackers doing their best to navigate a company’s digital defenses. Now, just as legitimate businesses use AI to disrupt key sectors, hackers are using it to automate their attacks, driving new efficiencies into their operations. Welcome to the world of offensive AI.

The use of AI for cyber-attacks looks set to tip the balance even further in the attackers' favor, and it's happening more quickly than you might think. In its report on offensive AI, Forrester asked more than 100 cybersecurity decision-makers about the security threats they face. Almost nine in ten thought it inevitable that such attacks would go mainstream, and almost half expected to see them this year.

Weaponising AI

Intelligence agencies have already clued into AI's potential as a hacking tool. The Defense Advanced Research Projects Agency (DARPA) held an AI-powered hacking challenge in 2016 to explore how the technology could automate both attack and defence techniques. This Cyber Grand Challenge was only the beginning. Since then, it has explored human-assisted AI hacking techniques in a project called CHESS (Computers and Humans Exploring Software Security).

Why are attackers so attracted to AI as an offensive tool, particularly when attacks without it are still working? To a cyber-criminal ring, the benefits of leveraging AI in their attacks are at least four-fold:

  • It gives them an understanding of context
  • It helps to scale up operations
  • It makes detection and attribution harder
  • It ultimately increases their profitability

Attackers have finite resources, like everyone else, so they are always on the hunt for technologies that can enable them to do more with less. Using AI to automate these attacks gives them a virtual digital army of attackers, all operating at computer speed.

Forrester found that 44 percent of organizations already take over three hours to spot an infection, while almost 60 percent take more than three hours to understand its scope. An attacker using AI can compromise a system in seconds, widening the gap between intrusion and detection still further and giving them more time to locate and steal valuable data.

AI-powered reconnaissance

Offensive AI and automation will touch every part of the attack lifecycle, from initial reconnaissance through to the final stage: usually either ransomware or data exfiltration.

During the reconnaissance phase, automated bots can sift through thousands of social media accounts at machine speed, autonomously selecting prime targets for spearphishing. Chatbots can then interact with those targets via social media, building relationships and earning their trust. The technology behind this is improving daily - consider, for example, the success of OpenAI's GPT-3 text generation model.

Before sending spearphishing emails at scale, an attacker might need to put a face to the name on a newly created email account. Deepfake faces, created by generative adversarial networks, are available online for free. They also make convincing profile pictures for fake personas on social media sites like LinkedIn.

Deepfakes offer another strong attack vector for cybercriminals, and not all the attacks that use them are visual. Fraudsters used AI to mimic the voice of the German chief executive of an energy group's parent company in a phone call to the CEO of its UK subsidiary. The scam was good enough to fleece the company for €220,000. Things will only get worse: Forrester predicts that deepfakes will cost businesses $250m this year.

Once upon a time, spearphishers had to spend hours tracking and profiling specific targets before trying to scam them. AI's ability to analyse and mimic language automatically means attackers can now launch such attacks in volume via social media. Security company ZeroFOX demonstrated this with a neural network-based prototype tool that 'read' a target's Twitter posts and then crafted convincing tweets aimed at that person. Tweets like those could easily persuade targets to download malicious documents and infect their computers.

Hiding in plain sight

These capabilities now allow attackers to launch targeted, sophisticated email attacks that appear indistinguishable from legitimate communication. Relying on employees to tell friend from foe becomes a lost cause, and it only takes one malicious email to land for attackers to hold the keys to the kingdom.

The next stage involves moving laterally through the network to find other machines to exploit, latching onto different parts of the company's infrastructure.

The key is to do it stealthily, and AI can help here too. Empire is a post-exploitation hacking framework that makes it easier for attackers to communicate with their malware once it's on a system. According to Darktrace, it also enables them to hide their activities in plain sight by restricting command and control traffic to periods of peak activity.

Malware can also use AI to hide itself by making its behaviour unpredictable. In 2018 IBM Research announced DeepLocker, a system that hid its payload inside an innocuous-looking application such as videoconferencing software. It used a deep neural network to decide when to trigger that payload, making it difficult to test in a sandbox. In tests, IBM programmed the AI to trigger only when it recognised the face of a specific system user.

In the future, automated decision making tools could move around a system without any guidance from the hackers at all, minimising or even eliminating telltale command and control traffic.

After attaching themselves to a range of systems, hackers then need to elevate their privileges. That requires login credentials. Password cracking has traditionally been a brute force affair, involving dictionary attacks against lists of known words and obvious alphanumeric combinations. Attackers can refine those attacks by using keywords that are more relevant to the target user or organization. To do that, they need to read the target's website.

Are key portions of the site protected by a CAPTCHA? No problem. An AI-powered CAPTCHA breaker can get past them by mindlessly selecting pictures of traffic lights or recognising text (online APIs are available for under a dollar a go). Once in, it's easy to spider the site with a unique-word extraction tool like CeWL. While that tool doesn't use AI, other proofs of concept take things a stage further. Researchers at the Stevens Institute of Technology in Hoboken, New Jersey created PassGAN, a tool that trains generative adversarial networks (the same technique used in deepfakes) on real password lists to generate large volumes of likely passwords.

Where do we go from here?

The difficulty of attributing cyber-crime makes it hard to tell when an attack doesn’t have a human behind the keyboard. There are hallmarks, however, such as attacks that blend into the environment and malware that recognises sandboxing environments and changes its behaviour accordingly. According to Darktrace, those hallmarks are becoming more and more common in today’s cyber-attacks. As AI-powered attacks increase, legacy security tools built on inflexible rule sets won't be able to see them: AI's adaptive algorithms will simply calculate ways around those rules.

Instead, we must prepare to fight fire with fire, countering AI's weaponisation by using it to defend our own networks. Over 80 percent of Forrester's respondents believe that we need tools that 'think' the same way these new AI-powered attacks do.

Instead of using rules to identify pre-defined signs of malicious behaviour, AI-powered cybersecurity looks for anything that deviates from what it perceives as normal. To do this, the technology looks beyond individual data points such as email content, domains, and IP addresses. It dynamically evaluates hundreds of data points, weighing them not against static rules but as part of a broader statistical model that takes into account the full history of events across the email, cloud and network realms.

By learning the ‘patterns of life’ for every user and device in an organization, defensive AI can establish what is and what is not ‘normal’ behaviour. When it sees activity that deviates from those patterns of life, it raises the alarm. In order to cause damage, attackers - whether human or AI - must by definition do something out of the ordinary. That constantly evolving understanding of what ‘ordinary’ looks like is what lets defensive AI fight back.
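To make that concrete - as an illustration only, since Darktrace does not publish its models and the feature names, window length and threshold below are assumptions - a 'pattern of life' baseline can be sketched in a few lines of Python: learn a robust summary of a device's recent hourly activity, then flag any hour that deviates sharply from it.

```python
# Minimal sketch of 'pattern of life' anomaly detection - not Darktrace's
# actual method. Feature names, window size and threshold are assumptions.
import numpy as np

def fit_baseline(history: np.ndarray) -> dict:
    """history: one row per past hour, one column per feature
    (e.g. bytes sent externally, new hosts contacted, failed logins)."""
    median = np.median(history, axis=0)
    mad = np.median(np.abs(history - median), axis=0) + 1e-9  # avoid divide-by-zero
    return {"median": median, "mad": mad}

def anomaly_score(baseline: dict, latest_hour: np.ndarray) -> float:
    """Largest robust z-score across features for the most recent hour."""
    z = 0.6745 * np.abs(latest_hour - baseline["median"]) / baseline["mad"]
    return float(z.max())

# Toy usage: 30 days of hourly activity for one device, then one very odd hour.
rng = np.random.default_rng(0)
history = rng.poisson(lam=[50, 3, 1], size=(720, 3)).astype(float)
baseline = fit_baseline(history)
latest = np.array([900.0, 40.0, 0.0])       # sudden spike in outbound activity
if anomaly_score(baseline, latest) > 3.5:   # common cutoff for MAD-based scores
    print("Deviation from this device's pattern of life - raise an alert")
```

A real defensive model would weigh far more than three features and consider them jointly, but the principle is the same: score deviation from learned behaviour rather than match against fixed signatures.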

Adaptability is a key part of this approach because the technology is constantly retraining to accommodate new data. That's important because what's normal for an organization one month may not be normal three or six months later, especially after a global event like a pandemic that seismically alters working patterns.
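One hedged way to picture that constant retraining, building on the sketch above (the window length and warm-up period are again illustrative assumptions, not Darktrace's parameters), is a sliding window: only recent activity feeds the baseline, so behaviour from months ago gradually ages out of 'normal'.

```python
# Sketch of adaptive retraining: refit the baseline on a sliding window so
# 'normal' tracks changing working patterns. Parameters are illustrative.
from collections import deque
from typing import Optional
import numpy as np

WINDOW_HOURS = 24 * 30                      # assume a 30-day view of recent behaviour
window: deque = deque(maxlen=WINDOW_HOURS)  # older hours fall off the back automatically

def observe(hourly_features: np.ndarray) -> Optional[dict]:
    """Record the latest hourly feature vector and refit the baseline
    (fit_baseline from the sketch above) once enough history has accumulated."""
    window.append(hourly_features)
    if len(window) < WINDOW_HOURS // 2:     # warm-up: too little data to call anything 'normal'
        return None
    return fit_baseline(np.vstack(window))
```

A pandemic-scale shift in working patterns would then show up as a burst of alerts at first, fading as the new routine becomes the baseline.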

And visibility is crucial in deciding the ultimate winner in this battle of algorithms. AI cyber defense has an overview of the entire digital environment - not just a subsection of it. An attacker may gain a foothold, or several, but will never have that complete visibility, so defenders’ understanding of ‘normal’ will always be more precise, more up to date and more informed.

Part of any combat strategy involves looking not just at today's fight, but tomorrow's. Smart defenders will anticipate the next generation of attack weapons and prepare appropriate defenses. At a point where we're facing a step change in capability, that forward-looking approach is more important than ever.

Sponsored by Darktrace
