Thinking straight in the SOC: How AI erases cognitive bias

The whispering voice presents an alternative point of view to steer cyber security pros in the right direction

Sponsored Feature What do bears and cyber criminals have in common? Both are scary, and both have the same effect on security teams.

Hanah-Marie Darley has spent years studying the effect of threats on the human brain. Now the head of threat research at cyber security company Darktrace, she uses her psychology background to understand how cybersecurity pros tackle incoming threats. One thing she's found is that the brain is built for efficiency. "It's always looking to maximise resource and minimise decision making because it takes a lot of cognitive processing."

When faced with a stressful situation like running into a grumpy bear, the brain puts its energy exactly where it's needed. It draws blood from the frontal cortex (which makes logical decisions) to the amygdala, a deep-seated cluster of neurons that makes visceral decisions. "That means you're operating in fight, flight, or freeze mode, and you're unable to make thoughtful decisions," Darley warns.

Living in their amygdala made perfect sense to our ancestors. They didn't need to mull the nuances of ursine escape strategies or talk their way out of a situation. They just needed their brains to make their legs move very quickly.

When fight, flight, or freeze leads us astray

The problem is that human society evolved far more quickly than the brain. Security analysts who run into stressful situations like ransomware intrusions behave in the same way as their Neanderthal ancestors, but with a little less grunting. Their brains switch from logic mode to fight or flight mode, says Darley.

Fast-moving legs are rarely needed in a security operations centre (SOC), because analysts are for the most part sitting down. What they do need is their logical circuitry, because these problems are highly nuanced. Not only is the amygdala not needed, but it's actually a hindrance, because it is also the part of the brain that does most of our emotional processing. Emotion is the last thing you want in the driving seat when your stocks are falling, the helicopter you're piloting loses power, or you've just discovered an unknown program deleting endpoint data without permission.

The brain, however, is the little organ that could. It doesn't give up. Instead, it does the best it can, using the most efficient decision-making apparatus available to it in fight or flight mode.

These scaled-down reasoning mechanisms are known as cognitive biases. "They are just pathways that your brain creates to make decisions quickly, because it is an efficient organism," says Darley.

Tunnel vision

Cognitive bias is like tunnel vision. It blocks off large areas of cognition to focus on what the brain sees as the most likely options. This is great in situations where the threat and the parameters of the decision are clear. If you face a bear, then running away is the most obvious decision. During a phishing attack, though, the parameters are more complex. In these situations, cognitive biases can lead you in the wrong direction by blinding you to all the potential decisions you could make.

"Human psychology is important to consider from a security team perspective, because we all have cognitive biases," explains Darley. "You're making thoughtful decisions quickly, so you're not considering a wide range of ideas," she says. "That means as soon as you can find a template, you will apply it to that decision."

These cognitive biases are baked into our thinking and show up everywhere. "A really great example is confirmation bias," says Darley. Let's say you've been reading a lot about the Emotet malware strain lately. The brain groups things into categories, and then tends to look for the category that's most obvious to it. Recent inputs - like lots of Emotet articles - tend to bring that category to the fore. "When you look at your next threat and find either consciously or subconsciously that it starts to look like malware, you will probably look for Emotet even if you don't realise it," she explains.

There are many more of these biases affecting security teams. Attention bias leads us to focus on some things while ignoring others. Anyone who can't remember their drive home from work because they were running through a past conversation in their head has been a victim of this. Security pros often have vast amounts of log information at their disposal and won't be able to consume all of it, meaning they must choose what to digest. That makes them highly prone to attention bias, which is deadly: as the famous awareness-test video showed, when you're busy counting passes you don't notice the moonwalking bear strolling through the frame. Another is bikeshedding, aka Parkinson's law of triviality - concentrating too much on simpler issues at the expense of more difficult, complex ones.

These hardwired tendencies can lead to the wrong operational security outcomes, and it gets worse with fatigue, Darley points out. "We make worse decisions at the end of the day, and oftentimes ransomware strikes at 5pm," she says.

How cognitive bias affects strategies

Cognitive bias also takes its toll at a tactical and strategic level, she warns. She recalls one organisation she dealt with that was reviewing where to spend its cybersecurity budget.

The company invested a little in endpoint detection and in upgrading its security information and event management (SIEM) system. However, the management team was preoccupied with detecting novel threats and protecting itself against zero-day attacks, driving it to invest mostly in hiring threat researchers and buying threat intelligence software. Midway through the year, the company found itself at the sharp end of a phishing attack that compromised ten devices in its legal department. The attackers made off with 5GB of sensitive data.

"There's a lot of internal restructuring that has to happen after a compromise like that," says Darley. But instead of investing more in endpoint detection, anti-phishing, or email gateway tools, the management team focused even more on threat hunting.

"This was a mix of confirmation bias and a couple of other biases," Darley explains. "Instead of letting it inform them and change and pivot their security strategy, it made them double down."

How machine learning can help

How can AI help us to overcome these cognitive biases? Machine learning can analyse network traffic and identify anomalies or suspicious behaviour that could indicate a cyber attack. The algorithm can interpret events spanning infrastructure and user activities, even examining email metadata to get a holistic view of online activity.
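Darktrace doesn't publish the internals of its models, but the general idea is easy to sketch. The minimal Python example below (the feature names, numbers, and thresholds are all hypothetical, not any vendor's actual pipeline) trains an unsupervised anomaly detector on per-device traffic features and flags a device whose behaviour suddenly deviates:

```python
# Minimal sketch of unsupervised anomaly detection over network telemetry.
# The features and data here are hypothetical; a production system would
# use far richer models and telemetry than four hand-picked columns.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per device per hour: MB uploaded, distinct destinations,
# failed logins, emails sent. The baseline is drawn from normal activity.
rng = np.random.default_rng(seed=7)
baseline = rng.normal(loc=[50.0, 12.0, 1.0, 20.0],
                      scale=[10.0, 3.0, 1.0, 5.0],
                      size=(1000, 4))

model = IsolationForest(contamination=0.01, random_state=7)
model.fit(baseline)

# A device suddenly uploading heavily to many new destinations.
suspect = np.array([[480.0, 95.0, 14.0, 3.0]])
print(model.predict(suspect))        # -1 means "anomalous"
print(model.score_samples(suspect))  # lower score = more unusual
```

The point isn't the particular algorithm: it's that the verdict comes from measured behaviour rather than from whichever malware family the analyst happened to be reading about that morning.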

"If you have applied AI to a cybersecurity problem, you can get evidence-based solutions that can show you which threat is empirically the biggest to your organisation," Darley says.

The AI can collect and interpret as much evidence as the infrastructure can throw at it. It doesn't tire, and it doesn't let cognitive biases get in the way.

Darley stresses that this doesn't signal the end of the human cybersecurity pro. AI can be a useful and impartial assistant that can relieve the stress that puts teams in fight or flight mode.

"This takes a huge decision making element out of the human team's hands, which gives them brain space to ideally de-stress a little bit and come back with logic."

An unflappable assistant

For example, in the middle of a compromise an autonomous decision-making agent could make time-critical decisions about containment. The human security team doesn't need to decide whether to quarantine a system or how far that quarantine should extend. The system can make a recommendation to the team or even execute the recommendation automatically. "They can think more about the broader picture that they're dealing with and how the compromise will impact them," Darley says.
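Reduced to its bones, that kind of autonomous containment is a proportionate-response policy: weigh how anomalous a device looks against how disruptive isolating it would be. The sketch below is purely illustrative (the thresholds, the Device type, and the action names are hypothetical, not Darktrace's actual logic):

```python
# Illustrative containment policy. Thresholds and types are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    hostname: str
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly anomalous)
    is_critical: bool     # e.g. a domain controller or database server

def containment_action(device: Device) -> str:
    """Pick a proportionate, reversible step that can be taken in seconds,
    freeing the human team to think about the broader picture."""
    if device.anomaly_score < 0.6:
        return "monitor"               # log and keep watching
    if device.is_critical:
        return "recommend_quarantine"  # a human confirms high-impact moves
    return "auto_quarantine"           # isolate low-impact devices at once

print(containment_action(Device("laptop-042", 0.91, is_critical=False)))
# -> auto_quarantine
```

The machine handles the time-critical, emotionally loaded call; the humans review it with their frontal cortex rather than their amygdala.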

When they refocus their attention on those strategic decisions, cybersecurity pros can rely on quantitative evidence, distilled by AI, to inform their thinking. This once again breaks them free of cognitive bias.

"When you use as much metric and data as you can, you move away from cognitive bias because you're not operating on instinct," Darley explains. "You're operating on data alone, which is a lot harder to argue your way out of."

AI tools are adaptive, learning from past attacks and behavioural patterns to watch more accurately for things that deviate from the norm. This makes many applications of AI less prone to bias: instead of looking only for past malicious patterns, they look for things that lie outside the norm before drilling down to understand what's happening.
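Learning the norm and flagging deviations can be sketched with something as simple as a running per-device baseline. The example below uses Welford's online mean/variance update; the 30-sample warm-up and the z-score threshold are hypothetical choices, and real systems model far more than a single metric:

```python
# Sketch of "learn the norm, flag deviations" with a running baseline.
import math

class Baseline:
    """Tracks a running mean/variance of one metric for one device."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        # Welford's online algorithm: stable single-pass mean/variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, z_threshold: float = 4.0) -> bool:
        if self.n < 30:  # not enough history to judge yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > z_threshold

# Learn what "normal" hourly outbound traffic (MB) looks like, then test.
b = Baseline()
for mb in [48, 52, 50, 47, 55, 51, 49, 53] * 5:  # 40 typical hours
    b.update(mb)
print(b.is_anomalous(50))   # False: within the learned norm
print(b.is_anomalous(480))  # True: a tenfold spike stands out
```

Because the baseline is learned from each environment's own traffic, the detector adapts as behaviour changes rather than hunting for yesterday's attack signatures.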

Cognitive bias is buried deep in our brain's code, and we can't change that algorithm easily. It can be an asset in some simpler situations, but in others, such as complex technical environments, we must work hard to tame it. Darley sees AI as the whispering voice in the security pro's ear; the gentle assistant who presents an alternative point of view based on hard facts, helping to steer us right. In an environment where the threats are greater in volume than ever, that could be a game-changer for defenders.

Sponsored by Darktrace.
