Keep calm and carry on when the supply chain goes up in flames

Lessons learned from the front-line responders

RSA Conference After something really bad happens on a company's network – say, a SolarWinds or Log4j-esque supply-chain attack – comes the chatter among infosec friends. Usually before anyone knows the scope or even the details.

"That's how my colleague first got a tip-off for 3CX: 'Hey, I heard there's a supply chain thing,'" said Katie Nickels, director of intelligence at security shop Red Canary, during a panel session at RSA Conference this week. She was referring to the supply-chain attack on 3CX, which resulted in miscreants quietly slipping malware into the VOIP business's desktop client.

After an intrusion like that, incident responders get called in. It's their job to cut through the panic and chaos, clearly assess the situation, and come up with a plan to mitigate the damage. Nickels's first piece of advice for others in her position: "Anything you hear in about the first 24 hours, be really skeptical."

This means looking at the initial data set through an investigative, scientific lens, added Wendi Whitmore, SVP of Palo Alto Networks, who leads the security vendor's Unit 42 consulting and threat research group.

"We want people who not only want to prove an allegation, but disprove it in the same degree," Whitmore said. "That's going to allow us to go through those critical decision-making skills to determine how much of a thing it is." 


Experts ... The incident response panel at RSA Conference. From left, Wired's Lily Hay Newman, Dragos's Lesley Carhart, Red Canary's Katie Nickels, and Palo Alto Networks' Wendi Whitmore

When Lesley Carhart, director of incident response for North America at Dragos, gets a call from one of her company's industrial clients, the potential consequences of a compromise can look very different from a basic IT security incident. 

"Life, safety, environment, facilities catching on fire. That's very serious stuff that could happen immediately, and sometimes triage has to happen before we have a full view of everything that's going on," Carhart said, adding that skepticism remains important.

"Sometimes you have to be the skeptic. You have to be the one doing the reality check for people who are panicking and think things are much worse than they potentially are. They could really be that bad. But in those first 24 hours, we just don't know for sure."

3CX lessons learned

This is especially true when it comes to responding to supply-chain attacks, like the 3CX compromise earlier this month. These types of intrusions can be difficult to detect – particularly when the malware has been inserted into trusted software. 

And once they have been detected, it can be tricky to determine the scope – and whether an organization has been hit – unless there is a really good picture of all the software in use, and all the code in each piece of software. 

Earlier crises with SolarWinds, Kaseya, and Log4j all highlight these difficulties. But there are also some specific lessons learned from the more recent 3CX 'mare, according to the panelists.

As a refresher: the software maker's desktop app was compromised after a 3CX worker installed a trojanized version of the X_Trader futures trading app, published by Trading Technologies, on their computer. That allowed miscreants to jump from the employee's infected machine into 3CX's systems and tamper with the vendor's desktop app so it carried additional malware, which was then distributed to customers.

On March 29, CrowdStrike issued a warning about the 3CX intrusion – both on its blog and in a Reddit post.

"It's a lesson in collaboration, and the power of actually sharing publicly," Nickels said. "CrowdStrike, really early on, shared that GitHub was being used for infrastructure. And GitHub, y'all took that infrastructure down quickly," she continued, adding that she believes both of these things helped prevent more businesses from being compromised further down the supply chain.

"I think a lot of orgs actually got saved by GitHub," Nickels said. "It's a nice example of how sharing and taking down infrastructure can stop these things from being a lot worse."

Find your Zen

When it comes to incident response, calmness is also a critical skill required to navigate potentially chaotic situations, the panelists noted.

Whitmore, for example, shared a story about her team getting a phone call from a CISO at a "major corporation" on a Friday night (note: it's always on a Friday night) about suspicious traffic that initially appeared to be coming from a Palo Alto Networks firewall.

Spoiler alert: it wasn't. 

"When we got on the phone, tensions were very high, and so it took not only a lot of technical skills to be able to work through the situation … but that calm manner in which we responded initially started to tamper down the amount of chaos and frustration on the call," Whitmore said.

Nickels called it "security therapy," and added "panic is not a necessary part of the incident response. There's a difference between panicking and having a sense of urgency."

Remember that sense of everything-will-be-OK that your parents used to (hopefully) project? Tap into that. "You have to be able to exude that to the people you're doing incident response for," Carhart said. 

It's a learned skill, it takes time, and yes, it can be scary, they added. "You're never sure if you're going to find that initial piece of evidence you really need to catch the adversary," the incident response exec said.

"Once you start finding threads to pull on, then it becomes really engaging and interesting. But it's always a little scary the first day. We have to work on our internal Zen and be calm about dealing with these intense crises that can have really serious consequences." ®
