Just when it looked as if the US Democratic National Committee (DNC) had finally got one over on the phishing hackers who had been owning it since 2016, the triumph was torn away by a moment of rebellious fakery.
On August 20, DNC security partner Lookout's machine-learning system spotted a site impersonating the DNC VoteBuilder portal, a red flag that a phishing campaign designed to nab login credentials was afoot.
It hadn't been up for long, perhaps 30 minutes at most, which meant they'd probably caught it before much, if any, damage had been done.
With the FBI on the case, and Russia as usual the number-one suspect, politicians railed against the Republicans' refusal to fund voter protection, with New York Rep Carolyn B Maloney tweeting: "Our intel community warned us about this, and now it’s happening. This isn't 'fake news' – it's a REAL attack on our democracy. We need to act."
Democracy was under attack all right – by the Michigan Democratic Party, who'd helpfully decided to do a spot of red teaming without telling anyone.
"Cybersecurity experts agree this kind of testing is critical to protecting an organization's infrastructure, and we will continue to work with our partners, including the DNC, to protect our systems and our democracy," said an unrepentant Michigan party chair Brandon Dillon.
It was a toss-up as to who was more embarrassed – the DNC for not being able to tell a fake from the real thing, or the Michigan Democrats for not realising that confusing their own side wouldn't be well received.
Where's the lesson?
Nobody doubts that phishing is ubiquitous. The question is whether and how employees can be trained to resist these attacks by prepping them on common phishing techniques and formats. What's become abundantly clear is that it should be done by experts who've thought through the pitfalls.
For sure, blind phishing simulations – tests where important people are not in on the test – tend to end badly. In 2014, a US Army commander (reported here – subscription required) thought it would be a good idea to send a small group of colleagues a bogus warning that their federal Thrift Savings 401(k) retirement plans had been hacked and they needed to log into their accounts.
Spooked, the recipients forwarded the test email to thousands of others, who flooded the plan's call centre with enquiries. Ironically, many employees weren't taken in by the phishing test but thought it would be helpful to tell those who might be.
The secret of blind phishing simulation, then, is good blind phishing simulation, which means following a few rules. The first of these is that running the test should generate useful data, both for the testers and for the people being tested.
Simply proving that some people fall for phishing attacks is an empty discovery because that much is already known. The point of simulations should be to reduce the likelihood of this in a way that can be measured over time, which involves giving feedback to targets so they can improve.

A second rule is not to test the wrong part of the system. It's not clear whether the DNC incident had got as far as sending fake phishing emails, and that became impossible to ascertain once the phishing domains connected to the ruse had been taken offline.
Now it's dark
There's also the contentious possibility that anti-phishing simulation doesn't work anyway. More than one survey over the years has confirmed that even expert users aware of phishing tricks can find some impossible to spot, which turns phishing simulation into a version of phishing itself: a percentages game in which everyone is susceptible to some extent.
In fairness to the companies that sell anti-phishing systems, none would claim they are enough on their own. Simulation is just another potentially useful layer, to be deployed alongside other protections designed to detect phishing attacks before and just after they reach users' inboxes.
What has the DNC simulation-gone-awry taught us? That, at least as far as the DNC is concerned, not everyone in the organisation has enough faith in its security – and some felt the need to prove it. It's the sort of complicated human problem no amount of tech will ever solve. ®