We need to go deeper: Meltdown and Spectre flaws will force security further down the stack
Turns out performance at all costs has been rather costly
Around 2003, a computer security portent that had been cheerlessly simmering away for years suddenly came to the boil.
This was an era stricken by malware attacks on a scale few had prepared for, aimed at software beset with flaws that some vendors seemed disinclined to acknowledge, let alone fix.
Vulnerabilities, including high-severity ones, were nothing new, of course, but on the back of the internet megatrend they seemed to be getting more dangerous, causing global trouble in a matter of hours, infamously through fast-spreading worms such as that year's Blaster and SQL Slammer.
Blaster was a particularly ironic example because the vulnerability it targeted – a buffer overrun in Windows DCOM RPC – had ostensibly been patched a month before the attack. But having a patch and applying it were not, it turned out, the same thing.
What was going on? On the face of it, it appeared that high-rated vulnerabilities – especially the then-novel zero-day flaws – were supercharging malware in ways that were going to require new thinking and far better processes.
Vulnerability numbers quickly grew to thousands each year and migrated from the OS and server layer to mainstream applications. What counted now was response. If attackers could deploy an exploit over a period of hours or days, how long would it take defenders to peg it by deploying a software patch or mitigation?
As Gerhard Eschelbeck, then CTO of vulnerabilities management outfit Qualys, stood up to give a presentation at that year's Black Hat show in Las Vegas, he thought he had come up with a way of measuring that gap.
Now Google's vice president of security and privacy engineering (in effect its CISO), Eschelbeck called his big idea the Laws of Vulnerabilities (PDF), a way to understand how quickly Qualys's enterprise customers were patching flaws.
What interested him was vulnerability "half-life", or how long it took to reduce the occurrence of a flaw by 50 per cent, which in 2003 was an average of 30 days in a world where exploits could appear within days.
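To make that concrete: if patch uptake behaves like exponential decay – a simplification of ours, not Qualys's published methodology – the share of systems still exposed after t days, given a half-life of h days, is 0.5^(t/h). A quick sketch in C:

    /* Back-of-the-envelope half-life model: the fraction of systems
     * still unpatched after t days, given a half-life of h days.
     * Illustrative only; Qualys derived its figures empirically. */
    #include <math.h>
    #include <stdio.h>

    static double still_vulnerable(double t_days, double half_life_days) {
        return pow(0.5, t_days / half_life_days);
    }

    int main(void) {
        /* With the 30-day half-life measured in 2003, roughly half of
         * systems remained exposed a month after a patch shipped, and
         * about a quarter after two months. */
        printf("after 30 days: %.0f%%\n", 100 * still_vulnerable(30, 30));
        printf("after 60 days: %.0f%%\n", 100 * still_vulnerable(60, 30));
        return 0;
    }

By that arithmetic, a quarter of systems were still wide open two months after a fix shipped – an eternity against exploits measured in days.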
"It is quite interesting to look back and realise the Laws of Vulnerabilities are very much applicable more than ten years later, even though vulnerability half-life has shortened substantially," says Eschelbeck. "What was measured in days a decade ago is now measured in hours. At the same time, vulnerability management has evolved from a tactical tool to a critical component of any sound security strategy, and Common Vulnerabilities Scoring System has become the golden standard for vulnerability prioritisation."
This MO has at least contained the threat posed by software vulnerabilities. "While the complexity of vulnerabilities found has increased, modern computing paradigms such as cloud computing have shifted infrastructure management to a centralized model, allowing for better scale, and more rapid deployment of security updates."
Perma-flaws
And yet despite this, vulnerabilities march on with a predictable logic. Having colonised OSes and web and PC applications, the vulnerability problem is now menacing firmware and the silicon itself, through side-channel proof-of-concept (PoC) vulnerabilities such as Meltdown and Spectre.
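Part of what makes these flaws so slippery is that the vulnerable pattern is perfectly ordinary code. Here is a minimal sketch of the Spectre variant 1 "bounds check bypass" gadget, as described in the public PoC papers (the array names are illustrative, not from any real codebase):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];  /* probe array: one cache line per byte value */

    uint8_t victim_function(size_t x) {
        if (x < array1_size) {
            /* If the branch predictor has been trained to expect the
             * bounds check to pass, the CPU may speculatively execute
             * this load even for an out-of-bounds x, touching a line of
             * array2 that depends on secret memory at array1[x]. Timing
             * later accesses to array2 leaks that byte, even though the
             * speculative result is architecturally discarded. */
            return array2[array1[x] * 4096];
        }
        return 0;
    }

The bug, in other words, is not in the code but in the processor's eagerness to run ahead of it – which is why no amount of conventional patching makes it go away.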
Just as in 2003, vendors today seem surprised and under-prepared – not this time by attackers armed with malware but by tiny groups of researchers who simply decided to unpick two decades of assumptions.
During 2017, the low-level theme bloomed. In March, Embedi told Intel about a serious flaw in the Active Management Technology (AMT) vPro firmware that is part of the mysterious Management Engine (ME), followed in July by a second "is it a bug or a feature?" weakness in the same interface courtesy of F-Secure.
In June, two Russian researchers at Positive Technologies had given Intel the bad news that they'd found problems in the ME proper stretching back to 2008. Alarmed, Intel ran an audit and found eight serious flaws it eventually made public in November, the same month a Google engineer let slip at a conference that the company planned to rip the ME out of its servers, because a hidden remote management computer-within-a-computer (complete with its own modified MINIX OS, memory, and web server) didn't sound like a great idea in cloud data centres.
Popping a cherry on the turd of woe, October saw an urgent security vulnerability in the Infineon Trusted Platform Modules (TPMs) that sit at the root of security in many PCs, laptops and all Google Chromebooks – the latter needing a factory-resetting "powerwash" to complete the update.
All of these were PoCs rather than criminal exploits but, as the Meltdown and Spectre superflaws were later to show, it mattered not. None were easy to fix, and in Intel's case the only meaningful option short of buying new hardware was a series of complex mitigations that, with novel PoC exploits popping up ever more regularly, will haunt endpoints for years to come.
Software patching half-life is perhaps days, or a month or two at worst. For side-channel flaws, patching or mitigation looks as if it will stretch to years.
Hotel insomnia
The good news, notes Carsten Eiram, chief research officer at vulnerability analysis firm Risk Based Security, is that none so far involves remote code execution, which gives defenders a chance of detecting and blocking them.
Even when fixes are not easy or even possible, mitigations are. It's messy and slow but liveable, provided the industry can quickly fashion a reliable mitigation channel.
"In general, these types of vulnerabilities are very rare compared to the total number of vulnerabilities reported each year," Eiram says. "The bar is higher than many other types of vulnerabilities."
The question is how many will emerge this year and how easily they can be mitigated. Eiram expects to see more, but not in any great number. Which is fortunate because: "If a serious vulnerability was disclosed in a low-level component, even if the researcher only provided a PoC, there should still be a fair number of actors in this space with the capabilities to potentially turn it into a working exploit.
"If a low-level remote code execution issue is discovered that for some reason cannot be properly mitigated or fixed without replacements, it would be a huge problem."
What constrains mitigation is the number of moving parts. For Meltdown and Spectre, the hardware maker (Intel) had to shape its mitigation around what the OS maker (Microsoft) deemed possible. Microsoft in turn had to warn antivirus vendors, whose products sometimes made unsupported calls into kernel memory that could clash with the patches and with Kernel Patch Protection (KPP) – each vendor having to set a registry key to confirm compatibility before the update would be offered.
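For the record, the flag in question was the QualityCompat value Microsoft documented in its January 2018 guidance. A sketch of what an AV vendor's installer might have done (the function name is ours; error handling is trimmed):

    #include <windows.h>

    /* Set the QualityCompat value Microsoft required before offering the
     * January 2018 Meltdown/Spectre updates. Returns 0 on success. */
    int set_qualitycompat(void) {
        HKEY key;
        DWORD zero = 0;
        LONG rc;

        if (RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\QualityCompat",
                0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
            return -1;

        rc = RegSetValueExA(key, "cadca5fe-87d3-4b96-b7fb-a231484277cc",
                0, REG_DWORD, (const BYTE *)&zero, sizeof(zero));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : -1;
    }

A single registry value standing between an OS patch and millions of endpoints is a neat illustration of just how many hands the fix had to pass through.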
Tellingly, Microsoft ended up hosting Intel's patches to speed distribution in case Intel's own efforts fell short. Cooperation between industry tiers suddenly mattered.
Liviu Arsene, senior e-threat analyst at AV company Bitdefender, reckons we will see more technologies that sit between the hardware and the software.
That will create fresh challenges. Once people run out of plasters, they'll need something stronger, he says.
"We've been trying to get as low level as possible. Security is leaving the operating system to go deeper down the stack... to sit between the CPU and the software," according to Arsene.
His company last year announced Hypervisor Introspection (HVI), a data centre security technology developed in conjunction with Citrix that protects virtualised servers from the thorny problem of malware exploiting shared memory.
At the time it looked like an interesting sledgehammer for a peanut-sized problem; it looks less so now that people have had time to speculate how Meltdown- and Spectre-primed malware might escape hypervisors in ways that until recently sounded hypothetical.
This is not something the company could have cooked up on its own – Citrix's involvement was essential to avoid breaking things.
In 2003, security professionals suddenly grasped the size of the challenge facing them and its long-term consequences for development and software management. In hindsight, it's clear the hardware makers didn't get the memo, and so the cult of performance-at-all-costs barrelled on.
Now it's as if the brakes have been applied as the industry re-learns the same lessons all over again.
"While patching is good, that doesn't address the core issue which is at some point you need to upgrade your hardware," says Liviu. "If until now we thought of security as exploiting vulnerabilities in code, this goes to prove that this code can run much deeper than we thought." ®