The mitigations applied to exorcise Spectre, the family of data-leaking processor vulnerabilities, from computers hinder performance enough that disabling protection for the sake of speed may be preferable for some.
Disclosed in 2018 and affecting designs by Intel, Arm, AMD and others to varying degrees, these speculative execution flaws encompass multiple variants. They can be potentially exploited by malware via various techniques to extract sensitive information, such as cryptographic keys and authentication tokens, from operating system and application memory that should be off limits.
Though a lot of research has gone into the Spectre flaws, and work has been done to prevent their exploitation, to the best of our knowledge essentially no miscreants are abusing the weaknesses in the real world to steal information. Therein lies the rub: does one keep the protections on and take whatever performance hit arises (it depends enormously on the type of workload running), or switch them off because the risk is low? Or, put another way, does one prioritize the speed promised by chip manufacturers over the security that was supposed to be present?
Robert O'Callahan, a former Mozilla distinguished engineer who's currently developing the Pernosco debugger, was recently doing some work on the open source rr debugger (which Pernosco extends) and found that frequent system calls on Linux were slowing down code execution in user space, where applications run.
Assuming this was at least partially due to the impact of various Spectre mitigations, he disabled the defenses and reran his test of the
rr sources command, which serves to cache the results of many
access system calls checking for the existence of a directory. The results were not encouraging.
"So those Spectre mitigations make pre-optimization userspace run 2x slower (due to cache and TLB flushes I guess) and the whole workload overall 1.6x slower!" he wrote in a blog post on Saturday. "Before Spectre mitigations, those system calls hardly slowed down userspace execution at all."
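To see why syscall-heavy code is hit hardest, it helps to separate time spent crossing into the kernel from time spent in pure userspace computation. The following is an illustrative microbenchmark in the spirit of O'Callahan's observation, not his actual test; the loop sizes and workloads are our own invention:

```python
import os
import time

def time_it(fn, iterations=100_000):
    """Time `iterations` calls of fn, returning elapsed seconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

# Each os.access() call crosses the user/kernel boundary. With Spectre
# mitigations enabled, each crossing can trigger cache and TLB flushes,
# which also slows the userspace code running between the calls.
syscall_heavy = time_it(lambda: os.access("/tmp", os.F_OK))

# A pure-userspace loop of comparable shape, with no kernel crossings.
pure_userspace = time_it(lambda: sum(range(50)))

print(f"syscall-heavy loop:  {syscall_heavy:.3f}s")
print(f"pure-userspace loop: {pure_userspace:.3f}s")
```

Running a sketch like this with mitigations on and then off (via kernel boot parameters) is one way to estimate the toll on your own workload, rather than relying on general-purpose benchmark figures.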
The performance consequences of the various software and hardware mitigations vary significantly, depending on what you're running. Cited figures range from negligible to around 20 per cent typically, with some outliers that go significantly higher.
Tests published by Phoronix show a slowdown of two per cent to 16 per cent on Intel Rocket Lake chips – its 11th generation Core microprocessors – with mitigations enabled. Others tell of slowdowns in the 10 per cent to 30 per cent range. AWS noted, "We have not observed meaningful performance impact for the overwhelming majority of EC2 workloads."
So O'Callahan's finding that his debugging code ran 60 per cent slower is noteworthy even if it's not universally applicable.
"The overall slowdown was only 1.6x," he explained in an email to The Register. "The userspace part of the work slowed down 2x, but it only happens because we're making frequent system calls, and those system calls consume the majority of the time in that test. I expect similar results would occur for other workloads that are similarly system-call-intensive."
Asked whether he felt the commonly cited figures downplay the possible consequences of Spectre mitigations, and of defenses for its Meltdown cousin, O'Callahan suggested those numbers should be viewed skeptically.
"I think people should be aware that for system-call-intensive workloads 1.5x slowdown or more is possible, at least on older CPUs like Skylake," he said. "In my case I was able to rewrite the code to be much less system-call-intensive, but that won't always be possible."
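The kind of rewrite he describes usually means trading repeated system calls for in-process state. A minimal sketch of the idea, memoizing directory-existence checks so each unique path costs at most one access() syscall (the function and cache here are our own illustration, not rr's actual code):

```python
import os
from functools import lru_cache

@lru_cache(maxsize=None)
def dir_exists(path: str) -> bool:
    """Check whether `path` exists, caching the answer per path.

    The first call for a path issues an access(2) system call via
    os.access; repeat calls are answered from the in-process cache
    with no kernel crossing at all. Note the trade-off: the cache
    goes stale if the filesystem changes underneath it.
    """
    return os.access(path, os.F_OK)

first = dir_exists("/tmp")
second = dir_exists("/tmp")  # served from cache, no second syscall
print(dir_exists.cache_info())
```

For a workload that probes the same handful of directories millions of times, this collapses the syscall count, and with it the per-crossing mitigation overhead, to a constant.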
While this represents the extreme end of reported delays, there have been other, similar reports of sluggishness in specific tests. In 2018, for example, MIT researchers found [PDF] the GRSecurity-enabled kernel (using the UDEREF mitigation rather than the standard KPTI) slowed down by 90 per cent in a local disk test that copied 10MB of data from /dev/zero to a new file with a block size of 1 byte using the dd utility. This is an extreme example chosen to illustrate the point, we note.
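The 1-byte block size is doing the damage there: copying 10MB one byte at a time means roughly ten million read and write system calls, each paying the mitigation toll. A rough sketch of the same effect, scaled well down and written in Python rather than dd, so the absolute numbers are only illustrative:

```python
import os
import tempfile
import time

def copy_with_block_size(size: int, bs: int) -> float:
    """Write `size` zero bytes to a temp file in `bs`-byte chunks.

    Uses os.write on a raw file descriptor so each call is a real,
    unbuffered write(2) syscall. Smaller bs means more syscalls.
    Returns elapsed seconds.
    """
    fd, path = tempfile.mkstemp()
    chunk = b"\0" * bs
    start = time.perf_counter()
    written = 0
    while written < size:
        written += os.write(fd, chunk[: size - written])
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return elapsed

SIZE = 100_000  # scaled far down from the MIT test's 10MB
slow = copy_with_block_size(SIZE, bs=1)       # ~100,000 tiny writes
fast = copy_with_block_size(SIZE, bs=65_536)  # just two large writes
print(f"bs=1: {slow:.3f}s   bs=64K: {fast:.3f}s")
```

With mitigations enabled, the per-syscall cost grows, so the gap between the two block sizes widens further still.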
Results of this sort underscore why some people prefer to optimize for speed at the expense of security by offering guidance on how to disable mitigations. There's even a website for this express purpose – make-linux-fast-again.com – which simply displays the command line parameters to get rid of all that cumbersome security.
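Before reaching for any of those parameters, it is worth checking what your kernel is actually doing: Linux reports per-vulnerability mitigation status in one-line files under /sys/devices/system/cpu/vulnerabilities. A small sketch that reads them (and simply returns an empty mapping on systems without that sysfs directory):

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def mitigation_status(path: str = VULN_DIR) -> dict:
    """Map each vulnerability name (spectre_v1, spectre_v2, meltdown,
    and so on) to the kernel's status line, such as 'Mitigation: ...'
    or 'Vulnerable'. Returns {} if the directory is absent (non-Linux
    systems, or very old kernels)."""
    status = {}
    if not os.path.isdir(path):
        return status
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            status[name] = f.read().strip()
    return status

for vuln, state in mitigation_status().items():
    print(f"{vuln}: {state}")
```

Re-running this after a reboot with mitigations disabled will show the corresponding entries flip to "Vulnerable", which makes the security cost of the speed-up rather more concrete.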
However, O'Callahan advises caution for those thinking about dropping Spectre and Meltdown defenses.
"If you trust all the code running on the system you can turn these mitigations off safely," he said. "If you don't (eg because you use a Web browser and you never know what ad scripts are doing), you should not turn off those mitigations IMHO." ®