Intel's SGX cloud-server security defeated by $30 chip, electrical shenanigans
VoltPillager breaks enclave confidentiality, calls anti-rogue data-center operator promise into question
Boffins at the University of Birmingham in the UK have developed yet another way to compromise the confidentiality of Intel's Software Guard Extensions (SGX) secure enclaves, supposed "safe rooms" for sensitive computation.
Over the past few years, the security of SGX, a set of security-oriented instructions used to set up so-called secure enclaves, has been assailed repeatedly by infosec types. These enclaves are intended to house software and data that not even the computer's administrators, operating system, applications, users, or owners can access: we're talking software like anti-piracy aka DRM measures that decode encrypted media streams, and sensitive cryptography in cloud servers. The enclaves are supposed to ensure that no one can snoop on that code and data, whether it's running in people's bedrooms or in cloud environments.
Skepticism voiced in 2016 was followed a year later by Prime+Probe and Rowhammer attacks, which chipped away at SGX's protections. Then came Spectre in 2018, and a series of other techniques that broke enclave protections followed.
The Birmingham boffins – computer scientists Zitai Chen, Georgios Vasilakis, Kit Murdock, Edward Dean, David Oswald, and Flavio D. Garcia – have managed a variation on an attack that some of them helped develop last year called Plundervolt.
Plundervolt is a software-based attack on recent Intel processors running SGX enclaves: it lowers the CPU voltage to induce faults or errors that allow the recovery of secrets like encryption keys. Following disclosure last December, Intel mitigated the vulnerability with microcode and BIOS updates that remove the software's ability to reduce processor voltage.
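For the curious, the software knob Plundervolt abused is an undocumented voltage-offset register on Intel chips. As a rough sketch, here is how public undervolting tools encode a negative voltage offset into the 64-bit value written to that model-specific register (the field layout here comes from those open-source tools, not from this article or from Intel documentation, so treat it as illustrative):

```python
# Sketch: encoding a Plundervolt-style undervolt request as an MSR value.
# The field layout (plane index, write-enable bit, 11-bit signed offset in
# 1/1024 V steps) follows public undervolting tools; illustrative only.

def encode_undervolt(plane: int, millivolts: int) -> int:
    """Build the 64-bit value for a voltage-offset write.

    plane:      0 = CPU core, 1 = GPU, 2 = cache, etc. (per public tools)
    millivolts: voltage offset; negative values undervolt the chip
    """
    units = round(millivolts * 1.024)   # convert mV into 1/1024 V steps
    offset = units & 0x7FF              # 11-bit two's-complement field
    return (1 << 63) | (plane << 40) | (1 << 36) | (offset << 21)

# A -100 mV nudge on the core plane -- the kind of brief undervolt
# Plundervolt used to make enclave computations misbehave:
print(hex(encode_undervolt(0, -100)))   # → 0x80000010f3400000
```

Intel's post-Plundervolt microcode simply disables this interface, which is precisely why the Birmingham team moved the attack off the CPU and onto the circuit board.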
Now, the researchers have implemented a similar attack in hardware, using about $30 in off-the-shelf electronics. They plan to present a paper describing their work [PDF] next year at the Usenix Security 2021 conference.
Their technique, named VoltPillager in the tradition of dramatic bug branding, works on SGX systems, even those that have received Intel's Plundervolt patch (CVE-2019-11157). It involves injecting messages on the Serial Voltage Identification (SVID) bus between the CPU and the voltage regulator in order to control the voltage of the CPU core.
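On that bus, the CPU tells the regulator which voltage to supply via an 8-bit VID code; VoltPillager's bolted-on microcontroller simply speaks for the CPU. The exact SVID framing is proprietary, but the VID-to-voltage mapping for Intel's regulators is widely reported as 5 mV steps starting from 0.25 V at VID 0x01. Assuming that mapping (an assumption, not something stated in the article), the arithmetic an injection board would perform looks like this:

```python
# Sketch: VID-code arithmetic for an SVID-style "set voltage" command.
# Assumes the widely reported mapping (VID 0x01 = 0.25 V, +5 mV per step,
# VID 0x00 = regulator off); the real SVID protocol is proprietary.

def vid_to_volts(vid: int) -> float:
    if vid == 0:
        return 0.0                      # VID 0 conventionally means "off"
    return 0.245 + vid * 0.005

def volts_to_vid(volts: float) -> int:
    return round((volts - 0.245) / 0.005)

# Injecting a packet that drops the core from a nominal ~1.05 V to an
# out-of-spec ~0.83 V amounts to swapping one VID byte for another:
nominal = volts_to_vid(1.05)    # VID for normal operation
glitch = volts_to_vid(0.83)     # undervolted target that induces faults
print(nominal, glitch)          # prints: 161 117
```

A one-byte difference on a wire with no authentication: that is the entire trick, which also previews why the mitigations discussed below are awkward.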
This is not a remote attack, so it's not the sort of thing to send system administrators or users scrambling to patch vulnerable systems. It requires physical access to a server – opening it up to attach a malicious circuit board – so it's a threat mainly for multi-tenant computing scenarios. Half the point of SGX, though, is to protect sensitive code and data from rogue server administrators when said servers are out of reach and in someone else's data center – such as a cloud provider's. And yet someone at a cloud provider with physical access to a box can jolt an Intel processor into breaking its SGX protections.
"This attack is quite relevant because it is often claimed that SGX can defend against malicious insiders/cloud providers," said David Oswald, a lecturer in the security and privacy group at the University of Birmingham, and one of the paper's co-authors, in an email to The Register.
Oswald pointed to what Intel says about Microsoft Azure DCsv2-series virtual machines running on Intel Xeon E processors with SGX: "Even cloud administrators and datacenter operators with physical access to the servers cannot access the Intel SGX-protected data."
"We show that this is not the case, i.e. that physical attacks on SGX are possible at very low cost (about $30)," he said. "And in contrast to previous SGX attacks, our findings cannot easily be patched (say in microcode)."
The paper touches on possible mitigations, like adding cryptographic authentication to the SVID protocol, having the CPU monitor the SVID bus for injected packets, and countermeasures in enclave code. But it argues none of these techniques look particularly promising. Hardware-based mitigation like voltage monitoring circuitry in smartcards is one possibility, but the paper notes this would require chip design changes and would incur overhead.
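To make the first of those mitigations concrete: cryptographically authenticating SVID traffic would mean the CPU and regulator share a key, with every command carrying a tag the regulator verifies before acting, so injected packets are simply rejected. A minimal sketch of the idea follows; the frame layout, the use of HMAC-SHA256, and the counter for replay protection are all illustrative assumptions on our part, not anything the real protocol provides:

```python
# Sketch: what authenticated voltage commands might look like. Purely
# illustrative -- the real SVID bus has no such mechanism, which is
# exactly the gap VoltPillager exploits.
import hmac, hashlib

KEY = b"cpu-and-vr-shared-secret"   # hypothetically provisioned at manufacture

def sign_command(counter: int, vid: int) -> bytes:
    """CPU side: frame = 4-byte counter || 1-byte VID || truncated HMAC tag."""
    msg = counter.to_bytes(4, "big") + bytes([vid])
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
    return msg + tag

def verify_command(frame: bytes, last_counter: int) -> bool:
    """Regulator side: check the tag and reject stale or replayed counters."""
    msg, tag = frame[:5], frame[5:]
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
    counter = int.from_bytes(msg[:4], "big")
    return hmac.compare_digest(tag, expected) and counter > last_counter

legit = sign_command(42, 161)
assert verify_command(legit, last_counter=41)       # genuine command accepted
forged = legit[:4] + bytes([117]) + legit[5:]       # VoltPillager-style VID swap
assert not verify_command(forged, last_counter=41)  # tampered frame rejected
```

The catch, as the paper implies, is that retrofitting anything like this onto shipped silicon means new CPUs, new regulators, and key provisioning – which is why the researchers see no easy fix.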
It appears Intel isn't even going to try. The researchers disclosed the attack to Intel in March, and were told that "opening the case and tampering of internal hardware to compromise SGX is out of scope for SGX threat model. Patches for CVE-2019-11157 (Plundervolt) were not designed to protect against hardware-based attacks as per the threat model." So there goes that whole promise for Microsoft Azure, then.
An Intel spokesperson said much the same in an emailed statement to The Register: "Techniques that require an attacker to physically open the case, including removing screws or breaking plastic casing to gain access to the internal hardware of a device are typically not addressed as a vulnerability. As always, we recommend keeping systems up to date and maintaining physical possession of devices."
That recommendation would be more meaningful if data centers could be carried around in a pocket. For what it's worth, we advise against that: the heat becomes hard to bear after a while.
Intel may not be the only company confronted with the challenge posed by VoltPillager. The academics did not try their attack on processors from other vendors, but they note in their paper that AMD relies on a similar processor design that includes a voltage regulator connected to the CPU via its SVI bus.
The boffins conclude that, in light of their findings, relying on third parties and secure enclaves to protect computational secrets may be unwise.
"The results in this paper, together with the manufacturer’s decision to not mitigate this type of attack, prompt us to reconsider whether the widely believed enclaved execution promise of outsourcing sensitive computations to an untrusted, remote platform is still viable," the paper concludes. ®