At some point this fall, a team of researchers from MIT's CSAIL and UC Berkeley's EECS aim to deliver an initial version of an open source, formally verified, secure hardware enclave based on RISC-V architecture called Keystone.
"From a security community perspective, having trustworthy secure enclaves is really important for building secure systems," said Dawn Song, a professor of computer science at UC Berkeley and founder and CEO of Oasis Labs, in a phone interview with The Register. "You can say it's one of the holy grails in computer security."
Song recently participated in a workshop to advance Keystone, involving technical experts from Facebook, Google, Intel, Microsoft, UC Berkeley, MIT, Stanford and the University of Washington, among other organizations.
Keystone is intended to be a component for building a trusted execution environment (TEE) – an isolated area of the processor that keeps sensitive data and code segregated from the rest of the system. TEEs have become more important with the rise of public cloud providers and the proliferation of virtual machines and containers. Those running sensitive workloads on other people's hardware would prefer greater assurance that their data can be kept segregated and secure.
There are already a variety of security hardware technologies in the market: Intel has a set of instructions called Software Guard Extensions (SGX) that address secure enclaves in its chips. AMD has its Secure Processor and SEV. ARM has its TrustZone. And there are others.
But these are neither as impenetrable as their designers wish nor as open to review as cyber security professionals would like. The recently disclosed Foreshadow side-channel attack affecting Intel's SGX offers a case in point.
That's not to say an open source secure enclave would be immune to such problems, but an open specification with source code would be more trustworthy because it could be scrutinized.
"All these solutions are closed source, so it's difficult to verify the security and correctness," said Song. "With the Keystone project, we'll enable a fully open source software and hardware stack."
In addition, existing RISC-V implementations appear to be less vulnerable to side-channel attacks. As the RISC-V Foundation said following the disclosure of the Spectre and Meltdown vulnerabilities earlier this year, "No announced RISC-V silicon is susceptible, and the popular open-source RISC-V Rocket processor is unaffected as it does not perform memory accesses speculatively."
(The RISC-V Berkeley Out-of-Order Machine, or "BOOM" processor, performs speculative execution with branch prediction, so immunity to side-channel attacks should not be assumed.)
RISC-V is relatively new to the scene, having been introduced back in 2010. Established chipmakers like ARM, however, view it as enough of a threat to attack it.
But it's not yet clear whether makers of RISC-V hardware will go all-in on openness. Ronald Minnich, a software engineer at Google and one of the creators of coreboot, recently noted that HiFive RISC-V chips have proprietary pieces.
"I realize there was a lot of hope in the early days that RISC-V implied 'openness' but as we can see that is not so," he wrote in a mailing list message in June. "...Open instruction sets do not necessarily result in open implementations. An open implementation of RISC-V will require a commitment on the part of a company to opening it up at all levels, not just the instruction set."
RISC-V may end up being a transition to more secure chip designs that incorporate the lessons of Spectre, Meltdown and Foreshadow. According to Song, there was discussion at the workshop about "whether we can build a new hardware architecture from ground up." ®