Three more data-leaking security holes found in Intel chips as designers swap security for speed

Apps, kernels, virtual machines, SGX, SMM at risk from attack

Intel will today disclose three more vulnerabilities in its processors that can be exploited by malware and malicious virtual machines to potentially steal secret information from computer memory.

These secrets can include passwords, personal and financial records, and encryption keys. They can be potentially lifted from other applications and other customers' virtual machines, as well as SGX enclaves, and System Management Mode (SMM) memory. SGX is Intel's technology that is supposed to protect these secrets from snooping code. SMM is your computer's hidden janitor that has total control over the hardware, and total access to its data.

Across the board, Intel's desktop, workstation, and server CPUs are vulnerable. Crucially, they do not work as documented: where their technical manuals say memory can be marked off limits, it simply is not. This means malicious software on a vulnerable machine, and guest virtual machines on a cloud platform, can potentially lift sensitive data from other software and other customers' virtual machines.

It is the clearest example yet that, over time, Chipzilla's management traded security for speed: its processors execute software at a screaming rate, with memory protection mechanisms a mere afterthought. In the pursuit of ever-increasing performance, defenses to protect people's data became optional.

Redesigned Intel processors without these speculative execution design blunders are expected to start shipping later this year. Mitigations in the form of operating system patches, and hypervisor fixes, should be arriving any time now, and should be installed if you're worried about malware or malicious virtual machines slurping data. Keep your eyes peeled for these. Some of these software mitigations require Intel's Q2 2018 microcode update to be installed.
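If you run Linux and want to know whether your kernel already carries the fixes, kernels with the mitigation backports report their L1TF status through sysfs. A minimal sketch, assuming a Linux box with the `vulnerabilities` sysfs interface (older kernels, and other operating systems, simply won't have the file):

```python
from pathlib import Path

def l1tf_status(sysfs="/sys/devices/system/cpu/vulnerabilities/l1tf"):
    """Return the kernel's reported L1TF mitigation status, or None if
    the sysfs interface is absent (older kernels, non-x86, non-Linux)."""
    try:
        return Path(sysfs).read_text().strip()
    except OSError:
        return None
```

A patched kernel typically reports a line beginning "Mitigation:", while "Not affected" means the CPU in question isn't vulnerable in the first place.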

Here are the three cockups, which Intel has dubbed its L1 Terminal Fault (L1TF) bugs because they involve extracting secret information from the CPU level-one data cache:

  • CVE-2018-3615: This affects Software Guard Extensions (SGX), and was discovered by various academics who will reveal their findings this week at the Usenix Security Symposium. According to Intel, "systems with microprocessors utilizing speculative execution and software guard extensions (Intel SGX) may allow unauthorized disclosure of information residing in the L1 data cache from an enclave to an attacker with local user access via side-channel analysis." This vulnerability was named Foreshadow by the team who uncovered it. Fixing it requires the microcode update.
  • CVE-2018-3620: This affects operating systems and SMM. According to Intel, "systems with microprocessors utilizing speculative execution and address translations may allow unauthorized disclosure of information residing in the L1 data cache to an attacker with local user access via a terminal page fault and side-channel analysis." Operating system kernels will need patching, and the SMM requires the microcode update, to be protected.
  • CVE-2018-3646: This affects hypervisors and virtual machines. According to Intel, "systems with microprocessors utilizing speculative execution and address translations may allow unauthorized disclosure of information residing in the L1 data cache to an attacker with local user access with guest OS privilege via a terminal page fault and side-channel analysis." This will require the microcode, operating system, and hypervisor updates to protect data.

The operating system and hypervisor-level flaws – CVE-2018-3620 and CVE-2018-3646 – were discovered by Intel's engineers after they were tipped off about CVE-2018-3615, the SGX issue, by the university researchers. The impact of these vulnerabilities, according to Chipzilla, is as follows:

  • Malicious applications may be able to infer the values of data in the operating system memory, or data from other applications.
  • A malicious guest virtual machine (VM) may be able to infer the values of data in the VMM’s memory, or values of data in the memory of other guest VMs.
  • Malicious software running outside of SMM may be able to infer values of data in SMM memory.
  • Malicious software running outside of an Intel SGX enclave or within an enclave may be able to infer data from within another Intel SGX enclave.

It should be noted that on cloud platforms running multiple customer-supplied virtual machines, these guest operating systems must be patched – otherwise a malicious guest can exploit the underlying host hardware it shares to steal information from neighboring VMs.

That means customers must only be allowed to use platform-supplied kernels that have the mitigations baked in, or the hypervisor software must be tweaked to, for example, not schedule strangers' virtual machines to run on the same physical CPU cores, or disable hyper-threading. As Red Hat noted, there is a potential performance hit from these mitigations.
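Whether hyper-threading is currently on can be checked the same way on Linux, via the kernel's sysfs SMT control interface; a small sketch (the path is the kernel's, the three-way return value is this sketch's choice):

```python
from pathlib import Path

def smt_enabled():
    """True/False when the Linux sysfs SMT interface reports hyper-threading
    active/inactive; None where the interface does not exist."""
    try:
        active = Path("/sys/devices/system/cpu/smt/active").read_text().strip()
    except OSError:
        return None
    return active == "1"
```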

According to Intel:

There is a portion of the market – specifically a subset of those running traditional virtualization technology, and primarily in the datacenter – where it may be advisable that customers or partners take additional steps to protect their systems. This is principally to safeguard against situations where the IT admin or cloud provider cannot guarantee that all virtualized operating systems have been updated. These actions may include enabling specific hypervisor core scheduling features or choosing not to use hyper-threading in some specific scenarios. While these additional steps might be applicable to a relatively small portion of the market, we think it’s important to provide solutions for all our customers.

For these specific cases, performance or resource utilization on some specific workloads may be affected and varies accordingly. We and our industry partners are working on several solutions to address this impact so that customers can choose the best option for their needs. As part of this, we have developed a method to detect L1TF-based exploits during system operation, applying mitigation only when necessary. We have provided pre-release microcode with this capability to some of our partners for evaluation, and hope to expand this offering over time.

Intel will today publish a technical white paper, here, with more information, and an FAQ here. Red Hat also has an explanation, here, and Oracle's take is here.

Meanwhile, SUSE has an advisory online, as does Microsoft over here for Windows, Xen has details here along with VMware, and Linux kernel patches can be inspected here. Check your operating system and hypervisor makers for updates, in other words.

What went wrong?

To summarize the problem: essentially, Intel's CPUs can, during speculative execution, disregard the access controls in their operating system kernel's page tables. OSes such as Microsoft Windows and Linux maintain special data structures, called page tables, in memory that describe how portions of physical RAM are carved up and allocated to running applications.

These tables, defined in Intel's manuals, specify whether sections of memory can be read from, or written to, by applications. Crucially, they also have a setting called "present", which when set to 1 indicates an actual chunk of physical RAM is available to store some information for a running application. When it is zero, there is no physical RAM allocated, so any accesses to that area should be blocked by a page fault.
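To make that "present" setting concrete, here is a sketch of decoding an x86-64 page-table entry in the layout Intel's manuals describe: bit 0 is the present flag, bit 1 the write-permission flag, bit 63 the no-execute flag, and bits 12 through 51 hold the physical frame address (the dictionary shape and field names are just this sketch's choices):

```python
PTE_PRESENT  = 1 << 0   # bit 0: entry is backed by physical RAM
PTE_WRITABLE = 1 << 1   # bit 1: writes are permitted
PTE_NX       = 1 << 63  # bit 63: instruction fetches are forbidden
FRAME_MASK   = 0x000FFFFFFFFFF000  # bits 12-51: physical frame address

def decode_pte(pte):
    """Pull the access-control fields out of a raw 64-bit page-table entry."""
    return {
        "present":    bool(pte & PTE_PRESENT),
        "writable":   bool(pte & PTE_WRITABLE),
        "no_execute": bool(pte & PTE_NX),
        "frame":      pte & FRAME_MASK,
    }
```

A "terminal fault" is exactly the case where `present` is 0: the page walk should end in a page fault, yet, as described below, the speculative machinery presses on regardless.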

When an application tries to touch some of its data in memory, it references the information using a virtual memory address. This address has to be converted into a physical memory address, which points to some part of a RAM chip in the system.
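That translation step can be sketched as simple bit arithmetic: with 4KB pages, the low 12 bits of a virtual address are the offset within a page and pass through unchanged, while the upper bits select a page-table entry that supplies the physical frame:

```python
PAGE_SHIFT = 12                      # 4KB pages
PAGE_MASK  = (1 << PAGE_SHIFT) - 1   # low 12 bits: offset within the page

def split_vaddr(vaddr):
    """Split a virtual address into (virtual page number, page offset).
    Translation swaps the page number for a physical frame number;
    the offset is carried over as-is."""
    return vaddr >> PAGE_SHIFT, vaddr & PAGE_MASK
```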

The processor therefore may consult the page tables to convert the app's virtual memory address to the corresponding physical RAM address. This takes time, and today's Intel CPUs will not wait for a page table walk to complete when they could be doing something more useful. They will speculatively execute code based on a copy of the requested information cached in the L1 data cache, even if the page tables specify that this data is no longer present in physical memory and thus should not be read.

The upshot is malware or a malicious guest operating system can exploit this to ascertain data it shouldn't be able to read, by forcing pages to be marked as not present and observing what's fetched speculatively from the L1 cache before the page fault circuitry in the processor can step in and halt proceedings.
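The leak itself is a cache-timing channel. Without reproducing any transient execution, the arithmetic of that channel can be sketched: the attacker's code touches one of 256 page-spaced slots in a probe array, selected by the secret byte, and later recovers the byte by noting which slot loads fastest. The probe array and the 4,096-byte stride are the conventional Flush+Reload setup, not anything specific to Intel's advisory:

```python
PAGE = 4096  # one slot per page, so each byte value lands on a distinct cache line

def encode(secret_byte):
    """Index the transient access would touch: probe_array[secret_byte * PAGE]."""
    return secret_byte * PAGE

def decode(hot_index):
    """Recover the byte from the index of the slot later observed to be cached."""
    return hot_index // PAGE
```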

This requires the exploit code to run on the same physical CPU core as the victim code, because it needs to observe the L1 data cache.

There is more in-depth detail on Microsoft's TechNet and virtualization blog.

You'll also no doubt be pleased to know that Microsoft Azure, Amazon Web Services, and Google Compute Engine have mitigations already in place.

Bullets dodged

It must be said that no malware, to the best of our knowledge, is exploiting the related Meltdown and Spectre flaws, nor the aforementioned speculative-execution vulnerabilities – partly because mitigations are rolling out across the industry, and partly because there are easier ways to hack people.

It is easier to trick someone into entering their online banking password into a bogus website than to develop malicious software that tickles the underlying hardware in just the right way to slowly extract secrets from memory. In a warped way, we should be thankful for that.

“L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today," an Intel spokesperson told The Register.

"We’ve provided more information on our web site and continue to encourage everyone to keep their systems up to date, as it's one of the best ways to stay protected. We’d like to extend our thanks to the researchers at imec-DistriNet, KU Leuven, Technion-Israel Institute of Technology, University of Michigan, University of Adelaide and Data61 and our industry partners for their collaboration in helping us identify and address this issue.” ®
