Qubes kicks Xen while it's down after finding 'fatal, reliably exploitable' bug
You left the stable door open? AGAIN? C'mon guys, keep those guests locked up
Qubes is once again regretting how long it's taking to abandon Xen's paravirtualised (PV) mode, after disclosing another three Xen bugs, including host escape vulnerabilities.
“An attacker who exploits either of these bugs can break Qubes-provided isolation. This means that if an attacker has already exploited another vulnerability, e.g. in a Web browser or networking or USB stack, then the attacker would be able to compromise a whole Qubes system,” Qubes says in this note.
The bug in XSA-213 only affects 64-bit x86 systems and relates to how root and user mode page tables are handled by 64-bit PV guests. The IRET hypercall, which stands in for the identically named CPU instruction, transfers control from kernel mode to user mode.
“If such an IRET hypercall is placed in the middle of a multicall batch, subsequent operations invoked by the same multicall batch may wrongly assume the guest to still be in kernel mode”, Xen explains, with the result that the guest could get writable access to the wrong root page table.
This means a buggy or malicious PV guest “may be able to access all of system memory, allowing for all of privilege escalation, host crashes, and information leaks.”
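The bug class is easy to model in miniature. The following C sketch is not Xen source code; the names (`run_batch_buggy`, `OP_IRET` and so on) are invented for illustration. The point is that the buggy batch processor samples the guest's privilege mode once, before the loop, so an IRET that drops the guest to user mode partway through leaves the stale "kernel mode" assumption in force for the rest of the batch.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the XSA-213 bug class -- NOT Xen source code.
 * Privileged operations must only run while the guest is in
 * kernel mode; OP_IRET drops the guest to user mode. */
enum mode { KERNEL, USER };
enum op   { OP_PRIV, OP_IRET };

struct vcpu { enum mode mode; };

/* Buggy: the mode is sampled once, before the batch runs, so an
 * IRET in the middle leaves later privileged ops unchecked. */
static int run_batch_buggy(struct vcpu *v, const enum op *ops, int n)
{
    bool in_kernel = (v->mode == KERNEL);   /* sampled once */
    int done = 0;
    for (int i = 0; i < n; i++) {
        if (ops[i] == OP_IRET)
            v->mode = USER;                 /* mode changes mid-batch */
        else if (in_kernel)                 /* stale check still passes */
            done++;
    }
    return done;
}

/* Fixed: re-check the current mode for every operation. */
static int run_batch_fixed(struct vcpu *v, const enum op *ops, int n)
{
    int done = 0;
    for (int i = 0; i < n; i++) {
        if (ops[i] == OP_IRET)
            v->mode = USER;
        else if (v->mode == KERNEL)
            done++;
    }
    return done;
}
```

With a batch of {privileged op, IRET, privileged op}, the buggy version executes both privileged operations while the fixed version executes only the first, refusing the one issued after the drop to user mode.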
XSA-214's bug lies in GNTTABOP_transfer, the operation that lets guests pass memory pages to one another.
“The internal processing of this, however, does not include zapping the previous type of the page being transferred. This makes it possible for a PV guest to transfer a page previously used as part of a segment descriptor table to another guest while retaining the 'contains segment descriptors' property.”
The result is, once again, a complete host escape: a malicious pair of guests can get access to all system memory, resulting in “privilege escalation, host crashes, and information leaks.”
“Pair of guests” is an important qualifier, as Qubes notes in its document, since it “requires cooperation between two VMs of different types, which somewhat limits its applicability.”
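Again, a hedged toy model (invented names, not Xen's actual page bookkeeping) shows the shape of the bug: the transfer moves ownership of a page but forgets to zap its previous type, so a page that held segment descriptors arrives at the new owner still marked as containing them.

```c
#include <assert.h>

/* Toy model of the XSA-214 bug class -- NOT Xen source code.
 * Pages carry a type; a page that held segment descriptors must
 * lose that type before another guest can reuse it. */
enum page_type { PGT_NONE, PGT_SEG_DESC };

struct page { int owner; enum page_type type; };

/* Buggy: ownership moves, but the previous type is not zapped. */
static void transfer_buggy(struct page *pg, int new_owner)
{
    pg->owner = new_owner;          /* stale type survives the transfer */
}

/* Fixed: the previous type is cleared as part of the transfer. */
static void transfer_fixed(struct page *pg, int new_owner)
{
    pg->owner = new_owner;
    pg->type = PGT_NONE;            /* zap the stale type */
}
```

In the buggy case the receiving guest holds a writable page that the hypervisor still trusts as a segment descriptor table, which is the property the cooperating pair of guests exploits.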
Finally, there's XSA-215. It falls short of the “all of system memory”-level seriousness of the previous two, but it's plenty bad enough.
A guest attack could modify “part of a physical memory page not belonging to it”, with the resulting attack vectors covering privilege escalation, host crashes, crashing other guests, and information leaks.
The bug is in Xen's exception handling, which under some conditions means it returns to guest mode “not via ordinary exception entry points, but via a so-called failsafe callback. This callback, unlike exception handlers, takes 4 extra arguments on the stack (the saved data selectors DS, ES, FS, and GS).
“Prior to placing exception or failsafe callback frames on the guest kernel stack, Xen checks the linear address range to not overlap with hypervisor space. The range spanned by that check was mistakenly not covering these extra 4 slots.”
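A minimal sketch of that checking mistake, again with invented names and toy constants rather than Xen's real layout: the bounds check spans only the ordinary exception frame, so a failsafe frame whose four extra selector slots spill into hypervisor space still passes.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the XSA-215 bug class -- NOT Xen source code.
 * Before writing a callback frame to the guest kernel stack, the
 * target range must be checked against hypervisor space; the
 * failsafe frame is 4 slots larger than an ordinary one. */
#define SLOT          8u        /* bytes per stack slot */
#define FRAME_SLOTS   5u        /* ordinary exception frame */
#define EXTRA_SLOTS   4u        /* DS, ES, FS, GS for failsafe */
#define HV_START      61440u    /* start of (toy) hypervisor space */

/* Buggy: the check spans only the ordinary frame, missing the
 * 4 extra failsafe slots. */
static bool range_ok_buggy(uint32_t base)
{
    return base + FRAME_SLOTS * SLOT <= HV_START;
}

/* Fixed: the check covers the extra slots too. */
static bool range_ok_fixed(uint32_t base)
{
    return base + (FRAME_SLOTS + EXTRA_SLOTS) * SLOT <= HV_START;
}
```

A base address placed just below the boundary passes the buggy check even though the four extra slots land inside hypervisor space; the fixed check rejects it.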
As with XSA-214, this bug is confined to 64-bit Xen on x86 (in this case, version 4.6 and earlier), and with particular physical memory boundaries (5 TB or 3.5 TB).
We're getting sick of this
Of the three bugs, Qubes says XSA-213 is the worst (“fatal, reliably exploitable”, it says), and there's more than a hint of frustration in its discussion.
Over eight years, Qubes complains, there have been four Xen bugs in the same class, all of them relating to “Xen mechanisms for handling memory virtualisation for paravirtualised (PV) VMs.”
Qubes says after XSA-212 emerged ten months ago, “we immediately began working on a way to move away from using PV-based VMs and toward using only hardware-based virtualization (HVM) VMs in Qubes 4.x” – but this is turning out to be harder than it looks.
The major undertaking delayed Qubes 4.0, the outfit says, and even then, there's still stuff on the to-do list.
“We originally hoped we could transition to running all Linux VMs in a so-called PVH mode of virtualization, where the I/O emulator is not needed at all, but it turned out the Linux kernel is not quite ready for this.
“So, in Qubes 4.0, we will use the classic HVM mode, where the I/O emulator is sandboxed within... a PV VM (which is also the case when one runs Windows AppVMs on Qubes 3.x). This makes it possible for an attacker to chain attacks: one for QEMU with another hypothetical for PV virtualisation, to break out of a VM.”
If there were an alternative which Qubes believed was both more secure than Xen and which supported all the architectural features Qubes needs (the post cites running network and storage backends in unprivileged VMs as one example), the organisation would consider replacing Xen.
That's not on the cards yet, but the post notes that since Qubes 3.0, the system's architecture should be able to treat Xen as a replaceable component, should that be necessary.
All of the bugs are credited to Jann Horn of Google's Project Zero. ®