Xen hypervisor gets tech preview support for ARM processors
Red Hat's KVM has not won all hearts and minds – yet
The Xen Project, now part of the Linux Foundation and responsible for extending the open source virtualization hypervisor formerly controlled by Citrix Systems, has updated Xen with a new 4.3 release that brings it to ARM processors.
In a blog post, George Dunlap, a senior engineer at Citrix Systems in charge of the Xen hypervisor, pulled the trigger on the 4.3 release. Dunlap put the Xen hypervisor on a nine-month release cycle late last year, and Xen 4.3 is coming in right on time with 90 people, 25 of them independents, contributing 1,362 changesets with a total of 136,128 new lines of code to the hypervisor.
Citrix still did 41 per cent of the contributions, as reckoned by changeset count, with SUSE Linux picking up 23 per cent, individuals as a group doing 8 per cent, Intel doing 6 per cent, and the US National Security Agency doing another 5 per cent.
The Xen 4.3 hypervisor runs on so-called "fast models" of ARM cores that support the 32-bit ARMv7-A architecture with virtualization extensions, and is also functioning on top of emulated 64-bit ARMv8-A processors in the labs. In both cases, the Xen hypervisor has been tested against simulated ARMv7-A and ARMv8-A processors, called either Fast Models or Real-Time System Models in the ARM lingo. It has been booted up on a Google Chromebook – the Samsung XE303C12-A01, which is built around an Exynos chip – and has been tested on Samsung's Exynos5 system-on-chip in 32-bit mode with 40-bit physical addressing.
Xen 4.3 will run on 64-bit x86 iron, of course, and Xen 4.2, announced last September, was the last release to support 32-bit x86 chips.
The NUMA-aware scheduler is an important component of Xen on machines with multiple processor sockets, where chipsets use non-uniform memory access methods to glue those chips together and create a single memory space.
With NUMA designs, each socket has local memory, which it can access quickly. If the data needed for a piece of work is in the local memory attached to the CPU's socket, performance will be good. A hypervisor spanning multiple sockets therefore has to make sure that VMs pinned to a particular thread or core in a particular socket have their data placed in that socket's local memory; otherwise, they have to reach out across the NUMA interconnect and fetch data from adjacent sockets, which is slower.
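That kind of pinning can be expressed in an xl domain configuration file. Here is a minimal sketch – the guest name, memory size, and CPU range are illustrative, and the exact CPUs per socket depend on the machine – using the `cpus` option from the xl configuration format:

```
# Illustrative xl domain config fragment (name, memory, and CPU range are examples)
name   = "numa-guest"
vcpus  = 4
memory = 4096
# Restrict the guest's virtual CPUs to physical CPUs 0-7, i.e. the cores of
# one socket, so its memory is allocated from that node's local RAM
cpus   = "0-7"
```

With no explicit pinning, the new automatic placement logic tries to pick a suitable node for the guest on its own.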
As the number of VMs climbs in the machine, the effect of NUMA-aware scheduling increases, as you can see in these preliminary benchmark test results, and presumably this will also be the case as the number of sockets increases. It gets a bit dicey when a machine becomes overloaded with work, but even then the tweaks to make Xen appreciate the eccentricities of NUMA systems seem to help somewhat.
Xen 4.3 has some modest scalability enhancements as well. The amount of physical main memory supported by the Xen hypervisor has been boosted from a maximum of 5TB with Xen 4.2 to 16TB with Xen 4.3. The earlier Xen had a bottleneck that limited the number of virtual CPUs supported by a single instance of Xen to 300 virtual CPUs, and that has been boosted to 750 with the 4.3 update.
That is a tested limit, not a theoretical one. Citrix just released its own XenServer 6.2 commercial-grade Xen last week, and has pushed the hypervisor scalability up to 3,250 virtual CPUs tested on a single host, with what it reckons is a theoretical limit of 4,000 virtual CPUs at the moment.
The Xen 4.3 hypervisor also integrates the Open vSwitch virtual switch created by Nicira and now controlled by VMware, which bought Nicira last summer for $1.26bn. Open vSwitch is replacing other virtual interface bridging code that was part of Xen, though the Open vSwitch integration is itself still in tech preview.
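Hooking a guest's network interface to Open vSwitch is done per domain in the xl config. A hedged sketch, assuming an Open vSwitch bridge named ovsbr0 has already been created on the host with the ovs-vsctl tool (the bridge name is an example):

```
# Illustrative xl domain config fragment: attach the guest NIC to an
# Open vSwitch bridge rather than a traditional Linux bridge.
# "ovsbr0" is an example bridge name created beforehand with ovs-vsctl.
vif = [ 'bridge=ovsbr0, script=vif-openvswitch' ]
```

The vif-openvswitch hotplug script tells the toolstack to add the guest's port with ovs-vsctl instead of the legacy bridging tools.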
Until now, Xen had used a fork of the QEMU hardware emulator to underpin the hypervisor; it now uses a new iteration called qemu-xen, based on an upstream release of QEMU with backports to make it work with Xen. That means Linux operating system distributors can more easily integrate Xen into their distros. The old qemu-traditional hardware emulator will still be available. (QEMU also underpins the KVM hypervisor, by the way.)
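The device model is selectable per guest in the xl config, so the upstream-based emulator can be tried while keeping the old fork as a fallback. A minimal sketch using the device_model_version option from the xl configuration format:

```
# Use the upstream-QEMU-based device model for this HVM guest...
device_model_version = "qemu-xen"
# ...or fall back to the old forked emulator if a guest misbehaves:
# device_model_version = "qemu-xen-traditional"
```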
The Xen Project reckons that there are over 10 million users across enterprise computing and public clouds who are using the Xen hypervisor in one of its many forms, and uncounted others that are using Xen in embedded and mobile devices. ®