Xen accelerates ARM server race with version 4.5

1TB memory for guests on ARM and lots of VM-wrangling fun for admins


Version 4.5 of the Xen hypervisor is upon us.

Those responsible for the new release are chuffed to report they've packed in a couple of dozen big new features this time around, doubling the 4.4 release's count. And there's more to come: some useful stuff couldn't be squeezed into this release.

But before we get to those features, let's cast our eyes to the major contributors list. As one would expect, the likes of Intel, AWS, AMD, Citrix, Oracle and Verizon Cloud have been big contributors (so's the NSA, but don't say it too loud).

And this time around Cavium also makes the list. That's Cavium as in the mob that last year revealed a 48-core ARM SoC. That Xen can now allocate a terabyte of RAM to guest VMs on ARM should therefore not surprise, nor should the Xen folks' belief that this is a big advance that brings ARM servers into play like never before. The new release can also boot using UEFI and supports AMD's Seattle 64-bit server SoC.

That all adds up to a pitch from Xen that it can let users run up big VMs on the two major CPU platforms. Try doing that with your fancy Hyper-V or VMware.

Beyond that, the headline features are:

  • Xen's PVH virtualization mode now supports running Linux as dom0 on Intel hardware, a security-enhancer;
  • Introspection of HVM Guests, a hardware-isolation effort said to make it harder for kernel exploits, rootkits and other low-level nasties to get a foothold;
  • VM teleportation thanks to Coarse-grained Lock-stepping (COLO), which makes it easier to replicate running VMs to a second site. COLO's not fully done: Fujitsu tossed in a ton of code and this is expected to mature in Xen 4.6;
  • Improved realtime scheduling, a feature said to be mighty handy in embedded and/or automotive applications in which VMs need to know exactly what resources they can get, and when they can get them. Data centre and cloud operator types will also like this as it will let them define resource levels for VMs (a rough sketch of what that might look like follows this list);
  • More support for Intel's Resource Director Technology (RDT) to give a more granular view of how threads are using a CPU. Should help to silence noisy neighbours;
  • Systemd support, which will thrill the Devuan splitters.
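
For the curious, here's roughly what "defining resource levels for VMs" might look like from a management script. It's a minimal sketch that assumes the new real-time scheduler is driven through the xl toolstack with per-domain period and budget values in microseconds; the sub-command and flag names below are an assumption on our part, so check the xl documentation shipped with your build before trusting them.

    # Hedged sketch: nudging Xen's real-time scheduler from Python by shelling
    # out to the xl toolstack. Assumes Xen was booted with the real-time
    # scheduler enabled and that `xl sched-rtds -d <domain> -p <period> -b <budget>`
    # sets per-domain parameters in microseconds; treat the exact flags as an
    # assumption and verify against the xl man page for your Xen build.
    import subprocess

    def set_rt_params(domain: str, period_us: int, budget_us: int) -> None:
        """Ask the scheduler to give `domain` budget_us of CPU every period_us."""
        subprocess.run(
            ["xl", "sched-rtds", "-d", domain,
             "-p", str(period_us), "-b", str(budget_us)],
            check=True,
        )

    if __name__ == "__main__":
        # Example: an in-car infotainment guest gets 4ms of CPU in every 10ms window.
        set_rt_params("ivi-guest", period_us=10_000, budget_us=4_000)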

Overall this looks a solid release that will please Xen users running just about any application, which may or may not be a good thing seeing as the hypervisor is being advanced as suitable for running anything from a colossal cloud to a very small device. It also gives ARM server aspirants a … erm … shot in the arm.

New Xen versions land about every eight or nine months, so 4.6 should be with us by about October. By which time we should know rather more about the next version of Windows Server and vSphere 6 will probably be not far off its first service pack. ®
