Citrix stretches XenServer 6.0 to cover bigger iron

Chubbier VMs for heftier apps

Citrix Systems doesn't make a lot of noise about server virtualization these days, now that the two founders of the Xen project have left to start Bromium. But the company, and the open source Xen project that it sponsors, continues to hammer out code to make Xen a credible alternative to VMware's ESXi, Microsoft's Hyper-V, and Red Hat's KVM.

On Friday, Citrix announced XenServer 6.0, its commercial-grade server virtualization hypervisor. Code-named "Boston," it is based on the latest open source Xen 4.1 hypervisor and went into beta back in July. The Xen hypervisor is now at its 4.1.1 release, and over the past 11 months 102 people from 25 organizations have made more than 400 commits to the Xen subsystem and driver stack.

As you can see from the Xen 4.1.1 release notes, one of the powerful new features in the hypervisor is the ability to create CPU pools into which VMs are placed, rather than pinning a particular VM to a specific CPU. Each pool also gets its own scheduler.
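For a rough sense of how this works, here is a minimal Python sketch that shells out to the open source Xen xl toolstack. The pool name, CPU numbers, and guest name are invented for illustration, and the cpupool subcommand syntax follows the open source Xen documentation rather than anything XenServer-specific, so treat it as a sketch, not gospel:

  import subprocess

  def xl(*args):
      # Run an xl toolstack command, raising an error if it fails
      subprocess.run(["xl", *args], check=True)

  # Carve four physical CPUs out of the default pool, Pool-0
  for cpu in ("4", "5", "6", "7"):
      xl("cpupool-cpu-remove", "Pool-0", cpu)

  # Create a new pool with its own credit scheduler and give it those CPUs
  xl("cpupool-create", 'name="dbpool"', 'sched="credit"')
  for cpu in ("4", "5", "6", "7"):
      xl("cpupool-cpu-add", "dbpool", cpu)

  # Move a running guest into the pool instead of pinning its VCPUs by hand
  xl("cpupool-migrate", "db-server", "dbpool")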

The Xen 4.1.1 hypervisor can support machines with more than 255 CPUs, supports 1GB memory superpages (on machines with Xeon "Westmere" class processors), and also allows applications running inside VMs to make use of the Advanced Vector Extension (AVX) floating point instructions in the latest Xeon processors.

It also supports pass-through for discrete GPUs and GPU co-processors, which means a GPU can be de-virtualized and dedicated to a particular VM on a server, much as network cards can be pinned directly to a VM for performance reasons. This GPU pass-through support will be important for delivering virtual CAD, and perhaps even remote gaming, through virtual desktops.
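In the open source Xen world, handing a PCI device such as a GPU to a guest is a matter of a couple of lines in the guest's config file, which, handily for this sketch, uses Python-style syntax. The PCI address below is invented, and XenServer wires the same thing up through its own management stack rather than a raw config file:

  # Fragment of a Xen guest config: hand a discrete GPU straight to this VM
  pci = ['0000:0a:00.0']   # PCI address of the GPU (illustrative)
  gfx_passthru = 1         # treat it as the guest's primary graphics device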

XenServer 6.0 makes Xen 4.1 ready for biz

The raw hypervisor is not particularly useful all by its lonesome. To be consumed by corporations, it needs to be wrapped with lots of features and tech support. The commercial-grade XenServer 6.0, as you can see from its release notes, now supports hosts with as many as 64 logical CPUs.

That's either 64 cores in a machine that doesn't have HyperThreading (or doesn't turn it on), or 32 cores with HyperThreading turned on. A host can also have up to 1TB of main memory and 16 physical Ethernet network interface cards.

The prior XenServer 5.6 hypervisor supported up to 64 logical CPUs and 16 NICs as well, but could only address 256GB of physical memory. A XenServer 6.0 guest, meanwhile, can now span 16 virtual CPUs and address 128GB of virtual memory, double what a XenServer 5.6 guest could do. Your mileage on that virtual memory may vary by guest OS.

This is neither the fattest hypervisor nor the beefiest guest VM out there in x86 server land, but any embiggening is always appreciated by server buyers, and XenServer customers will be grateful for the boost, if somewhat envious of what KVM and ESXi can do.

The KVM hypervisor embedded in Red Hat Enterprise Linux 6.1 can span 128 cores (or 256 threads if HyperThreading is on) and up to 2TB of physical memory, while KVM guest VMs can span 64 virtual CPUs and up to 2TB of memory.

This embedded hypervisor is also being rolled into the freestanding Red Hat Enterprise Virtualization 3.0 product, which is in beta now and expected to ship later this year.

VMware's ESXi 5.0 hypervisor, launched in July, can span 160 cores and up to 2TB of physical memory, and an ESXi 5.0 guest can consume as many as 32 virtual CPUs and 1TB of virtual memory. That is, provided the physical server has as much or more physical resource to back it.

Open vSwitch, the open source virtual switch that was added as an option with XenServer 5.6 Feature Pack 1, is now the default switch with XenServer 6.0.

These virtual switches can be ganged up across hypervisors to create a distributed virtual switch, akin to the distributed vSwitch that VMware created for its ESXi hypervisor. See also the Nexus 1000V, made by Cisco for its "California" Unified Computing System as a replacement for VMware's vSwitch, and speaking the NX-OS lingo that Cisco network admins are used to.
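For the curious, plugging guests into Open vSwitch on a plain Linux box looks something like the minimal Python sketch below, which wraps the standard ovs-vsctl tool. The bridge, NIC, and virtual interface names are invented for illustration, and XenServer drives the same switch through its own toolstack rather than by hand:

  import subprocess

  def ovs(*args):
      # Run an ovs-vsctl command, raising an error if it fails
      subprocess.run(["ovs-vsctl", *args], check=True)

  ovs("add-br", "xenbr0")              # create a virtual switch
  ovs("add-port", "xenbr0", "eth0")    # uplink it to a physical NIC
  ovs("add-port", "xenbr0", "vif1.0")  # plug in a guest's virtual interface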
