
VMware taxes your virtual memory

The hidden cost of vSphere 5

Analysis The rumored feeds and speeds of the latest ESXi 5.0 hypervisor at the heart of VMware's just-announced vSphere 5.0 server virtualization stack were pretty much on target and something that customers will applaud.

But no one had heard about VMware's new pricing model for the vSphere 5.0 software, which attaches a fee to the use of vSphere on each socket in a physical server as well as on the amount of virtual memory that a hypervisor makes use of.

The latter is a big change, and one that is bound to get IT shops out there reaching for the backs of their drinks napkins and their calculators to see how the price change will affect their virtualization budgets.
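
For shops doing that napkin math, here is a minimal sketch of the arithmetic under the new model, assuming a hypothetical per-socket license that carries a fixed vRAM entitlement; the entitlement and the numbers below are illustrative placeholders, not VMware's actual figures:

```python
# A minimal sketch of per-socket, vRAM-capped licensing arithmetic.
# The 48GB entitlement and the example figures are hypothetical
# placeholders, not VMware's actual list terms.
import math

def licenses_needed(sockets: int, configured_vram_gb: int,
                    vram_entitlement_gb: int = 48) -> int:
    """Licenses are bought per socket, but the pooled vRAM entitlement
    must also cover the virtual memory configured for powered-on VMs."""
    by_socket = sockets
    by_vram = math.ceil(configured_vram_gb / vram_entitlement_gb)
    return max(by_socket, by_vram)

# Example: a two-socket host with 256GB of vRAM configured across its VMs
# would need more licenses than its socket count alone would suggest.
print(licenses_needed(sockets=2, configured_vram_gb=256))  # -> 6
```

The point of the sketch is simply that memory-heavy hosts can end up buying licenses well beyond their socket count, which is exactly why the change matters.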

VM monster: How VMware sees its new hypervisor

But before we get into the price changes with the vSphere 5.0 server virtualization tools, let's go over the feeds and speeds of the ESXi hypervisor. Remember, there is no ESX Server hypervisor with the management console bundled in any more; ESX Server 4.1 was the last release of that style of hypervisor from VMware. By removing the console, the hypervisor shrinks from around 2GB to roughly 100MB in the 5.0 release and, more importantly, has a lot fewer elements to secure and patch.

Now VMware is down to one hypervisor, and that simplifies testing and certification for ISV partners as well as making the freebie ESXi hypervisor the heart of the vSphere stack. One bare-metal hypervisor is enough, and the wonder is why VMware didn't make this change before.

Virtual line of succession

It is hard to remember how primitive the original ESX Server 1.0 bare-metal hypervisor was when it came out in 2001. It was arguably the best x86 hypervisor out there, but it didn't have much scalability at all, and hence the product was limited to its design goal, which was to help automate the development and test environments where production applications are born but don't live.

The virtual machines that could run inside the ESX Server 1.0 hypervisor would barely be able to do any useful work these days. A guest VM could have a single virtual CPU (meaning a single core, or a single thread if the processor had Intel's HyperThreading implementation of simultaneous multithreading) and could have at most 2GB of virtual memory. The VM could deliver about 500Mb/sec of network bandwidth on a virtual LAN connection and under 5,000 I/O operations per second (IOPs) on virtual disks. (That's a little more oomph than a fondleslab has these days.)

With ESX Server 2, launched in 2003, the guest VM included a feature called VirtualSMP, which allowed that VM to span two cores on a dual-core processor or two sockets on a server using single-core processors. This was a nifty feature, and one that immediately made ESX Server more useful for production workloads such as Web, print, and file servers.

ESX Server 2 topped out at 3.6GB of virtual memory, 900Mb/sec of virtual network bandwidth, and about 7,000 IOPs for virtual disk per VM. (Depending on the capabilities of the underlying hardware, of course. Hypervisors cannot magically make up network and I/O bandwidth, although through over-committing and memory ballooning they can make a certain aggregate amount of physical memory look larger, as far as the operating systems running on the VMs are concerned, than it really is.)
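
To make the over-commit point concrete, here is a toy calculation; the host size, VM sizes, and the resulting ratio are illustrative, not limits or recommendations for any particular ESX release:

```python
# Toy illustration of memory over-commitment: the memory the guest
# operating systems believe they have can add up to more than the
# physical RAM in the box.
physical_ram_gb = 64
vm_memory_gb = [8, 8, 16, 16, 32]   # memory configured for each guest VM

committed = sum(vm_memory_gb)
overcommit_ratio = committed / physical_ram_gb
print(f"Guests see {committed}GB on a {physical_ram_gb}GB host "
      f"(over-commit ratio {overcommit_ratio:.2f}x)")
# Ballooning and page sharing reclaim idle guest memory to make this
# workable, but they cannot create network bandwidth or IOPs that the
# underlying hardware does not have.
```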

House of vSphere

Ahead of the 2006 launch of ESX Server 3.0 and the Virtual Infrastructure 3.0 stack, VMware spent years gutting the microcode in the hypervisor to make VirtualSMP scale better. And even then, with the advent of dual-core processors, guest VM scalability fell somewhat behind what the underlying hardware could deliver, which was frustrating to many customers. However, the real problem with putting big jobs like application, email, and database servers onto hypervisors was not virtual CPU scalability, but rather memory capacity and network and disk I/O scalability.

With the ESX Server 3/VI3 stack, VMware pushed VirtualSMP to four cores (or threads) and boosted virtual memory to 64GB; more importantly, network bandwidth coming out of a VM was pushed up to 9Gb/sec and disk I/O was pushed up to 100,000 IOPs. The assumption there is that you have a server with enough disk drives and network interfaces to support those bandwidths, of course, and in this case those rates come from the fastest four-socket x64 servers on the market at the time.

With the ESX Server 4.0/vSphere 4.0 stack in 2009, VirtualSMP capability was doubled again to eight cores, virtual memory was quadrupled to 255GB (not 256 despite what the presentations say), network bandwidth rose to 30Gb/sec, and disk IOPs topped out at 300,000. The ESX Server and ESXi hypervisors could span as far as 128 cores and address as much as 1TB of physical memory.

With the ESX Server and ESXi 4.1 Update 1 release last fall, the hypervisor was updated to span as many as 160 cores, matching the scalability of a 16-socket server using Intel's ten-core "Westmere-EX" Xeon E7 processor. Thus far, no one has delivered such a 16-socket box, but IBM is rumored to be working on one.

