Beyond virtual servers
The four key elements of the vSphere stack are vCompute, vNetwork, and vStorage - virtualization features for the three parts of a modern computer system - plus vCenter Suite, the management tools for the whole stack.
vCenter Suite, says Maritz, will move the "management layer up," letting administrators set up software stacks, set service level targets, and define policies that determine what happens when those service levels are not being met (add more machines to the cluster, shut down other workloads, and so forth).
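To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of policy Maritz describes - a service level target paired with a remedial action. The function and field names are illustrative; this is not vCenter's actual interface.

```python
# Hypothetical sketch of policy-driven management: a service level
# target plus an action to take when that target is missed. Nothing
# here is VMware's API; the names are made up for illustration.

def evaluate(policy, observed_latency_ms):
    """Return the remedial action, or None if the target is met."""
    if observed_latency_ms > policy["target_latency_ms"]:
        return policy["on_breach"]
    return None

policy = {
    "workload": "order-entry",
    "target_latency_ms": 50,
    "on_breach": "add_host_to_cluster",  # or, say, "shut_down_other_workloads"
}

# Target met: no action. Target missed: the policy's action fires.
assert evaluate(policy, 40) is None
assert evaluate(policy, 80) == "add_host_to_cluster"
```

The point of pushing the "management layer up" is exactly this: the administrator states the target and the remedy once, and the management layer applies it, rather than an admin reacting by hand each time.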
The vCenter interface is a simple dashboard, and it is meant to be, but if admins want to drill down and muck about in the code that sets policies, or do something manually, they can. The whole point, though, is to let vCenter do the work of managing the compute, storage, and network pools.
vCompute is where the server hypervisor lives, and as we previously reported, the future hypervisor will have double the VirtualSMP capability of ESX Server 3.5. So vCompute 4 will be able to have a single VM span as many as eight physical processor cores in a machine. Each virtual machine will also be able to have as much as 256 GB of main memory allocated to it, up from 64 GB in ESX Server 3.5, and up to ten network interface cards per VM as well.
In Maritz' presentation, it is clear that VMware is thinking about managing a large pool of servers and masking this complexity from administrators and applications; he flashed some data across the screen very quickly, showing that vCompute will span 4,096 cores in a single cluster and up to 64 TB of main memory, and I could have sworn I saw 25 million I/O operations per second (IOPS) of aggregate disk I/O. (The numbers went by very fast, but that is twice what people are expecting on the memory and IOPS capacities.)
"This has gone way beyond server virtualization," Maritz boasted. "This is about building a single, giant computer." And, by the way, one that can support any workload - bar none - according to VMware.
Someone attending the VMworld event last month published these feeds and speeds at the VMwaretips blog, though Maritz did not go into such detail today. That posting claims the future vCompute hypervisor will allow as many as 20 VMs per core, up to a maximum of 256 VMs per host, with physical servers having as many as 64 cores. A cluster of machines in one pool, managed by one instance of vCenter, will span up to 64 hosts, for a maximum of 16,384 VMs across those 64 machines, which would have a maximum of 4,096 processor cores.
The posting said that main memory per host will top out at 512 GB and total memory per cluster will max out at 32 TB. Maximum network bandwidth (four 10 Gigabit Ethernet cards per host machine) was specified as 40 Gb/sec. VMware has said previously that it would be able to deliver 200,000 IOPS per host on the kicker to ESX Server 3.5, doubling up disk bandwidth with vCompute.
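The cluster maxima above follow directly from the per-host figures in the posting, as a quick bit of arithmetic shows. (Note that 64 hosts at 200,000 IOPS apiece works out to 12.8 million IOPS per cluster - roughly half the 25 million figure Maritz flashed on screen, which squares with the "twice what people are expecting" caveat above. The constant names below are ours, for illustration only.)

```python
# Deriving the reported vCompute cluster maxima from the per-host
# figures in the VMwaretips post. Variable names are illustrative.

HOSTS_PER_CLUSTER = 64
CORES_PER_HOST = 64
VMS_PER_HOST = 256            # capped, even though 20 VMs/core x 64 cores = 1,280
MEM_PER_HOST_GB = 512
NIC_GBITS_PER_HOST = 4 * 10   # four 10 Gigabit Ethernet cards = 40 Gb/sec
IOPS_PER_HOST = 200_000

cluster_cores = HOSTS_PER_CLUSTER * CORES_PER_HOST           # 4,096 cores
cluster_vms = HOSTS_PER_CLUSTER * VMS_PER_HOST               # 16,384 VMs
cluster_mem_tb = HOSTS_PER_CLUSTER * MEM_PER_HOST_GB / 1024  # 32 TB
cluster_iops = HOSTS_PER_CLUSTER * IOPS_PER_HOST             # 12.8 million IOPS

print(cluster_cores, cluster_vms, cluster_mem_tb, cluster_iops)
```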
The vNetwork part of the vSphere software stack essentially means interposing a big virtual switch, written in software and running in a virtual machine itself, between server VMs and the physical switches those VMs actually need to talk to. This way, when a virtual machine moves from one physical server to another, it is still talking to the same virtual switch and doesn't have to be reconfigured to talk to a new physical switch attached to the physical server that is now hosting the VM.
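The mechanism can be sketched in a few lines. This is a hypothetical toy model, not VMware's or Cisco's code: the VM's port configuration lives on the virtual switch, so a migration changes only which physical uplink carries the traffic.

```python
# Toy model of a distributed virtual switch (hypothetical classes, not
# VMware's API). A VM keeps its port on the virtual switch, so moving
# it between hosts changes only the physical uplink in use, not the
# VM's own network configuration.

class VirtualSwitch:
    def __init__(self):
        self.ports = {}    # VM name -> port config (VLAN, etc.)
        self.uplinks = {}  # host name -> physical switch port

    def connect_vm(self, vm, vlan):
        self.ports[vm] = {"vlan": vlan}  # set once, at VM creation

    def path(self, vm, host):
        # The VM's port config is untouched; only the uplink differs.
        return (self.ports[vm], self.uplinks[host])

vswitch = VirtualSwitch()
vswitch.uplinks = {"host-a": "phys-sw-1/24", "host-b": "phys-sw-2/7"}
vswitch.connect_vm("web01", vlan=100)

before = vswitch.path("web01", "host-a")  # VM running on host-a
after = vswitch.path("web01", "host-b")   # VM migrated to host-b
assert before[0] is after[0]  # identical port config survives the move
```

Without the virtual switch in the middle, the migrated VM would show up on a different physical switch port and need reconfiguring; with it, the VM's network identity travels with the VM.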
It's like a hypervisor layer for switching, embodied in the Nexus 1000V switch that VMware co-developed with Cisco Systems, which will be a key element of next week's "California" blade server launch.
The vNetwork layer is not being written by Cisco to give it some kind of monopoly; rather, according to Maritz, it is being virtualized so customers who like using Cisco switches and Cisco administration tools can keep on using them and see the Nexus 1000V just as they would a real switch. Presumably, other switch vendors have been invited to create their own virtual switches and play in the vNetwork layer.
The vStorage layer, which virtualizes storage and meshes with the various high availability, snapshotting, thin provisioning, and other features of modern disk arrays, doesn't just work with EMC products, but also with those of the other key storage providers in the data center - NetApp, IBM, Hitachi, Hewlett-Packard, Sun Microsystems, and myriad niche but clever players.
And all of this openness is important to VMware, and thus EMC, because if vSphere isn't open - if VMware doesn't let server, storage, and networking gear suppliers stay in the game somehow - they won't help sell it. They will then try to block it or thwart it with other point products. And considering the scope of what VMware is shooting for, they might just as well try to kill the company now, before it sucks much of the profit out of the data center.
It might be cheaper at around $10bn or so (with VMware having a market capitalization of around $7.8bn today as we go to press) to just buy VMware now. But EMC is not, as Tucci said, interested in selling. Only those with the stomach for a hostile takeover and great big bags of cash need apply. ®