VMware goes after biz critical apps with vSphere 5.5

Virtual SAN and networks complete grand unified theory of virtualisation - eventually


VMworld 2013 Server virtualization juggernaut VMware has been talking about the software-defined data center (SDDC) for so long that you have to stop and remember that not all of the software that enables this strategy is actually available yet. But at VMworld 2013 this week in San Francisco, Virtzilla will be rolling out some major pieces of its SDDC stack, putting other pieces into beta, and positioning itself as best it can to capitalize on its vast ESXi hypervisor installed base and defend against the encroachment of Microsoft's System Center/Hyper-V combo and the rapidly rising OpenStack/KVM alternative.

Ahead of the keynotes rolling out the updated ESXi hypervisor, the vSphere editions based on it, the vCloud management tools that turn virtual servers into private clouds, and the new NSX virtual networking and vSAN virtual storage components, VMware graciously gave El Reg a prebriefing on the major new features in these updated and new products. Team Reg has lots of feet on the street at VMworld and will be digging into the details of each product for more thorough analysis.

The time is right for virtualizing storage and networks, says John Gilmartin, vice president of cloud infrastructure products at VMware. He cites statistics from VMware's own ESXi/vSphere customer base showing that 77 per cent of customers polled by Virtzilla want to expand virtualization from compute out to networks and storage.

We surmise that many customers would have liked to have been able to virtualize storage and networks years ago, as VMware's server virtualization products matured. And, in fact, many will be impatient at having to wait longer - as they must, since some of the network and storage virtualization products being announced today will not ship for a while yet.

But VMware has to walk a fine line, trying not to alienate storage and network hardware providers (including parent company EMC) while countering various open source alternatives for virtualizing all components of the data center and orchestrating the movement of compute, data, and connectivity among those components and out to end users. This SDDC concept is a massive software development effort, and even with acquisitions like Nicira helping to speed up the effort (in this case for network virtualization), there is still a lot of code that has to be developed and thoroughly tested so it is enterprise-ready. This takes time. And money. The good news for VMware is that its competition has to marshal huge resources to mount a similar SDDC effort, even if those rivals are not trying to build every part of the stack alone. This doesn't make VMware late to the party by any stretch, but the heat is on and it will never relent. There is little margin for error, and everyone in IT can't help but root for the underdog, which VMware most certainly is not when it comes to virtualized servers.

The ESXi foundation updated to release 5.5

With VMware, everything starts and ends with the server virtualization hypervisor, and that is never going to change. For whatever reason, VMware is jumping from the 5.1 release of this time last year to 5.5 now with the ESXi hypervisor and the vSphere editions that progressively turn on more features in that hypervisor as you pay more money. The hypervisor has not had a major overhaul, so a 5.5 designation makes sense. It is able to support the impending "Ivy Bridge-EP" Xeon E5 v2 processors from Intel as well as the existing Opteron 3300, 4300, and 6300 processors that Advanced Micro Devices rolled out last fall.

VMware's monster VM

With the 5.5 release, VMware is doubling up the capacity of the hypervisor, but only tweaking some aspects of the virtual machines that ride on top of it. It is unclear why VMware doesn't re-architect ESXi so a virtual machine can scale from a tiny slice of a processor out to encompass an entire host if a customer wants to do that - this is how IBM architects its PowerVM hypervisor for its Power Systems servers as well as z/VM for its System z mainframes - but ESXi has always had separate maximum capacities for the hypervisor and the VMs that frolic upon it. (It no doubt has to do with the architectural decisions in the guts of the hypervisor, which are tough to change.) The important part for VMware is that Microsoft's Hyper-V, Citrix Systems' XenServer, and Red Hat's KVM have similar architectures, so x86 server customers are used to things being this way.

With ESXi 5.5, the hypervisor can scale across a maximum of 320 logical cores and address up to 4TB of main memory on a host server; it can have up to 4,096 virtual CPUs (or vCPUs) carved up on that host. ESXi's previous incarnation could span 160 logical cores and 2TB of memory, and host as many as 2,048 vCPUs.

At the moment, machines using Opteron processors top out at 64 cores and do not have multiple threads per core, so there is plenty of room. A two-socket Ivy Bridge Xeon E5-2600 v2 server is expected to have 24 cores (a dozen per socket), and with HyperThreading turned on that doubles up to 48 logical cores in a single system image. If Intel gets an E5-4600 v2 chip into the field sometime soon for four-socket servers, that would double up again to 48 cores and 96 threads for ESXi to stretch across - still within the 160 logical core maximum supported by last year's ESXi 5.1 release, but eating into its headroom. And for customers using Xeon E7 processors, the update allows them to scale beyond the current iron and, presumably, has enough scalability to allow ESXi 5.5 to span "Ivy Bridge-EX" processors that are expected to start shipping for revenue before the end of the year and appear in products early next year. The current "Westmere-EX" Xeon E7 processors have ten cores and twenty threads per socket, so ESXi 5.1 could span an eight-socket machine with HyperThreading turned on, but that was it. Intel will no doubt be adding more cores with the Ivy Bridge versions of the Xeon E7 - rumors put the count anywhere from 12 to 16 cores per socket - so ESXi has to be able to span those cores and threads. If the future Xeon E7s have sixteen cores and 32 threads per socket, an eight-socket box comes to only 256 logical cores, and ESXi 5.5 can more than handle that. This is no coincidence, of course.
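The core-count arithmetic above can be sketched as a quick back-of-the-envelope check; the Ivy Bridge-EX figures here are the rumored numbers from the text, not confirmed specs:

```python
# Logical-core ceilings for the two ESXi releases discussed in the article.
ESXI_MAX_LOGICAL_CORES = {"5.1": 160, "5.5": 320}

# (sockets, cores per socket, threads per core) for the configurations
# mentioned above; the Ivy Bridge-EX entry reflects rumor, not confirmed spec.
configs = {
    "2-socket Xeon E5-2600 v2": (2, 12, 2),
    "4-socket Xeon E5-4600 v2": (4, 12, 2),
    "8-socket Westmere-EX E7": (8, 10, 2),
    "8-socket Ivy Bridge-EX (rumored 16-core)": (8, 16, 2),
}

for name, (sockets, cores, threads) in configs.items():
    logical = sockets * cores * threads
    fits = [ver for ver, cap in ESXI_MAX_LOGICAL_CORES.items() if logical <= cap]
    print(f"{name}: {logical} logical cores, fits ESXi {fits}")
```

Note that an eight-socket Westmere-EX box lands exactly on ESXi 5.1's 160-logical-core limit, which is why the rumored Ivy Bridge-EX parts needed the new 320-core ceiling.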

If ESXi 5.5 is constrained anywhere, it could turn out to be in memory addressing. ESXi 5.1 could only span 2TB of main memory, which doesn't seem like a lot these days, and even 5.5's new 4TB ceiling doesn't look like it will be enough for the fattest Ivy Bridge Xeon E7 v2 configurations, which Intel has said will top out at 12TB - three times the current top-end Xeon E7 v1 chips in an eight-socket setup. Presumably VMware will be able to increase physical main memory addressing to match the iron in short order. This will be important to give customers some headroom, even if they don't use it. The mantra is for vSphere to virtualize more business critical applications - BCAs in the VMware lingo - and that should mean always matching the capabilities of the iron.
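The memory gap is easy to quantify from the figures above; this is just the article's arithmetic restated:

```python
# Host memory ceilings, in TB, as discussed above. The Ivy Bridge-EX figure
# is Intel's stated platform maximum, not a shipping configuration.
ESXI_55_MAX_HOST_MEMORY_TB = 4
WESTMERE_EX_8S_MAX_TB = 4          # current top-end Xeon E7 v1 eight-socket box
IVY_BRIDGE_EX_MAX_TB = 12          # 3x the Westmere-EX ceiling, per Intel

shortfall = IVY_BRIDGE_EX_MAX_TB - ESXI_55_MAX_HOST_MEMORY_TB
print(f"A maxed-out Ivy Bridge-EX host would leave {shortfall}TB beyond ESXi 5.5's reach")
```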

The virtual machines that run atop the ESXi 5.5 hypervisor did get one big capacity change with the update: the maximum size of a virtual disk locally attached to a VM has been extended from 2TB with ESXi 5.1 to 64TB with ESXi 5.5.

"This is what customers talk about the most," says Gilmartin.

The amount of virtual memory that a VM can address with ESXi 5.5 stays the same at 1TB, and the number of vCPUs it can span stays at 64; other aspects of the VMs also remain for the most part the same, Gilmartin says. (The configuration maximums documents for vSphere 5.5 were not yet available as we went to press, but we will bootnote a link to them as soon as they are.)

In addition to the scalability updates above for the hypervisor and VMs, vSphere 5.5 has some other goodies tossed in.

The most interesting one is called vSphere Flash Read Cache, and as the name suggests, it is a way of using local solid state drives running off disk controllers or PCI-Express flash cards as a read cache for the hypervisor and its VMs. This is a read-only virtual cache to speed up access to hot data; writes pass through to the disk drives that store that data persistently. The feature allows you to size and allocate a slice of virtual flash to each VM individually (or not, as you see fit), and if you do a VMotion from one physical server host to another, provided that both servers have flash storage, those caches will teleport along with the VM. (It is not clear what happens if you VMotion from a server with cache to one without it, but El Reg did ask. We assume that the VMotion will work and the VM will continue running, albeit with slower reads coming off disks instead of out of flash memory.) The performance benefit of this read cache feature is obviously dependent on how read-intensive the workloads running on top of vSphere are. The performance gains are akin to the delta in I/O operations per second between disk drives and flash in the system, says Gilmartin, which makes perfect sense. Your mileage will vary, and presumably VMware will soon be showing off some benchmarks.
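The mechanics of a write-through read cache are worth a sketch. The snippet below is a minimal illustration of the general idea - reads are served from fast storage when possible, writes always land on the authoritative backing store - and assumes a simple LRU eviction policy; VMware has not published the actual algorithm behind vSphere Flash Read Cache, so none of these names are theirs:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal write-through read cache sketch with LRU eviction.

    Illustrative only: `backing` stands in for the disk tier and the
    OrderedDict stands in for the flash tier."""

    def __init__(self, backing, capacity):
        self.backing = backing          # dict standing in for disk storage
        self.capacity = capacity        # number of blocks the "flash" holds
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)    # refresh LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        value = self.backing[block]          # slow path: read from disk
        self.cache[block] = value            # populate the cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return value

    def write(self, block, value):
        self.backing[block] = value          # write-through: disk is always
        self.cache.pop(block, None)          # authoritative; drop stale copy
```

A read-heavy workload sees the hit rate climb as hot blocks settle into the cache; a write-heavy one gains little, which matches the point about the benefit depending on how read-intensive the workload is.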

vSphere 5.5 also includes a tweak to the vSphere HA high availability cluster tool, called vSphere App HA, that can reach into VMs and see when an application has failed. Sometimes apps fail and need to be remotely recovered even if the underlying operating systems are running just fine. The HA tool needs to know how to distinguish between the two, and hence the new feature. The related vSphere Fault Tolerance feature is still, as far as we know, limited to VMs that span only a single vCPU.
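The distinction App HA draws can be sketched as two separate liveness probes with two separate remedies. This is an illustrative model only - the function names are ours, not VMware's API:

```python
# Hypothetical sketch of the app-versus-OS failure distinction described
# above. A VM's guest OS can answer a heartbeat while the application
# inside it has died; the two cases call for different recovery actions.

def os_alive(vm):
    # Stands in for a guest heartbeat check (classic vSphere HA territory).
    return vm.get("os_running", False)

def app_alive(vm):
    # Stands in for an in-guest service probe (what App HA adds).
    return vm.get("app_running", False)

def ha_decision(vm):
    if not os_alive(vm):
        return "restart VM"              # whole guest is down
    if not app_alive(vm):
        return "restart application"     # OS fine, app dead: App HA's case
    return "healthy"

print(ha_decision({"os_running": True, "app_running": False}))
# -> restart application
```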

Finally, vSphere 5.5 now includes the "Project Serengeti" tools for implementing Hadoop atop virtual servers, now called the vSphere Big Data Extensions. El Reg covered these extensions when they went into beta in June. The only new thing here is that they have been rolled into the vSphere Enterprise and Enterprise Plus editions, and therefore formally have technical support from Virtzilla for the first time.
