The GUI clients
The first thing VMware administrators need to know is that there are three GUI clients they need to care about. The Flash/Flex-based vSphere Web Client is still the big cheese, but it has grown up rather a lot since I first started using it in earnest.
The vSphere 6.0 web client is all grown up and ready for prime time in most environments. While functionally quite similar to the previous iterations, VMware has made many much-needed changes. The most important change is that it is now served at the standard HTTPS port on your vSphere Appliance. No memorizing port numbers: use the computer name or IP, stick https:// in front of it, and you're good.
VMware has cleaned up the layout a lot, and turned the sanity-wrecking "turtles all the way down" nightmare into something actually usable. Navigation and menus have been flattened, and the overall interface is laid out far more like the old C# client, making it comfortably familiar to those of us who have resisted the jump thus far.
The vSphere 6.0 web client is also faster. It's still annoyingly slow, with many actions taking up to five seconds to register in the UI, but that's far, far better than the 15+ seconds those same actions would take in the 5.5 version of the web client, and light years ahead of the "go get a coffee" timeframes that simple actions like right-clicking caused in 5.1.
The new web client also comes with a lot of additional wizards, and is generally more newbie-friendly. If the old web client was VMware's Metro, vSphere 6.0's feels a lot like someone took your Windows 8 and installed Classic Shell.
That's not to say everything is perfect. Those who know me know I have not been a fan of the web client, and I still have a few minor gripes. The new vSphere Web Client is still shite on 1366x768 screens, so if you happen to be unfortunate enough to have to work with it remotely while at a conference (for example, VMworld), you may end up cursing a blue streak on what is currently the world's most common notebook resolution.
The old C# client is still around, now referred to as the vSphere Host Client. Rumours that VMware would do away with C# compatibility in 6.0 proved untrue, and thank $deity for that! Hurrah and cheering! Unicorns rain from the sky and we are saved!
Not quite. You see, there's a catch.
The legacy C# client can only manage VMs running hardware versions 8 to 11. As a quick recap, hardware version 4 was ESXi 3.5, hardware version 7 was ESXi 4.0 and hardware version 8 was ESXi 5.0. So if you have anything older than ESXi 5.0 in your environment and you make the jump to vSphere 6.0, you can kiss the C# client goodbye.
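The compatibility rule above boils down to a simple range check. Here's a minimal sketch (the dictionary, constants and function name are my own illustration, not anything VMware ships; versions 9 through 11 correspond to ESXi 5.1, 5.5 and 6.0):

```python
# Virtual hardware versions and the release that introduced them.
# The 6.0-era C# client can only manage hardware versions 8 through 11.
HW_VERSIONS = {
    4: "ESXi 3.5",
    7: "ESXi 4.0",
    8: "ESXi 5.0",
    9: "ESXi 5.1",
    10: "ESXi 5.5",
    11: "ESXi 6.0",
}

CSHARP_MIN, CSHARP_MAX = 8, 11

def csharp_manageable(hw_version: int) -> bool:
    """True if the legacy C# client can manage a VM at this hardware version."""
    return CSHARP_MIN <= hw_version <= CSHARP_MAX

print(csharp_manageable(7))   # a VM left over from the ESXi 4.0 era
print(csharp_manageable(8))   # an ESXi 5.0-era VM
```

Run that against an inventory export and you'll know in seconds whether an upgrade to 6.0 locks you out of the C# client.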
Oh, and all the features newer than those released as part of hardware version 8? They can only be managed in read-only mode by the C# client. These are things like virtual SATA controllers, SR-IOV, virtualised GPUs, vFlash and so forth. In other words, it behaves just as the C# client does today against ESXi 5.1 or 5.5, except that it can perform that same basic set of tasks on 6.0.
If that didn't reinforce for you the fact that this is the C# client's last gasp, the nice error message you'll get about the C# client's deprecation should. You won't get it when you connect directly to a host (remember, it's the vSphere Host Client now), but you will get it if you use it to connect to a vCenter server.
The third UI you need to worry about is the VUM client. VMware's short-term direction regarding VUM hadn't exactly been clear through 2014, but vSphere 6.0 makes it perfectly clear that the current path is "band-aid it until we can get its successor going." The VUM client is basically a special install of the vSphere Host Client with the VUM plugin. It's crap, but it works, and at the end of the day we won't have to put up with it for much longer.
The VSAN team can sit atop its mound of hubris and cast aspersions upon the people, but VMware's other storage teams aren't going to be shown up. VMware's top boffins have put their minds to tearing down the barriers between block storage and file storage choices, introducing technical and ease-of-use changes that promise to reignite the storage holy wars and devastate podcasts around the world.
That's right, ladies and gentlemen, VVOL is ready for prime time, and so is NFS multipathing. The number one argument against NFS – that it doesn't support multipathing – is now worth less than the oxygen used to utter the words. At the same time, the ease of use arguments against iSCSI and Fibre Channel – namely that businesses don't make money resizing LUNs – have taken a huge step towards similar irrelevance.
Whether you choose iSCSI, Fibre Channel or NFS, you can now get an easy-to-use storage setup that doesn't get in your way and offers multipathing. VMware's NFS 4.1 support even includes Kerberos authentication, which, as an NFS acolyte, I must say is a fair sight better than CHAP, eh, iSCSI believers?
And before anyone starts in with "nothing supports NFS 4.1", I'll politely ask you all to bite your tongues. We got this all working using an ioSafe 1513+ cluster running a beta version of the Synology DSM, and folks who should know what they're talking about claim that NetApp (amongst others) also supports NFS 4.1. And, quite frankly, if Synology can do it, everyone else can too.
Another critical item: vSphere 6.0 supports NVMe out of the box. For those who don't know, NVMe is the successor to AHCI, enabling PCIe SSDs to talk to servers with nothing more than the one standardised driver.
NVMe also allows for a far greater queue depth than AHCI (65,536 commands per queue for 65,536 queues in NVMe versus 32 commands in one queue for AHCI). This is absolutely critical for VSAN, whose performance has been bounded by queue depth issues since its inception.
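Back-of-the-envelope, the figures above work out like this (a quick Python sketch of the arithmetic, nothing VMware-specific):

```python
# Outstanding-command ceilings from the spec figures quoted above:
# AHCI allows one queue of 32 commands; NVMe allows up to 65,536
# queues of 65,536 commands each.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_536, 65_536

ahci_total = AHCI_QUEUES * AHCI_DEPTH   # 32 commands in flight, total
nvme_total = NVME_QUEUES * NVME_DEPTH   # over four billion, in theory

print(f"AHCI ceiling: {ahci_total:,} outstanding commands")
print(f"NVMe ceiling: {nvme_total:,} outstanding commands")
print(f"That's a {nvme_total // ahci_total:,}x difference")
```

No real drive or controller gets anywhere near that theoretical ceiling, of course, but it shows why queue depth stops being the bottleneck the moment NVMe enters the picture.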
I saw what Micron was able to do with its all-flash VSAN at VMworld, and it was pretty impressive. I'd dearly love to put Supermicro's new line of hot-swap-capable NVMe servers together with an NVMe version of Micron's upcoming datacenter spin of the M600. It would be interesting to see if I could push Maxta on VMware past five million IOPS. NVMe is the future, and the companies out in front (like VMware and Supermicro) are the ones who get to write it.
As for VSAN itself, the biggest change that I noted was the ability to flag drives as SSD (or not) as you desire. This is an absolutely huge piece of "ease of use" for those of us running virtualised VSAN environments in our labs, or those using experimental drivers for some hardware elements that don't quite report everything to ESXi as they should.