END-OF-YEAR ROUND UP 2016 was a year in which virtualisation became so mainstream, so expected, so accepted that it started to look like a moribund market.
But virtualisation's really only just getting started: storage virtualisation, network virtualisation and network function virtualisation are all in their infancy, and other uses for the idea are on the horizon.
It's easy to assume virtualisation's just part of the furniture if you start by looking at server virtualisation, a technology that's nobody's idea of a cool new toy.
VMware openly admitted as much, saying it will happily slow vSphere development because customers are happy with the platform and don't want rapid changes. Microsoft just about made the same point by hardly mentioning Hyper-V during the launch of Windows Server 2016, before moving on to make lots more noise about anything coloured Azure.
The Distributed Management Task Force even ended work on the Open Virtualisation Format, saying it is a mature standard that needs no extra effort.
Hybrid cloud and virtualisation's role in making it happen is what most market players want to talk about instead. The entire industry assumes hybrid clouds are going to be a long-term thing, sometimes because “snowflake applications” won't ever be allowed into the cloud, sometimes for cost reasons, sometimes because of latency and sometimes just because people want to watch blinkenlights blinking.
Whatever the reason for hybrid clouds, the virtual machine will be their basic unit of workload portability. But other forms of virtualisation will make hybrids hop. Virtual storage and network virtualisation will make it easier to move VMs around inside hybrid clouds.
Some readers may have read the last paragraph and asked if El Reg has missed containers' role in enabling workload portability. Both of the big virtualisers are certainly keen on facilitating containers, because it's agreed that new and/or cloud-native applications will often use them. They're betting that in many cases the containers will run inside VMs, so that operations teams can keep using the tools they know, rightly trust and have already paid for.
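The containers-in-VMs pattern described above can be sketched with everyday tooling. This is a hypothetical illustration using Vagrant's built-in Docker provisioner (the box name and container are placeholders, not anything the vendors above ship): ops teams manage the VM as usual, while the application lands inside it as a container.

```ruby
# Hypothetical sketch: a VM that hosts a container, so operations
# teams keep their VM tooling while the workload ships as a container.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"  # assumed base box

  # Vagrant's Docker provisioner installs Docker inside the VM,
  # then pulls and runs the named container within it.
  config.vm.provision "docker" do |d|
    d.pull_images "nginx"
    d.run "nginx", args: "-p 80:80"
  end
end
```

The point of the pattern is that the container never touches bare metal: everything the ops team already does to VMs (snapshots, backups, monitoring) still applies.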
Virtualisation is also going to get big on “the edge”, a term now being used to describe substantial IT kit deposited on the fringes of telco networks. Edge computing makes sense because there are only so many places in the world with reliable power sources and fat fibres to carry data from network cores to consumers. Many of those places are filling up fast, it's expensive to build more and even when breaking ground is possible the new bit barns may not be close to users. But carriers' exchanges and cell towers are both robustly-built-and-connected locales into which a lot of computing power can be installed and VMs deployed for network function virtualisation.
That edge will also include cars and all manner of other connected gadgets, because designers have come to realise that they're often going to be better off running VMs deployed in the pleasantly-constrained environs of a hypervisor rather than exposing themselves on whatever bare metal runs inside an internet thing.
VMware admits that it was surprised when micro-segmentation became the main reason users were interested in its NSX network virtualisation products. It turns out that plenty of organisations are willing to wear some complexity if it delivers better security. Next year (probably) VMware will offer real-time VM behaviour whitelist-checking from inside the hypervisor, offering a new reason to do more virtualisation.
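The whitelist idea is simple enough to show in a few lines. This is a toy illustration of the concept only, not VMware's actual mechanism: anything a VM runs that isn't on a known-good list gets flagged for a closer look. All names here are invented for the example.

```python
# Toy illustration of behaviour whitelisting -- the concept behind
# hypervisor-level checks, not any vendor's real implementation.
ALLOWED = {"nginx", "sshd", "cron"}  # hypothetical known-good processes


def flag_unexpected(running):
    """Return, sorted, any running processes not on the whitelist."""
    return sorted(set(running) - ALLOWED)


print(flag_unexpected(["nginx", "sshd", "cryptominer"]))  # ['cryptominer']
```

Doing this check from the hypervisor, rather than with an agent inside the guest, is the interesting part: malware in the VM can't easily tamper with a watcher that sits underneath it.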
Citrix and VMware have both also lined up several ways to use virtualisation to take workloads into mobile devices.
So while server virtualisation is mostly done, and is frankly now a bit boring, the concepts pioneered to make it happen are going to start popping up in lots of other new and interesting places.
Beyond VMware and Microsoft, both of which are in rude health, the virtualisation industry looks quiet.
Citrix had a quiet virtual year, offering a XenServer update that surprised or excited nobody. Overall the company is doing well: a year or three back it looked to be losing its balance at the top of a slippery slope. Now it looks to be comfortably stable and to have decent prospects. And it pulled that off after getting out of the server virtualisation business.
Xen, the open source version, had a mixed year. On the upside, its development cycle accelerated and it made two major releases in the form of Xen 4.7 and 4.8. On the other, nasty bugs led to questions about the rigour of its development process. The project is too important to big clouds to suffer critical-mass problems like OpenOffice, so it will probably improve on the security front.
Xen's developers are currently doing a lot of work to make the hypervisor run well on ARM CPUs. It will be interesting to see if those efforts kickstart the ARM server business in 2017. The project is also pushing Xen into automotive and embedded applications. Also of interest will be how the hypervisor evolves as Chinese companies increase their contributions. Huawei's already high on the list of contributors and we hear of other Chinese concerns offering Xen-based virtualisation stacks. We're not saying Chinese participation is in any way a risk, just that when new contributors come aboard they can sometimes tug a project in interesting directions.
Oracle's still plugging away in virtualisation without ever seeming to give anyone beyond its base even the slightest reason to consider a move. That won't change in 2017, especially as it encourages more users to get cloudy.
Virtuozzo still deserves the occasional glance, as it's trying with its service-provider-centric efforts. KVM continues to be widely used without disturbing anyone. ®