Linux is so grown up, it's ready for marriage with containers
Beats dating virtualisation, but – oh – the rules
Linux is all grown up. It has nothing left to prove. There's never been a year of the Linux desktop, and there probably never will be, but it runs on the majority of the world's servers. It never took over the desktop; it did an end-run around it: there are more Linux-based client devices accessing those servers than there are Windows boxes.
Linux Foundation boss Jim Zemlin puts it this way: "It's in literally billions of devices. Linux is the native development platform for every SOC. Freescale, Qualcomm, Intel, MIPS: Linux is the immediate choice. It's the de facto platform. It's the client of the Internet."
Linux is big business, supported by pretty much everyone – even Microsoft. Open source has won, but it won by finding the niches that fit it best – and the biggest of these is the millions of servers that power the Web. Linux is what runs the cloud, and the cloud is big business now.
Which is why last year's LinuxCon Europe was full of smartly dressed professionals rather than beards and beer-guts, and also why every other talk seemed to be about containers.
One of the core technological enablers of the cloud is virtualisation: it achieves the fêted "web scale" by dividing tasks across multiple separate servers, and brings those servers online as and when the load requires, by starting and stopping VMs.
But VMs are expensive. Not in terms of money – although they can be – but in resources and complexity. Whole-system virtualisation is a special kind of emulation: under one host OS, you start another, guest one. Everything is duplicated: there's a second, complete OS, and the copy that does the work runs on virtual – in other words, pretend, emulated – hardware, with the performance overhead that implies. Plus, of course, the guest OS has to boot up like a normal one, so starting VMs takes time.
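To make the boot-time point concrete, here's a toy sketch from the management side, using libvirt's Python bindings (the connection URI and the guest name "guest01" are illustrative assumptions; a guest by that name would already need to be defined):

import time
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local QEMU/KVM hypervisor
dom = conn.lookupByName("guest01")      # an already-defined, powered-off guest

t0 = time.time()
dom.create()   # "powers on" the virtual hardware
# create() returns almost at once, but the guest OS still has to run its
# full boot sequence on that emulated hardware, so the VM won't actually
# be ready to do useful work for a while yet.
print("create() returned in %.2fs; guest still booting" % (time.time() - t0))
conn.close()

The call that starts the VM is quick; it's the guest's own boot process you end up waiting for.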
Which is what has led one wag to comment that: "Hypervisors are the living proof of operating system's incompetence."
Fighting words! What do they mean, incompetence? Well, here are a few examples.
The kernel of your operating system of choice doesn't scale well across tens of cores or terabytes of NUMA RAM? No problem: partition the machine, run multiple copies in optimally sized VMs.
Your operating system isn't very reliable? Or you need to run multiple OS versions, or apps that demand specific ones? No problem. VMs give you full remote management, because the hardware is virtual. You can run lots of copies in a failover cluster – and that applies to the host hardware, too: VMs on a failed host can be auto-migrated to another.
Even down at the small end of the scale – a SOHO operation with one server – it still helps. The operating system needs specific drivers and config to boot on a particular model of machine? If the box dies, the backup can't just be restored onto a newer replacement – it won't boot. No problem: dedicate the box to running a single VM. This provides a standard hardware template, eliminating driver problems: you can move the installed OS from one machine to another with impunity, unlike a bare-metal install. It facilitates backup and restore, capacity planning and more.
Make no mistake, virtualisation is a fantastic tool that has enabled a revolution in IT. There are tons of excellent reasons for using it, and it fits particularly well in the world of long-lived VMs holding elaborately configured OSes which someone needs to maintain. It enables great features, like migrating a live, running VM from one host to another. It facilitates software-defined networking, simplifying network design. If you have stateful servers, full of data and config, VMs are just what you need.
And in that world, proprietary code rules: Windows Server and VMware, and increasingly, Hyper-V.
But it's less ideal if you're an internet-centric business, and your main concern is quick, scalable farms of small, mostly-stateless servers holding microservices built out of FOSS tools and technologies. No licences to worry about – it's all free anyway. Spin up new instances as needed and destroy them when they're not.
Each instance is automatically configured with Puppet or Ansible, and they all run the same Linux distro – whatever your techies prefer, which probably means Ubuntu for most, Debian for the hardcore and CentOS for those committed to the RPM side of the fence.
In this world, KVM and Xen are the big players, with stands and talks at events such as LinuxCon devoted to them. Free hypervisors for free operating systems – but the same drawbacks apply: running Linux under Linux means lots of duplication of the stack, lots of unnecessary virtualisation of hardware, inefficient resource-sharing between VMs, slow VM start-up times, and so on.
And the reason that everyone is talking about containers is they solve most of these issues. If your kernel scales well and all your workloads are on the same kernel anyway, then containers offer the isolation and scalability features of VMs without most of the overheads. We talked about how they work in 2011, but back then, Linux containers were still fairly new and crude.
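As a quick reminder of the mechanism: the isolation comes from kernel namespaces, not emulated hardware. Here's a minimal sketch in Python of the underlying unshare(2) primitive – one namespace type only, run as root, and an illustration rather than how Docker itself is built:

import ctypes
import socket

CLONE_NEWUTS = 0x04000000   # from <linux/sched.h>: a new UTS (hostname) namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (root required)")

# From here on, this process has a private hostname: the change below is
# invisible to every other process on the machine.
name = b"container-demo"
libc.sethostname(name, len(name))
print(socket.gethostname())   # prints "container-demo"

Add PID, mount, network and user namespaces, plus cgroups for resource limits, and you have the guts of a container – no virtual hardware, no second kernel, no guest boot.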
Since then, though, one product has galvanised the development of Linux containers: Docker. Originally a wrapper adding some handy additional facilities to LXC, Docker has expanded to support multiple back-ends. A whole new section of the software industry is growing around Docker. New types of Linux distro are being built to host Docker containers, such as CoreOS and Red Hat's Project Atomic. CoreOS also has its own rival to Docker's container format, called Rocket. Docker isn't limited to Linux, either: existing Docker containers can be run on Joyent's SmartOS, based on a fork of OpenSolaris, and a version of Docker will be available to manage the Windows containers of Windows Server 2016, too. Even Oracle is making interested noises.
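To get a feel for how lightweight this is in practice, here's a short sketch using the Docker SDK for Python (the "docker" package on PyPI; it assumes a local Docker daemon is running, and the alpine image is just an example):

import time
import docker

client = docker.from_env()   # talk to the local Docker daemon

t0 = time.time()
# There's no guest OS to boot: the container is an ordinary process tree
# started inside fresh namespaces, sharing the host's kernel.
output = client.containers.run("alpine", "echo hello from a container",
                               remove=True)
print(output.decode().strip())
print("ran and exited in %.2fs" % (time.time() - t0))

Compare that with the libvirt example earlier: the same "start an isolated workload" operation, but with nothing to boot.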
Meanwhile, Canonical has a different take on the containers model with its own flavour, LXD.
None of this means the end of "traditional" virtualisation. Containers are great for microservices but, at least in their current incarnations, less ideal for existing complex server workloads. The current generation of container management tools is also far weaker than its VM equivalents, and as such, most people run their containerised workloads on top of a host OS inside a VM – even though there are performance penalties to doing so.
Plus, as containers pose a clear threat to existing hypervisor vendors, companies are scrambling to find ways to make VMs that behave more like containers.
Now that operating-system level virtualisation has finally arrived on the default Unix of the Web era, it is poised to radically transform the market – and Linux. And that means, too, lots of new code and lots of new buzzwords. ®