Sysadmin Blog Docker, meet hype. Hype, meet Docker. Now, let's have a sit-down here and see if we can work through your neuroses.
For those of you who don't yet know about Docker, it is a much-hyped Silicon Valley startup productising (what a horrible unword) Linux containers into something that's sort of easy to use.
Containers aren't a new idea, and Docker isn't remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds.
Docker started out with the standard LXC containers that are part of virtually every Linux distribution out there, but eventually transitioned to libcontainer, its own creation. Normally, nobody would have cared about libcontainer, but as we'll dig into later, it was exactly the right move at the right time.
A container is a way of "jailing" an application so that it cannot see other applications. Applications – or clusters of applications that need to work closely together – are presented with their own slice of storage, memory, CPU and network.
Done properly, applications that are jailed cannot impinge upon the resources of other applications that reside in other jails, cannot see their storage and shouldn't even be aware that those other jails exist.
The good containers out there also offer "root privilege isolation", which means that running as the root user inside the container won't let you "break out of jail" and go do things to applications in other jails. If you want to manage the system in a containerised setup with root privilege isolation, you need to log in to the core host instance itself.
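Under the hood, this isolation comes from Linux kernel namespaces – the mechanism that LXC (and later libcontainer) builds on. A quick sketch, assuming a Linux host: every process carries its own set of namespaces, and the kernel exposes them under /proc.

```shell
# Every Linux process belongs to a set of namespaces; containers work
# by giving a jailed process its own private copies of these.
# List the namespaces the current shell lives in:
ls /proc/self/ns
# Typical entries include: ipc  mnt  net  pid  user  uts
```

The user namespace is the one behind root privilege isolation: uid 0 inside the jail can be mapped to an unprivileged uid on the host, so "root" in the container carries no weight outside it.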
The earliest containerisation likely to have been used by my Register readers is chroot. Chroot was terrible; it sort of isolated the file systems of chrooted apps from one another, and it offered no other forms of isolation. Breaking out of chroot jails was a popular pastime for young hackers in the 90s and something of a rite of passage.
The next major containerisation system was FreeBSD jails. The nerds looked upon them and realised that they were good. They were pressed into wide service amongst the security-minded and emulated many times. Linux-VServer was quickly made available, as were what would become the Virtuozzo* containers by Parallels.
The Big Unix vendors have always been interested in cutting up mainframes and servers using various forms of virtualisation, and they got behind containers in a big way during the aughties. Solaris, HP-UX and AIX all got on board, with proper containerisation integrated into their offerings.
LXC versus libcontainer
When Docker first came out, it was using a containerisation technology known as LXC. LXC is common, well known, and obviously widely distributed, but it also suffers from that bane of all things Linux: ecosystem fragmentation.
Each of the major distros implements LXC ever so slightly differently, ships different versions and is otherwise not cross-compatible. This is not a problem if your goal is to use containers just as everyone had been using FreeBSD jails for ages: to secure the contents of a single operating system.
In this traditional way of using containers, each operating system – and its various containers – would still be a "pet": unique and individual, the whole system would require individualised care and feeding. That was okay for fifteen-odd years, because people only conceived of containerisation as "jails" – a means of security – and not as operating-system-level virtualisation.
Solomon Hykes at dotCloud had distinctly different ideas. He viewed containers – properly implemented – as a means to package up applications for easier deployment. To Hykes, containers were not merely an extension of the security measures of an operating system; they were a competitor to hypervisors.
Hypervisors, of course, do more than just serve applications. They contain entire operating systems on which the applications run. For Hykes, this didn't matter: it was the ends that mattered, not the means. The application hosted was the goal, not the operating system, and containers offered the means to make applications easier to deploy.
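That packaging idea is what a Dockerfile expresses: the application and its dependencies are described once and shipped as an image, with no full guest operating system to manage. A minimal, hypothetical sketch – the app name, port and Flask dependency are illustrative assumptions, not something from this article:

```dockerfile
# Hypothetical sketch: package a small Python web app as an image.
FROM python:3              # base image supplies the userland, not a full VM
COPY app.py /app/app.py    # the application itself
RUN pip install flask      # assumed dependency, baked into the image
EXPOSE 8000                # port the app listens on
CMD ["python", "/app/app.py"]
```

Built with `docker build -t myapp .` and started with `docker run -p 8000:8000 myapp`, the container carries everything the application needs – which is the hypervisor-rivalling trick Hykes was after.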
Hykes built Docker as a means of doing exactly this, using LXC at the core, but he kept running up against that irritating fragmentation barrier. Eventually, the libcontainer project was started, and when Docker 0.9 came out, Docker switched from LXC to libcontainer as its default execution environment.
It was at this point that the Silicon Valley hype machine switched from "excited" to "an outrageously frenetic congress of gibbering hopefuls". Docker was now a rock star and, with the launch of 1.0, a major player that wasn't going to be stopped.
Having set the stage, then, the Sysadmin Blog will be back with more on Docker next week. ®
*Virtuozzo, by the way, is still the best container system available. Unfortunately, it isn't open source, and that means you have to pay if you want to use it. It also has the distinction of being the only halfway decent container solution to (currently) run natively on Windows. Like anything awesome, Virtuozzo has an open source clone, called OpenVZ.
Docker has a Windows solution too, but it's actually a femto-Linux kernel that runs under Windows and then fires up the containerisation software. Microsoft appears to be adding support for Docker to the next generation of Windows Server.
Read the next three parts of this four-part series: