Sysadmin Blog

Virtualization is not new - mainframes have been doing it for ages, and other non-x86 operating systems have been slicing up servers for quite some time as well. Yet if I had to pin a single IT label on the first decade of this century, I'd tag it as the decade of x86 virtualization.
Virtualization went mainstream in the noughties. It graduated from a technology almost exclusively used in large enterprise servers, to something so common that even smaller SMEs are using it for Virtual Desktop Infrastructure (VDI) deployments.
To start a discussion on VDI, or any other aspect of virtualization, a primer is in order. If you know a fair amount about computers, then explaining the basics is reasonably simple. Virtualization is a method by which you can run multiple isolated operating systems (guests) on a single physical computer (the host). You install your operating system to a Virtual Hard Drive (VHD) which acts a lot like an .iso file: it contains the entire file system of your virtual machine in one big file.
You devote a slice of your host’s resources to a guest, allowing that guest to occupy a fixed amount of RAM, share a set number of CPU cores and access other resources such as optical drives or network cards. You can turn guests on or off at will, as easily as mounting an .iso in Daemon Tools.
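To make that concrete, here is what carving out a slice of the host looks like with QEMU, a common open-source hypervisor. This is a minimal sketch; the disk image name, ISO name and sizes are placeholders, and the exact flags you'd use in production would vary.

```shell
# Create a 20 GB virtual disk image. A qcow2 image grows on demand,
# much like the single-file VHD described above.
qemu-img create -f qcow2 guest-disk.qcow2 20G

# Boot a guest with a fixed 2 GB slice of RAM, 2 CPU cores, the disk
# image as its hard drive, an ISO mounted as a virtual optical drive,
# and a user-mode (NAT) network card.
qemu-system-x86_64 \
  -m 2048 \
  -smp 2 \
  -hda guest-disk.qcow2 \
  -cdrom install-media.iso \
  -net nic -net user
```

Every resource the guest sees - RAM, cores, drives, NIC - is just an argument here, which is the whole point: the slice is whatever you say it is.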
While this will explain the basics of virtualization to the kind of computer adept who already has Daemon Tools installed, explaining this to your pointy-haired boss is another challenge entirely. I have gone through many different models of explanation and the one that has worked best so far is a boat analogy.
Picture a large ocean-going vessel whose engines drive a single large propeller. That one large propeller has an awful lot of power available to it, but the only way to steer is with a rudder placed behind it. It’s really good at going in a straight line, but remarkably clumsy and awkward for anything else.
Now think of more modern ships, where you instead use the generators to produce electricity and drive dozens or even hundreds of smaller propellers. Instead of having rudders, these smaller and more numerous propellers can turn through 360 degrees, offering the ability to individually direct thrust. You lose a tiny bit of efficiency in converting to electrical power and routing the current all over the ship to power your props, but now your ship is far easier to steer.
To extend the boat analogy, virtualization is the ability to split the resources of a single physical computer (the host) to support multiple smaller virtual computers (the guests). No single guest would run as fast as if it were installed directly on the host system, but you can run a lot more guests (thus doing a lot more things simultaneously) using virtualization than you could with a physical box. The server doesn’t go as fast in a straight line, but it is a heck of a lot more manoeuvrable.
From there it gets significantly more complicated; I could write an entire set of articles dedicated to the more advanced concepts (and in fact, I will!). Things like RAM deduplication, variable versus fixed VHDs, hardware assisted virtualization, IOMMU and more - they are all necessary for any virtualization admin to know, but for now only the basics are required.
With VDI, the actual work your users do on their desktop is not performed on the computer in front of them. They use a remote access application (for example RDP or X11 forwarding) to connect to a virtual operating system living on a server somewhere. The computer they are accessing it from doesn’t actually matter all that much. It could be a many-kilodollar gaming rig, a cheap thin client or even a mobile phone.
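For the curious, connecting to such a virtual desktop from a Linux box is a one-liner with either of the remote access methods mentioned above. The hostname, username and application here are invented for illustration.

```shell
# RDP: connect to a full remote desktop session on the VDI server
# using the FreeRDP client.
xfreerdp /v:vdi-host.example.com:3389 /u:jdoe /size:1920x1080

# X11 forwarding: run a single application on the remote guest and
# display its window locally over SSH.
ssh -X jdoe@vdi-host.example.com xterm
```

In both cases the local machine only pushes pixels and input events; all the real computing happens on the virtual machine at the other end, which is why the client hardware matters so little.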