Before 'the cloud' was cool: Virtualising the un-virtualisable

A brief history of virtualisation

Buzzwords often have very short lifetimes in IT. Today it's cloud computing, but there would be no infinitely scalable cloud without the previous "big new thing": virtualisation. We take it for granted now, but it's worth remembering that it is still quite a new and relatively immature technology, with a long way to go.

In this article, and the next four articles, The Register looks at the history of virtualisation and what lessons we can learn from it about the technology's probable future.

How full-system virtualisation came to the PC

Around 2007 to 2008 was the peak time for virtualisation on the PC. In '07, VMware IPO'd and its shares went from US$29 to $55 on the first day. Citrix bought XenSource for half a billion bucks.

Microsoft, as is its wont, stuck virtualisation into Windows for free, based in part on the goodies it bought in when it acquired Connectix – whose Windows version of Virtual PC Redmond now also gives away for nothing.

With version 2.6.20, the Linux kernel gained its own virtualisation system: KVM, the Kernel-based Virtual Machine – nothing to do with keyboard-video-mouse switches, but a whole-system virtualisation technique that uses Intel or AMD's hardware virtualisation instructions.
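
For the curious, here's a minimal sketch of what that looks like from userspace, assuming a Linux box with the kernel headers installed: KVM shows up as the /dev/kvm device and is driven entirely through ioctl() calls. This does no more than check the API version and create an empty VM; a real monitor such as QEMU then goes on to add guest memory and virtual CPUs.

```c
/* Minimal sketch: poking the KVM API from userspace on Linux.
 * Error handling is kept to the bare minimum. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    /* The in-kernel hypervisor is exposed as a character device. */
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* The stable KVM API identifies itself as version 12. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    /* Create an empty virtual machine; the returned file descriptor is
     * used for everything else - guest memory, vCPUs and so on. */
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vmfd < 0) {
        perror("KVM_CREATE_VM");
    } else {
        puts("created an (empty) virtual machine");
        close(vmfd);
    }

    close(kvm);
    return 0;
}
```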

But although full-system virtualisation only came to the PC in the late 1990s, the idea is nothing really new. IBM mainframes pioneered it in the 1960s, and Sun's and IBM's big Unix servers have been doing it for years. PCs have supported various forms of virtualisation for over 25 years, and it was old even then. But the thing is, there are actually multiple types of virtualisation – not that you'd know that today.


Sexy: running multiple OSs side-by-side simultaneously on a single machine.

At heart, "virtualisation" in this context really just means running one operating system under another, so that a single machine can run multiple OSs side-by-side simultaneously. That's nothing new; you could run multiple copies of DOS, booted straight off floppy, under OS/2 v2.0 in 1992, and the 386 edition of Windows 2.1, known back then as Windows/386, used the Virtual 8086 mode of the 80386 CPU to run multiple DOS applications side-by-side.

The Popek and Goldberg Virtualisation Requirements

Even before the 386, with its hardware support for virtualising 8086 code and operating systems, Locus Corporation's DOS Merge software allowed you to run MS-DOS as a task under various forms of Unix, even on an 80286.

Locus was founded by Gerald Popek, who in 1974 co-wrote "Formal Requirements for Virtualizable Third Generation Architectures" with Robert Goldberg – a document that encapsulated what came to be known as the Popek and Goldberg Virtualisation Requirements.

Although in its time it was very handy to be able to run DOS under Unix, or multiple DOS sessions under Windows or OS/2, it would be even more useful if you could run any full PC OS under any other. This is what is called "full-system virtualisation", and according to Popek and Goldberg, the x86 architecture couldn't do it.

Other processors could – it's no problem on IBM POWER or PowerPC, or on SPARC – but x86 didn't meet the second of the requirements: that the virtual machine monitor – the program running on the host OS – must remain in complete control of the hardware.

The problem is that certain instructions in the x86 instruction set can't be trusted – they directly affect the whole machine, so if a guest OS ran them, it would bring down the host OS (and thus all guests too) in a tumbling heap.
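
For a concrete taste of the problem, here's a sketch assuming GCC-style inline assembly on an x86-64 Linux machine: POPF is one of the classic offenders. A kernel uses it to turn interrupts on and off, but executed in unprivileged mode the CPU neither honours the change nor raises a trap – so a would-be hypervisor never gets the chance to step in and fake the result for its guest.

```c
/* Sketch: POPF is "sensitive but not privileged" in Popek and Goldberg
 * terms.  A de-privileged guest kernel trying to disable interrupts
 * with it gets silently ignored, and no trap is raised for a
 * hypervisor to intercept. */
#include <stdio.h>

#define IF_BIT (1UL << 9)   /* the interrupt-enable flag in RFLAGS */

static unsigned long read_flags(void)
{
    unsigned long flags;
    __asm__ volatile ("pushfq; popq %0" : "=r"(flags));
    return flags;
}

int main(void)
{
    unsigned long before = read_flags();

    /* Do what a kernel does to disable interrupts: write back a flags
     * image with IF cleared.  In ring 0 this works; in user mode the
     * CPU quietly drops the change instead of faulting. */
    __asm__ volatile ("pushq %0; popfq" : : "r"(before & ~IF_BIT) : "cc");

    unsigned long after = read_flags();

    printf("IF before: %d  IF after 'disabling' it: %d\n",
           (int)!!(before & IF_BIT), (int)!!(after & IF_BIT));
    return 0;
}
```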

Lord of the rings

Although it was hard to get around, the problem is simple to describe. All you have to understand is that computer processors run in several different modes, sometimes called "protection levels".

Classical x86 has four numbered "rings": 0, 1, 2 and 3. The higher the number, the lower the privilege. Programs running in ring 3 have to ask the operating system for resources, and can't see or touch stuff belonging to other programs in ring 3. Code in ring 0 is the boss, and can directly access the hardware, control virtual memory and so on. So, naturally, the core code of the OS itself runs in ring 0, as it is controlling the whole show.
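
To see the ring boundary doing its job, here's a small sketch (again GCC inline assembly on x86-64 Linux, purely illustrative): HLT is a genuinely privileged instruction, so when ring 3 code tries it the CPU raises a general protection fault, the kernel catches it in ring 0, and the offending process just gets a signal – the machine carries on regardless.

```c
/* Sketch: what the ring mechanism buys you.  HLT may only be executed
 * in ring 0; attempted from ring 3 it faults, and Linux turns the
 * fault into SIGSEGV for the process.  The OS stays in charge. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void on_fault(int sig)
{
    (void)sig;
    /* Not strictly async-signal-safe, but fine for a demonstration. */
    puts("caught SIGSEGV: the kernel refused to let ring 3 halt the CPU");
    exit(0);
}

int main(void)
{
    signal(SIGSEGV, on_fault);

    puts("attempting HLT from ring 3...");
    __asm__ volatile ("hlt");   /* privileged: raises a fault in user mode */

    puts("never reached");
    return 0;
}
```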

(In fact, the vast majority of PC OSs use only rings 0 and 3. For the historically inclined, only two mainstream PC OSs ever actually used more than these two rings. One was IBM's OS/2: its kernel ran in ring 0 and ordinary unprivileged code in ring 3, as usual, but unprivileged code that did I/O ran in ring 2.

This is why OS/2 won't run under Oracle's open-source hypervisor VirtualBox in its software-virtualisation mode, which forces ring 0 code in the guest OS to run in ring 1. The other exception – depending on configuration – was Novell NetWare 4 and above, which could move NLMs into higher rings for better stability, or lower ones for more performance.)

The trick of full-system virtualisation is to find a way to take control away from privileged code that is already running in ring 0, so that another, even-more-privileged program can control it. If you can do this transparently – meaning that the guest does not know it's being manipulated – then you can run one operating system as a program under another. That gives you the ability to run more than one OS at a time on a single machine.

The result is a hierarchy of OSs. In the old days, early OSs were called by the rather more descriptive name of "supervisors". An OS is just another program, but it's the one that supervises other programs. What do you call a program that supervises multiple supervisors? A "hypervisor". The OSs running under it are called "guests" and the boxes they run in are "virtual machines" or VMs.
