
Before the PC: IBM invents virtualisation

A brief history of virtualisation

Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation.

As described in our first article in this series, current PC virtualisation means either hardware-assisted (Hyper-V, Xen etc) or all-software (VMware) full-system virtualisation.

Full-system virtualisation means a full-fat server OS running multiple virtual machines, each a complete emulated PC with an emulated chipset and emulated disk drives, running a complete full-fat server or client OS. What the mainstream – that is, Windows-using – world seems to have forgotten, if it ever knew at all, is that there are other ways to crack the virtualisation nut, with their own benefits.

Virtualisation got really big, really quickly on the PC in three stages. Firstly, VMware showed that it could be done at all, in defiance of Popek and Goldberg's virtualisation requirements, which the x86 architecture famously failed to meet.

Secondly, this caught on to the extent that Intel and AMD added hardware virtualisation support to their processors. Thirdly came the rise of 64-bit machines with many CPU cores and threads and umpteen gigs of RAM: resources that existing 32-bit OSs and apps can't use effectively, but which virtualisation devours with relish.

PC virtualisation is not ready for the big time just yet

Currently, however, the PC's full-system virtualisation is just about the simplest, most primitive and inefficient kind. When you look at the fancy tools that VMware and Microsoft are creating to provision and manage VMs – and the large-scale rollouts that are starting to occur – it's easy to forget that this is not a mature technology. PC virtualisation is still in its youth, and a few hairs on its chin don't mean it is ready for the big time just yet.

Before you can understand how far it has yet to go, though, you need to know a bit of the background. And there's more of it than you might expect.

Before the PC: IBM invents virtualisation

Of course, there is nothing new under the Sun. (Or should that be under the Oracle, these days?) The arrival of ubiquitous virtualisation on the PC can be seen as delivering one of the last pieces of the feature set introduced by IBM's System/360 computers of the 1960s.

An original member of the System/360 family announced in 1964, the Model 50 was the most powerful unit in the medium price range.

IBM System/360: Hot new tech from the 1960s

Launched in 1964, the S/360 was intended from the start to be a whole range of compatible computers, stretching from relatively small, inexpensive machines to large, high-capacity ones. The S/360 took a radical new approach: all would run the same software, so that programs could be moved from one machine to another without modification – a bold innovation at the time.

Some of the exotic new features of the S/360 might sound familiar: memory addressed in units of fixed-length bytes; a byte always being eight bits; words being 32 bits long. What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode, which again is now a standard feature of most computers.

One of the things that the S/360 didn’t do at first, though, was the then-new feature of time-sharing. IBM systems had traditionally taken a batch-oriented approach: operators submitted "jobs" which the machine scheduled itself to run, without user interaction, whenever enough free resources were available.

Time share

In the mid-1960s, though, interactive computing was becoming popular: people were sitting at terminals, typing commands and getting the response immediately, as opposed to a pile of printouts the next day. But back then, a single computer was too expensive to be dedicated to just one person, so DARPA sponsored "Project MAC," one focus of which was building operating systems that would allow multiple people to use a single machine at once, via dumb terminals.

IBM wanted in on what might be a lucrative new market, so it set up the Cambridge Scientific Centre (CSC) to work on time-sharing for the S/360. IBM designed a special dual-processor host for the job, the S/360-67, and built a time-sharing OS for it, imaginatively named TSS. The snag was that it never worked satisfactorily.

One of the chief problems was that the S/360 didn't include some of the key features necessary for time-sharing, such as support for virtual memory and what was much later called a memory-management unit (MMU). For the PC, this has been no big deal since the Intel '386 appeared in 1985 – a good two decades later.

Mind you, it took until 1993 for Windows NT 3.1 to appear, the first edition of Microsoft's OS properly equipped to exploit these features. Users of SCO Xenix, among other Unices, had been happily multitasking on 386s for about five years by then. Soon after, so had intrepid users of Windows/386 2.1 and later Windows 3 in Enhanced Mode – if they were lucky and it didn't bluescreen on them, anyway.

Multics

But back to the 1960s. MIT, home of Project MAC, turned down IBM's flaky TSS/360, went with a 36-bit General Electric mainframe instead, and developed a time-sharing OS for it called Multics.

Multics memorabilia: a badge captioned "You never outgrow your need for MULTICS"

You might well never have heard of Multics – the last machine running it was shut down in 2000 – but you will have heard of the OS it inspired: Unix.

Unix was conceived as a sort of anti-Multics – "Uni" versus "Multi", geddit? Unix was meant to be small and simple, as opposed to the large, complicated Multics. Consider the labyrinthine complexity of modern Unix and ponder what Multics must have been like.

Another famous offspring of Project MAC was the MIT AI Lab, from which sprang Richard Stallman, Emacs, the GNU Project and the Free Software movement. It all worked out in the end, but you might like to reflect for a moment on the rarity of 36-bit hardware or Multics systems today. Project MAC's legacy was not products or technology, but rather a pervasive influence over the future of computing.

When Project MAC went off in its own, non-IBM direction, it left IBM's CSC with nothing to do. Hoping to survive, CSC decided to press on with a different approach.

It took some lessons from an earlier IBM virtualisation project, the M44/44X, based on the pre-S/360 IBM 7000 series mainframe. The M44/44X was an attempt to implement partial virtualisation.

This was conceptually comparable to the modern open-source Xen hypervisor. On x86 CPUs without hardware virtualisation support, Xen can't trap (ie, catch and safely handle) all of the instruction set, so it requires guest OSs to be modified so that they don't use the instructions it can't trap.

Today, this is called paravirtualisation: guests can only use a subset of the features of the host. Back in the early 1960s, IBM's M44 did much the same: it implemented what its developers called a "virtual machine," the 44X, which was just that critical bit simpler than the host.
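To make the mechanism a little more concrete: in a paravirtualised guest, the kernel never issues the privileged operations the hypervisor can't trap; it asks the hypervisor to do them on its behalf instead. The C sketch below illustrates that idea in miniature – the names (pv_ops, hypercall_set_page_table and so on) are invented for illustration and are not the real Xen or Linux interfaces.

    /* Illustrative sketch only; names are hypothetical, not real Xen/Linux APIs. */
    #include <stdint.h>
    #include <stdio.h>

    /* What a native kernel does: execute the privileged operation
       directly (eg, load a new page-table base register). */
    static void native_set_page_table(uint64_t phys_addr)
    {
        printf("native: privileged instruction, page tables at 0x%llx\n",
               (unsigned long long)phys_addr);
    }

    /* What a paravirtualised guest does instead: ask the hypervisor,
       via an explicit call, to do the privileged work on its behalf. */
    static void hypercall_set_page_table(uint64_t phys_addr)
    {
        printf("hypercall: hypervisor, please switch my page tables to 0x%llx\n",
               (unsigned long long)phys_addr);
    }

    /* The guest kernel routes all such operations through a table of
       function pointers, so the same kernel can run either natively or
       as a slightly simplified guest under the hypervisor. */
    struct pv_ops {
        void (*set_page_table)(uint64_t phys_addr);
    };

    int main(void)
    {
        /* Booting as a guest: point the table at the hypercall versions. */
        struct pv_ops ops = { .set_page_table = hypercall_set_page_table };

        /* The guest never issues the raw privileged instruction itself. */
        ops.set_page_table(0x1000);

        /* Booting on bare metal would instead use the native versions. */
        ops.set_page_table = native_set_page_table;
        ops.set_page_table(0x2000);

        return 0;
    }

The real Xen interface is, of course, far richer than this, but the design choice is the same: the guest runs on a slightly simplified, co-operative version of the machine – just as the 44X was a slightly simplified version of the M44.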
