
A brief history of BLADE SERVERS: From the Big Bang to the, er, 'unblade'

Ease of use is hard to kill

Nowadays we take blade servers for granted, but a lot of moving parts had to come together for us to get where we are today. Tracing their history can help us make better judgments about how the technologies of tomorrow will evolve.

The development of VMEbus architecture in about 1981 was perhaps the beginning of technological approaches that would evolve into blades. VMEbus involved plugging multiple boards into a backplane.

This approach was built upon over the years until the Cubix-ERS (Cubix Enhanced Resource Subsystem) was born in 1995. The Cubix was the first attempt to design a system that met the same goals as a modern blade system but it wasn't quite there.

The biggest issues were that hot-swap capabilities were somewhat limited and many resources could not be shared. In many ways it was closer to today's "unblade" systems such as the Supermicro FatTwin. Stephen Foskett has a great write-up on it here.

Standards make a difference

Ultimately, it was the development of various standards – notably CompactPCI – that enabled the commercial adoption of bladed servers. Incorporation of technologies such as hot plugging into CompactPCI allowed systems to overcome many of the limitations that plagued its Eurocard-based VMEbus predecessor.

The original CompactPCI bus allowed for the creation of a chassis-based computer in which individual cards could be added and removed in much the same way as with VMEbus but using standard PCI signalling.

You could think of these cards as not dissimilar to a PCI card in a modern server, with the chassis being the server. The difference was that the PCI cards were designed to come out without a lot of fussing about.

It was not until the PICMG standards body released version 2.16 of the CompactPCI specification that the modern blade server emerged. The CompactPCI packet switching backplane allowed Ethernet to be used to interconnect cards in the chassis.

This meant a server controlling a set of pluggable cards could evolve into an administrator unit overseeing a set of independent networked servers.

For quite some time blade chassis simply dropped the PCI signalling component of CompactPCI (and its successor, CompactPCI Express) entirely, as the interconnect that mattered most was typically Ethernet.

In an interesting twist, as Ethernet is increasingly seen as a bottleneck in today's flash-driven data centre, PCI Express signalling may experience a resurgence in the next generation of CompactPCI Express chassis, making for an A3Cube-like inter-node communications capability.

The standards development process was long but most of the elements that would be in the final version were known well in advance of its official ratification in September 2001.

Christopher Hipp and David Kirkeby applied for the blade server patent in 2000. They pushed out the first blade server from their company, RLX Technologies, within a month of the CompactPCI standard being ratified, and were granted the patent in 2002.

RLX was mostly made up of ex-Compaq employees and was bought by HP in 2005, only a few years after HP bought Compaq. When RLX was sold, Hipp wrote an article, Chapter in the history of blade computing closed.

Hipp's piece lists RLX's blade competitors (as of 2005) as HP, Compaq, Dell, IBM and Sun, most of which entered the market in 2002 and 2003.

Blades have the power

Blades are often powerful computers in their own right, but they didn't start out that way. The earliest blades were not that different from today's microserver or physicalisation projects.

With weak processors designed more for low power consumption and low heat generation, the closest analogue today would be products such as Supermicro's MicroCloud or HP's Moonshot.


Today blades usually contain two processors and as much RAM as can physically fit in the box. They run tier-1 enterprise workloads and virtualisation, and are big, powerful and energy-hungry.

Gone are the days when blades were essentially built out of laptop components and shoved into shared chassis to drive down energy costs for a single workload as low as they would go. Today blades are above all an exercise in density of computing. HP's liquid-cooled Apollo 8000 blade system can cram 80 kilowatts into a single rack.

In turn this has spawned a new generation of microblades. The original concept of "a lot of low-power, weak cores" still has its adherents. But instead of cramming laptop parts into a shared chassis, we are now cramming what amounts to smartphones into even smaller chassis to achieve the same goals.
