Modularity for all! The data centres you actually want to build

Democratising the build out of racks

Portability and modularity in the world of data centres aren’t new: for years they’ve been the preserve of the military and others operating in temporary or hostile environments.

You put your data centre gear in a ruggedised, self-supporting unit of some kind and walk away, managing it remotely. Increasingly, however, modularity is becoming something for those of us in the mainstream – at least, those of us still building data centres.

Data centres are an expensive proposition: a 10,000 sq ft facility designed to last 15 or 20 years will cost about $33m. And, unlike in years past, the returns are not guaranteed.

Firms are less interested in running their own tin and are shipping out the compute to the public cloud. Service providers, meanwhile, are struggling to make a profit against the likes of Amazon.

Increasingly, it makes less sense to initiate an old-school blanket data centre rollout; such projects are becoming the preserve of the web-tier super league, such as Facebook and Microsoft.

Rather, modularity is the new approach.

The pre-fabricated, modular data centre market is forecast to grow at a CAGR of 30.1 per cent to 2018, up from $1.5bn last year, according to 451 Research. Last year also saw a bout of activity that included Schneider Electric buying AST Modular and UK specialist Bladeroom entering the US market.

So what will vendors be unfurling from the back of those trucks, then?

Most of the major hardware vendors offer containerised data centres – for example, in recent years, Hewlett-Packard introduced the Performance Optimised Data Center (POD) and IBM the Portable Modular Data Center (PMDC) and the Scalable Modular Data Center (SMDC).

These are pre-configured units of computational power, delivered ready-cabled and all set to fire up, within a defined physical container. If there is a tradition in this sector, it is delivery in the humble shipping container – the type used by international freight firms, officially known as an intermodal container to distinguish it from any old container used for shipping things. These are typically 12 metres (40 feet) long and can be thrown open at one end to access the payload inside.

The late Sun Microsystems was first out of the box (so to speak) with Project Blackbox – which became the Sun Modular Datacenter, or (on The Reg) trailer-park computing – in 2006. A Project Blackbox cluster packed with over a thousand AMD Opteron processors in a 20-foot intermodal container joined the 2007 TOP500 list of non-distributed compute power at number 412 – impressive for a non-standard approach to the supercomputer question.

In July 2007, a Sun Modular Datacenter containing 252 Sun Fire X2200 compute nodes was deployed at the SLAC National Accelerator Laboratory – the particle accelerator at Menlo Park formerly known as the Stanford Linear Accelerator Center, whose work has produced three Nobel Prizes in Physics. HP followed with the POD, in three flavours: the 40c, the 20c, and the 240a.

The 40ft-long 40c, launched in 2008, was a bit of a beast. With capacity for twenty standard 50U data centre cabinets and up to 27kW of power per rack, the density of the HP PODs is impressive: you can cram 3,500 compute nodes into each one, which HP claims is equivalent to 4,000 square feet of traditional data centre floorspace.

The 20c, two years later, was half the size at 20ft. With a mere ten racks it held half the kit too, but was no less dense than its sibling, and both the 20c and 40c achieve an impressive Power Usage Effectiveness (PUE) of 1.25 thanks to some nifty water cooling.

The 240a, in 2011, saw things get really dense. You can fit an astonishing 44 industry-standard 50U cabinets, in a twin-aisle configuration, into the latest HP POD. Go heavy on blades and that could mean over seven thousand compute nodes in a space smaller than your average city bus – a load HP claims would take over 10,000 square feet of traditional brick-built data centre space.

I’ve not even mentioned the cooling credentials of the 240a – nicknamed the HP EcoPOD – which can take advantage of built-in, free-air cooling modules for a PUE as low as 1.05 in the right locales.
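For the curious, PUE is simply the ratio of total facility power draw to the power consumed by the IT kit itself – 1.0 would mean every watt goes to compute, with cooling, power distribution and lighting adding nothing. A minimal sketch, with illustrative numbers (the kilowatt figures here are hypothetical, not HP's):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.05 means the facility draws only 5 per cent more power
    than the IT load alone - cooling and distribution overhead is tiny.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical EcoPOD-class numbers: 1,000kW of IT load,
# 50kW of cooling/distribution overhead on top.
print(round(pue(1050.0, 1000.0), 2))  # 1.05
```

Lower the overhead (the gap between the two figures) and the PUE tends towards 1.0, which is what free-air cooling buys the 240a in the right climate.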

So, what does this brief tour through history and development in modular designs teach us about fixed-site data centres?

Over the years, we’ve been building some expandability into our data centres. From the basic principles of not populating every hall to allow for future growth, to empty banks in the plant room for parachuting in generators and UPS units if demand requires, it makes sense to plan ahead.
