You care about TIN? Why the Open Compute Project is irrelevant

Just say 'no' to rolling your own

There’s a lot of angst right now over the Open Compute Project, Facebook’s open-source data centre gift to the world. Some, as detailed by El Reg, describe Open Compute testing as “a complete and utter joke.” One that isn’t apparently very funny.

At least, not to Cole Crawford, executive director of the Open Compute Project. In a pointed rebuttal, Crawford lauds OCP’s “democratized process whereby anyone [can] validate a config using open source tools,” and declares the associated certification as Very Good.

I can’t help but wonder, however, if the whole thing is completely and utterly irrelevant.

If Supernap’s Mark Thiele is to be believed, and I think he is, “95 per cent plus of all companies have failed to create the appropriate organization to build, operate, protect, monitor, sustain, and lifecycle a complex system like a data centre.”

Even if that data centre is open source.

Much ado about nada

According to the anonymous engineer cited by The Reg, OCP is all about “cheap engineering, components, testing, manufacturing, low power consumption and low thermal generating systems. Quality does not seem to be a metric of consideration.”

This, he (or she) contends, is a far cry from “name-brand system, storage and networking companies [that] spend quite a bit of money testing and certifying solutions over the course of several months,” all while “us[ing] specialised software tools, internal diagnostics, protocol analysers, error injectors and lots of thermal/electrical/shock/vibe type of certifications.”

Crawford, in response, argues that such statements are both untrue and beside the point, saying: “There is a place for fault-tolerant gear and even a place for expensive certifications but hyper scale/cloud environments is not that place.”

OCP hardware, in other words, can deliver high availability but is less concerned with fault tolerance.

What Thiele argues, however, is that we’re even delusional about our ability to build that high availability into our data centres.

Getting high on data centres

Not that we’re not determined to try, anyway, despite all evidence that suggests rolling your own data centre is a Very Bad Idea.

We just can’t escape the heroin hit of potential, as Thiele explains: “Most companies fool themselves into believing they understand and have planned for the ramifications of owning a data center because of project euphoria.”

And so we conveniently forget the burden of ownership that John Sheputis, chief executive of Fortune data centres, calls out: “The type of work and focus needed to run a data center effectively is very different than running a short-term project. A data centre requires day in and day out focus on being perfect and making marginal improvements, while avoiding risk to production operations.”

What Thiele and Sheputis implicitly argue is that you should turn to them to build and manage your data centres, and that’s a valid argument, so far as it goes. But in increasingly fast-paced competitive environments, the future, as 451 Research analyst Donnie Berkholz posits, looks increasingly like “moving away from the need to worry about servers at all.”

Stop making a fetish of servers

If you’re building a data centre, you’re worried about servers. If you’re creating a so-called private cloud, someone within the enterprise is still making a fetish of servers, even if they labour to create a magical illusion of elasticity and convenience for internal users.

This is legacy thinking. As Amazon chief technology officer Werner Vogels declared at AWS Summit NYC recently: “Today, everything is programmable. There is no hardware anymore.”

He’s pitching the AWS story, of course, but it’s one that clearly resonates with an increasing number of enterprises. Like US retailer Nordstrom, which told the AWS Summit audience that it has moved away from internal data centres so that AWS can do the “IT heavy lifting” and allow it to focus on customer value.

Indeed, it’s only when companies move to truly public clouds that they can stop worrying about their server pets and instead focus on the applications that actually power their business.

This is what’s missed in the whole OCP debate: whether you should be trying to shave pounds and pence from your data centre operations at all. Because, guess what? You’re never, ever (ever!) going to be able to do so more efficiently than Amazon, Microsoft, or Google. You’re just not.

And you shouldn’t try. Just plug into their clouds and worry about your application, not the durability of OCP certification or fault-tolerant hardware or highly available infrastructure. Someone else is worrying about that, and they’re much better at it than you. ®
