Facebook's open hardware: Does it compute?

Open hardware is not open source

Comment What happens if, as we saw at the launch of Facebook's Open Compute Project on Thursday, the design of servers and data centers is open sourced and completely "demystified"?

If open source software is any guide, hardware infrastructure will get better and cheaper at a faster rate than it might otherwise. And someone is going to try to make money assembling hardware components into "server distros" and "storage distros", and perhaps even sell technical-support services for them, as Red Hat does for the several thousand programs it puts atop the Linux kernel.

But even if the Open Compute Project succeeds in some niches, don't expect open source hardware to take over the world. At least not any time soon in the established Western economies – although in a greenfield installation in a BRIC country, anything is possible.

Proprietary systems built by traditional manufacturers – along with their very sticky applications and databases – have lingered for decades. The general-purpose tower and rack-mounted servers used by most companies today, usually built by one of the big five server makers – HP, Dell, IBM, Oracle, or Fujitsu, in descending order – and usually running Windows or Linux, will linger as well.

Companies have their buying habits, and they have their own business concerns. Being green in their data centers is generally not among their top priorities – managing supply chains and inventories, paying employees, and watching capital expenditures are. For most companies, even in 2011, data center costs are not a primary concern.

This is obviously not true of a hyperscale web company such as Facebook, which is, for all intents and purposes, a data center with a pretty face slapped on it for linking people to each other. At Facebook, the server and its data-center shell are the business, and how well and efficiently that infrastructure runs is ultimately what that business is all about.

Facebook has designed two custom server motherboards that it is installing in its first very own data center, located in Prineville, Oregon. These servers, their racks, their battery backups, and the streamlined power and cooling design of the data center (which is cooled by outside air) are all being open sourced through the Open Compute Project. There will no doubt be many other server types and form factors (and maybe even other instruction sets) that Facebook uses as the company's workloads change throughout what we presume will be its long history.

The whole point of the Open Compute designs put out by Facebook on Thursday is that they are minimalist and tuned specifically for the company's own workloads. Amir Michael, a hardware engineer who used to work for Google and who now leads the server-design team at Facebook, said that the company started with a "vanity free" design for the server chassis. There's no plastic front panel, no lid, no paint, as few screws as possible, and as little metal as possible in the chassis – just enough to keep it rigid enough to hold its components. Here it is:

Facebook's vanity-free Open Compute server chassis

The chassis is designed to be as tool-less as possible, with snaps and spring-loaded catches holding components to the chassis, and the chassis into the rack. Nothing extraneous. Nothing extravagant. The chassis is actually 2.6 inches tall – that's 1.5U in rack form-factor speak – which means the servers get more airflow than a standard 1U pizza-box machine, and that Facebook can fit four 60mm fans. The larger the fan, the more air it can move and the more efficiently it can move it – and usually more quietly, too.
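
For those keeping score at home: a standard rack unit (1U) is 1.75 inches, so the height arithmetic works out as

    1.5U x 1.75in = 2.625in, or roughly 2.6in

And since, to a first approximation, a fan's swept area scales with the square of its diameter, a 60mm fan covers about (60/40)² = 2.25 times the area of the 40mm fans typically crammed into a 1U box – a rough sense of where the efficiency and noise advantage comes from.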

The taller box also allows Facebook to use taller heat sinks, which are more efficient at cooling processors. There's room for six 3.5-inch disk drives, mounted in the back – contrary to conventional server wisdom, since you generally don't want to blow hot air over your disks. But if you have a clustered system with failover, and your workload can heal around the failures, then you don't really care if a disk runs a little warm.
