
Inside Facebook's engineering labs: Hardware heaven, HP hell – PICTURES

Better duck, Amazon... Hardware drone incoming

10 years of Facebook

Facebook's hardware development lab is either a paradise, a business opportunity, or a hell, depending on your viewpoint.

If you're a hardware nerd who loves fiddling with data centre gear, ripping out extraneous fluff, and generally cutting the cost of your infrastructure, then the lab is a wonderful place where your dreams are manufactured.

If you're an executive from HP, Dell, Lenovo, Cisco, Brocade, Juniper, EMC or NetApp, the lab is likely to instil a sense of cold, clammy fear, for in this lab a small team of diligent Facebook employees are working to make servers and storage arrays – and now networking switches – which undercut your own products.

Meanwhile, if you represent Asian component manufacturers like Quanta, Foxconn and Wiwynn, you're likely to relish a trip to the lab, as it is here that Facebook designs its "Open Compute" gear, the designs of which it eventually makes available to the wider community. When these designs are published, the Asian companies are usually the ones selling the resulting hardware – eating into the profits of HP, Dell, Lenovo, and so on.

Obviously, El Reg had to take a tour, and so earlier this month we took the Caltrain down to the social network's headquarters to ask Facebook some questions, the first of which was: what business does a social network have designing and building its own data centre hardware?

Lots, it turns out.

Though Facebook may seem like a trivial app in itself, the scale at which it operates – over a billion users, tens upon tens of petabytes of storage, three data centres around the world (and a fourth under construction), each containing (El Reg estimates) hundreds of thousands of servers – means it has had to rethink how it buys and consumes hardware to keep costs down.

The main insight the social network has had is that its workloads fall into about five types, so it needs only five server variants across its mammoth fleet.

These SKUs tend to be limited by a single bottleneck, with RAM or flash capacity the limiting factor for database servers, HDD capacity for photo servers, CPU speed for Hadoop gear, and so on.
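To make that concrete, here is a minimal sketch (our own illustration, not Facebook's code) of what a handful of server SKUs, each defined by its primary bottleneck, might look like; the variant names, workload labels and component assignments below are assumptions for illustration only.

```python
# A minimal sketch of the "few SKUs, one bottleneck each" idea described above.
# The SKU names, workload labels and component assignments are illustrative
# assumptions, not Facebook's actual specifications.

from dataclasses import dataclass

@dataclass(frozen=True)
class ServerSKU:
    name: str
    primary_component: str  # the single bottleneck this SKU is sized around
    notes: str

# Hypothetical fleet: each variant is defined by the component that limits it.
FLEET = {
    "web":      ServerSKU("Type I",   "CPU",       "stateless front-end requests"),
    "database": ServerSKU("Type II",  "RAM/flash", "working set must fit in memory or flash"),
    "hadoop":   ServerSKU("Type III", "CPU",       "batch analytics, throughput-bound"),
    "photos":   ServerSKU("Type IV",  "HDD",       "bulk photo storage, capacity-bound"),
    "cache":    ServerSKU("Type V",   "RAM",       "key/value serving, memory-bound"),
}

def sku_for(workload: str) -> ServerSKU:
    """Map a workload to the server variant whose bottleneck matches it."""
    return FLEET[workload]

if __name__ == "__main__":
    for workload, sku in FLEET.items():
        print(f"{workload:>8} -> {sku.name}: limited by {sku.primary_component}")
```

Running it simply prints which component limits each hypothetical variant; the point is that capacity planning reduces to asking which single component a given workload will exhaust first.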

"The primary driver for evolution in our hardware SKUs is the primary component," explains Facebook's director of infrastructure Jason Taylor.


One of Facebook's distinctive Open Compute sled servers

For this reason, Facebook has always had a clear motivation to design servers that can be easily upgraded, without needing to either remove them from the data centre or perform complicated maintenance. This has led to its sled-based server design (pictured), which makes quick maintenance possible while keeping costs down.

"The thing that we think about most across all of our hardware is what is the critical bottleneck we're building for," Taylor explains. "I think that for really the last four years or so we've been really good at being one of the first adopters of a new piece of tech when there's a significant change."

Where dreams are made

One of the main ways Facebook has achieved this speed is with its hardware lab, which allows it to refine existing designs and come up with new chassis to take advantage of different technologies.

By encouraging experimentation, Facebook lets its employees rapidly prototype ideas, allowing them to rethink how they arrange and configure hardware as they come up with use cases specific to the social networking giant.


A server development board being tested

As Facebook develops its servers, it orders in development boards (pictured) from hardware partners to help with testing. Sometimes it will let people interested in adopting Open Compute Project designs test the boards themselves, though from what we understand this isn't an official policy.
