Facebook reveals next-gen Open Compute wares

Double your servers, double your fun

Facebook's Open Compute Project, founded to open source the social media mogul's server and data center designs, has hosted its first meeting, previewing its next-generation server and storage iron.

While a lot of companies give mere lip service to compute density and performance per watt, hyperscale web companies such as Facebook will profit or not based on how well their infrastructure scales, how little it costs to acquire and operate, and how densely it can be crammed into data centers.

Back in April, Facebook launched the Open Compute Project with a vanity-free 1.5U rack-mounted chassis that sports custom two-socket motherboards based on Intel Xeon 5600 and Advanced Micro Devices Opteron 6100 processors, fed by a 450 watt power supply. Every feature not required by Facebook's applications has been ripped out of the custom motherboards, supplied by Taiwanese mobo and server maker Quanta Computer.

With many more cores expected from the future "Sandy Bridge" Xeon E5s and the "Valencia" and "Interlagos" Opteron 4200s and 6200s, Facebook's hardware design manager Amir Michael tells The Register that the company first took a stab at using fatter four-socket boxes to run its mix of applications. The idea, says Michael, was simple: with a four-socket server, you can in theory use one fast network pipe and one very efficient power supply, and eliminate some of the components you would need in a pair of two-socket servers.

But, says Michael, for Facebook's workloads the SMP/NUMA architecture of a four-socket x64 box creates as many problems as it solves. In prototype tests run by Facebook, four-socket machines with lots of cores didn't scale as well as hoped – much like many other workloads out there in the real world. "You also need to take much tighter control of the memory inside of the machine," Michael explains, saying that if you are not careful, work dispatched to a node ends up making multiple hops inside the machine before it gets processed. "Performance degrades even more at that point."
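For readers who want to picture what "tighter control of the memory" means in practice, here is a minimal sketch – not Facebook's code, just an illustration against the stock Linux libnuma API, with an arbitrary node number and buffer size – of keeping a thread and its scratch memory on the same NUMA node so accesses don't have to hop across sockets:

/* Minimal sketch (not Facebook code): keep work and memory on one NUMA node.
 * Build with: gcc numa_local.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "No NUMA support on this machine\n");
        return 1;
    }

    int node = 0;                      /* arbitrary choice: node 0 */
    numa_run_on_node(node);            /* restrict this thread to node 0's CPUs */

    size_t len = 64 * 1024 * 1024;     /* 64MB scratch buffer */
    char *buf = numa_alloc_onnode(len, node);  /* memory physically on node 0 */
    if (buf == NULL)
        return 1;

    memset(buf, 0, len);               /* accesses stay local: no cross-socket hops */

    numa_free(buf, len);
    return 0;
}

On a two-socket box there is only one hop to get wrong; on a four-socket machine, sloppy placement can send a request bouncing across several sockets, which is the degradation Michael is describing.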

And so with the next generation of Open Compute platforms, Facebook is doing what many hyperscale companies already do: putting two half-width, two-socket servers inside the chassis to double up the compute density. This arrangement is sometimes called a twin server, and you can see a mock-up of the future Open Compute server machine in a blog post (on Facebook, of course).

Future Facebook double-stuffed server

Given that Intel and AMD have not yet announced their Xeon E5 and Opteron 4200/6200 processors, Michael is not at liberty to provide the detailed feeds and speeds on these future half-width servers. Details of the machines will be published on the Open Compute site once the chips and chipsets are announced, but Michael could talk about the chassis and server in general.

The existing Open Compute chassis has disk-drive carriers in the back, and if these are pulled, a 6.5-inch by 20-inch half-width, two-socket motherboard slides right in, butting up against the fans.

Michael says that if there were a standard size for half-width boards, Facebook would have used it (as it does for the existing two-socket machines), but half-width boards range from 6.5 to 6.6 inches in width and anywhere from 18 to 20 inches in length.

The server design keeps the disk drives in the front of the chassis, as before: two mounted on each server mobo, stacked atop each other. From the mock-up, it appears that a few more disks are packed into the right-hand side of the machine.

The processors are behind the disks, and because of the tight packing of the components, airflow will be warmed as it passes over the disks, to the first processor, and then to the second processor. The earlier Open Compute designs tried to avoid "shadowing" – server-speak for having one component heating up the cooling air for another component – but in a twin design, this is very tough to avoid.

Given this, Facebook knew that it would need to crank up the fans a bit to keep it all cool, but fan power was only increased from about 2 per cent of overall server power draw to around 3 per cent, according to Michael.

The server node is based on 1.35 volt memory, not the standard 1.5 volt sticks used in servers last year, and Michael doesn't think it will be long before 1.25 volt memory becomes more common, helping to shave power consumption inside the box – a little.

Having two whole servers in the 1.5U chassis also means needing more juice, but even here Facebook is pushing up efficiency, moving from a 450 watt power supply for each server to a single 700 watt unit shared by the two nodes.
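To put some quick numbers on that consolidation (a back-of-the-envelope restatement of the figures above, assuming the 700 watt unit does feed both nodes):

\[
\frac{2 \times 450\,\mathrm{W} \;-\; 700\,\mathrm{W}}{2 \times 450\,\mathrm{W}} \;\approx\; 22\%
\]

That is, roughly 22 per cent less power-supply capacity is provisioned per pair of servers than with one 450 watt unit apiece, and a single larger supply running at a higher load tends to sit closer to its efficiency sweet spot.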
