Social media giant Facebook had built precisely one data center in its short life – the one in Prineville, Oregon – before it had had enough of an industry standard that dates back to the railroad and then the telephone infrastructure build-outs and bubbles: the 19-inch rack for mounting electronic equipment.
Facebook chief hardware techie Frank Frankovsky
At the Open Compute Summit in San Antonio, Texas, Frank Frankovsky, director of hardware design and supply chain at Facebook and a co-founder of the Open Compute Project, said it was time to get rid of the old 19-inch rack and give it a skosh more room – two inches to be precise. A bigger rack would do a better job of packing compute, storage, and networking gear in data centers and provide better airflow over components.
By sticking with the current 19-inch racks and their limitations, "we all end up with racks gone bad," said Frankovsky. He explained that this dimension for racking and stacking gear was first used for relay switches in the railroad industry in the prior century and was subsequently adopted when telephone switching went from human-based to electronic.
Given the tech industry's experience with this sort of rack – and the fact that equipment rooms, tools, and techniques for bending metal to make electronics were all standardized on 19-inch racks – it made sense for the computer industry to adopt the same form factor when machines started getting racked up in volume in the late 1980s.
Old school electronics racks – look familiar?
The Open Compute Project (OCP) started up a year ago to open-source the specs for the servers used in Facebook's Prineville, Oregon data center as well as the specs for the data center itself. The Project argues that we need to rethink everything from the chiller to the chip when it comes to data center manufacturing and design.
The rollout today of the Open Rack standard proposed by the OCP is just one of what will no doubt be many standards rethought over the next several years by companies building hyperscale data centers.
The problem with 19-inch racks is that people try to cram too much gear into them, and they often end up poking out the back of the rack, which messes up the hot aisles in data centers. Maybe you can get a parts cart down the aisle now, and maybe you can't. Racks are also packed with a bazillion network cables and heavy power cables, which makes maintenance of the machines in a rack difficult. When you have 10,000 or 100,000 servers, anything that makes deployment and maintenance easier will reduce costs.
Facebook and Open Compute's Open Rack, front view
With the new Open Rack, Facebook and its hardware friends have taken the promising ideas behind blade servers – which the server makers of the world squandered for their own revenue and profit gain – and given them a more modern twist.
Blades were supposed to be about sharing power and peripherals across multiple compute nodes for the sake of density, but as all of the blade vendors would no doubt agree today, this segment of the market didn't pan out as well as one might have expected. If blades did what the marketing materials said they did, Facebook would run on commercial blade servers and Frankovsky would still be working at Dell.
"Blades were a great promise," he said. "Companies needed help, and once blades came out, they said, 'Please don't help us anymore.'" By going back to the drawing board and coming up with the Open Rack design, Frankovsky says that the engineers at the social media company have come up with a scheme that is "blades done right."
No doubt Egenera, which was founded on the idea that the rack was the single unit of consumption for a blade and that there should be shared power and cooling, would agree. Ditto Silicon Graphics with its Rackable Systems machines.
The difference is that Facebook proposes its designs as a standard through the OCP, and it will also presumably be using the racks and the servers that fit hand-in-glove inside of them in its shiny new Forest City, North Carolina data center, and another one it is building in Sweden.
The OCP wants vendors to adopt the Open Rack standard, but they may never do so except for the hyperscale data center customers that might pick up the Facebook designs and use them in their own data centers (or adapt the ideas with tweaks).
It took a long time to get servers into racks in the first place, but given the benefits that the OCP and Facebook espouse from the Open Rack design, adoption may be quicker than some might think. IBM has used 24-inch racks for its mainframes and high-end Power Systems machines for more than a decade, and for exactly the same thermal and capacity reasons that Facebook has created the 21-inch rack, which has the same 24-inch outside dimension as a standard 19-inch rack in a single rack configuration.
"Let's face it, guys. There's only so many different ways to bend metal," said Frankovsky, referring to the ways that vendors try to tweak their rack designs to make them a little bit different and how they did not standardize (as they could have) on blade server and chassis form factors a decade ago. "By completely standardizing the mechanicals and electricals, this is going to help us stay away from racks gone bad."
Open Compute Open Rack triple, front view
By going with a 21-inch wide rack design, the Open Rack can put five 3.5-inch drives side by side inside a single server tray. Moreover, it can also put three skinny two-socket server nodes across on a tray and still have plenty of room for memory slots. Considering that the 3.5-inch disk is still the cheapest and densest storage device, and that packing more servers into the same area is what hyperscale computing is all about, the move to 21 inches is a no-brainer for Facebook.
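The drive arithmetic is easy to sanity-check. A quick sketch, using approximate dimensions that are not in the article itself: a "3.5-inch" drive is actually about 4.0 inches (101.6 mm) wide, and the usable opening of a standard 19-inch rack is roughly 17.75 inches.

```python
# Back-of-the-envelope check on why the extra two inches matter.
# Assumed (approximate) dimensions, not from the article:
#   - a "3.5-inch" drive is about 4.0 in (101.6 mm) wide
#   - a 19-inch rack's usable opening is roughly 17.75 in
#   - the Open Rack equipment bay is about 21 in wide

DRIVE_WIDTH_IN = 4.0  # physical width of a 3.5-inch drive

def drives_across(bay_width_in, drive_width_in=DRIVE_WIDTH_IN):
    """How many drives fit side by side in a bay of the given width."""
    return int(bay_width_in // drive_width_in)

print(drives_across(17.75))  # 19-inch rack opening -> 4
print(drives_across(21.0))   # Open Rack bay        -> 5
```

Under those assumptions, a 19-inch rack tops out at four drives abreast, while the wider bay gets the fifth drive in – the density gain Frankovsky is pointing at.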
The Open Rack also has power trays that are separate from the servers, which allows servers to be even denser and gives more flexibility in what you can put in the rack in terms of servers, storage, and networking. The idea is that when a new processor from Intel or AMD comes out, you replace as few of the components as possible to get the CPU upgrade and leave everything else in place.
The Open Rack comes in three sizes at the moment: a triple rack, a single rack, and a half rack. Facebook likes triple racks because it saves that tiny bit more space and has used such designs in the Prineville data center.
Chinese web powerhouses Tencent, Baidu, and Alibaba were already working on their own custom rack design, called "Project Scorpio," which has some features similar to the Open Rack design, and Frankovsky said in a blog post that the two camps were working out how to converge their respective racks into a single standard by 2013.
Hewlett-Packard and Dell both have "clean sheet" storage and server designs that will slide into the Open Racks, respectively code-named "Coyote" and "Zeus." El Reg will tell you all about these when we get a little more detail. ®