One day all this could be yours: Be Facebook, without being Facebook
The pros and cons of Open Compute
What's the appeal?
Open Compute offers some appealing prospects, should it come off:
Vendor agnostic management – One of the core tenets of Open Compute is that simplicity helps drive out cost. Keeping toolsets to a minimum means less time spent training staff, managing software and keeping it updated.
Vendor agnostic hardware – This is probably what most people picture when they think about Open Compute, and some of the big savings are actually quite dull, but important. Take, for example, legacy server racks. When they were designed, networking infrastructure as we know it didn’t exist. The classic rack design is very inefficient, and each vendor puts a proprietary spin on it. Done more efficiently, an Open Compute style rack not only fits more servers, it also sheds extraneous clutter such as cable arms and doors, which impede efficient airflow and, as a data centre grows, make it look increasingly messy.
Every component has a use and failure is going to happen – Some companies, such as Google, have taken this to extremes, designing their own custom motherboards pre-populated with the correct hardware, so rather than repairing hardware in situ, the whole unit is pulled out and replaced. Such an approach also helps reduce hardware cost, because there is no on-board redundancy, and failure should have little or no effect on the services being provided.
Quick, dynamic management – Large companies traditionally suffer from very long-winded approvals processes, sometimes meaning servers can take several weeks to be put into service. Proper pod-and-block design allows extra resources to be spun up exceptionally quickly. Pod-and-block design in itself is an art form.
As with all good things, there are some downsides to Open Compute that need to be understood. Small businesses will find their returns very limited: Open Compute is designed for data-centric companies that need to scale, and with fewer than forty servers there will be minimal savings.
If you are interested in embracing Open Compute equipment, there are some things you should factor in.
The first is to prepare the basics. Power and cooling are key. Any company planning to go super-dense needs to ensure it has the power and cooling available and ready to go. Can you safely provide enough power per rack, for every rack that will carry super-dense compute? Can your current cooling configuration handle the increased density? These items are key. Without them, failures are a matter of when, not if.
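That per-rack check can be sketched in a few lines. The figures below (node wattage, rack feed, cooling headroom) are purely illustrative assumptions, not vendor specifications:

```python
# Hypothetical sanity check: can each rack's power feed and cooling
# headroom support a planned density of nodes? All numbers are
# illustrative assumptions.

def rack_fits(nodes_per_rack, watts_per_node,
              rack_power_budget_w, cooling_capacity_w):
    """Return True if the planned load fits both power and cooling budgets."""
    load_w = nodes_per_rack * watts_per_node
    # Heat rejected roughly equals power drawn, so cooling must match the load.
    return load_w <= rack_power_budget_w and load_w <= cooling_capacity_w

# Example: 40 nodes at 350 W each against a 15 kW rack feed
# and 14 kW of cooling headroom: a 14 kW load, so it fits.
print(rack_fits(40, 350, 15_000, 14_000))
```

Running the same check with ten more nodes per rack tips the load over the 15 kW feed, which is exactly the kind of surprise better found on paper than in production.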
Next, understand the basics of the Open Compute model. It is designed around disposable commodity hardware. That is all well and good, but you need to configure your software so that none of your “disposable” machines share the same hardware, or even the same rack if possible. This helps ensure that a single failure doesn’t take down your public-facing infrastructure.
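The placement rule above is what schedulers usually call anti-affinity. A minimal sketch, with hypothetical host and rack names, might look like this: spread replicas one per rack first, and only double up on a rack (never on a host) when racks run out.

```python
# Minimal rack-level anti-affinity sketch. Host and rack names are
# hypothetical; real schedulers (Kubernetes, Mesos, etc.) offer this
# as a built-in policy.

def place_replicas(replicas, hosts):
    """hosts: list of (host_name, rack_name) tuples.
    Returns chosen host names, preferring one replica per rack so a
    single rack failure costs at most one replica."""
    placement, used_hosts, used_racks = [], set(), set()
    # First pass: one replica per rack.
    for host, rack in hosts:
        if len(placement) == replicas:
            break
        if host not in used_hosts and rack not in used_racks:
            placement.append(host)
            used_hosts.add(host)
            used_racks.add(rack)
    # Fallback: if we run out of fresh racks, still never reuse a host.
    for host, rack in hosts:
        if len(placement) == replicas:
            break
        if host not in used_hosts:
            placement.append(host)
            used_hosts.add(host)
    return placement

hosts = [("node1", "rackA"), ("node2", "rackA"),
         ("node3", "rackB"), ("node4", "rackC")]
print(place_replicas(3, hosts))  # one host from each of rackA, rackB, rackC
```

With three replicas across three racks, losing any one rack, or any one "disposable" node, leaves two replicas serving traffic.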