Go to the Computer History Museum in Mountain View, California, and you’ll see a strange contraption cobbled together from commodity motherboards purchased from electronics stores. It’s one of Google’s first production servers, built in 1999 when it didn’t have money to waste on dead-end projects like Wave, Nexus Q and Buzz.
Google's founders were fed up with paying through the nose for heavily marked-up branded hardware full of features that they didn’t need, so they decided to build their own.
That was smart, and Google has since scaled its operation. It builds lots of its own boxes, buying around 300,000 chips per quarter. It is even making its own chips for specialised AI applications.
Not everyone gets to make their own servers, let alone their own chips, but for hyperscale vendors, going around the brand vendors to avoid their high markups has become an increasingly common tactic. They have been buying equipment directly from the original design manufacturers (ODMs) that supply the likes of Hewlett-Packard Enterprise, Dell, and Cisco.
Muscle for contract manufacturing
Hyperscale service providers typically have the muscle to sign manufacturing contracts, procuring boxes built to their own specifications. That requires significant volume. Enterprise customers typically don’t have that kind of power.
Historically, the alternative was to buy generic white box servers produced by the ODMs in bulk. Buying those can create logistical problems for corporate customers.
Continuity is an oft-overlooked issue. “With white boxes, the biggest danger is that you buy the one that’s best value at the time,” according to 451 founder and distinguished analyst John Abbot. “Then six months later you need to buy some more, and that box isn’t available any more. You have another that’s slightly different.”
That lack of continuity forces generic white-box customers to deal with different drivers, hypervisors and configurations, which gums things up for the operations team. It would be nice to buy equipment designed for tightly-controlled datacentre specifications, but ODMs want customers to buy thousands of nodes to make custom builds worthwhile.
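The pain of a drifting white-box fleet can be made concrete with a simple configuration fingerprint. The sketch below is purely illustrative: the inventory records, field names and node names are all invented, standing in for whatever a real config-management tool would report. It hashes each node's hardware/software profile and flags the "slightly different" boxes bought six months later.

```python
import hashlib
import json

# Hypothetical inventory records, as a config-management tool might report them.
# Field names and values are invented for illustration.
FLEET = [
    {"node": "web-01", "nic_driver": "ixgbe 5.1", "bios": "2.4", "hypervisor": "kvm 4.2"},
    {"node": "web-02", "nic_driver": "ixgbe 5.1", "bios": "2.4", "hypervisor": "kvm 4.2"},
    # Bought six months later: the same role, but a slightly different box.
    {"node": "web-03", "nic_driver": "bnxt_en 1.9", "bios": "3.1", "hypervisor": "kvm 4.2"},
]

def fingerprint(inventory: dict) -> str:
    """Hash the hardware/software profile, ignoring the node's identity."""
    profile = {k: v for k, v in inventory.items() if k != "node"}
    return hashlib.sha256(json.dumps(profile, sort_keys=True).encode()).hexdigest()[:12]

def drifted_nodes(fleet: list[dict], reference: dict) -> list[str]:
    """Return the nodes whose profile differs from the reference build."""
    ref = fingerprint(reference)
    return [n["node"] for n in fleet if fingerprint(n) != ref]

print(drifted_nodes(FLEET, FLEET[0]))  # the odd box out: ['web-03']
```

With a standardised spec, the drift list stays empty; with ad-hoc white-box purchases, the operations team ends up chasing a different driver stack per batch.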
What’s needed is standardisation of commodity kit for specific workloads. The Open Compute Project (OCP), launched by Facebook in 2011 to publish open-source specs for pared-down compute, storage and network nodes, has been instrumental here.
Efforts like the OCP have driven investment in some basic building blocks. That makes it possible for large datacentres to source boxes built to OCP specs without designing their own kit.
These boxes are often tailored for specific tasks such as compute-intensive analytics or virtualised workloads. The model cuts costs by whittling away brand markup and unwanted crud like enclosures, but it also shaves away the cost of unnecessary system-level components. The result: performance-optimised tin at big savings over brand-name purchases.
“Quick comparison by product detail and socket capability shows a 50-70 per cent reduction in average selling prices,” says IDC research analyst Eckhardt Fischer of ODM boxes.
Standardising ODM boxes for the enterprise

OCP firms like Stack Velocity are also stepping up to service large deployment customers. Companies like Stack offer contract manufacturing services that vary in complexity, from co-designing systems through to supplying equipment based on their own OCP reference designs.
William Carter, chief technology officer for the Open Compute Project Foundation, believes that the ODM model has the potential to push its way down into the enterprise.
“I don’t know that we have an absolute volume figure where it makes sense to use a ‘white box’ or ODM product,” he said. “Businesses that can leverage exactly the same products and configurations that are already in high volume, and able to provide some of the support that is bundled in an OEM server, will find the total cost of ownership favorable.”
We are seeing some big names get on board. Goldman Sachs, a key contributor to the OCP, bought around 70-80 per cent of its servers last year based on the project’s specifications. That represents around 4,300 nodes, according to reports. Bank of America is also gambling on this model with its own IT. It’s OK for massive, information-driven businesses to play with this stuff, but Joe Bloggs Widgets, with its 500-person headcount? Not so much.
Companies must still be large enough to have an expert team that can configure the software and manage these servers, because no frills really means no frills. Corporate customers that meet this requirement must also be sophisticated and organised enough to manage all of their support and configuration in-house. “Where we see the majority of the shipments going, support is not offered as this is taken care of in-house,” said IDC’s Fischer.
Supporting your own ODM boxes in-house as an enterprise customer could be down to simply treating your servers like cattle rather than pets, and ripping out ones that die. In some cases you might not bother with hardware support at all when you’re dealing with commodity servers and standard parts, said Abbot. A rip-and-replace approach can suffice there.
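Abbot's rip-and-replace model can be sketched in a few lines. This is a minimal, hypothetical illustration of the "cattle, not pets" idea (all node and pool names are invented): no failed box is ever repaired in place, it is simply dropped from the fleet and an identically-specced spare is pulled in.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    healthy: bool

def reconcile(fleet: list[Node], spares: list[str]) -> tuple[list[Node], list[str]]:
    """Cattle, not pets: rip out failed nodes and top up from the spare pool.

    Returns the new fleet and the names of the decommissioned nodes. There is
    no per-node repair path at all - that is the whole point of the model.
    """
    decommissioned = [n.name for n in fleet if not n.healthy]
    survivors = [n for n in fleet if n.healthy]
    # Restore the fleet to its original size from commodity spares.
    while len(survivors) < len(fleet) and spares:
        survivors.append(Node(spares.pop(0), healthy=True))
    return survivors, decommissioned

fleet = [Node("n1", True), Node("n2", False), Node("n3", True)]
new_fleet, dead = reconcile(fleet, spares=["spare-1", "spare-2"])
print(dead)                        # ['n2']
print([n.name for n in new_fleet]) # ['n1', 'n3', 'spare-1']
```

The trade-off is exactly the one the article describes: this only works when the spares really are interchangeable, which is what standardised ODM or OCP specs buy you.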
It isn’t enough to be big enough
One problem is that the credentials to play the ODM game can also be self-defeating. Companies must be large enough to buy in sufficient volume, but the larger the enterprise is, the more likely the IT and datacentre operations are to have different owners within the organization. That can make a concerted hardware procurement and support effort more challenging.
There are other procurement challenges, too. “ODM product is built once an order is received, which may result in about a four-month lead time,” Carter said. “The product undergoes a very specific test and any changes to the configuration or the software stack can introduce issues.”
Branded vendors are doing their best to meet large customers’ requirements for cheaper, tightly-specced kit. HPE, for example, not only builds servers for hyperscale firms but also does custom designs for server customers.
Among its offerings is Cloudline, the firm’s line of OCP-compliant boxes, which HPE claims costs five to 20 per cent less than comparable ProLiant configurations. HPE is also on the OCP solution provider roster.
John Gromala, senior director for hyperscale product management, said that like white boxes, these systems have a more open firmware and management design, adding that customers buying them focus more on software than hardware. “The software that they’re running is more important to how they’re managing the resiliency of their datacentres than managing resiliency at the hardware level,” he said.
Focusing the system functionality on software while honing the specs and the cost of the hardware is a key part of the commodity ODM story. Its increased adoption will depend heavily on the industry’s ability to make software and hardware entirely independent of each other.
That’s still a tough problem, thanks to the configuration of components such as storage and network controllers.
OCP is helping there as it tries to standardise the building blocks, but the industry has a long way to go yet. Enterprise customers with sophisticated on-premise cloud capabilities may be able to play, but for most firms, this will be way out of reach, at least for a while to come. ®