Rackspace to build custom servers, storage for cloud biz

What's good for the Zuck is good for the Rackers


Open Compute 2013 Rackspace Hosting is getting into custom server design, and it is working with manufacturing partners in the Open Compute Project to have its tweaked versions of the servers, storage arrays, and racks that Facebook created to run the social network built by multiple suppliers.

The hosting giant and cloud contender has made no secret of the fact that it has found open source religion when it comes to software. That is why it co-founded the OpenStack cloud controller project with NASA more than two years ago and joined the ranks of hyperscale data center operators that rely on open source as well as homegrown applications to run their businesses.

Rackspace admitted in early 2011 that it was moving towards whitebox servers and away from machines supplied by tier one server makers, but it was not clear that Rackspace would go all the way and not only embrace OCP designs for IT gear, but also set its own hardware engineers to work customizing the custom servers.

But that is precisely what Rackspace is now doing, explained Mark Roenigk, chief operating officer at the company, at last week's Open Compute Summit in Santa Clara.

It was not all that hard to figure out that this was the plan, with Rackspace making a lot of noise about the openness of OpenStack and being one of the founding members of the Open Compute Project, which was started by Facebook a year and a half ago and which is gaining steam while at the same time letting some of the air out of the tires of the bespoke server businesses of Dell and Hewlett-Packard.

The largest hyperscale data center operators have figured out that they can cut out most of the middlemen and go straight to ODMs in China or Taiwan to get custom motherboards and systems built, without having to pay Dell, HP, or anyone else for the privilege of not taking one of their plain vanilla x86 servers.

Google has been making custom servers for years, and Amazon dabbles in them, too, although it also buys a certain amount of Rackable-brand gear from Silicon Graphics. Microsoft uses a lot of custom Dell and HP iron for its search and cloud infrastructure, and Facebook used to be the poster child for Dell's Data Center Solutions custom server business until Zuck's engineers decided to do the work themselves and design data centers and servers that fit like a glove over fingers.

The combination of OpenStack running on Open Compute servers will let Rackspace marketeers claim they are offering the most open cloud on the planet, and if Rackspace opens up the designs of its systems and gives them to Open Compute, then companies could in theory have exactly the same iron in their own private cloud running the same release of OpenStack as Rackspace is hosting in its own data centers.

This could simplify the supporting of hybrid clouds that span private and public computing and storage. This is also precisely what Amazon says it will never sell. The only cloud that Amazon really believes in is its own public cloud, its partnership with Eucalyptus Systems to make its eponymous cloud control freak compatible with EC2 notwithstanding. OCP machinery could be a differentiator for Rackspace on both the home and the external data center fronts.

The Open Rack standard that Facebook and its OCP friends launched last May, and that is being deployed by the social network in its Forest City, North Carolina and Lulea, Sweden data centers, does not fit the needs of Rackspace, explained Roenigk. And so it is tweaking the Open Rack design with the help of an ODM named Delta, which makes power supplies, among other things.

The custom Open Rack from Rackspace

The changes are not huge, but they are significant. The Open Rack used by Facebook takes all of the power supplies out of the servers and puts them on three power shelves. This was not the right mix of shelves and supplies for Rackspace – Roenigk did not explain why – and so it has one power shelf in the middle of the rack and two zones for servers or storage rather than three power shelves and three equipment zones as in the Facebook-designed racks.

To each their own, and from each according to their specs.

Facebook Open Rack versus Rackspace Open Rack

The differences between the two racks, as you can see above, are more than that. The Rackspace version of Open Rack is a bit taller, and it is designed for a slightly higher power density as well – between 14.4 and 16.8 kilowatts per rack, compared to 12 kilowatts for the Facebook rack.
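For a rough sense of what those budgets mean in terms of machines, here is a back-of-the-envelope sketch. Only the kilowatt figures come from the article; the per-node wattage is an assumed number for illustration, not something Rackspace has published.

```python
# Back-of-the-envelope rack power budgeting. Only the kilowatt figures come
# from the article; the per-node wattage is an assumption for illustration.
rackspace_rack_kw = (14.4, 16.8)   # Rackspace's modified Open Rack
facebook_rack_kw = 12.0            # Facebook's Open Rack

assumed_watts_per_node = 350       # hypothetical two-socket Xeon E5 node under load

for budget_kw in rackspace_rack_kw:
    nodes = int(budget_kw * 1000 // assumed_watts_per_node)
    print(f"Rackspace rack at {budget_kw} kW: room for roughly {nodes} such nodes")

nodes = int(facebook_rack_kw * 1000 // assumed_watts_per_node)
print(f"Facebook rack at {facebook_rack_kw} kW: room for roughly {nodes} such nodes")
```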

Facebook has AC or DC power options, but Rackspace does not, because in a lot of cases it is co-locating its data centers in the facilities of others. It has to stick with AC power until DC becomes more common, or until it decides to build its own data centers as Facebook has done.

Rackspace is taking out the batteries that Facebook uses in its Open Rack and putting in a power distribution unit for networking gear as well as a cable management bay that Facebook does not have.

According to a report in Data Center Knowledge, Rackspace will be deploying its modified Open Racks in a new data center in Ashburn, Virginia just down the road from the Equinix facility that hosts the East region of the Amazon Web Services cloud.

These 21-inch-wide racks require some customization on the part of the co-lo facility operator, DuPont Fabros Technology, but with Rackspace spending approximately $200m a year on servers and storage, DuPont Fabros no doubt doesn't mind making exceptions because, frankly, if it doesn't then Equinix can.

On the server front, Rackspace is working with Quanta Computer and Wiwynn, the US arm of ODM Wistron, to co-design and build servers and disk arrays that will slide into these modified Open Racks. The server designs are tweaked versions of Open Compute servers and Open Vault arrays, which again have been modified to meet specific needs that Rackspace has.

The Rackspace three-node server manufactured by Wiwynn

The first new server is a variant of the three-node sled server called "Winterfell" that Facebook is using as its Web server in the Swedish data center. The social network did not provide any details on the Winterfell machines last week, aside from a picture, but the variant that Rackspace is having built by Wiwynn has some feeds and speeds.

It is using Intel Xeon E5 processors, and it has a total of sixteen memory slots for a maximum of 256GB of memory across the sixteen cores in the box. The nodes have a RAID controller with cache memory to link out to external storage plus one 3.5-inch SATA drive that slots into the vanity-free server designs.
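Those figures imply 16GB DIMMs in every slot to reach the ceiling; here is a quick sanity check of that arithmetic, with the DIMM size treated as an inference rather than a published spec.

```python
# Sanity-check the memory ceiling on the Wiwynn-built node: the 16GB DIMM
# size is inferred from the slot count and maximum, not stated by Rackspace.
memory_slots = 16
max_memory_gb = 256
dimm_gb = max_memory_gb // memory_slots
print(f"{memory_slots} slots x {dimm_gb}GB DIMMs = {memory_slots * dimm_gb}GB")
```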

The motherboard snaps in network controllers through mezzanine cards, and in this case there are two 10 Gigabit Ethernet ports in the mezz slot. There are also two more 10GE ports on a PCI-Express slot. This Wiwynn variant of the Winterfell box is in production now.

The second Rackspace server, which is being manufactured by Quanta, is still in development and includes hot swap fans as well as improved cable routing over the Facebook design.

The Rackspace-Quanta four-sled dense server

This Quanta server looks very much like a Twin2 chassis from Super Micro and puts four nodes and two redundant power supplies into the 1.5U chassis. The nodes are also two-socket Xeon E5 machines with sixteen memory slots that max out at 256GB of main memory. The nodes have the same networking and RAID controller options as the Wiwynn machine, but each node has two 2.5-inch SATA disk drives, and the nodes are wider as well as shorter.

Rackspace has also tweaked the Open Vault (PDF) storage array that Facebook put at the heart of its cold storage for the Facebook Photo service. This JBOD disk array is code-named "Knox" inside of Facebook and the design has been opened up at the OCP. Rackspace says the variant it has put together is already being manufactured by Wiwynn and has 30 3.5-inch SATA drives in each system, which are linked to a storage server node in the chassis by four SAS interfaces.

Going hand-in-hand with its Quanta server, Rackspace has also cooked up its own JBOD storage array:

Rackspace's own twist on a JBOD array

This array is still in development, and doesn't have the big hinge that looks like more trouble than it is worth in the Open Vault design. The Rackspace-Quanta JBOD, which does not yet have a code-name, packs 28 3.5-inch drives into a single chassis and has the same four SAS interfaces feeding back to the servers as the Open Vault design. The drive carriers at the front of the chassis are hot swappable, as are the fans that cool the disks.
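For a rough idea of how the two JBOD variants compare on raw capacity, here is a hedged sketch; the drive counts come from the article, but the 4TB drive size is purely an assumption, since Rackspace has not said what drives it is loading into these boxes.

```python
# Raw capacity of the two Rackspace JBOD variants under an assumed drive size.
# The drive counts come from the article; the 4TB capacity is an assumption.
assumed_drive_tb = 4

jbods = {
    "Wiwynn Open Vault variant": 30,       # 3.5-inch SATA drives per chassis
    "Quanta JBOD (no code-name yet)": 28,  # 3.5-inch drives per chassis
}

for name, drives in jbods.items():
    print(f"{name}: {drives} x {assumed_drive_tb}TB = {drives * assumed_drive_tb}TB raw")
```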

The Quanta server and storage boxes will be in production in the second quarter and are in their testing cycles now.

Why is Rackspace doing OpenStack and Open Compute? To save money, plain and simple.

"We don't have a large supply chain organization, nor do we have a large product engineering organization," Roenigk explained in his keynote. "When we put resources into something, we need to make sure we get a lot of value out of it."

Specifically, the combination of OpenStack plus Open Compute iron is projected to deliver around 40 per cent capital expense savings and around 50 per cent operational expense savings, according to Roenigk. This money is important, to be sure, but Roenigk says that "the speed that we have been able to implement is every bit as important."
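To put those percentages against the spending figure quoted earlier, here is a hypothetical worked example; only the roughly $200m annual server and storage spend comes from the article, and the opex baseline is an assumption made purely for illustration.

```python
# Hypothetical illustration of the claimed savings. The ~$200m annual capex
# figure is reported above; the opex baseline is an assumption for illustration.
annual_capex_usd = 200_000_000          # reported yearly server and storage spend
assumed_annual_opex_usd = 100_000_000   # assumed baseline, not a Rackspace figure

capex_saving = annual_capex_usd * 0.40          # ~40 per cent capex saving claimed
opex_saving = assumed_annual_opex_usd * 0.50    # ~50 per cent opex saving claimed

print(f"Capex saved: ${capex_saving / 1e6:.0f}m per year")
print(f"Opex saved (on the assumed baseline): ${opex_saving / 1e6:.0f}m per year")
```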

It now takes anywhere from five to nine months less time to get systems running software onto the data center floor than it did before the move to OpenStack and Open Compute iron. This is a competitive advantage – well, at least until all the other service providers move to open source hardware and software. ®
