Rackspace: Why we're designing our own cloud servers

Just what will it take to compete with Amazon and Google


Exclusive Any cloud computing provider that wants to operate at scale and compete against its peers is under pressure to build some kind of custom hardware. It may, in fact, be necessary to compete at all.

That is what Rackspace, which is making the transition from website hosting to cloud systems, believes. And that's why the San Antonio, Texas-based company started up OpenStack - the open-source cloud controller software project - with NASA nearly three years ago, and accepted an invitation from Facebook to join the Open Compute Project, an effort by the social network to design open-source servers and storage and the data centres in which they run.

Rackspace, which was founded in 1998, grew up just as Linux and rack-mounted, off-the-shelf servers were starting to make their way into data centres in big numbers, but the company itself had not yet grown into a polished commercial operation. Its early machines reflected that.

"What most companies did was colocation," said chief technology officer John Engates, referring to the practise of renting data-centre space, and paying for power and internet connectivity, in order to get a server onto the web. Engates was a founder and manager of Internet Direct, one of the original internet service providers in Texas back when the 'net was being commercialised in the mid-1990s.

"We took the model of putting servers up on racks very quickly and turning them on in 24 hours and we called it managed hosting. At the time, all of our founders at Rackspace were Linux geeks and they were all do-it-yourselfers, and they were literally building white-box servers. They were buying motherboards, processors, and everything piecemeal, and we assembled these tower-chassis form-factors on metal bread racks and it was really not very sexy."

Rackspace CTO John Engates

The description sounds precisely like early Beowulf clusters based on cheap PCs or tower servers, halls of machines powering the first dot-com boom, or indeed the early generations of hardware at search engine giant Google. After a few years, Rackspace decided to chase enterprise customers to do their managed hosting, and that meant shifting to higher-end gear.

"We mimicked what the enterprise would do in their data centre to go win business from those enterprises," said Engates. "Enterprises didn't want to think they were being put on a white-box, homemade server. They wanted a real server with redundant power supplies and all that fancy stuff."

Rack servers evolved and matured, giving much better density than a bunch of tower machines stacked on bread shelves, and Rackspace started buying Dell PowerEdge 2650s for the first generation of enterprise-grade kit and then 2850s for the second generation. Today, in its managed hosting business, the split is about 60 per cent Dell iron and about 40 per cent Hewlett-Packard iron, and all of it is, of course, x86 machinery.

Now fast forward to a couple of years ago, and cloud computing gets under way. Instead of dedicating a server to a customer, each machine has a hypervisor thrown onto it that slices up its processing power and memory capacity, and clients are sold access to a pool of these CPU and RAM chunks to run their Windows or Linux workloads on demand.
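To make that slicing concrete, here is a minimal sketch of the arithmetic, with made-up host and flavour sizes rather than anything Rackspace actually runs:

```python
# Illustrative only: a toy model of carving a hypervisor host into
# sellable CPU/RAM chunks. Sizes and names are assumptions, not
# Rackspace's real scheduler or flavours.
from dataclasses import dataclass


@dataclass
class Host:
    vcpus_free: int
    ram_free_mb: int


@dataclass
class Flavor:
    name: str
    vcpus: int
    ram_mb: int


def place(host: Host, flavor: Flavor) -> bool:
    """Deduct one guest-sized slice from the host if it still fits."""
    if host.vcpus_free >= flavor.vcpus and host.ram_free_mb >= flavor.ram_mb:
        host.vcpus_free -= flavor.vcpus
        host.ram_free_mb -= flavor.ram_mb
        return True
    return False


# A hypothetical two-socket box: 32 hardware threads and 128GB of RAM,
# sold off in 2-vCPU/4GB slices.
host = Host(vcpus_free=32, ram_free_mb=128 * 1024)
small = Flavor(name="2vcpu-4gb", vcpus=2, ram_mb=4096)

guests = 0
while place(host, small):
    guests += 1

# CPU runs out first here: 16 guests fit, with half the RAM left over,
# which is why real clouds juggle several flavour sizes per host.
print(guests, host.vcpus_free, host.ram_free_mb)
```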

"Now," said Engates, "we are basically back to our own designs because it really doesn't make a lot of sense to put cloud customers on enterprise gear. Clouds are different animals – they are architected and built differently, customers have different expectations, and the competition is doing different things."

At first, when building its public cloud computing service, Rackspace focused on getting custom gear from Dell and HP that better fit its needs. The web biz had the two vendors configure the gear and cable it up in racks, making it easier to buy server and storage capacity and roll it right into the data centre, where it could be given power and network and start doing useful work straight away.

And then Frank Frankovsky, vice-president of hardware design and supply chain at Facebook, invited Rackspace to join the Open Compute Project (OCP)'s open-source computer design efforts a little more than three years ago – by sending Engates a message through Facebook, of course. And from that moment, Rackspace has been moving more and more towards self-sufficiency for server and rack design.

Monitor ports, DVD drives, pretty LCD panels, all in the bin

What is good for Facebook is not perfect for Rackspace, as the latter explained at the Open Compute Summit back in January, but the basic rack and server designs can be tweaked to fit the needs of a managed hosting and public cloud provider.

The first OCP machines for servers and storage roll out in the Rackspace data centres in April; Wiwynn and Quanta are building servers and Quanta will build a just-a-bunch-of-disks (JBOD) array that better suits the needs of Rackspace than the giant winged beast that Facebook invented for itself and opened up.

"Everything that is in our multi-tenant business is some non-standard server or storage architecture," said Engates, and that can mean something cooked up by a specialist hardware manufacturer or the custom server business units of Hewlett-Packard or Dell. Most of the dedicated hosting is done on plain vanilla, enterprise-class servers, still.

"But that may change over time because we count private cloud in that category and we do have plans over time to offer Open Compute-powered private clouds. So even in the dedicated business, it is likely to be non-branded gear over time."

The vanity-free design appeals to Rackspace for the same reasons it appealed to Facebook, and is, indeed, why Google started making its own servers many years ago. If you are never going to plug a monitor into a machine, why bother with a console port? You don't need CD-ROM or DVD drives either, and forget that front LCD panel. All of these things block airflow, add cost, and are a potential point of failure (either hardware or software) in the server, so they should be eliminated.

"The goal is to use OCP designs in more locations and to have a lower number of SKUs and fewer parts to stock, and therefore as we increase the number of servers that we buy we can lower the cost," said Engates. "We also improve our ability to maintain them by having fewer machines to train people on; as people understand the machines and get familiar with them, it is easier.

"You homogenise the data centre as much as you can because homogeneity in the data centre is a good thing, you want fewer moving parts in your data centre design and operations, and this is one of the means of getting there. And one of the beautiful things about Open Compute is that we remove things from the servers that we don't need."
