DIY with Akamai: What to do when no one sells the servers you need? You build your own

If it looks like a hyperscaler, swims like a hyperscaler...

Akamai Edge World Content delivery network Akamai runs a distributed computing platform consisting of 250,000 servers. At this scale, every bit of I/O and every watt of power consumption matters – so instead of buying hardware off the shelf, the company designs its own.

"It would be convenient if we could just go to a normal manufacturer and look at the price list and just decide to go buy something – but we can't. Ultimately, our workloads, except for some niche ones, are very unique," Noam Freedman, senior veep for networks and chief network architect at Akamai, told El Reg at the company's annual powwow in Vegas.

Freedman has been with the company since 1999 and is responsible for all aspects of its physical infrastructure. He also manages the relationships with ISPs and colocation providers that provide a home for Akamai's custom boxes.

The company has built its business on a network of servers around the world that store copies of websites and files. Those can then be served from a location close to the end user, rather than a faraway data centre. This simultaneously cuts latency for the user and bandwidth cost for the service provider. Using this infrastructure, Akamai then expanded into cybersecurity, especially DDoS protection.

"In the general market, there's hardware that's built for very high compute, and hardware that's built for storage. A workload that we have, a combination of CDN and security, doesn't need very high compute," Freedman explained. "And when other people need storage, they don't usually need high IO. Even when we can use spinning disk, the number of drives and configuration, and controller throughput – the boxes that are available, they don't work well."

The company started building its own hardware because it saw no viable alternatives, and soon discovered the economies of scale – the same way AWS, Google and Facebook did.

"Obviously once we're doing it, we start focusing on power efficiency and economic benefits. You look at the motherboard and you can identify – there's one Watt here, there's five Watts there, that you can begin to eliminate. If you have a thousand servers, it is not worth doing those optimisations. If you have hundreds of thousands of servers, that's a lot of power you are wasting."

After experimenting with servers, Akamai started designing switches – and Freedman said around 90 percent of its switches today are "white-box" devices running homegrown software.

To build the boxes, the company works directly with the same contract manufacturers that supply cloud vendors – always more than one at a time for redundancy. Akamai usually has around two to three SKUs optimised for different tasks but makes changes to its hardware often. Combined with an average lifecycle of five years, this means it has around 15 versions of its boxes floating about at any one time. Managing this variety of hardware is a "constant pain".
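
The "15 versions" figure is straightforward arithmetic once you assume a refresh cadence – the toy calculation below assumes roughly one hardware revision per SKU per year, which the article implies but never states outright.

```python
# Rough count of distinct hardware versions in the field at once:
# concurrent SKUs x revisions per year x years of service life.
# The one-revision-per-SKU-per-year cadence is an assumption for illustration.
skus = 3                # "around two to three SKUs"
revisions_per_year = 1  # assumed cadence
lifecycle_years = 5     # "an average lifecycle of five years"

versions_in_field = skus * revisions_per_year * lifecycle_years
print(versions_in_field)  # 15 - the "around 15 versions" quoted above
```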


Even the lifecycle duration is far from constant. According to Freedman, it might make sense to refresh your hardware every couple of years in Tokyo, where data centre space and power are expensive and a new server would deliver a lot more capacity while using the same resources. Meanwhile, an old server hosted with a small ISP might sit quietly in the corner for many years.
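
The trade-off boils down to cost per unit of capacity: where space and power are pricey, a denser new box quickly beats a paid-off old one. The comparison below is a hypothetical illustration with made-up numbers – the cost_per_capacity helper and its inputs are not Akamai's model.

```python
# Hypothetical refresh decision: is it cheaper, per unit of capacity, to keep
# an old server running or to replace it with a newer, denser box?
# Every number here is an illustrative assumption, not an Akamai figure.

def cost_per_capacity(hosting_per_year: float,
                      amortised_hw_per_year: float,
                      relative_capacity: float) -> float:
    """Yearly cost of the slot divided by the capacity the box delivers."""
    return (hosting_per_year + amortised_hw_per_year) / relative_capacity

# Expensive site (think Tokyo): rack space and power dominate the bill.
old_box = cost_per_capacity(hosting_per_year=6_000,
                            amortised_hw_per_year=0,      # already paid off
                            relative_capacity=1.0)
new_box = cost_per_capacity(hosting_per_year=6_000,
                            amortised_hw_per_year=2_000,  # new purchase, amortised
                            relative_capacity=3.0)        # denser hardware

print(f"old box: {old_box:.0f} per capacity unit, new box: {new_box:.0f}")
# old box: 6000, new box: 2667 - at a pricey site the early refresh wins.
# At a cheap ISP rack the hosting cost is near zero and the paid-off old box
# stays the better deal for years.
```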

Akamai's servers feature some spinning disk but, performance-focused as the company is, most of its drives are solid-state. The right mix of storage media in a particular location is not decided by people, but by a complex model that attempts to forecast workload types and traffic. There's also software that helps identify locations that need an extra helping of servers.
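
Akamai hasn't published that model, but the shape of the decision is easy to sketch: forecast how much of a site's traffic hits a hot, IO-heavy working set versus long-tail bulk delivery, then size the SSD/HDD split to match. The toy heuristic below – the Forecast fields, per-drive throughput figures and storage_mix function included – is purely illustrative.

```python
# Toy heuristic for sizing a site's SSD/HDD mix from a traffic forecast.
# Entirely hypothetical - the drive throughput figures and the forecast itself
# are made up, and Akamai's real model is far more detailed than this sketch.
import math
from dataclasses import dataclass

@dataclass
class Forecast:
    peak_gbps: float     # expected peak delivery from this site
    hot_fraction: float  # share of traffic hitting a small, IO-heavy working set

def storage_mix(forecast: Forecast,
                ssd_gbps_per_drive: float = 8.0,
                hdd_gbps_per_drive: float = 1.5) -> dict:
    """Serve the hot share from SSD and the long tail from spinning disk."""
    hot_gbps = forecast.peak_gbps * forecast.hot_fraction
    cold_gbps = forecast.peak_gbps - hot_gbps
    return {
        "ssd_drives": math.ceil(hot_gbps / ssd_gbps_per_drive),
        "hdd_drives": math.ceil(cold_gbps / hdd_gbps_per_drive),
    }

print(storage_mix(Forecast(peak_gbps=400, hot_fraction=0.75)))
# {'ssd_drives': 38, 'hdd_drives': 67}
```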

None of it is an exact science, though. "When we make the purchase decisions, we think, given everything we know about the business today and where we're going, what's the right mix to be buying right now? And the only thing I know is, it will never be right. We'll always be off in some way that you weren't able to predict," Freedman said.

But how does Akamai choose where to put its servers? This is dictated by the traffic. "We want to put our servers as close as possible to every user in the world," he said. "Our focus starts with the data we collect on how much traffic we deliver to every IP address on the internet."
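
Put another way, placement starts as a data-crunching exercise: aggregate delivered traffic by the networks the requesting addresses belong to, then rank the ones that look underserved. A minimal sketch of that sort of aggregation, using a made-up log format rather than anything Akamai actually runs, might look like this.

```python
# Minimal sketch: rank networks (here, by client AS number) by traffic delivered
# to their users, to flag where extra edge capacity would pay off.
# The log records and AS numbers are made-up stand-ins, not Akamai data.
from collections import Counter

# (client AS number, bytes delivered) pairs, as might be distilled from delivery logs
delivery_log = [
    (64500, 120_000_000_000),
    (64501, 45_000_000_000),
    (64500, 300_000_000_000),
    (64502, 9_000_000_000),
]

traffic_by_asn = Counter()
for asn, delivered_bytes in delivery_log:
    traffic_by_asn[asn] += delivered_bytes

# Networks receiving the most traffic are the strongest candidates for new servers.
for asn, total in traffic_by_asn.most_common():
    print(f"AS{asn}: {total / 1e12:.2f} TB delivered")
```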

Akamai's DIY gear is present in more than 2,600 data centres around the world – most run by local ISPs, but there are up to 100 carrier-neutral colocation facilities in the mix.

The company doesn't care about the age or features of such data centres – in fact, some of these relationships are purely virtual: all the negotiations take place online, and Akamai simply ships its hardware via a delivery service, to be installed by the locals, sight unseen.

Following traffic flows means that, sometimes, the company has to go places other providers won't – Russia or China, for example. "We provide the most value to our customers in the places that people don't like to operate the most," Freedman laughed. "Places that are painful are guaranteed to be the places that we are focusing on."

Right now, he is excited about rolling out the latest Akamai creation. "The focus for us right now is this new box that we're literally just now beginning to ship out the first 1,000 units of. It's three times as powerful as the servers we have been deploying." ®
