Custom silicon, 9PB storage boxes, and 25Gb Ethernet – just another day in AWS hardware

Turns out printing money requires a buttload of compute muscle

AWS re:Invent AWS says it has moved into building its own silicon to help deliver the throughput for its massive cloud service.

The profitable side of the Amazon empire says it has started using a custom ASIC, designed by its Annapurna Labs team, to help control the networking activity – both physical and SDN – in its AWS servers. This frees up the CPUs on its hundreds of thousands of servers to focus on compute tasks.

The custom chips also power a custom AWS network architecture built on 25Gb Ethernet, a standard Amazon believes is more scalable and efficient than the 10Gb and 40Gb Ethernet commonly used: 40Gb Ethernet is typically implemented as four 10Gb/s lanes, while 25Gb Ethernet runs a single faster lane, delivering more bandwidth per lane and per switch port.

James Hamilton, Amazon VP and distinguished engineer, said at AWS re:Invent in Las Vegas this week that the chips add another layer of flexibility, letting Amazon optimize all of its data centers specifically for the task of hosting AWS cloud instances. They also let Amazon take a different approach to networking – one he says reflects a larger industry trend of moving away from the traditional closed router box.

"If you look at where the networking world is, it is sort of where the server world was 20 years ago," Hamilton offered. "As soon as you chop up these vertical stacks and you have companies competing and working together, great things start to happen."

It is also part of the larger philosophy behind Amazon's 18 AWS data center regions. Each region contains multiple buildings ("availability zones," in AWS-speak) that house the rack-mount servers – up to 300,000 per building – along with separate "transit center" buildings housing the connections to Amazon's own global network. Each availability zone draws about 25-30MW of power, though Hamilton said Amazon could scale up to 250MW if doing so made economic sense.
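A rough back-of-envelope sketch using the figures Hamilton quoted – up to 300,000 servers and 25-30MW per availability zone – gives a sense of the power budget per box. The per-server result below is our own division, not an AWS-published number:

```python
# Back-of-envelope check on the per-building figures reported above.
# Inputs are from Hamilton's talk; the per-server wattage is our own
# rough division, not an official AWS figure.

servers_per_building = 300_000   # upper bound cited for an availability zone
power_mw = 30                    # upper end of the 25-30MW range

watts_per_server = (power_mw * 1_000_000) / servers_per_building
print(f"~{watts_per_server:.0f} W per server")  # ~100 W per server
```

That roughly 100W ceiling per server is consistent with the stripped-down, low-density server designs described later in the piece.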

Hamilton estimates that AWS adds roughly the compute requirements of a Fortune 500 company to its total capacity every day.

"In 2015, AWS deployed enough server capacity to support Amazon in 2005, when it was an $8.49bn enterprise," he noted.

With that massive appetite for hardware, AWS can pretty much dictate the terms of everything from server design to the ASICs in its networking gear. The rack-mounted servers, purpose-built to host AWS cloud instances, are deliberately sparse, with nearly half of each enclosure left empty.

Hamilton says those spartan server designs result in boxes that are much more power- and heat-efficient than similar boxes it could have purchased from a vendor.

"What the OEMs are selling to customers are probably three, four, or five times more dense and they are less efficient," he said. "They make it up by charging more."

Likewise, AWS storage is both impressively engineered and mind-bogglingly immense. To handle the huge loads of data on the service, Amazon uses extremely dense storage appliances. One such box, which Hamilton says is actually an old design, packs 8.8PB of capacity from 1,110 hard drives jammed into a single 42U rack. The entire unit weighs in at 2,778 pounds.
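Dividing the figures Hamilton quoted – our own back-of-envelope, not AWS-published math – puts the per-drive numbers in perspective:

```python
# Per-drive arithmetic for the 42U storage rack described above.
# The 8.8PB, 1,110-drive, and 2,778lb figures are from Hamilton's talk;
# the divisions are our own rough estimates.

capacity_pb = 8.8
drives = 1_110
weight_lb = 2_778

tb_per_drive = capacity_pb * 1_000 / drives  # PB -> TB, decimal units
lb_per_drive = weight_lb / drives

print(f"~{tb_per_drive:.1f} TB per drive")   # ~7.9 TB per drive
print(f"~{lb_per_drive:.1f} lb per drive")   # ~2.5 lb per drive
```

In other words, the box is essentially wall-to-wall commodity 8TB-class drives, which is why even an "old design" reaches nearly nine petabytes in one rack.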

For now, bonkers scale is big enough for AWS. Hamilton believes the current size of its data center facilities hits a sweet spot in physical footprint, and its planned regions will not be much bigger than the facilities it uses now.

"It starts to get to the point where the gains of going a lot bigger are really small," says Hamilton.

"Our take right now is, this is the right-size facility. It costs us a bit more, but we think it is right for users." ®

