Amazon's weekend cloud outage highlights EBS problems
The red-headed stepchild of Bezos & Co's cloud just can't keep up
Problems in the Amazon cloud over the weekend crushed apps like Vine, websites like Airbnb, and numerous other services that depend on Bezos & Co's hulking cloud – and the trouble came down to a familiar culprit: Elastic Block Store (EBS).
EBS is a network-attached block-level storage service for Amazon EC2 instances. Amazon says it is "suited for applications that require a database, file system, or access to raw block level storage" – in other words, everything.
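For readers unfamiliar with how EBS sits alongside EC2: a volume is created in a specific Availability Zone and then attached to an instance over the network. A minimal sketch using the standard AWS CLI looks like the following – the volume size, zone, and instance ID here are placeholder values, and running it requires configured AWS credentials.

```shell
# Create an 8 GB EBS volume in one Availability Zone (zone and size are examples)
aws ec2 create-volume --availability-zone us-east-1a --size 8

# Attach it to a running instance as a block device
# (vol-xxxxxxxx and i-xxxxxxxx are placeholders for real IDs)
aws ec2 attach-volume --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx --device /dev/sdf
```

The key point for the outage story: the volume lives in a single Availability Zone and reaches the instance over Amazon's network, which is why a networking fault in one zone can degrade every volume behind it.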
Sunday's failure marked the third significant outage in two years stemming from EBS problems, and brought to mind the characterization of EBS as "a barrel of laughs in terms of performance and reliability" by a former Reddit sysadmin after a major outage in April 2011.
The problems on Sunday were acknowledged by Amazon in a post to the company's status dashboard at 1:22pm Pacific Time, when the company said it was "investigating degraded performance for some volumes in a single [Availability Zone] in the US-EAST-1 Region."
Amazon found that the problem was a network issue that led to elevated EBS-related API error rates in a single region. "The networking device was removed from service and we are performing a forensic investigation to understand how it failed," the company wrote.
Besides the 2011 incident, EBS also went down in December 2012. In the wake of that outage, one EBS-reliant company named Awe.sm wrote that "to maintain high uptime, we have stopped trusting EBS." Awe.sm added that in its experience, I/O rates on EBS volumes were poor, that when EBS fails it tends to fail across an entire data center cluster, and that when a failure hits a volume serving as the boot device of an Ubuntu instance, it fails severely.
Given the outage during the weekend just gone, cloud-first businesses might want to take a hard look at EBS and work out how to design their systems around potential failures in Amazon's data centers. ®
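Designing around a single-zone EBS failure usually means keeping a replica in another Availability Zone and failing over when the primary looks unhealthy. The sketch below is purely illustrative – the zone names, volume IDs, and health check are hypothetical stand-ins, not Amazon APIs – but it shows the shape of the decision logic:

```python
# Illustrative sketch of cross-zone failover logic. Nothing here talks to
# AWS; zones, volume IDs, and the health check are made-up placeholders.

def pick_healthy_replica(replicas, is_healthy):
    """Return the first replica whose zone passes the health check.

    replicas   -- list of (zone, volume_id) pairs, in preference order
    is_healthy -- callable taking a zone name, returning True/False
    """
    for zone, volume_id in replicas:
        if is_healthy(zone):
            return zone, volume_id
    raise RuntimeError("no healthy replica in any zone")

# Example: the primary zone is flagged as degraded (say, from the AWS
# status feed), so traffic shifts to the standby in another zone.
replicas = [("us-east-1a", "vol-primary"), ("us-east-1b", "vol-standby")]
degraded = {"us-east-1a"}
zone, vol = pick_healthy_replica(replicas, lambda z: z not in degraded)
print(zone, vol)  # us-east-1b vol-standby
```

The real work, of course, is in replicating the data itself – snapshots, application-level replication, or avoiding EBS for critical paths, as Awe.sm suggested – but the principle is the same: never let one Availability Zone's storage be a single point of failure.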