Amazon gets 'F' for communication amidst cloud outage

CTO's distributed computing pal analyzes EC2 failure

'500,000' volumes affected

According to RightScale's extrapolations, about 500,000 EBS volumes were affected.

"After Amazon managed to contain the problems to one zone, it took a very long time to get the EBS machinery under control and to recover all the volumes. Given the extrapolated number of volumes it would not be surprising that an event of this scale exceeded the design parameters and was never tested (or able to be tested). I’m not sure there is any system of comparable scale in operation anywhere," von Eicken says.

"I do want to state that while 'something large' clearly failed, namely the EBS system as a whole, the real big failure is that multiple availability zones were affected for ~3 hours."

In addition, von Eicken says he's "uncomfortable" with the performance of Amazon RDS (Relational Database Service), which serves up MySQL and at least one other database via EC2. RDS also experienced problems during the outage. "Some databases that were replicated across multiple availability zones did not fail-over properly," he says. "It evidently took more than 12 hours to recover a number of the multi-[availability zone] databases."

According to von Eicken, Amazon has "made it difficult" for RDS customers to back up their databases outside of RDS, which leaves them with "no choice but to wait for someone at Amazon to work on their database. This lock-in is one reason many of our customers prefer to use our MySQL master-slave setup or to architect their own."
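
The contrast von Eicken draws is easier to see with a concrete example. The sketch below shows the sort of out-of-band logical backup a self-managed MySQL setup allows: a plain mysqldump of a master you control, written to a timestamped file you can restore anywhere. This is a minimal illustration, not anything RightScale or Amazon publishes; the hostname, user, and database name are hypothetical placeholders.

```python
# Minimal sketch of an out-of-band MySQL backup (the kind of thing a
# self-managed master allows). Host, user and database are placeholders;
# credentials are assumed to live in ~/.my.cnf rather than on the command line.
import subprocess
from datetime import datetime, timezone

DB_HOST = "mysql-master.example.com"   # hypothetical self-managed master
DB_USER = "backup"
DB_NAME = "appdb"

def dump_database(out_dir: str = ".") -> str:
    """Run mysqldump and write a timestamped SQL file; return its path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_path = f"{out_dir}/{DB_NAME}-{stamp}.sql"
    with open(out_path, "w") as out:
        subprocess.run(
            [
                "mysqldump",
                f"--host={DB_HOST}",
                f"--user={DB_USER}",
                "--single-transaction",   # consistent snapshot for InnoDB
                DB_NAME,
            ],
            stdout=out,
            check=True,   # raise if mysqldump exits with an error
        )
    return out_path

if __name__ == "__main__":
    print("wrote", dump_database())
```

The point is not the tooling but the control: a dump like this can be restored onto any server, which is exactly the escape hatch von Eicken says RDS customers lacked while they waited on Amazon.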

Even RightScale, he says, was confused by Amazon's terse status messages. "In hindsight we should have intentionally failed-over our master database which was operating in the 'impacted availability zone' early on at a time where we could minimize downtime," he says. "We were lucky that it didn’t get affected until about 12 hours after the start of the outage, but we didn’t connect one and one. A clear message from Amazon that more and more volumes were continuing to fail in the zone would have been really helpful."

Von Eicken has called on Amazon to make a long list of improvements to its communication policies, including giving actual percentages of volumes affected by an outage, naming each availability zone affected, and sending individual status updates to particular users.

"Use email (or other means) to tell us what the status of our instances and volumes is," he says. "I don’t mean things I can get myself like cpu load or such, but information like 'the following volumes (by id) are currently recovering and should be available within the next hour, the following volumes will require manual intervention at a later time, …' That allows users to plan and choose where to put their efforts."

But he also wants a blog post from the company, which has yet to appear. As of 1pm Pacific time on Monday, Amazon says that the "vast majority" of volumes have been recovered and that EBS is operating normally for all APIs on those volumes. Some volumes, however, have not been recovered, and the company says it is working to contact the customers involved.

Thorsten von Eicken was a professor in Cornell's computer science department when Werner Vogels did research there, and the two have co-authored research on distributed computing. RightScale began as a web front-end for Amazon Web Services when the service was only accessible from a command line, but it has since morphed into a management service for several other infrastructure clouds as well. ®
