A wave of "hurricane-like" thunderstorms ripped across Indiana, Ohio, West Virginia, and Virginia on Friday night, leaving more than 3.5 million people without power and knocking out the US-East-1 data center operated by Amazon Web Services.
Netflix, Pinterest, Instagram, and Heroku, which run their services atop Amazon's infrastructure cloud, all reported outages caused by the power failure at the AWS data center in Northern Virginia. Luckily for the Prickett Morgan household, we had finished watching several episodes of The IT Crowd on Netflix just before the storm hit.
A statement from Dominion Virginia Power, which supplies juice to the state and therefore to Amazon, said 900,000 homes went dark. The powerful storms on Friday night, driven by triple-digit temperatures (Fahrenheit) in the Mississippi Valley, brought sustained winds in excess of 80 miles per hour and intense lightning; high winds and falling trees took out power lines.
Technically, according to the meteorologists at the Weather Channel, the storm system was a derecho: a line of storms that can produce straight-line high winds over areas hundreds of miles long.
As of Saturday afternoon, just under 600,000 of Dominion Virginia Power's 2.45 million customers were still without power. The hardest-hit area was northern Virginia, where the Amazon data center is located, with nearly 385,000 of its 832,000 customers still powerless.
Amazon Web Services fared a bit better. According to the AWS Service Health Dashboard, the Elastic Compute Cloud (EC2) started having connectivity issues at 8:21 PM Pacific on Friday, June 29, and by 8:40 PM Amazon said that "a large number of instances in a single availability zone" had lost power due to the storms. Power was restored nine minutes later, and the company set about recovering impacted EC2 instances and updating related data volumes.
By 11:19 PM, about half of the EC2 instances and a third of the related volumes had been recovered, but Elastic Load Balancers and Elastic Block Storage were also affected, and this gummed up the recovery works. By 10:25 AM Pacific on June 30, Amazon said that the majority of the affected EC2 instances that did not have impaired EBS disk volumes were recovered, but it was still recovering EBS volumes for some customers; load balancing was restored and working normally.
The CloudSearch and Relational Database Service (RDS) offerings were also impacted by the downed availability zone, and the majority of the instances of these services had been recovered by Saturday morning, Pacific time.
Looks like Google and Microsoft might be getting some infrastructure cloud biz after all. The next bucket of venture capital money will be thrown at companies that can load balance across multiple cloud suppliers. ®
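Bootnote: The simplest form of such cross-provider balancing is a health-checked failover list: probe an ordered set of provider endpoints and send traffic to the first one that answers. A minimal sketch in Python — all hostnames and health results below are hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch of cross-cloud failover: pick the first healthy
# endpoint from an ordered preference list, so traffic can route around
# a provider whose region has gone dark.

def pick_healthy_endpoint(endpoints, is_healthy):
    """Return the first endpoint whose health check passes, else None."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    return None

# A trivial stand-in for real health checks; in practice each entry
# would be an HTTP probe against that provider's load balancer.
status = {
    "aws-us-east-1.example.com": False,   # zone down after the storm
    "gce-us-central.example.com": True,
    "azure-east.example.com": True,
}

chosen = pick_healthy_endpoint(list(status), status.get)
print(chosen)  # the first provider still answering its health check
```

In a production setup the same decision would typically live in DNS (short-TTL records updated by the health checker) rather than in application code, so clients fail over without redeploying anything.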