AWS admits to 'severely impaired' services in US-EAST-1, can't even post updates to Service Health Dashboard

Multiple services sickly after Kinesis catches a cold: CloudWatch, DynamoDB, Lambda, Managed Blockchain


AWS has delivered the one message guaranteed to strike fear into the hearts of techies working out their day before the Thanksgiving holiday: the US-EAST-1 region is suffering a "severely impaired" service.

At fault is the Kinesis Data Streams API in that, er, minor part of the AWS empire. The failure is also affecting a number of other services, including CloudWatch, DynamoDB, Lambda, and Managed Blockchain.

"This issue," admitted the AWS team, "has also affected our ability to post updates to the Service Health Dashboard."

Initial rumblings kicked off at around 1400 UTC today, with AWS confirming it was looking into increased error rates for the Kinesis Data Streams APIs in US-EAST-1.

Kinesis, for those unfamiliar with the service (one of a multitude that AWS will happily sell customers), is all about dealing with real-time data, such as telemetry from IoT devices. "Amazon Kinesis," trumpets the company, "can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies."

Unless, of course, it is borked.

Problems soon escalated. The company posted just over an hour later that it was working on identifying the root cause. Soon after it noted that other services were affected, including (but not limited to) "our ability to post updates to the Service Health Dashboard."

So quite bad then.

Finally, as 1700 UTC approached, AWS faced up to the grim reality of the situation and confirmed that the Kinesis Data Streams API was "severely impaired." CloudWatch, Cognito and EventBridge in the US-EAST-1 region are also affected by the Kinesis issue.

Problems could well have been exacerbated by the fact that AWS defaults to US-EAST-1 when endpoints are used with no Region set. US-EAST-1 is, according to the company's documentation, "the default Region for API calls."

We contacted AWS to find out what had befallen the East Coast, and will update should the cloud giant respond. Its support orifice could offer only apologies to affected customers.

Twitter users were their usual supportive selves:

While multiple companies realised just how dependent they are on AWS, other users raised a more salient point.

Quite. ®


Biting the hand that feeds IT © 1998–2021