How four rotten packets broke CenturyLink's network for 37 hours, knackering 911 calls, VoIP, broadband

FCC delivers postmortem after blunder cripples US fiber links


A handful of bad network packets triggered a massive chain reaction that crippled the entire network of US telco CenturyLink for roughly a day and a half.

This is according to the FCC's official probe [PDF] into the December 2018 super-outage, during which CenturyLink's broadband internet and VoIP services fell over and stayed down for a total of 37 hours. This meant subscribers couldn't, among other things, call 911 over VoIP at the time – which is a violation of FCC rules, and triggered a formal investigation.

"This outage was caused by an equipment failure catastrophically exacerbated by a network configuration error," America's communications regulator said in its summary of its inquiry, published yesterday.

"It affected communications service providers, business customers, and consumers who directly or indirectly relied upon CenturyLink’s transport services, which route communications traffic from various providers to locations across the country, resulting in extensive disruptions to phone service, including 911 calling."

CenturyLink has six long-haul networks that make up the backbone of its digital empire, interconnecting regions of America. These networks use Infinera-built nodes to switch packets over high-speed optic fiber: data flowing into each node is directed to other nodes, ultimately pumping VoIP, regular internet traffic, and more, across the nation as needed.

We're told four malformed network packets were the root cause of the outage: they were generated by a switching module in a node in Denver, Colorado, for reasons still unknown, and sent on to other nodes. The broken packets all had the following qualities, sketched in code after the list:

1. a broadcast destination address, meaning that the packet was directed to be sent to all connected devices;

2. a valid header and valid checksum;

3. no expiration time, meaning that the packet would not be dropped for being created too long ago; and

4. a size larger than 64 bytes.
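
For illustration only, here is a minimal Python sketch of the kind of sanity check a management-plane filter could apply to spot frames with those four traits. The MgmtPacket structure, its field names, and the broadcast constant are assumptions made for this example, not CenturyLink's or Infinera's actual code.

    # Hypothetical sketch: flag management frames matching the FCC's description
    # of the malformed packets. Field names and structure are illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    BROADCAST_ADDR = "ff:ff:ff:ff:ff:ff"  # Ethernet broadcast destination

    @dataclass
    class MgmtPacket:
        dest_addr: str         # destination address of the frame
        header_valid: bool     # header parsed without error
        checksum_valid: bool   # checksum matched
        ttl: Optional[int]     # expiration/hop limit; None means "no expiration"
        size_bytes: int        # total frame size in bytes

    def looks_like_storm_packet(pkt: MgmtPacket) -> bool:
        """True if the frame has all four traits the FCC report describes."""
        return (
            pkt.dest_addr == BROADCAST_ADDR   # 1. broadcast destination
            and pkt.header_valid              # 2. valid header...
            and pkt.checksum_valid            #    ...and valid checksum
            and pkt.ttl is None               # 3. no expiration time
            and pkt.size_bytes > 64           # 4. larger than 64 bytes
        )

    # Example: a frame like the ones described would trip the check
    suspect = MgmtPacket("ff:ff:ff:ff:ff:ff", True, True, None, 512)
    assert looks_like_storm_packet(suspect)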

Each dodgy packet would arrive at a node, pass through a chain of filters without being dropped, and end up injected into a management channel and handed to all connecting nodes. A flow diagram in the FCC's report shows how the corrupted packets were forwarded on to all neighboring nodes, and so on and so on, producing a growing chain reaction of corrupted packets.

"Due to the packets’ broadcast destination address, the malformed network management packets were delivered to all connected nodes. Consequently, each subsequent node receiving the packet retransmitted the packet to all its connected nodes, including the node where the malformed packets originated," the FCC said in its report.

"Each connected node continued to retransmit the malformed packets across the proprietary management channel to each node with which it connected because the packets appeared valid and did not have an expiration time. This process repeated indefinitely."

As you might imagine, the exponentially growing storm of packets soon overwhelmed CenturyLink's optic-fiber backbone, causing regular traffic to stop flowing: VoIP phones stopped working, internet access ground to a halt, and so on. Folks in New Orleans were first to spot their connections stalling, at roughly 0356 EST on December 27.

Here is where things went from really, really bad to terrible: the nodes along the fiber network were so flooded, they could not be reached by their administrators to troubleshoot the issue. It wasn't until some 15 hours later that the techies could finally track down the single errant node in Colorado responsible for sparking the deluge, not that replacing it helped. The packet tsunami was still washing back and forth, knocking nodes over.

"At 2102 on December 27, CenturyLink network engineers identified and removed the module that had generated the malformed packets," the report noted. "The outage, however, did not immediately end; the malformed packets continued to replicate and transit the network, generating more packets as they echoed from node to node."

It would be another three hours before CenturyLink's network admins could begin to get through to the other nodes, and get them to kill off the spread of bad packets. It took until 1130 on December 28 to regain visibility of the network, and it wasn't until 2336 that all nodes had been restored. On December 29, just after midday, CenturyLink finally declared the crisis over.

"The event caused a nationwide voice, IP, and transport outage on CenturyLink’s fiber network. CenturyLink estimates that 12,100,108 calls were blocked or degraded due to the incident," the FCC said.

"Where long-distance voice callers experienced call quality issues, some customers received a fast-busy signal, some received an error message, and some just had a terrible connection with garbled words."

The outage also knackered local governments and telcos that relied on the CenturyLink network for portions of their services. State governments in Illinois, Kansas, Minnesota, and Missouri all had portions of their networks down for roughly 36 hours thanks to CenturyLink, and phone services sold by Comcast, Verizon, TeleCommunication Systems, General Dynamics IT, and West Safety Services – including 911 call centers – saw connectivity interrupted for some or all of the outage period.

As to what can be done to prevent similar failures, the FCC is recommending CenturyLink and other backbone providers take some basic steps, such as disabling unused features on network equipment, installing and maintaining alarms that warn admins when memory or processor use is nearing capacity, and having backup procedures in place for when networking gear becomes unreachable.
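
That utilization alarm, for instance, amounts to little more than a threshold check along the lines of the hypothetical Python sketch below. The threshold values and the use of the psutil library are illustrative assumptions rather than anything the FCC prescribed.

    # Hypothetical resource alarm: warn admins before memory or CPU is exhausted.
    # Threshold values are arbitrary illustrative choices.
    import logging

    import psutil  # third-party: pip install psutil

    MEMORY_ALARM_PCT = 85.0
    CPU_ALARM_PCT = 90.0

    logging.basicConfig(level=logging.INFO)

    def check_resources() -> None:
        mem_pct = psutil.virtual_memory().percent  # system memory in use, percent
        cpu_pct = psutil.cpu_percent(interval=1)   # CPU use over a one-second sample
        if mem_pct >= MEMORY_ALARM_PCT:
            logging.warning("memory use at %.1f%%, nearing capacity", mem_pct)
        if cpu_pct >= CPU_ALARM_PCT:
            logging.warning("CPU use at %.1f%%, nearing capacity", cpu_pct)

    if __name__ == "__main__":
        check_resources()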

"Currently, CenturyLink is in the process of updating its nodes’ Ethernet policer to reduce the chance of the transmission of a malformed packet in the future," the report notes. "The improved ethernet policer quickly identifies and terminates invalid packets, preventing propagation into the network. This work is expected to be complete in Fall, 2019."

The report did not mention any possible fines or penalties against CenturyLink. ®

