Picture this: Live 'net congestion maps for sysadmins

So you can know, before Twitter does


The Center for Applied Internet Data Analysis (CAIDA) is getting closer to giving the world live 'net congestion maps and alerts.

It's outlined the current state of its development here.

The effort, ongoing since 2014, will eventually provide a “real time” view of congestion events around the world – something that might look like this:

It's the sort of thing that will, if successful, get posted interminably on social recyclers like IFLS, just like TeleGeography's maps do (even though they've existed for more than a decade).

But it's also a worthwhile endeavour because, frankly, at the moment outages are all too often mysteries – submarine cable companies, for example, try to avoid announcing cable breaks if they can get away with it.

CAIDA's congestion map (sample internet heat map)

It sounds easy enough: if, for example, you wanted to check congestion in OzEmail's network from a Sydney vantage point, a probe to the nearest router and another to the furthest (in Perth, say) should tell you whether there's an unexpectedly big change in latency.

The change in latency was what CAIDA hoped to use as a proxy for congestion – but things turned out to be more complex (don't they always?).
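The latency-as-proxy idea can be sketched roughly like this, assuming you already have RTT samples to the near and far routers of a link. The function name, threshold, and use of a median baseline are illustrative choices, not CAIDA's actual method:

```python
from statistics import median

def congestion_signal(near_rtts, far_rtts, threshold_ms=20.0):
    """Estimate congestion on the link beyond the near router.

    Subtract the RTT to the near side of a link from the RTT to
    the far side; if that difference jumps well above its usual
    level, the queue on the far link is likely filling up.
    """
    # Per-probe latency attributable to the far link.
    deltas = [f - n for n, f in zip(near_rtts, far_rtts)]
    # Median baseline is robust to the occasional delayed response.
    baseline = median(deltas)
    inflation = deltas[-1] - baseline
    return inflation > threshold_ms, inflation

# Near-router RTTs stay flat; the far-router RTT inflates on the last probe.
near = [2.1, 2.0, 2.2, 2.1, 2.0]
far = [14.5, 14.8, 14.2, 14.6, 52.0]
flag, inflation = congestion_signal(near, far)
```

The point of differencing the two probes is to cancel out latency that both paths share, leaving only the contribution of the link under study – which is exactly why misidentifying the routers on either side of that link (see below) wrecks the measurement.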

Simply identifying a network's interdomain links proved hard: network owners follow no standard for internal IP address assignment, ISPs maintain unadvertised IP address space, and traceroute isn't a particularly reliable probe mechanism.

The CAIDA boffins decided in the end that their best approach was to create their own probing tool, called bdrmap, and since 2014, they've put increasing effort into sucking that data into a suitable back-end, scaling up the measurement system, and improving the visualisations.

Next on CAIDA's list is to add alarms, so sysadmins can have their lives made hell by the question “what happened to the Ballarat CDN link” when a video goes unexpectedly viral.

As the team notes: “The major piece that remains is continuous analysis of the TSLP data, generating alarms, and pushing on-demand measurements to the reactive measurement system”.
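TSLP here refers to CAIDA's time-series latency probe data. As a rough illustration of what generating alarms from such a series could involve – a toy level-shift detector, not CAIDA's actual pipeline – consider:

```python
def rtt_alarm(rtt_series, window=5, factor=2.0):
    """Raise an alarm when recent RTTs shift well above the baseline.

    Compares the mean of the most recent `window` samples against
    the mean of all earlier samples; alarms when the recent level
    exceeds the baseline by more than `factor`.
    """
    if len(rtt_series) < 2 * window:
        return False  # not enough history to judge
    recent = rtt_series[-window:]
    history = rtt_series[:-window]
    baseline = sum(history) / len(history)
    current = sum(recent) / len(recent)
    return current > factor * baseline

quiet = [10, 11, 10, 12, 11, 10, 11, 10, 12, 11]
congested = quiet[:5] + [30, 32, 31, 33, 30]
```

Here `rtt_alarm(quiet)` stays silent while `rtt_alarm(congested)` fires; a production system would also have to cope with diurnal cycles, probe loss, and path changes, which is presumably why "continuous analysis" is the hard remaining piece.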

Amogh Dhamdhere, Matthew Luckie, Alex Gamero-Garrido, Bradley Huffaker, kc claffy, Steve Bauer, and David Clark form the core development team.

CAIDA is also the outfit behind this Internet topology map. ®

