The time we came up with a solution – and found a big customer problem

A fascinating firsthand retelling of the technical history of MPLS

Systems Approach One of the more satisfying conference experiences of my career was giving a presentation entitled "MPLS Considered Helpful" in the SIGCOMM 2003 Outrageous Opinions session.

The Outrageous Opinion session was at that point about eight years old; I had chaired the first such session in 1995. The inaugural session contained a number of memorable talks, such as David Clark's actually-not-outrageous position that networking people should all become economists.

By this stage in its evolution the session had turned into something of a stand-up comedy show, and the idea of making a hopefully humorous defense of MPLS in front of an audience that either ignored or disagreed with the admittedly controversial technology came to me while out on a run through the German countryside.

As someone with a lot of investment in shaping the MPLS architecture and getting it deployed in production, I was aware that my talk might come across as overly defensive, so I liberally scattered the message "I'm not bitter" through the talk. In the end, that was what most people remembered, and to this day I occasionally find people bringing that phrase up as I walk the corridors of networking conferences. 

Almost 20 years later I still find a fair amount of confusion – and cynicism – about what problem MPLS was supposed to solve. For example, I recently appeared on the Packet Pushers podcast, ostensibly to discuss the future of networking, and yet we ended up on a long digression about the history of MPLS.

I was frankly surprised at how differently I, as someone involved in its formation, viewed MPLS compared to someone who had experienced it more as an end user. So at the risk, once again, of appearing defensive, I think it's worth a look at how MPLS came about and why it was ultimately successful in terms of its global deployment. 

I want to point out that I am definitely not the inventor or the father of MPLS. That title is normally given to Yakov Rekhter, who set out the original ideas for Tag Switching in a two-page memo circulated internally at Cisco, and who is named as inventor on many of the central patents. Yakov and I later co-authored a book on MPLS, which was an excellent career move on my part.

Tag, you're it

Tag Switching formed the basis for much of MPLS when our group brought it to the IETF for standardization. However, there is plenty of credit to go around. Notably, some months before Yakov's memo, in 1995, Chandranmenon and Varghese published a SIGCOMM paper that introduced the idea of threaded indices, which, just like Tag Switching, allows a fixed-length identifier to represent a variable-length destination prefix, simplifying the task of looking up addresses when forwarding IP datagrams.
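The appeal of a fixed-length identifier is easy to see in code. Below is a minimal sketch (all names, addresses, and label values are hypothetical, and real routers use far more efficient data structures): a plain IP router must find the longest prefix matching the destination, while a label-switching router does a single exact-match lookup.

```python
# Hypothetical sketch: why a fixed-length label simplifies forwarding.
# An IP router must find the LONGEST prefix that matches the destination;
# a label-switching router does one exact-match lookup on a small key.

def longest_prefix_match(fib, dest):
    """Scan a FIB of {(prefix, prefix_len): next_hop} for the longest match."""
    best_len, best_hop = -1, None
    for (prefix, plen), next_hop in fib.items():
        mask = ~((1 << (32 - plen)) - 1) & 0xFFFFFFFF
        if dest & mask == prefix and plen > best_len:
            best_len, best_hop = plen, next_hop
    return best_hop

def label_lookup(lfib, label):
    """A label is just a key: a single exact-match (hash/array) lookup."""
    return lfib.get(label)

# 10.1.0.0/16 and 10.1.2.0/24 overlap; LPM must prefer the more specific /24.
fib = {
    (0x0A010000, 16): "if0",
    (0x0A010200, 24): "if1",
}
lfib = {17: ("if1", 42)}  # incoming label 17 -> (interface, outgoing label)

print(longest_prefix_match(fib, 0x0A010203))  # dest 10.1.2.3 matches the /24
print(label_lookup(lfib, 17))
```

The point of the sketch is the shape of the two lookups, not their speed: the label lookup needs no notion of "longest" or "most specific" at all.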

But I'm getting ahead of myself. The context in which Yakov wrote his memo (and in which threaded indices were invented) was very different from today. In fact, the 1995 Outrageous Opinion session contained several spirited debates about the merits of ATM, with its fixed-length cells and fixed-length header lookups, versus IP, with its variable-length packets and relatively complex longest-match lookup algorithms.

Yakov and I both joined Cisco in 1995 as the company was trying to figure out the implications of ATM for its very IP- and Ethernet-centric business. The team that I joined was tasked with figuring out a way to somehow combine the technical approaches of ATM and IP. After a few months of kicking ideas around, the memo from Yakov struck me as plainly superior to everything we had seen before.

Not only did it include the same basic idea as threaded indices (which I was unaware of), but it also introduced the idea of hierarchical tagging – something that already existed in ATM, with virtual circuit and virtual path identifiers representing a two-level "stack of labels". And it was Eric Rosen – in my view, one of the unsung heroes of MPLS, a prolific inventor and a brilliant protocol architect – who saw that a label stack could readily be generalized to an arbitrary depth, which turned out to be one of the truly powerful additions to the MPLS architecture. 
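The generalization Eric Rosen saw can be illustrated with a few lines of code. This is an illustrative sketch only (not real MPLS encapsulation, and the label values are made up): a label stack is a LIFO, where ingress routers push labels, egress routers pop them, and a transit router examines only the top of the stack.

```python
# Illustrative sketch (not real MPLS encapsulation): a label stack is
# a LIFO -- ingress routers push a label, egress routers pop one, and
# a transit router examines only the topmost label.

class LabeledPacket:
    def __init__(self, payload):
        self.payload = payload
        self.stack = []          # top of stack is the end of the list

    def push(self, label):       # entering a tunnel
        self.stack.append(label)

    def pop(self):               # leaving a tunnel
        return self.stack.pop()

    def top(self):               # what a transit router actually looks at
        return self.stack[-1] if self.stack else None

pkt = LabeledPacket("IP datagram")
pkt.push(100)   # e.g. a VPN-level label applied at the provider edge
pkt.push(200)   # e.g. a transport label for crossing the provider core
assert pkt.top() == 200   # core routers see only the outer label
pkt.pop()
assert pkt.top() == 100   # the egress edge sees the inner (VPN) label
```

ATM's VPI/VCI pair hard-wired the depth at two; the insight was that nothing stops the stack from being arbitrarily deep, which is what later made applications like VPNs and layer-2 tunneling fit the same machinery.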

At that time, there were half a dozen architects at Cisco, and a few more at Juniper, all trying to figure out how this idea of putting labels on packets could be useful. Many others got involved as the IETF effort took off. The idea of speeding up the lookup operation on an IP datagram turned out to have little practical impact. We even went as far as designing a router line card that could forward only labeled packets, to test this out.

The line card was negligibly cheaper than a full-blown IP forwarding card, and no faster, because by this point fast IP lookups, while not trivial, were largely a solved problem. Buffering, switching among line cards, optical transceivers, etc, were all more important to cost and performance than optimizing a few percent out of the lookup engine.


What ultimately made MPLS take off – and it did, in an under-the-radar sort of way – was enterprise VPNs. These were variously known as MPLS-VPNs, BGP/MPLS VPNs, or RFC2547 VPNs.

At the time, service providers faced a significant challenge: how could they deploy VPNs for their enterprise customers without every VPN becoming, in effect, a custom-built overlay per customer? This problem was brought to us by AT&T, which had a huge business building VPNs with Frame Relay – another technology largely ignored by the academic community, but one in such high demand that AT&T was struggling to keep up. This was one of the first times I understood that scaling networks is as much about operational scalability as about the scalability of the technology.

MPLS VPNs took us from a world where a new customer with N sites represented an N^2 configuration problem to one where it was order N. As we noted in our recent article, operational issues are often the most important ones to solve, and that was certainly the case here.
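A back-of-envelope calculation makes the scaling argument concrete. The numbers below are illustrative only: a full mesh of point-to-point circuits among N sites needs N(N-1)/2 circuits, and adding a site touches every existing site, whereas in the MPLS VPN model each site is simply attached once to the provider edge.

```python
# Rough illustration of the scaling argument: a full mesh of
# point-to-point circuits among N customer sites needs N*(N-1)/2
# circuits (adding site N+1 touches every existing site), while an
# MPLS VPN attaches each site to the provider edge exactly once.

def mesh_circuits(n):
    return n * (n - 1) // 2     # O(N^2) configuration work

def vpn_attachments(n):
    return n                    # O(N) configuration work

for n in (10, 50, 100):
    print(n, mesh_circuits(n), vpn_attachments(n))
# At 100 sites: 4,950 circuits to provision versus 100 attachments.
```

For a customer with 100 sites, that is 4,950 circuits to provision and keep consistent, against 100 edge attachments; the operational gap only widens as customers grow.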

MPLS was just one of the mechanisms we used to build this solution: essentially, it provided a lightweight way to tunnel packets across the service provider core, much as IPsec tunnels or Frame Relay circuits were used in other approaches. Much of the real innovation lay in (a) multiprotocol extensions to BGP, which allowed non-unique addresses from customer sites to be handled correctly by the service provider, and (b) the introduction of large numbers of VRFs (virtual routing and forwarding tables) into the edge routers, a major implementation effort for the router vendors.
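The VRF idea can be sketched in a few lines. This is a hypothetical toy (customer names, prefixes, and tunnel names are invented, and the dictionary lookup stands in for a real per-VPN forwarding table): two customers can both use the same private prefix because the edge router scopes every lookup to one VPN's table rather than a single global one.

```python
# Hypothetical sketch of per-customer VRFs: two customers may both use
# the same private prefix (here 10.0.0.0/8), and the provider edge
# router keeps a separate routing table per VPN so the routes never
# collide in a single global table.

vrfs = {
    "customer-A": {"10.0.0.0/8": "tunnel-to-A-site2"},
    "customer-B": {"10.0.0.0/8": "tunnel-to-B-site7"},
}

def forward(vrf_name, prefix):
    """Resolve a route in the table belonging to one VPN only."""
    return vrfs[vrf_name].get(prefix)

# Identical prefixes, different answers -- because each lookup is
# scoped to a VRF, the overlapping addresses are never ambiguous.
assert forward("customer-A", "10.0.0.0/8") != forward("customer-B", "10.0.0.0/8")
```

In the real architecture the scoping is done by attaching a route distinguisher to each prefix as it is carried in multiprotocol BGP, but the effect is the same: the customer's address space never has to be globally unique.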

There are too many details to cover here, but the important point was that MPLS (as part of this new VPN architecture) finally solved a real customer problem: it changed both the cost and the value of the solution for enterprise VPNs. (The technical details of BGP/MPLS VPNs are, in my admittedly biased opinion, fascinating, and you can pick them up from the very well-written RFC2547 or from the book I wrote with Yakov, MPLS: Technology and Applications.)

There was a lot more to MPLS, including traffic engineering and tunneling layer-2 traffic across the internet. But for me the important lesson was about the interplay between customer pull and technology push.

For about three years we really didn't know whether the technology would even see the light of day. Eventually we solved a big operational problem for service providers, and over time the majority of service providers offered BGP/MPLS VPN services to their customers. Today, we can see ways it might have been done differently, with software-defined WAN (SD-WAN) solving many of the same problems in a significantly less costly and less configuration-intensive manner.

But that required a whole other round of technical innovations to take place first, such as the distributed systems techniques to build scalable centralized control planes. As Eric Rosen said to me more than once: all networking problems are scaling problems. For a while, MPLS was the most scalable solution to a significant networking problem. ®

Larry Peterson and Bruce Davie are the authors of Computer Networks: A Systems Approach and the related Systems Approach series of books. All their content is open source and available on GitHub. You can find them on Twitter, their writings on Substack, and past The Register columns here.
