Telcos, terrified of being consigned to eternal status as ‘dumb pipes’, keep coming up with crazy ideas for over-the-top (OTT) high-value services. In America, they’re buying entertainment properties. Comcast, easily that nation’s most hated company, purchased NBCUniversal so that they’d have something to transmit over all that fibre they’re laying past US homes to blunt the growing threat from Google. Telstra holds a 50% stake in Foxtel - a substantial conflict of interest whenever the former monopolist entertains the thought of providing higher bandwidth to consumers, consumers who are voting with their dollars for Netflix-style cord-cutting services. And so on.
All of these services rest on an unquestioned assumption that a pipe is simply a series of tubes that transport bits from one point to another across the global Internet. That’s never been particularly true - for instance, some points are far better connected than others - and now it threatens to be utterly at odds with reality.
In the earliest days of networking (I wrote firmware for X.25 PADs in the early 80s, so I know whereof I speak), it really was a series of fixed connections, running point-to-point at fairly low speeds (56-64Kbps). So back in the day, packet switching via X.25 was an OTT service.
TCP/IP kicked over the traces, giving us the perception of a hypercloud of connectivity, with every point virtually connected to every other. The truth is always far more complex - as a traceroute will show - but that fundamental accessibility provided the foundation for basic services like FTP, telnet, and NNTP.
All networks for the past twenty-five years have grown up around the assumption that all services are equally accessible across the network. That’s rarely the case; as any network engineer knows, a network is only as fast as its slowest span. These days, networks are composed of many, many spans.
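That slowest-span bottleneck is easy to demonstrate. A minimal sketch in Python, with the span capacities invented purely for illustration:

```python
# Back-of-envelope sketch: end-to-end throughput across a multi-span path
# is bounded by the slowest span along it. All capacities (in Mbps) below
# are invented for illustration.

def path_throughput(span_capacities_mbps):
    """Best case an end-to-end flow can achieve: the capacity of the
    bottleneck (minimum) span on the path."""
    return min(span_capacities_mbps)

# A hypothetical path: home link, ISP aggregation, transit, peering, far end.
spans = [100, 10_000, 40_000, 1_000, 25]
print(path_throughput(spans))  # → 25: the slow span at the far end wins
```

No amount of capacity on the fast middle spans helps - which is exactly why a network of many, many spans so rarely delivers what any single span promises.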
There are ways around this problem. Content delivery networks provide points-of-presence relatively proximal to demand, making the virtual circuit a little less virtual. It’s a patch on a hack on an imperfect implementation of a timeless idea.
But there’s a problem: no matter how close these points-of-presence get, they confront a network architecture passively hostile to them. Because they must serve the needs of all, networks have never been able to satisfy the particular demands of each. Those demands are heterogeneous and dynamic. The bandwidth I need today - right now! - is not what I’ll need in an hour. The latency acceptable for a backup is not acceptable for a broadcast.
That’s about to create some big problems.
The Japanese will be broadcasting the 2020 Tokyo Olympics in 8K resolution. Yes, you read that right - sixteen times the pixels of our current-generation HDTV tellies. (I’ve actually heard the Japanese will be ready for 16K broadcasting in 2020, but have yet to make that announcement.)
It doesn’t really matter that there are no televisions that can display that sort of resolution; we have computers to handle that sort of thing, and besides, by 2020 there will be a great many virtual reality headsets - Oculus and its competitors - rendering fully immersive 360-degree broadcasts of ultra-ultra high definition content.
The Japanese will be spitting out multiple multi-gigabit UUHDTV streams to their broadcasting partners around the world, who will, in turn, send them out to billions of viewers worldwide.
Somewhere in there is where the network as we know it breaks.
Terrestrial broadcasting will suffice for an HDTV Olympics. But broadcasting, driven by Moore’s Law, grows geometrically in resolution and bandwidth requirements. That means the network - not the cameras or the televisions - begins to fall over.
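The arithmetic behind that claim is simple. Assuming, purely for illustration, a 5 Mbps compressed HD stream and naive linear scaling with pixel count (real codecs do rather better than linear, but the trend holds):

```python
# Crude arithmetic on why resolution growth outpaces consumer links.
# The 5 Mbps HD baseline and the linear scaling are simplifying
# assumptions for illustration, not measured figures.

RESOLUTIONS = {
    "1080p HD": 1920 * 1080,
    "4K UHD":   3840 * 2160,
    "8K UUHD":  7680 * 4320,
}

HD_STREAM_MBPS = 5  # a typical compressed HD stream, assumed

for name, pixels in RESOLUTIONS.items():
    scale = pixels / RESOLUTIONS["1080p HD"]
    print(f"{name}: {scale:.0f}x the pixels, ~{HD_STREAM_MBPS * scale:.0f} Mbps naively scaled")
```

Each generation quadruples the pixel count - 8K carries sixteen times the pixels of HD - so even generous codec gains leave the stream growing far faster than the last mile.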
Already, Australia’s network struggles under HD Netflix content. 4K is beyond the pale. And in America, Comcast will lease you a sufficiently capacious 2 Gbps fibre connection for a hefty $300 a month - so you could watch the broadcast, if you had any money left over to pay for content.
Various upstart networking vendors have been spruiking Software Defined Networks (SDNs) as the panacea for large data centres. Using SDNs, corporates can reconfigure their networks on-the-fly, adapting them to needs and desires at minimal cost. It’s a brilliant idea, one that is rapidly making the older generation of reliable-but-inflexible networking equipment obsolete. Within a few years, every data centre worth the name will be supported by a complex, powerful and fully configurable SDN.
Why stop there? Why presume that the inside of a data centre has some special quality that merits an SDN? It’s now clear that every user of the network, from minor to massive, could benefit from a network that adjusted its capacities on demand.
Watching the Olympics in 8K immersive video, consumers can pay a premium for the extra bandwidth that’ll keep things flowing smoothly. Working from home, an employee can pay for the bandwidth to run backups in the middle of the night. A gamer gets a low-latency virtual circuit to their MMORPG server. And so on.
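What might an application-facing version of that look like? Here’s a purely hypothetical sketch - the BandwidthReservation interface and its pricing model are invented for illustration, as no carrier exposes anything like this today:

```python
# Hypothetical sketch of an application-facing SDN reservation API.
# Every name, field, and rate here is invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BandwidthReservation:
    subscriber_id: str
    bandwidth_mbps: int           # guaranteed throughput for the window
    max_latency_ms: Optional[int] # None when latency doesn't matter
    duration_minutes: int

def price_quote(res: BandwidthReservation) -> float:
    """Toy pricing model: charge per Mbps-hour, with a premium for
    low-latency virtual circuits. All rates are assumed, not real."""
    rate = 0.01  # dollars per Mbps-hour, assumed
    hours = res.duration_minutes / 60
    low_latency = res.max_latency_ms is not None and res.max_latency_ms < 30
    premium = 1.5 if low_latency else 1.0
    return res.bandwidth_mbps * hours * rate * premium

# An Olympics viewer books a two-hour 8K stream...
olympics = BandwidthReservation("subscriber-42", 100, None, 120)
# ...while a gamer books a low-latency circuit for an evening session.
gamer = BandwidthReservation("subscriber-99", 20, 25, 180)
print(price_quote(olympics))
print(price_quote(gamer))
```

The point of the sketch is the shape of the transaction, not the numbers: the application states what it needs and for how long, and the network prices and provisions it on the spot.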
The future of the dumb pipe is its exact opposite, as both the network and the applications running on top of it communicate and collude to provide the best possible service quality, on demand and as required. It’s not that hard to understand; networks are a fixed resource, and making them smarter allows us to make the most of them.
There’ll be a penny-drop moment, sometime in the next few years, as the world's big, slow, legacy carriers realise that owning everything between the customer and the backbone means they can develop a lot of services those customers will eagerly consume. They’ll invest in the kit, transform their dumb pipes into flexible, smart networks, and open the floodgates to a new generation of applications relying on that intelligence.