Big Tech shrank the internet while growing its own power
Classic internet ideas matter less now that CDNs and private networks dominate traffic
Comment The internet has become smaller, the result of a rethinking of when and where to use the 'net's intended architecture. In the process it may also have further concentrated power in the hands of giant technology companies.
Given the ever-expanding content and resources available online, and proliferation of connected devices, the notion that the internet has shrunk is counter-intuitive. But shrunk it has – to the point at which some iPhones do not immediately connect to the open internet.
Those phones are iPhones running the latest version of Apple's iOS and the opt-in service called Private Relay. The iGiant bills Private Relay as a privacy enhancement because it obscures users' DNS lookups and IP addresses by funneling traffic over networks operated by Cloudflare, according to specs set by Apple.
That network is not the open, public internet as we know it, but effectively a private network connecting devices to content, which may be provided by Cloudflare systems. Apple and Cloudflare can run it as they please. Yes, VPNs do more or less the same thing. But VPNs don't have Apple nagging hundreds of millions of users to adopt them.
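The privacy property behind this design is worth spelling out: in a two-hop relay, the first hop sees who you are but not where you are going, while the second sees where you are going but not who you are. A purely illustrative Python sketch of that split (the function names and data are invented for illustration, not Apple's implementation):

```python
# Conceptual sketch of a two-hop relay like Private Relay.
# Neither hop alone can link a user's IP address to the site visited.

def ingress_hop(user_ip: str, encrypted_request: dict) -> dict:
    """First operator (the ingress): sees who, not where."""
    visible = {"sees_user_ip": user_ip, "sees_destination": None}
    # Forwards the still-encrypted destination on to the egress hop.
    return {"observed": visible, "forward": encrypted_request}

def egress_hop(forwarded: dict) -> dict:
    """Second operator (e.g. Cloudflare): sees where, not who."""
    destination = forwarded["forward"]["destination"]  # only readable here
    visible = {"sees_user_ip": None, "sees_destination": destination}
    return {"observed": visible}

request = {"destination": "example.com"}
hop1 = ingress_hop("203.0.113.7", request)
hop2 = egress_hop(hop1)

print(hop1["observed"])  # user IP visible, destination hidden
print(hop2["observed"])  # destination visible, user IP hidden
```

The point of the split is that no single operator – not even Apple – holds both halves of the picture.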
Mobile UK – the lobby group that represents the UK's mobile network operators – recently asserted that Private Relay means operators lose sight of traffic carried on their own networks, and therefore "are no longer the internet service provider to Private Relay customers."
The i in iPhone stands, among other things, for "internet" – now Apple doesn't want those devices on the open internet? That's a very different state of affairs to the vision of the classical, decentralized internet architecture that saw functions more widely distributed among those participating in the network of networks that made the World Wide Web, the iPhone, and so many other things possible.
Ye Olde Internette
When this classical internetworking ruled, your request to access an online resource would pass through numerous independent nodes and networks – some dedicated to data transit – all employing the same standards. That architecture was designed to make the internet resilient, and to allow many players to participate in the carriage of traffic and operation of the functions required to move it around.
That open architecture remains in place, but now carries traffic for much less physical distance than was once the case. Now, most of the content you watch, read, or listen to is stored in a relatively nearby server, or in a server connected via private links, for fast access; your request for stuff typically doesn't need to go very far nor have to elbow its way through the open, public internet the whole way.
Most traffic thus spends a little time on a classical network run by your internet service provider, then perhaps travels over a transit provider's classical network. Before long it reaches on-ramps to networks that might for the first time properly be called "information superhighways" – remember that phrase? – because they exist to funnel massive amounts of traffic to and from either datacenters or Big Tech. Your data is often on the open, public internet for a few thousand metres.
It may not even escape your internet service provider's network; video-streaming giants, for instance, have been known to place servers filled with content in ISPs' networks so they can be rapidly accessed by broadband subscribers.
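The logic of those in-ISP caches is simple: fetch from the distant origin once, then serve every later request from the local copy. A toy sketch of the pattern (class and data names are illustrative, not any vendor's code):

```python
class EdgeCache:
    """Toy content cache of the kind streaming giants place inside ISP networks."""
    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch   # slow, long-haul fetch
        self._store = {}                    # local copies, close to subscribers
        self.origin_hits = 0

    def get(self, key):
        if key not in self._store:          # cache miss: go the long way, once
            self.origin_hits += 1
            self._store[key] = self._origin_fetch(key)
        return self._store[key]             # later requests never leave the ISP

origin = lambda key: f"video-bytes-for-{key}"
cache = EdgeCache(origin)

cache.get("episode-1")    # first viewer: fetched from the distant origin
cache.get("episode-1")    # every later viewer: served from inside the ISP
print(cache.origin_hits)  # 1 — the origin was contacted only once
```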
The Systems Approach view
"People built CDNs to give a really optimized overlay to web content," explained Bruce Davie, Reg columnist, co-author of Computer Networks: A Systems Approach and the related series of books, a Cisco fellow, and a pioneer of MPLS and software-defined networking. "It is no longer a path from host to host. Now it is a path from browser to CDN, so we optimize for connection from CDN to humans."
Davie doesn't feel that change precludes innovation. In discussion with The Register, he cited blockchain-fueled ideas as a networked application that wasn't envisaged either by classical internetworking or Big Tech.
And he's not sure pushing for more standards is the answer. Across his career he's seen standards used by vendors to advance their own ideas. But he also feels that standards remain necessary, because they create an open environment in which new ideas can be tested by all, not just the big players.
This shift hasn't altered the fundamentals of what happens when you request access to a resource connected to the internet: the likes of TCP/IP and Border Gateway Protocol (BGP) remain essential. Standards probably run private networks too – although if their operators choose to use something else, that's their business, so long as the rest of us can eventually connect.
But what has changed is that the resources most people want to access online are either hosted by Big Tech, only accessible on networks controlled by Big Tech, or both.
The tyranny of distance
This is not necessarily a bad thing.
"It is really hard to do high speed over long distances," Geoff Huston, chief scientist at Asia Pacific Network Information Centre (APNIC, the Asia Pacific's regional address registry), told The Register. "Protocols work best when the connection is tight."
Which means it's really hard for a video to stream reliably if you need to bounce through a dozen nodes to reach a server on the other side of the world.
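Huston's observation follows directly from how TCP-style protocols behave: a sender can have at most one window of unacknowledged data in flight, so a single flow's throughput is capped at roughly the window size divided by the round-trip time. A back-of-envelope sketch (the numbers are illustrative):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on a single TCP-like flow: window / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024  # a classic 64 KiB TCP receive window

# Nearby CDN node versus a server on the other side of the world.
print(max_throughput_mbps(WINDOW, rtt_ms=10))   # ~52 Mbps — fine for HD video
print(max_throughput_mbps(WINDOW, rtt_ms=250))  # ~2 Mbps — struggles to stream
```

The same window that comfortably streams video from a server 10 ms away chokes when the round trip stretches to a quarter of a second – hence the value of moving content close to users.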
Those who profit from content figured this out in the early 2000s and moved their assets into datacenters served by substantial internet connections, then replicated them around the world. Wherever users sought out content, it would ideally be fetched from the nearest host. Those networks grew quickly. Big Tech built them fast, because only by controlling the network could it satisfy consumers and do things like reliably and swiftly insert dynamically generated ads.
Private networks made this possible. So today, if the resource you seek uses a content delivery network – as is best practice – or is hosted by a Big Tech player, your request will pass through your ISP and quickly be routed to a point of presence that connects to Big Tech's private networks. Those points of presence often employ caches of content, to reduce latency and improve user experience.
Again, that change is not a bad thing, for several reasons – one being quality of service, a desirable feature for a multi-party video conference. If one or more participants are out of sync, or buffering, the experience is intolerable for all.
Enabling the Zoom boom
One reason video conferencing has boomed is that the likes of Zoom quickly move traffic away from networks controlled by carriers or ISPs, and onto the private networks they control. Those private networks are a big reason videoconferences have become ubiquitous: shrinking the internet makes it possible to carry traffic with sufficient speed and reliability that the experience is mostly tolerable (Zoom etiquette and content being matters no network can fix).
All big tech players run such private networks, to move data to and from their server farms or clouds. The likes of Amazon Web Services, Cloudflare, and Akamai rent content delivery networks as a service to all comers. An overwhelming majority of internet traffic now spends at least some of its journey on these networks and the physical assets – submarine and terrestrial networks – that Big Tech has built or funded to ensure their information superhighways are, well … super.
But the enormous concentration of traffic these private networks carry worries some.
The Internet Society, for example, started to measure the concentration of internet resources in December 2021, to understand whether concentration reduces the resilience of the net by creating dependencies on a small number of actors.
"Networks are becoming more and more concerned about being connected with CDNs," said Carl Gahnberg, a senior policy advisor at the Internet Society, where he focuses on internet governance. And that concentration means less resilience, as illustrated by widespread outages that occur when a big CDN or cloud goes down.
When the internet as we know it "sees" the absence of a resource, it looks for an alternative path to that resource. But when a CDN goes down – as happened to Fastly in June 2021 and Akamai in July 2021 – the impact is widespread. There is no alternative route to content the CDNs supposedly tend.
And when private operators become the dominant supplier of essential services like DNS, entire nations' internet connections can become precarious. Sovereign capability therefore becomes important.
Another reason to worry about the shrinking internet is that what benefits Big Tech doesn't automatically benefit consumers.
Mobile UK's submission to the UK Competition and Markets Authority's study on the mobile ecosystem market asserts that while customers assume Private Relay "makes everything private and harder to intercept/hack," it also removes carriers' ability to improve their own networks.
"This may stop other helpful features from working and add latency and jitter to time-sensitive applications due to everything funneling through iCloud," the submission states. "It could thus harm new edge applications, just as edge applications are starting to take off."
That carriers lose the chance to add value with their networks matters, because carriers need to be profitable to afford the spectrum and equipment required to build and operate networks.
APNIC's Huston says the shrunken internet makes that hard. "The content folks are going great and the carriage folk are going broke," he told The Register.
Huston recounted how network providers tried to have content providers pay for carriage, but content providers pushed back by arguing that without content there would be no demand for carriage.
Content providers then found their own revenue streams: advertising and subscriptions. "As soon as that happened the consumer money shifted – because consumers wanted content and carriage was incidental," Huston explained.
ISPs may not mind
Not all carriers worry about the increasing dominance of Big Tech’s networks.
John Reisinger, co-founder and CTO of Australian ISP Aussie Broadband, which has over 300,000 customers, directs around eighty per cent of traffic to Big Tech's networks, and doesn’t mind doing so.
"For someone our size it is quite comfortable," he told The Register. He explained that his firm can afford to arrange links to the datacenters in which Big Tech maintains its points of presence. A smaller internet also means that Aussie Broadband needs to arrange less carriage from transit providers, and can bank the savings.
"It does help keep costs down and keeps things closer to our network," he said, adding that Aussie Broadband "has more control over things rather than going over other networks."
Yet Aussie Broadband has also invested significantly in onshore customer service – making that a differentiator even though it is more costly to operate.
Who needs open standards?
APNIC's Huston also argued that the shrunken internet reduces the adoption of open technologies, in two ways.
One is that there's no incentive for carriers to upgrade to IPv6. He has argued that Big Tech's private networks use so much network address translation (NAT) of IPv4 that global uptake of IPv6 has slowed – simply because the shortage of IPv4 addresses matters less. There's also no financial reason for carriers to change systems that work well.
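NAT's role in easing the address shortage can be sketched: one public IPv4 address fronts many private hosts, with the translator keeping a table that maps each outbound flow to a distinct public port. A simplified, illustrative model (real NATs track much more state than this):

```python
class Nat:
    """Toy port-address translation: many private hosts share one public IPv4 address."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._table = {}          # (private_ip, private_port) -> public_port
        self._next_port = 40000   # arbitrary starting port for the sketch

    def translate(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self._table:            # new flow: allocate a public port
            self._table[key] = self._next_port
            self._next_port += 1
        return (self.public_ip, self._table[key])

nat = Nat("198.51.100.1")
print(nat.translate("10.0.0.5", 5000))   # ('198.51.100.1', 40000)
print(nat.translate("10.0.0.6", 5000))   # ('198.51.100.1', 40001)
print(nat.translate("10.0.0.5", 5000))   # same flow, same mapping
```

With tens of thousands of ports per public address, one IPv4 address can front a large private network – which is why NAT at scale blunts the pressure to move to IPv6.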
Huston also believes that Big Tech companies have figured out they can implement changes that matter to their own networks or applications, without waiting for the rest of the world to catch up by developing a standard that delivers the same outcome.
He cites DNS as an example. After Edward Snowden's revelations, many in the tech community felt it could usefully be made less leaky, but re-working DNS would require years of collaboration and the final result could take years more to propagate.
However, Facebook's mobile apps handle DNS queries over their own private network. "The platform does not know about them, the ISP does not know it is going on, nobody can know and nothing changed in DNS," Huston lamented.
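What makes classic DNS "leaky" is that queries cross the wire in clear text: anyone on the path can read the name being looked up straight out of the UDP payload. A minimal sketch that builds a standard query packet per the RFC 1035 wire format, showing the hostname sitting there unencrypted:

```python
import struct

def build_dns_query(name: str, query_id: int = 0x1234) -> bytes:
    """Build a plain DNS query (RFC 1035 wire format) for an A record."""
    # Header: ID, flags (standard query, recursion desired), 1 question, 0 answers.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# The hostname is readable in the raw packet — no decryption needed.
print(b"example" in packet)  # True
```

Moving those queries onto an encrypted private channel, as Facebook's apps do, hides them from every on-path observer at once – without touching the DNS standard at all.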
APNIC and LACNIC – the regional address registry for Latin America and the Caribbean – raised similar concerns in a 2021 study that, among other things, considers the Quick UDP Internet Connections (QUIC) protocol that Google developed as an alternative to TCP, then baked into the Chrome browser in ways that make some traffic less observable. Google at least submitted QUIC to the standards process, so hasn't tied its client and private networks quite as tightly as Facebook has done.
Google's actions also suggest that, while Big Tech will develop technology that benefits its own operations, big players also recognize they can't re-shape the shrunken classical networks to adopt their preferred specs.
Too big to fight?
But a wider point is that Big Tech can afford both to innovate in applications and to operate private networks.
The rest of us live in Big Tech's world, and suffer at its whims.
Innovating when giant players set the rules is never easy – even for other giants. Facebook suffers too: Apple's tech offering iThing users the chance to opt out of some tracking kicked a $10 billion hole in The Social Network's revenue.
At the time of writing, worry about the shrunken internet mostly remains just that – worry. The numerous antitrust actions against Big Tech around the world mostly concern other matters enabled by Big Tech's private networks; it is advertising markets and privacy that legislators want to change – not network architectures.
A smaller internet therefore seems set to become the norm. The Register is almost compelled to make that observation, because you're reading this article with the help of … Cloudflare. We rely on the service to safeguard our site's stability, so that this article is always easy to find and read. ®