The internet remains resilient, and its underlying protocols and technologies dominate global networking – but its relevance may be challenged by the increasing amount of traffic carried on private networks run by Big Tech, or by rules imposed by governments.
So says a Study on the Internet's Technical Success Factors commissioned by APNIC and LACNIC – the regional internet address registries for the Asia–Pacific and Latin America and Caribbean regions respectively – and written by consultancy Analysys Mason.
Presented on Wednesday at the 2021 Internet Governance Forum (IGF), the study identifies four reasons the internet has succeeded:
- Scalability supporting the growth of the internet;
- Flexibility in network technologies;
- Adaptability to new applications;
- Resilience in the face of shocks and changes.
The study also argues that the early designers of the internet incorporated three critical guiding ideals: openness, simplicity, and decentralization. These ideals were applied across three design principles: layering, creating a network of networks, and the end-to-end principle that sees intelligence placed at the network edge rather than the core.
The end-to-end principle matters because it means applications can be installed on connected devices without requiring changes to the networks between them.
Much of the study fondly recalls how the abovementioned elements have delivered decades of useful innovation.
The document also identifies risks.
A section on technical challenges to the success of the internet points out that the architecture has weak points, and that technologies to harden them aren't being strongly adopted.
"While both DNSSEC and the BGP security extensions are important steps towards securing the internet infrastructure, significant efforts will still be needed before these protocols are widely deployed and used," the study warns.
The lack of a proper quality of service (QoS) standard is also called out, because its absence has created "concerns … that the best-effort model will not be sufficient to support the needs of emerging interdomain applications such as augmented/virtual reality or interactive gaming".
Imposing a QoS standard would threaten the network-of-networks principle, the study states, adding that any attempt to change internet protocols would likely be rejected – if only because the world has sunk so much effort into current networks.
But the study identifies some players that could decide to go their own way: "social media companies, video streaming companies, CDNs and cloud companies".
The document states that "a significant fraction of global IP traffic now consists of data that is moved between the datacentres and edge networks of large internet companies."
Those companies' needs, and growing networks, lead the analysts to suggest that "over time, we could see the internet transform into a more centralised system with a few global private networks carrying most of the content and services.
"In this scenario, what remains outside these private networks are primarily ISP networks that move traffic to and from end users, and the user experience would be shaped by how close a user sits to the private network of the relevant internet company."
The study also suggests Big Tech could research the protocols it needs, and in doing so draw resources away from work on open internet protocols. Any such work would need to interoperate with the wider internet, and would therefore preserve the network-of-networks principle – the document cites the TCP-alternative QUIC protocol as an example of a successful private technology push. But it also suggests "increased centralisation could blur the distinction between network and applications, as expressed in the layering principle."
Another risk is that when private networks break, many users suffer. Exhibit A: yesterday's AWS brownout, which hurt Netflix and Disney+, among others.
The study also identifies governance issues as an emerging risk – especially when nations seek to impose their own requirements on the internet.
"A development where governments gain more control over the development of the internet may involve a risk of a more fragmented system, without the common address space and global reachability we have today."