Big Tech's private networks and protocols threaten the 'net, say internet registries

APNIC and LACNIC worry about who will set the rules of future internetworking


The internet remains resilient, and its underlying protocols and technologies dominate global networking – but its relevance may be challenged by the increasing amount of traffic carried on private networks run by Big Tech, or by rules imposed by governments.

So says a Study on the Internet's Technical Success Factors commissioned by APNIC and LACNIC – the regional internet address registries for the Asia–Pacific and Latin America and Caribbean regions respectively – and written by consultancy Analysys Mason.

Presented on Wednesday at the 2021 Internet Governance Forum (IGF), the study identifies four reasons the internet has succeeded:

  1. Scalability supporting the growth of the internet;
  2. Flexibility in network technologies;
  3. Adaptability to new applications;
  4. Resilience in the face of shocks and changes.

The study also argues that the early designers of the internet incorporated three critical guiding ideals: openness, simplicity, and decentralization. These ideals were applied across three design principles: layering, creating a network of networks, and the end-to-end principle that sees intelligence placed at the network edge rather than the core.

The end-to-end principle matters because it means applications can be installed on connected devices without any need to change the networks between them.
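
To make that concrete, here's a minimal sketch of our own (not from the study): a toy "application" – an uppercasing echo service – implemented entirely at two endpoints with Python's standard sockets. Everything between the two hosts just forwards packets; the address and port are illustrative.

    import socket

    def echo_server(host="127.0.0.1", port=9999):
        # All application logic lives at the endpoints; routers between the
        # two hosts only forward packets and never need to know the protocol.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(data.upper())  # the "application": uppercase echo

    def echo_client(host="127.0.0.1", port=9999):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((host, port))
            cli.sendall(b"hello, internet")
            return cli.recv(1024)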

Much of the study fondly recalls how the abovementioned elements have delivered decades of useful innovation.

The document also identifies risks.

A section on technical challenges to the success of the internet notes that the architecture has weak spots, and that the technologies designed to harden them aren't being widely adopted.

"While both DNSSEC and the BGP security extensions are important steps towards securing the internet infrastructure, significant efforts will still be needed before these protocols are widely deployed and used. Significant efforts will still be needed," the study warns.

The lack of a proper quality of service (QoS) standard is also called out, because its absence has created "concerns … that the best-effort model will not be sufficient to support the needs of emerging interdomain applications such as augmented/virtual reality or interactive gaming".

Imposing a QoS standard would threaten the network-of-networks principle, the study states, adding that any attempt to change internet protocols would likely be rejected – if only because the world has sunk so much effort into current networks.
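
The interdomain part is the sticking point. An endpoint can already ask for better-than-best-effort treatment by setting the DSCP bits in the IP header, but every network along the path is free to ignore or rewrite the marking, so the request rarely survives a domain boundary. A minimal sketch of ours (Linux/Unix; the destination is a documentation address):

    import socket

    EF_DSCP = 46              # "Expedited Forwarding" per RFC 3246
    TOS = EF_DSCP << 2        # DSCP sits in the top six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)  # request priority
    # Any network along the path may ignore or rewrite the marking, so the
    # "priority" rarely survives beyond the first domain boundary.
    sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))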

But the study identifies some players that could decide to go their own way: "social media companies, video streaming companies, CDNs and cloud companies".

The document states that "a significant fraction of global IP traffic now consists of data that is moved between the datacentres and edge networks of large internet companies."

Those companies' needs, and growing networks, lead the analysts to suggest that "over time, we could see the internet transform into a more centralised system with a few global private networks carrying most of the content and services.

"In this scenario, what remains outside these private networks are primarily ISP networks that move traffic to and from end users, and the user experience would be shaped by how close a user sits to the private network of the relevant internet company.”

The study also suggests Big Tech could develop the protocols it needs in-house, and in doing so draw resources away from work on open internet protocols. Any such work would need to interoperate with the wider internet, and would therefore preserve the network-of-networks principle – the document cites the development of QUIC, an alternative to TCP, as an example of a successful private technology push. But it also suggests "increased centralisation could blur the distinction between network and applications, as expressed in the layering principle."
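
QUIC illustrates the deployment trick that makes such private pushes possible: build transport features in userspace on top of UDP, so only the endpoints need new code. The sketch below is ours and is emphatically not QUIC itself – just the pattern of transport bookkeeping (here, a packet number) moving out of the kernel and into the application, which is exactly the blurring of layers the study describes:

    import socket
    import struct

    def send_framed(sock, addr, pkt_num, payload):
        # Prefix an 8-byte big-endian packet number -- transport bookkeeping
        # implemented in the application process, not in the kernel's stack.
        sock.sendto(struct.pack("!Q", pkt_num) + payload, addr)

    def recv_framed(sock):
        data, _ = sock.recvfrom(65535)
        (pkt_num,) = struct.unpack("!Q", data[:8])
        return pkt_num, data[8:]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_framed(sock, ("127.0.0.1", 4433), 1, b"stream data")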

Another risk is that when private networks break, many users suffer. Exhibit A: yesterday's AWS brownout, which hurt Netflix and Disney+, among others.

The study also identifies governance issues as an emerging risk – especially when nations seek to impose their own requirements on the internet.

"A development where governments gain more control over the development of the internet may involve a risk of a more fragmented system, without the common address space and global reachability we have today."


Other stories you might like

  • Saved by the Bill: What if... Microsoft had killed Windows 95?

    Now this looks like a job for me, 'cos we need a little, controversy... 'Cos it feels so NT, without me

Veteran Microsoft vice president Brad Silverberg has paid tribute to former Microsoft boss Bill Gates for saving Windows 95 from the clutches of the Redmond Axe-swinger.

Silverberg posted his comment in a Twitter exchange, started by Fast co-founder Allison Barr Allen, about somebody who'd changed your life. He responded "Bill Gates" and, in response to a question from Ashanka Iddya, a senior cybersecurity professional and director at Microsoft, explained Gates' role in Windows 95's survival.

  • UK government opens consultation on medic-style register for Brit infosec pros

    Are you competent? Ethical? Welcome to UKCSC's new list

Frustrated at the lack of activity from the "standard setting" UK Cyber Security Council, the government wants to pass new laws making it the statutory regulator of the UK infosec trade.

    Government plans, quietly announced in a consultation document issued last week, include a formal register of infosec practitioners – meaning security specialists could be struck off or barred from working if they don't meet "competence and ethical requirements."

The proposed setup sounds very similar to the General Medical Council and its register of doctors allowed to practise medicine in the UK.

  • Microsoft's do-it-all IDE Visual Studio 2022 came out late last year. How good is it really?

    Top request from devs? A Linux version

    Review Visual Studio goes back a long way. Microsoft always had its own programming languages and tools, beginning with Microsoft Basic in 1975 and Microsoft C 1.0 in 1983.

The Visual Studio idea came from two main sources. In the early days, Windows applications were coded and compiled using MS-DOS, and there was an MS-DOS IDE called Programmer's Workbench (PWB, first released 1989). The company also came up with Visual Basic (VB, first released 1991), which unlike Microsoft C++ had a Windows IDE. Perhaps inspired by VB, Microsoft delivered Visual C++ 1.0 in 1993, replacing the little-used PWB. Visual Studio itself was introduced in 1997, though initially it was more of a bundle of different Windows development tools. The first Visual Studio to integrate C++ and Visual Basic (in .NET guise) development into the same IDE was Visual Studio .NET in 2002, 20 years ago, and this perhaps is the true ancestor of today's IDE.

A big change in VS 2022, released in November, is that it is the first version where the IDE itself runs as a 64-bit process. The advantage is that the devenv process – the shell of the IDE – has access to more than 4GB of memory, though of course it is still possible to compile 32-bit applications. The main benefit is for large solutions comprising hundreds of projects. Although a substantial change, it is transparent to developers and, from what we can tell, beneficial.

  • James Webb Space Telescope has arrived at its new home – an orbit almost a million miles from Earth

    Funnily enough, that's where we want to be right now, too

    The James Webb Space Telescope, the largest and most complex space observatory built by NASA, has reached its final destination: L2, the second Sun-Earth Lagrange point, an orbit located about a million miles away.

    Mission control sent instructions to fire the telescope's thrusters at 1400 EST (1900 UTC) on Monday. The small boost increased its speed by about 3.6 miles per hour to send it to L2, where it will orbit the Sun in line with Earth for the foreseeable future. It takes about 180 days to complete an L2 orbit, Amber Straughn, deputy project scientist for Webb Science Communications at NASA's Goddard Space Flight Center, said during a live briefing.

    "Webb, welcome home!" blurted NASA's Administrator Bill Nelson. "Congratulations to the team for all of their hard work ensuring Webb's safe arrival at L2 today. We're one step closer to uncovering the mysteries of the universe. And I can't wait to see Webb's first new views of the universe this summer."

  • LG promises to make home appliance software upgradeable to take on new tasks

Kids: empty the dishwasher! We can’t, Dad, it’s updating its OS to handle baked-on grime from winter curries

    As the right to repair movement gathers pace, Korea’s LG has decided to make sure that its whitegoods can be upgraded.

    The company today announced a scheme called “Evolving Appliances For You.”

The plan is sketchy: LG has outlined a scenario in which a customer who moves to a locale with a climate markedly different to that of their previous home could use LG’s ThinQ app to upgrade their clothes dryer with new software that makes the appliance better suited to prevailing conditions and to the kind of fabrics you’d wear in hotter or colder climes. The dryer could also get new hardware to handle its new location. An image distributed by LG shows off the ability to change the tune a dryer plays after it finishes a load.

  • IBM confirms new mainframe to arrive ‘late’ in first half of 2022

    Hybrid cloud is Big Blue's big bet, but big iron is predicted to bring a welcome revenue boost

    IBM has confirmed that a new model of its Z Series mainframes will arrive “late in the first half” of 2022 and emphasised the new device’s debut as a source of improved revenue for the company’s infrastructure business.

CFO James Kavanaugh put the release on the roadmap during Big Blue’s Q4 2021 earnings call on Monday. The CFO suggested the new release will make a positive impact on IBM’s revenue, which came in at $16.7bn for the quarter and $57.35bn for the year. The Q4 number was up 6.5 per cent year on year, while the annual number represented a $2.2bn jump.

    Kavanaugh mentioned the mainframe because revenue from the big iron was down four points in the quarter, a dip that Big Blue attributed to the fact that its last mainframe – the Z15 – emerged in 2019 and the sales cycle has naturally ebbed after eleven quarters of sales. But what a sales cycle it was: IBM says the Z15 has done better than its predecessor and seen shipments that can power more MIPS (Millions of Instructions Per Second) than in any previous program in the company’s history*.

