Approved: Master plan to end US gov control of internet's highest level

Big hurdle jumped, but flaws in IANA plan remain


A plan to end US government control of the top level of the internet was formally approved Wednesday when six different internet groups voted to send it on to the board of DNS overseer ICANN.

Gathering at an ICANN meeting in Marrakesh, after two years of work on the plan to transition control of the IANA contract to ICANN itself, members of the six groups, which represent governments, business, the companies and organizations that make up the internet's core infrastructure, and individual internet users, agreed in separate votes to approve the plan.

The plan will now move on to the board of ICANN, which will then forward it to the US Department of Commerce for evaluation.

After evaluation, the plan will be handed to Congress and ultimately to the President for approval. The goal is to formally hand over IANA when the current contract ends on 30 September.

Disagreement

Although approval by ICANN's supporting organizations and advisory committees was unanimous, there were a number of abstentions. And the critical issue of what role and influence the world's governments will have once the US government steps away from its oversight role remains unsettled.

The powerful Governmental Advisory Committee (GAC) was unable to reach consensus on a critical recommendation about what the ICANN board's obligations are when it comes to formal GAC "advice."

Under the existing recommendation [PDF] (#11), more countries would have to agree before governments' input counts as formal GAC advice, and fewer ICANN board members would be needed to reject that advice. In other words, the GAC's power would be reduced.

GAC advice would require "full consensus," meaning that no government actively objects, whereas currently a "very small" number of countries can object and the advice is still regarded as formal.

Once that advice is received, rejecting it would require a 60 per cent vote of the ICANN board, down from the previous two-thirds threshold. Pragmatically, the difference is one board member's vote out of 20.

This recommendation was designed to allay fears that governments would try to impose more control over ICANN, and hence IANA. It was opposed by 16 governments, including Argentina, Brazil, France, Portugal and Russia.



Biting the hand that feeds IT © 1998–2022