Nvidia promises annual updates across CPU, GPU, DPU lines

Arm one year, x86 the next, and always faster than a certain chip shop that still can't ship even one standalone GPU


Computex Nvidia's push deeper into enterprise computing will see its practice of introducing a new GPU architecture every two years brought to its CPUs and data processing units (DPUs, aka SmartNICs).

Speaking in the company's pre-recorded keynote, released to coincide with the Computex exhibition in Taiwan this week, senior vice president for hardware engineering Brian Kelleher spoke of the company's "reputation for unmatched execution on silicon." That's language best considered in the context of Intel, an Nvidia rival, once again delaying its planned entry into the discrete GPU market.

"We will extend our execution excellence and give each of our chip architectures a two-year rhythm," Kelleher added.

Products will be updated in the years between releases of new architectures.

"One year we will focus on x86, the next on Arm," Kelleher said, without explaining whether that means a single year could feature both a product refresh and an architecture update – which could be a lot to swallow. We've asked Nvidia to clarify what this policy means.

Whatever the details, Nvidia now reckons it has three architectures: one for GPUs, one for CPUs like the Grace-Hopper superchips announced today, and one for its BlueField range of DPUs.

In its keynote, Nvidia execs said the three are inseparable in its vision for enterprise computing – with CPUs managing systems, GPUs doing the heavy lifting, and DPUs providing networking and isolation services. Using all three types of processor in harness, execs said, is its secret sauce for the kind of scalability and speed Nvidia feels is needed to run AI, or seize opportunities like the $100 billion market for cloud-streamed games.

Speaking of games, the keynote also featured news of an Nvidia-commissioned study that found the company's RTX visual computing platform improves players' firing accuracy in shooting games. We mention this because the study found that the lowest quartile of players improved more than the best players did.

If this enterprise AI thing doesn't work out, your correspondent imagines marketing those RTX results to parents everywhere could see Nvidia grow regardless. ®


Other stories you might like

  • Nvidia wants to lure you to the Arm side with fresh server bait
    GPU giant promises big advancements with Arm-based Grace CPU, says the software is ready

    Interview 2023 is shaping up to become a big year for Arm-based server chips, and a significant part of this drive will come from Nvidia, which appears steadfast in its belief in the future of Arm, even if it can't own the company.

    Several system vendors are expected to push out servers next year that will use Nvidia's new Arm-based chips. These consist of the Grace Superchip, which combines two of Nvidia's Grace CPUs, and the Grace-Hopper Superchip, which brings together one Grace CPU with one Hopper GPU.

    The vendors lining up servers include American companies like Dell Technologies, HPE and Supermicro, as well Lenovo in Hong Kong, Inspur in China, plus ASUS, Foxconn, Gigabyte, and Wiwynn in Taiwan are also on board. The servers will target application areas where high performance is key: AI training and inference, high-performance computing, digital twins, and cloud gaming and graphics.

    Continue reading
  • Nvidia taps Intel’s Sapphire Rapids CPU for Hopper-powered DGX H100
    A win against AMD as a much bigger war over AI compute plays out

    Nvidia has chosen Intel's next-generation Xeon Scalable processor, known as Sapphire Rapids, to go inside its upcoming DGX H100 AI system to showcase its flagship H100 GPU.

    Jensen Huang, co-founder and CEO of Nvidia, confirmed the CPU choice during a fireside chat Tuesday at the BofA Securities 2022 Global Technology Conference. Nvidia positions the DGX family as the premier vehicle for its datacenter GPUs, pre-loading the machines with its software and optimizing them to provide the fastest AI performance as individual systems or in large supercomputer clusters.

    Huang's confirmation answers a question we and other observers have had about which next-generation x86 server CPU the new DGX system would use since it was announced in March.

    Continue reading
  • AMD touts big datacenter, AI ambitions in CPU-GPU roadmap
    Epyc future ahead, along with Instinct, Ryzen, Radeon and custom chip push

    After taking serious CPU market share from Intel over the last few years, AMD has revealed larger ambitions in AI, datacenters and other areas with an expanded roadmap of CPUs, GPUs and other kinds of chips for the near future.

    These ambitions were laid out at AMD's Financial Analyst Day 2022 event on Thursday, where it signaled intentions to become a tougher competitor for Intel, Nvidia and other chip companies with a renewed focus on building better and faster chips for servers and other devices, becoming a bigger player in AI, enabling applications with improved software, and making more custom silicon.  

    "These are where we think we can win in terms of differentiation," AMD CEO Lisa Su said in opening remarks at the event. "It's about compute technology leadership. It's about expanding datacenter leadership. It's about expanding our AI footprint. It's expanding our software capability. And then it's really bringing together a broader custom solutions effort because we think this is a growth area going forward."

    Continue reading
