You're stuck inside, gaming's getting you through, and you've $1,500 to burn. Check out Nvidia's latest GPUs

Kitchen table chat tries to sell you on the latest kit, AI devs might like it, too


Nvidia launched its GeForce RTX 30 series last night, its latest family of real-time ray-tracing graphics processors aimed primarily at PC gamers.

CEO Jensen Huang introduced three new GPUs (the RTX 3070, RTX 3080, and RTX 3090) in a video apparently shot from his kitchen. They’re all based on an 8nm Samsung-fabricated core that squeezes 28 billion transistors onto the die.

What's Jensen Huang cooking? Source: Nvidia webstream

The RTX 30's Ampere processor architecture handles three core functions: rendering graphics, processing machine-learning applications, and executing game-engine code. Oh, and it accelerates storage IO, too.

“Nvidia RTX fuses programmable shading, ray tracing and AI for developers to create entirely new worlds,” said Huang. You can watch the whole thing here:


The three GPUs vary in price, memory, and the maximum resolution at which they can render games:

  • GeForce RTX 3070: The cheapest option at $499, with 8GB of GDDR6 memory, capable of running games at 4K and 1440p resolutions. Nvidia said it is 60 per cent faster than the previous-generation RTX 2070. It’s expected to be available in October.
  • GeForce RTX 3080: From $699, this one is the mid-range GPU. It boasts 10GB of higher-speed GDDR6X memory accessible at 19Gbps. Games will look crisp at 60 frames per second at 4K resolution, we're told. The RTX 3080 is twice as fast as the RTX 2080, and will be ready to purchase from September 17.
  • GeForce RTX 3090: Nvidia’s top-of-the-range ray-tracing GPU, nicknamed BFGPU, meaning Big, er, Ferocious GPU. It is for hardcore gamers and streamers willing to spend $1,499 per card. The RTX 3090 comes with 24GB of GDDR6X memory, and is apparently 50 per cent better than the Titan RTX – once said to be the most advanced real-time ray-tracing GPU, and based on Nv's previous Turing architecture. It's hot in more ways than one, though Nvidia says its fan system is ten times quieter than the previous series' and keeps the card up to 30C cooler, too. It’s capable of rendering games at 60 FPS in 8K resolution.
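For those keeping score, that quoted 19Gbps figure is the per-pin data rate of the GDDR6X memory; the aggregate bandwidth the GPU actually sees depends on the width of the memory bus. Here's a quick back-of-the-envelope calculation, assuming the RTX 3080's 320-bit bus width – a figure from Nvidia's spec sheets rather than the launch stream:

```python
def memory_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Aggregate memory bandwidth in GB/s: per-pin rate (Gbps) times the
    number of bus pins, divided by 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

# RTX 3080: 19Gbps GDDR6X on an (assumed) 320-bit bus
print(memory_bandwidth_gb_s(19, 320))  # 760.0, ie 760GB/s
```

That 760GB/s figure lines up with why Nvidia can promise crisp 4K at 60 frames per second: pushing that many pixels is as much a memory problem as a compute one.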

The cards also come with an HDMI 2.1 port so gamers can connect their systems to 8K HDR TVs.

Huang said these beefy slabs of silicon delivered Nvidia’s “greatest generational leap ever,” compared to its previous Turing real-time ray-tracing chips.

Ampere powers Nvidia's next generation. Source: Nvidia

The GPUs run Nvidia's Deep Learning Super Sampling (DLSS) software, which uses a neural network to reconstruct high-resolution frames from ones rendered at a lower resolution, freeing up GPU time for expensive effects such as real-time ray tracing. Epic, creator of the popular (and controversial) shooter Fortnite, said it will use DLSS to render the game's graphics for players online. Other programmers can presumably tap into the chips' performance via Nvidia's CUDA programming framework. ®
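To make the render-low, output-high idea concrete: the snippet below is emphatically not DLSS – which infers missing detail with a trained neural network running on dedicated Tensor cores – but a minimal nearest-neighbour upscaler in plain Python, just to illustrate the principle of producing a big frame from a small one:

```python
def upscale_nearest(frame, factor):
    """Nearest-neighbour upscale of a 2D frame (a list of rows): each source
    pixel becomes a factor x factor block in the output. Illustrative only;
    DLSS infers plausible detail rather than copying pixels."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]  # stretch horizontally
        out.extend([wide] * factor)                       # stretch vertically
    return out

# "Render" a tiny 2x2 frame, then upscale it 2x to 4x4
low_res = [[1, 2],
           [3, 4]]
print(upscale_nearest(low_res, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The payoff is the same in both cases: the GPU shades a quarter of the pixels and the display still gets a full-resolution image – DLSS just fills in the gaps far more convincingly.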
