Nvidia touts another two spanking new GPUs to join its list of Ampere architecture-based goodies

Also, you can sign up to use its Omniverse and CloudXR SDK graphics platforms

GTC 2020 Nvidia will be rolling out two new GPUs based on its latest Ampere architecture, as well as a slew of graphics rendering software tools, the business said today at its virtual GPU Technology Conference (GTC).

The annual conference, typically held at Silicon Valley’s San Jose McEnery Convention Center, was cancelled as an in-person event earlier this year due to the coronavirus pandemic. CEO Jensen Huang introduced the US corp's latest and most powerful architecture from his kitchen back in May, but held back a couple of tidbits just in time for GTC online, which is an all-week affair.

Unlike the flagship A100 we saw launch in May, the two new chips – the A40 GPU and the RTX A6000 – are fabricated on Samsung's 8nm process rather than TSMC's 7nm process technology. Although they can be used in the same areas, such as machine learning and AI, both chips are less powerful than the A100.

Bottom line: if you need the heavy lifting for things like training large neural nets on tons of data then stick to the A100, but if you’re less concerned about size and speed, then the A40 and RTX A6000 will probably suffice.

The A40 GPU die measures 628.4mm², packs 28.3 billion transistors, and is paired with 48GB of GDDR6 memory offering a bandwidth of 696 GB/s. It contains 10,752 CUDA cores, 336 third-generation Ampere Tensor Cores to accelerate matrix math operations, and 84 second-generation RT Cores to speed up ray tracing. The chip also supports NVLink and PCIe interconnects at bidirectional bandwidths of 112.5 GB/s and 16 GB/s respectively. It consumes a maximum of 300W of power, and will go into server racks in data centers.

The RTX A6000 has almost exactly the same specs as the A40, save for a slightly higher memory bandwidth of 768 GB/s.
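For the curious, both bandwidth figures line up with a 384-bit GDDR6 interface. The sketch below is a back-of-the-envelope check, assuming a 384-bit bus and per-pin data rates of roughly 14.5 Gbps on the A40 and 16 Gbps on the RTX A6000 – figures Nvidia did not quote in its announcement.

```python
# Back-of-the-envelope GDDR6 bandwidth check (assumed figures, not from Nvidia's announcement).
# Peak bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.

def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

# Assuming a 384-bit memory interface on both boards:
print(gddr6_bandwidth_gbs(384, 14.5))  # ~696 GB/s, matching the quoted A40 figure
print(gddr6_bandwidth_gbs(384, 16.0))  # ~768 GB/s, matching the quoted RTX A6000 figure
```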

An Nvidia spokesperson explained to The Register why the A40 was slower: "The A40 is passively cooled, there is no fan to help keep the GPU and the rest of the components cooled while running, the server’s cooling system has to do that, so we need to run the A40 GPU at slightly lower speed. Both the GPU and memory are running a little bit slower on the A40."

The RTX A6000, however, is designed to go into individual workstations rather than being rented out over the cloud.

Nvidia said it would reveal prices for both GPUs at the end of the year – the A40 is due out in early 2021, and the RTX A6000 this December – and it did not disclose performance figures for either.

“We expect great AI and ML performance from the A40 and A6000,” Allen Bourgoyne, Nvidia’s senior product marketing manager, told reporters at a press briefing. “For the maximum AI & ML performance, the A100 will be the GPU for max performance.”

The other announcements are geared more towards those relying on graphics for things like gaming and virtual reality.

Omniverse – a platform that kind of looks like a glorified Photoshop, aimed at animators, game designers, and architects working on 3D visualisations – is now open for beta testers. You can sign up to try the program via Nvidia's website. Essentially, the platform allows artists to collaborate on a project and uses Nvidia’s ray-tracing GPUs to render crisp graphics in real time.

“Physical and virtual worlds will increasingly be fused,” Huang said. “Omniverse gives teams of creators spread around the world or just working from home the ability to collaborate on a single design as easily as editing a document. This is the beginning of the Star Trek Holodeck, realized at last.”

Last but not least, Nvidia’s CloudXR SDK – an application-only program that allows companies to stream graphics for augmented reality or virtual reality over the internet – is now available on Amazon Web Services.

Users will be able to access the platform’s tools that run on Nvidia’s previous-generation V100 and T4 GPUs. That means graphics can be displayed directly on smartphones or tablets over the cloud, instead of relying on nearby workstations or clunky VR tracking systems. ®
