Arm teases its GPU that will follow next year's graphics processor tech

Pushing for a nearly 5x performance boost, albeit over a 2018 cousin

Arm has teased an upcoming graphics processing unit, due to be unveiled next year, and said it is tuned heavily for running artificial intelligence code.

This unnamed GPU will provide a 4.7x FP32 performance improvement over its Mali-G76 cousin, said Ian Bratt, fellow and senior director of technology at Arm's machine learning group, during a speech at the chip business's DevSummit conference on Wednesday.

This mystery "2022 GPU" won't be announced until next year, it appears, and will likely ship even later. To put the performance improvement claim in context, the Mali-G76 was announced in 2018, and the latest in the series, the G710, was announced earlier this year and is expected to ship in silicon in 2022.

Arm's Ian Bratt teasing the unnamed 2022 GPU in a DevSummit talk

The G710 GPU, which is targeted at premium smartphones and Chromebooks, is said to provide a 35 per cent improvement in the performance of AI applications, such as automatic enhancements to images and videos, over the G78, which was announced in 2020 and is appearing this year in things like the Google Pixel 6. As such, you can see that Arm GPUs tend to ship the year after they are announced to the world, something to bear in mind for the "2022 GPU."

No information was shared on the performance boost the mystery GPU would provide to graphics rendering. Arm declined to provide further details about the upcoming GPU or the CPU cores it would be paired with. Typically, top-line Mali GPUs are linked with Arm's most powerful processor core designs, and Arm earlier this year announced such a CPU core, the Cortex-X2.

"It's more than just adding instructions and improving hardware IP, we also have to provide the software, the tools, the libraries to enable that ML performance," Bratt said of the jump in processing oomph.

Arm is in a bit of a race to come up with compelling system-on-chip cores that can speed up machine-learning tasks and other specialized jobs: its licensees can hire and are hiring the talent needed to create their own accelerators. Arm therefore has to make the pitch that it's easier, more cost effective, or more power efficient, say, to just use its off-the-shelf blueprints. And so these designs have to be up to scratch.

No doubt as part of that, Arm has introduced more machine-learning-friendly elements to its mainline CPU architecture; Armv9 supports Scalable Vector Extension version two (SVE2), for instance. In July, it announced it was making architectural improvements to speed up matrix operations in future chips.
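
For a rough sense of what that looks like to programmers, here is a minimal sketch, not Arm's code, of the sort of vector-length-agnostic FP32 multiply-accumulate loop that SVE/SVE2 are designed for, written with the standard Arm C Language Extensions intrinsics. It assumes an SVE-capable toolchain and target (for example, GCC or Clang with -march=armv8-a+sve), and the function and variable names are illustrative.

// Sketch only: a vector-length-agnostic FP32 dot product using SVE intrinsics.
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

float dot_fp32(const float *a, const float *b, size_t n) {
    svfloat32_t acc = svdup_f32(0.0f);                  // per-lane partial sums
    for (size_t i = 0; i < n; i += svcntw()) {          // step by the hardware's vector width
        svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n);  // predicate masks off the tail
        svfloat32_t va = svld1_f32(pg, a + i);           // masked loads
        svfloat32_t vb = svld1_f32(pg, b + i);
        acc = svmla_f32_m(pg, acc, va, vb);              // acc += va * vb on active lanes
    }
    return svaddv_f32(svptrue_b32(), acc);               // horizontal add across lanes
}

The same binary runs unmodified on hardware with different vector widths, which is part of the pitch for the vector-length-agnostic approach.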

The British chip designer is focused on accelerating neural networks, roughly modeled on human brains, to bring cognition-like features to processors while also accounting for power constraints, Bratt said.

"We actually don't have the luxury of millions of years of evolution, so we need to develop tools to kind of short circuit that evolution and enable quick exploration of neural network architectures," Bratt said.

"We need to increase the amount of compute, we need to focus on the right type of compute, and we need to create tools to explore new neural network architectures," he explained.

Arm is taking its GPU focus in a similar direction to that followed by Nvidia, which realized its powerful GPUs were pretty effective at handling AI operations. Nvidia, which is still hoping to acquire Arm, went on to bolster its floating-point and integer operations, which can be used by software to identify patterns in data sets and, ideally, quickly generate useful answers to questions. ®
