Nvidia says Google's TPU benchmark compared wrong kit

You're faster than the Kepler, but what about the newer and better Pascal?

It's not easy being Nvidia: the rise of AI has put a rocket under demand for GPUs, but the corollary to that is World+Dog publishing benchmarks to try and knock Nvidia off its perch.

The company is famously touchy about such things – witness last year's spat with Intel over benchmarks it didn't regard as fair.

Well, it's happened again, this time with Nvidia taking exception to Google's claims last week to have overtaken everybody else in the field with its Tensor Processing Units.

Had it been competitor Intel, El Reg suspects the return of serve would have been rather less gentle, but The Chocolate Factory is a heavy-hitting Nvidia customer. So before taking issue with Google's claims, Nvidia's blog post starts with a liberal spreading of honey.

After acknowledging Mountain View's “groundbreaking work in deep learning” and patting its own back, Nvidia finally gets to the point: Google's benchmark showed the TPU outperforming Nvidia's Kepler-generation K80 by thirteen times, “however, it doesn’t compare the TPU to the current generation Pascal-based P40”.

Nvidia's benchmark comparing the TPU with its GPUs

“The P40 balances computational precision and throughput, on-chip memory and memory bandwidth to achieve unprecedented performance for training, as well as inferencing,” the post claims.

While the TPU outruns the P40 for inferencing, its memory bandwidth is less than a tenth that of the P40, Nvidia says.

Nvidia isn't the only big name to present its own take on Google's performance claims. Last week, announcing it had taken the Chainer AI framework into its warm embrace, Intel claimed the project made Mountain View's TensorFlow look like it was working in treacle, and outlined plans to make Xeons better than GPUs at the data-crunching required to run AI code. ®
