Your 90-second guide to new stuff Nvidia teased today: Volta V100 chips, a GPU cloud, and more
Various bits and bobs to break Intel's heart this year
GTC Today at Nvidia’s GPU Technology Conference in San Jose, California, CEO Jensen Huang paraded a bunch of forthcoming gear – all aimed at expanding the graphics chip giant’s reach in AI.
Or in other words, stealing a march on Intel's machine learning efforts: the x86 goliath is desperately bent on stopping Nvidia and others from expelling it from the artificial intelligence processing space.
Huang announced the Tesla V100, a new Volta-architecture GPU that tries to marry machine learning with high-performance computing. It is equipped with 5,120 CUDA cores and 640 Tensor Cores, delivers 7.5 TFLOPS of 64-bit floating-point math and 15 TFLOPS at 32-bit, and stocks a 16MB cache and a 16GB HBM2 memory bank with 900GB/s of bandwidth. The V100 GPU is “at the limits of photolithography,” we're told, packing 21.1 billion transistors onto 815mm² of silicon. The chip is manufactured by TSMC using its 12nm FinFET process.
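Those peak numbers fall out of the core count and clock speed. Here's a rough sanity check — a sketch only, and note the boost clock of roughly 1.455GHz is our assumption, not a figure from Nvidia's announcement:

```python
# Back-of-the-envelope check of the quoted V100 throughput figures.
CUDA_CORES = 5120
BOOST_CLOCK_HZ = 1.455e9   # assumed boost clock, not stated in the article
FLOPS_PER_FMA = 2          # one fused multiply-add counts as two FP ops

fp32_tflops = CUDA_CORES * FLOPS_PER_FMA * BOOST_CLOCK_HZ / 1e12
fp64_tflops = fp32_tflops / 2   # FP64 units run at half the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # 14.9
print(f"FP64: {fp64_tflops:.1f} TFLOPS")   # 7.4
```

Which lands close enough to the advertised 15 and 7.5 TFLOPS for a marketing deck.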
The Tesla V100 is a jump up from Nvidia's Pascal-based P100 GPU, unveiled last year, in terms of both size and performance. That extra computational muscle is needed as the demand to build bigger and more complex neural networks rises. The chip isn't out yet: it's due to arrive later this year. You can find more analysis of the V100 and its Volta architecture over on our high-performance-computing sister website, The Next Platform.
Nvidia’s deep learning computer, the DGX-1, has been updated to support the new Tesla V100 GPUs, and each box will cost a whopping $149,000 with the latest silicon when it becomes available.
If that's a little much for your wallet, Nvidia is teasing a new GPU Cloud service that will enter public beta in the third quarter of this year. Part of this is a software stack that runs on PCs, workstations and servers, and assigns workloads to local GPUs, connected DGX-1 boxes, and processors hosted in Nvidia's forthcoming cloud, as needed. It supports the Caffe, Caffe2, CNTK, MXNet, TensorFlow, Theano and Torch frameworks, plus Nv's DIGITS, Deep Learning SDK, CUDA, and so on. The service is expected to rival GPU-in-the-cloud offerings from Amazon's AWS, Microsoft's Azure, and Google's compute cloud.
Meanwhile, the GPU giant is taking a greater interest in robotics with Isaac, a simulator platform, which is hoped to make it easier for developers to design and build robots using Nv's GPU technology.
Isaac is a virtual droid trained using reinforcement learning in an environment rendered by video game graphics technologies and integrated with OpenAI’s Universe. The idea is that several Isaacs can be trained at the same time, and the best agent can be chosen before it’s deployed for real-world testing.
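That train-many, keep-one approach can be sketched in a few lines. Everything below — the toy scoring function and the idea of a policy as a single number — is invented purely for illustration; Nvidia's actual agents learn via reinforcement learning inside a rendered simulator:

```python
import random

# Toy stand-in for "train a population of Isaacs, deploy only the best."
# The environment and reward here are made up for illustration.
HIDDEN_OPTIMUM = 0.7  # stands in for whatever behaviour the simulator rewards

def score(policy):
    """Reward a candidate policy for being close to the hidden optimum."""
    return -abs(policy - HIDDEN_OPTIMUM)

# "Train" several agents side by side (here: just sample candidate policies)...
rng = random.Random(42)
population = [rng.random() for _ in range(8)]

# ...then promote only the best-scoring agent to real-world testing.
best = max(population, key=score)
```

The point is the selection step at the end: cheap parallel simulation filters out duds before any hardware is risked.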
Nvidia also announced updates for its biggest project: self-driving cars. Toyota, Japan’s largest automotive manufacturer, is collaborating with Nvidia to build its driverless cars on Nv's Drive PX platform. Drive PX crunches data from sensors such as LIDAR, allowing cars to perceive their surroundings in real time and plan and execute preventive actions, such as stopping the driver from accelerating at a green light.
The strangest project announced was Holodeck, a virtual reality environment that is, it is claimed, photo realistic and represents users as floating robot torsos. It uses a physics engine, and creators can import their virtual products into Holodeck to explore design options. In a demonstration, people could see into the interior of a virtual car. It's like the 1990s never ended. ®
Updated to add
The V100 can hit 120 "Tensor" TFLOPS, according to Nvidia. We wondered what a Tensor TFLOPS is. A spokesperson explained a Tensor floating-point operation is a "mixed-precision FP16/FP32 multiply-accumulate" calculation. "Tensor Cores add new specialized mixed-precision FMA operations that are separate from the existing FP32 and FP16 math operations," we're told. There's more info here.
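That mixed-precision pattern — FP16 multiplies feeding an FP32 accumulator — can be mimicked in NumPy, purely to illustrate the arithmetic. This is a sketch, not how the hardware works: real Tensor Cores perform 4×4 matrix fused multiply-accumulates natively.

```python
import numpy as np

# D = A * B + C, Tensor Core style: operands stored in half precision,
# products accumulated in single precision.
a = np.random.rand(4, 4).astype(np.float16)   # FP16 input matrix
b = np.random.rand(4, 4).astype(np.float16)   # FP16 input matrix
c = np.random.rand(4, 4).astype(np.float32)   # FP32 accumulator input

# Widen the FP16 operands before multiplying so the accumulation
# happens in FP32, mirroring the mixed-precision FMA described above.
d = np.matmul(a.astype(np.float32), b.astype(np.float32)) + c

assert d.dtype == np.float32   # result stays in single precision
```

Storing weights and activations in FP16 halves memory traffic, while accumulating in FP32 keeps the sums from losing too much precision.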