Nvidia hits the gas on autonomous vehicle software
DRIVE stack promises safer roads and smarter cars – eventually
GTC Paris Nvidia has officially rolled out its autonomous vehicle (AV) software, despite telling a UK car mag that fully self-driving vehicles are not likely before the next decade.
The AI accelerator chipmaker declared at its GTC Paris event that the Nvidia DRIVE AV software platform is now in full production, claiming it offers the automotive industry a "robust foundation for AI-powered mobility" – when combined with the firm's own hardware, naturally.
Nvidia describes DRIVE as modular and flexible, allowing automakers to deploy the entire stack or just a subset of its driver-assistance capabilities. These currently include surround perception, automated lane changes, parking, and active safety, and are aimed at vehicles with level 2+ and level 3 autonomy.
That implies that Nvidia's platform isn't ready for fully autonomous self-driving vehicles just yet, which would be level 5, though the firm claims it offers a "seamless path" to higher levels of automation as technologies and regulations evolve.
Ali Kani, VP of Nvidia's automotive team, conceded as much earlier this year in an interview with Autocar. He told the magazine that truly autonomous cars will "not appear in this decade," as the tech is "super-hard."
Kani said this is because the current generation of driver assistance systems work by planning pre-defined actions, while truly autonomous cars will need to learn to behave more naturally, which is far more complicated.
This is perhaps news to Tesla chief Elon Musk, whose company has been testing self-driving Model Ys on the streets of Austin, Texas, since the end of last month – although some have expressed doubts about their safety.
According to Nvidia, most traffic accidents are linked to human factors such as distraction or misjudgment, meaning there is the potential to make roads safer if autonomous software developers can get it right.
While the existing approach has been modular, with separate components for perception, prediction, planning, and control, DRIVE unifies these functions, the AI firm says.
It uses deep learning and foundation models trained on large datasets of human driving behavior to process sensor data and directly control vehicle actions, eliminating the need for predefined rules. This means that vehicles are designed to benefit from vast amounts of real and synthetic driving behavior data to safely navigate complex environments with human-like decision-making – or so the company claims.
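The distinction between the two approaches can be illustrated with a toy sketch. Everything below is hypothetical and purely for illustration (none of these names come from Nvidia's software): the modular version chains hand-written perception, prediction, planning, and control stages with pre-defined rules, while the end-to-end version is a single learned function mapping sensor input straight to a control command.

```python
# Toy contrast: modular AV pipeline vs end-to-end policy.
# All names are hypothetical; this is not Nvidia's API.
from dataclasses import dataclass

@dataclass
class Control:
    steering: float  # radians, positive steers left
    throttle: float  # 0..1

# --- Modular approach: explicit perception -> prediction -> planning -> control
def perceive(sensor_frame):
    # e.g. detect an obstacle's lateral offset from the lane centre
    return {"obstacle_offset": sensor_frame["lidar_offset"]}

def predict(world_state):
    # pre-defined assumption: the obstacle holds position over the horizon
    return {"future_offset": world_state["obstacle_offset"]}

def plan(prediction):
    # pre-defined rule: steer away from the predicted obstacle
    return {"target_steering": -0.1 * prediction["future_offset"]}

def control(plan_out):
    return Control(steering=plan_out["target_steering"], throttle=0.5)

def modular_drive(sensor_frame):
    return control(plan(predict(perceive(sensor_frame))))

# --- End-to-end approach: one learned function, sensors in, controls out.
# The "model" here is a stand-in linear map; in practice it would be a
# foundation model trained on large datasets of human driving behavior.
def end_to_end_drive(sensor_frame, learned_weight=-0.1):
    return Control(steering=learned_weight * sensor_frame["lidar_offset"],
                   throttle=0.5)

frame = {"lidar_offset": 2.0}
print(modular_drive(frame))
print(end_to_end_drive(frame))
```

In this cartoon, both produce the same command, but the end-to-end version has no hand-written stages to maintain: the behavior lives entirely in the learned weights, which is the shift Nvidia says DRIVE makes.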
To complement its in-car software stack, the GPU giant touts three computing platforms: the automotive-grade Nvidia DRIVE AGX in-vehicle computer; Nvidia DGX systems and GPUs for training AI models and software development; and the Omniverse and Cosmos platforms running on Nvidia OVX systems (essentially servers fitted with L40S GPUs) for simulation and synthetic data generation.
Nvidia last year named several automakers that it said were looking to adopt its DRIVE platform. The list included China's BYD, the largest electric vehicle maker in the world, joining others such as Mercedes-Benz, Toyota, Volvo, and Volkswagen. ®