
Nvidia open to third parties making custom silicon tuned for CUDA applications

Who's going to be the first to bite? Hint: you may need to be the size of Facebook or Google

Software is a top priority for Nvidia, the chip designer made clear at this week's GPU Technology Conference, and it continues to shape the company's hardware development.

The Silicon Valley giant is open to the idea of non-Nvidia processors tuned for native execution of software built using its CUDA development toolkit, Nvidia CEO Jensen Huang told The Register during a press conference. CUDA is Nvidia's proprietary programming platform and interface for applications to harness the computing power of the company's GPUs. CUDA is helping Nvidia sell more of these accelerators into enterprises.
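For context, a CUDA application is typically C++ with kernel functions fanned out across thousands of GPU threads; any third-party silicon tuned for CUDA would need to execute this kind of code natively. A minimal sketch of the model (the kernel name and sizes here are illustrative, and it requires Nvidia's nvcc compiler and a CUDA-capable GPU to actually run):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements - the basic CUDA
// execution model a compatible accelerator would have to support.
__global__ void vector_add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *out;
    // Unified memory keeps the sketch short; production code often
    // manages separate host and device buffers with cudaMemcpy.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    vector_add<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax and the runtime calls are proprietary to Nvidia's toolchain, which is why a rival chip can't simply pick up CUDA binaries without the cooperation Huang describes below.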

The company has no plans to open-source its CUDA development environment, though if companies want to build or optimize their own chips for CUDA-built applications, the company is not necessarily against that effort, Huang told us.

"Underneath CUDA is Nvidia's hardware," Huang said. "There's really nothing to open source. If somebody would like to build an application for CUDA or to build another chip for CUDA, we're not not fundamentally against it, and nobody has ever asked."

The alternative would be for Nvidia to open-source its GPUs for others to use in their system-on-chips with CUDA-built applications running on top, which just isn't going to happen, Huang said. CUDA is often considered light-years ahead of similar frameworks for other architectures, and Nvidia isn't going to open up the software, nor the underlying hardware, to rivals.

To successfully produce a CUDA-compatible accelerator that can take full advantage of the framework, you will likely need Nvidia's input, and that's only going to happen if it makes commercial sense all round.

If a large player with lots of money to spend wants to develop custom silicon for the programming framework, that would grab Nvidia's interest, said Jim McGregor, principal analyst at Tirias Research.

"If it's a huge customer like Facebook, [Nvidia] will do whatever they need to," McGregor said. Top cloud providers like Amazon and Google are customizing chips for specific workloads, and Nvidia may lose out if it chooses not to collaborate in this area, plus CUDA's relevance could be diluted, he opined.

Google has its family of homegrown TPUs to accelerate machine-learning software, for instance, we note.

Nvidia is positioning itself as a software company around CUDA, though the platform is ultimately a means to sell more GPUs. The company sees itself as the software and hardware provider for the metaverse, a parallel 3D universe championed by Facebook (now Meta) as a borderless digital world in which avatars can work, play, and interact.

CUDA is central to Nvidia's metaverse hardware and software platform called Omniverse. Meanwhile, companies are using CUDA to bring their applications to virtual worlds.

Nvidia has 150 software development kits available for building tools and whatnot on CUDA; among the newest are ReOpt, for supply-chain optimization, and cuQuantum, for simulating quantum computing on GPUs. CUDA is also being used to write software for autonomous cars equipped with Nvidia hardware.

Nvidia is balancing on a tightrope of projecting itself as an "open" company, while also recruiting organizations into its closed hardware and software ecosystem.

"Our strategy is not to be a bespoke, not to be a proprietary computer, but be an open computer," Huang said during the press conference, "but be an open computer that allows the world to build software upon it. And whenever the software doesn't exist, we go and create it."

While Nvidia holds on tight to CUDA, its crown jewels, rival tools are trying to fill the gap. Nvidia's GPUs are compliant with OpenCL, a parallel programming framework backed by AMD and Intel. AMD offers a hardware-acceleration software suite and CUDA wannabe called ROCm, and Intel has its whole oneAPI offering.

OpenAI in July announced an AI-specific framework called Triton, which provides a Python-like programming environment in which researchers with no CUDA experience can write efficient code for execution on Nvidia GPUs.

A project called Vortex is looking to bring the execution of CUDA applications to GPUs within RISC-V devices.

Back in 2013, Nvidia said it would license its GPU IP to third parties. The company did not respond when we asked if it is still doing so. AMD, for its part, has licensed its GPU architecture to Samsung, which plans to use the technology in its mobile chips. ®
