Nvidia wants a piece of the custom silicon pie, reportedly forms unit to peddle IP

Don't want a GPU? How about some intellectual property or design help?

Nvidia is reportedly putting together a business unit to peddle its intellectual property and design services to the likes of AWS, Microsoft, and Meta.

The corporate shift comes in response to the growing number of cloud providers and hyperscalers building homegrown alternatives to Nvidia's GPUs for AI and other accelerated workloads, a Reuters report claims, citing multiple sources familiar with the matter.

Amazon Web Services was among the first to roll out custom silicon in its datacenters with its Graviton CPUs over five years ago, and has since expanded its lineup to include smartNICs and AI accelerators. Similarly, Google's tensor processing units (TPUs) — an alternative to GPUs for AI training and inference workloads — have been under development since 2015, but were only made available to the public in 2017.

However, it's only been more recently that Microsoft and Meta, two of the largest consumers of Nvidia GPUs for generative AI, have started rolling out custom silicon of their own. Last week we looked at Meta's latest inference chips, which it plans to deploy at scale across its datacenters to power deep learning recommender models. Microsoft, meanwhile, revealed its Maia 100 AI accelerators last fall designed for large language model training and inferencing.

While custom in the sense that they're built and optimized for a cloud provider's internal workloads, these chips often rely on intellectual property from the likes of Marvell or Broadcom. As we reported last fall, Google's TPUs make extensive use of Broadcom technologies for things like the high-speed serializer-deserializer, or SerDes, interfaces that allow the chips to talk to the outside world.

Nvidia, for its part, has developed and acquired a considerable amount of intellectual property related to everything from parallel processing to networking and interconnect fabrics.

According to reports, Nvidia execs see an opportunity to mimic Broadcom by parceling out these technologies, and the company has already approached Amazon, Meta, Microsoft, Google, and OpenAI about developing custom chips based on its designs. Nvidia has also approached telecom, automotive, and video game customers with similar offers, it's claimed.

Nvidia's courting of Google is particularly interesting, as last year a now-disputed rumor began to swirl that the search giant was planning to cut ties with Broadcom.

We've asked Nvidia for comment regarding its plans to license the intellectual property; we'll let you know if we hear anything back.

While more cloud providers are pursuing custom silicon, it doesn't appear any of them are ready to replace Nvidia, AMD, or Intel hardware anytime soon.

Despite ongoing efforts to make their custom silicon available to the public — Google, for instance, announced a performance-tuned version of its TPUv5 AI accelerator in December that can be rented in clusters of up to 8,960 — GPUs remain king when it comes to generative AI.

Meta may have started rolling out its custom inference chip, but it won't replace GPUs for every workload. In fact, Meta plans to deploy 350,000 H100s and claims it will have the equivalent of 600,000 H100s' worth of compute by the end of the year. These chips will reportedly power CEO Mark Zuckerberg's latest fascination: artificial general intelligence.

Meta isn't the only corporation hedging its custom silicon bets with large GPU deployments. Microsoft continues to deploy massive quantities of Nvidia H100s and recently revealed plans to employ AMD's newly launched MI300X at scale to power its generative AI-backed services.

Meanwhile, AWS announced a large deployment of 16,384 Nvidia Grace Hopper Superchips alongside its fourth-gen Graviton CPUs and second-gen Trainium AI accelerators last fall. ®
