FYI: Those fancy 'Google-designed' TPU AI chips had an awful lot of Broadcom help

And Meta's tapping up Big B too – it's big bucks for this silicon giant

Comment A now-challenged report that Google wants to end its reliance on Broadcom has drawn attention to the role the San Jose-based electronics giant plays in the production of custom silicon for hyperscale clouds.

On Thursday, The Information reported, behind its paywall, that a "person with direct knowledge" of these things said Google executives wanted to ditch Broadcom as a supplier of the search titan's AI-accelerating TPU chips as early as 2027, and that such a move could save Google billions annually.

TPUs being tensor processing units; those are the custom homegrown processors that Google boasted it designed and deployed itself to train and run at scale the machine-learning models that drive Google Search, Gmail, Google Translate, Google Photos, YouTube, various cloud APIs, games-playing AlphaZero, and much more of its sprawling empire. Google even said it used AI to tweak and improve the performance of its AI-accelerating TPU designs.

It turns out Broadcom made those chips a reality for Google, by providing key technologies – such as the high-speed serializer-deserializer, or SerDes, interfaces that allow the chips to talk to the outside world – as well as by helping turn Google engineers' TPU specifications and blueprints into a form that the likes of TSMC can use to fabricate the actual processors. Until recently, Broadcom, which is in the process of buying VMware, had not been publicly credited for that custom chip work.

Broadcom is known in the industry for driving a hard bargain when it comes to licensing and pricing. According to the above report, Google and Broadcom clashed over their TPU supply arrangements and the prices involved, which led the Chrome giant to actively court Marvell as an alternative supplier.

Google has played down the claims, saying it sees no change in its relationship with Broadcom, describing the biz as "excellent." In a statement to The Register last night, a Google spokesperson said: "We are productively engaged with Broadcom and multiple other suppliers for the long term. Our work to meet our internal and external cloud needs benefit from our collaboration with Broadcom; they have been an excellent partner, and we see no change in our engagement."

While Broadcom – supplier of the system-on-chip processors in Raspberry Pis, among many other sorts of devices – may not be the first chip house that comes to mind when you think of AI, the corporation said in its latest earnings call that it was working with several hyperscale customers in that area.

That effort, which involved supplying networking technologies to scale up and out AI clusters as well as compute engines, accounted for more than $1 billion of revenues in Q3 alone and "represented virtually all the growth in our infrastructure business," CEO Hock Tan said on that call with analysts last month.

"Generative AI investments are driving the continued strength in hyperscale spending for us. As you know, we supply a major hyperscale customer with custom AI compute engines. We are also supplying several hyperscalers a portfolio of networking technologies as they scale up and scale out their AI clusters within their datacenters."

That "major" customer appears to be Google, and the compute is the TPU family.

A spokesperson for Broadcom was not available for further comment. We do note that in March the biz revealed in a blog post that it was a large Google Cloud customer as well as a supplier of its chips, for mobile and on-prem, without going into too much detail. That might explain why Google wishes to remain on good terms with Broadcom. Moving to Google Cloud helped Broadcom eliminate nearly 200 software test labs and slash costs, we're also told.

"Broadcom and Google enjoy a unique relationship," Vijay Nagarajan, veep of Broadcom's wireless connectivity division, wrote.

"Broadcom supplies wireless chips for Google phones as well as chips for its data center and cloud services. At the same time, Broadcom is one of Google Cloud’s biggest customers for its cloud products. This bidirectional relationship has also forged a special bond."

With that outpouring of love for Google, and an indirect hat tip to TPUs in the post, we suppose the clues were all there. As smart as Googlers are at drawing up custom chip logic, the cloud giant would obviously need help assembling those plans into a state that could be manufactured on advanced process nodes.

Introduced in 2015, Google's TPUs were initially developed to power its internal machine-learning models. The web goliath made the accelerators available for rental to the general public on Google Cloud in 2018. Now in their fifth generation, Google's TPUs are used extensively for both AI training and inference, including for large language models and generative AI.

Broadcom's role in the development of Google's TPU AI accelerators isn't well known publicly, a fact highlighted by the industry-watching folks at SemiAnalysis in a report late last month.

"Often overlooked is that Broadcom is the second largest AI chip company in the world in terms of revenue behind Nvidia, with multiple billions of dollars of accelerator sales," they wrote. SemiAnalysis noted this is primarily the result of a ramp in TPU deployments by Google in response to Microsoft's partnership with OpenAI.

According to the analyst crew, Broadcom is also working with Meta on custom AI chips, but The Social Network "doesn't deploy too many of these… yet." That status suggests there's ample room for Broadcom to grow its revenues further, as Facebook supremo Mark Zuckerberg looks to AI to revitalize his advertising biz. ®
