Musk ropes Dell, Supermicro into xAI supercomputer project
Oh, Mike, what have you got yourself into now
Dell Technologies and Supermicro have been confirmed as the computer makers building an Nvidia-powered AI supercomputer for Elon Musk's xAI startup.
"We're building a Dell AI factory with Nvidia to power Grok for xAI," Michael Dell confirmed in an X post, with an image depicting rows of plastic-draped server racks on pallets. Grok being xAI's attempt at a generative chatbot.
As it turns out, Dell won't be building the entire system. "To be precise, Dell is assembling half of the racks that are going into the supercomputer that xAI is building," Musk responded.
The other half of the cluster will be built by Supermicro, another leading producer of GPU-accelerated servers used in AI training and inference, Musk clarified in a separate post referencing the manufacturer's ticker symbol, SMCI.
Earlier this month, the Tesla tycoon announced xAI was only months away from bringing 100,000 liquid-cooled Nvidia H100 GPUs online, which will, apparently, be located in North Dakota.
If we had to guess, these are the systems that Dell and Supermicro are likely supplying. However, it's also clear that Musk isn't all that happy with the H100's efficiency, especially after witnessing Nvidia's Blackwell-generation GPUs back at GTC.
"Given the pace of technology improvement, it's not worth sinking one gigawatt of power into H100s," Musk said earlier this month.
Instead, the big-brain billionaire argued the "next step" would probably be a cluster of around 300,000 Nvidia B200s, the equivalent of 37,500 eight-GPU nodes, connected by Nvidia's 800Gb/s ConnectX-8 NICs, which could be deployed as early as next summer.
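For those keeping score at home, the node count is simple back-of-envelope math. The only assumption here, beyond Musk's 300,000-GPU figure, is the usual eight accelerators per server; this is a rough sketch, not a statement about xAI's actual configuration:

```python
# Rough sketch of the node math behind the proposed B200 cluster.
# Assumption: eight GPUs per node, the typical HGX-style configuration.
total_gpus = 300_000
gpus_per_node = 8

nodes = total_gpus // gpus_per_node
print(f"{total_gpus:,} B200s at {gpus_per_node} per node = {nodes:,} nodes")
# -> 300,000 B200s at 8 per node = 37,500 nodes
```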
Such a cluster won't be cheap, but it probably doesn't hurt that xAI raised $6 billion in Series B funding late last month, which should cover the cost of a decent chunk of those accelerators.
- X boss Elon Musk tries to make nice with firms at ad biz conference
- Tesla shareholders agree to pay Musk staggering sum of $48B
- Tesla's Autopilot false advertising tussle with California DMV must go to trial
- Elon Musk ends OpenAI lawsuit without explaining why
xAI's acquisition of Nvidia GPUs has been a controversial subject in recent weeks.
Earlier this month, Musk confirmed he'd diverted an order of 12,000 H100 GPUs, valued at roughly $500 million, from his publicly listed Tesla to his privately owned xAI. The decision, it seems, came down to logistics, with Musk arguing that Tesla couldn't have used the GPUs even if it wanted to because it didn't have anywhere to deploy them.
But while Elon appears to be going all in on Nvidia to accelerate development of xAI's Grok chatbot, the situation at Tesla is less clear. The car maker has been deploying large numbers of Nvidia GPUs to develop its "Full Self-Driving" software, but it's also developing its own custom supercomputer, dubbed Dojo, which we've looked at in depth previously.
Last year, Tesla said it would spend upwards of $1 billion on its Dojo supercomputers. But it seems Musk's enthusiasm for Dojo has waned over the past year.
When asked about the system during Tesla's January earnings call, Musk waffled on the subject, telling investors to "think of Dojo as a long shot," and that he was "hedging his bets" with large orders for Nvidia GPUs. ®