Nvidia bundles enterprise AI software with incoming H100 systems

GPU maker wants more of your money with next wave of servers

Nvidia plans to bundle a five-year license of its commercial AI Enterprise software with every PCIe-based H100 GPU coming to servers soon.

The GPU giant disclosed the plan on Monday at the Supercomputing 2022 conference ahead of the launch of its next-generation H100 datacenter GPU, which the company has promised will provide a substantial performance boost over the A100.

The H100 will be available in dozens of new servers from a litany of vendors including Dell, HPE, and Supermicro over the next couple of months. Other vendors planning to sling H100-based servers include Asus, Atos, Gigabyte, Lenovo, Penguin Computing, QCT, and Ingrasys.

Availability is expected in the first quarter of 2023 for some.

While the economy is hurting, Nvidia and its server partners are hoping plenty of businesses and organizations have a continued appetite for running AI applications and other software that benefits from an accelerator like a GPU. We'll get a good idea of how Nvidia is faring when it reports its latest earnings next week.

Among the H100-powered servers coming soon are a new stable of Dell PowerEdge systems, including the XE9680, which uses Nvidia's HGX board to pack eight H100 or A100 GPUs along with two 4th-gen Intel Xeon Scalable processors in an air-cooled design. Supermicro said it's releasing its "most advanced GPU server yet" with its new 8U Universal GPU server, which is equipped with eight H100s. HPE's Cray supercomputer unit will also support the H100 with the XD6500.

Nvidia wants more of your money with software plans

The Nvidia AI Enterprise bundling move is part of the chip designer's ambition to build a multibillion-dollar software business that complements its silicon revenue.

The initiative will, of course, require many businesses to find value in its commercial software. With the company seemingly giving away thousands of dollars' worth of licenses, the hope seems to be that users will become hooked and want to pay.

If you need a refresher, Nvidia AI Enterprise is a bundle of AI tools – which include frameworks like PyTorch and TensorFlow as well as things like Nvidia Inference Server – that is optimized to run in containers or virtual machines, either on VMware's vSphere or Red Hat's OpenShift platforms.

A five-year subscription license for Nvidia AI Enterprise costs $8,000 per CPU socket, according to the product's licensing guide. If we are to take Nvidia at its word that it's bundling a five-year license with every H100 PCIe GPU, this means a dual-socket server with two H100 PCIe cards would fully cover the costs of Nvidia AI Enterprise for that server. But a server with more CPU sockets than H100 PCIe cards would require a business to cough up more money to satisfy licensing requirements.
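The arithmetic above can be sketched in a few lines of Python. This is purely illustrative, assuming (as the article does) that each bundled H100 PCIe card covers one CPU socket's worth of licensing and that uncovered sockets cost $8,000 apiece for five years; the function name is hypothetical.

```python
PRICE_PER_SOCKET = 8_000  # five-year subscription, per the licensing guide

def extra_license_cost(cpu_sockets: int, h100_pcie_cards: int) -> int:
    """Additional Nvidia AI Enterprise spend for CPU sockets not covered
    by licenses bundled with H100 PCIe cards (illustrative assumption)."""
    uncovered = max(0, cpu_sockets - h100_pcie_cards)
    return uncovered * PRICE_PER_SOCKET

# A dual-socket server with two H100 PCIe cards is fully covered:
print(extra_license_cost(2, 2))  # 0
# A four-socket server with only two cards owes for two more sockets:
print(extra_license_cost(4, 2))  # 16000
```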

A subscription license for Nvidia AI Enterprise comes with standard support services, but if a customer wants faster response times and more attention, they will have to pay more. Critical support services for Nvidia AI Enterprise start at $450 for a one-year subscription and rise to $2,250 for five years.

For those not wanting to go the subscription route, Nvidia offers a perpetual license with options for one, three, and five-year support plans, ranging from $4,494 to $8,090 per CPU socket.

Nvidia is likely bundling Nvidia AI Enterprise with the PCIe version of the H100 because that form factor fits in many mainstream servers used by businesses and other organizations. The H100's other form factor is the SXM, which fits in Nvidia's DGX systems and its HGX motherboards.

"By including that with our H100 platform, it's basically offered as a value-add, and we want to encourage customers to deploy and use this in production. And so this will allow us to give them the certainty that we'll be standing behind them," said Dion Harris, head of datacenter product marketing at Nvidia. ®
