Nvidia pushes crowd-pleasing container support into AI Enterprise suite
As long as you're running on VMware
Nvidia has rolled out the latest version of its AI Enterprise suite for GPU-accelerated workloads, adding integration with VMware's vSphere with Tanzu so that organisations can run workloads in both containers and virtual machines.
Available now, Nvidia AI Enterprise 1.1 is an updated release of the suite that GPUzilla delivered last year in collaboration with VMware. It is essentially a collection of enterprise-grade AI tools and frameworks certified and supported by Nvidia to help organisations develop and operate a range of AI applications.
That's so long as those organisations are running VMware, of course. A great many enterprises still use it to manage virtual machines across their environments, though plenty do not.
However, as noted by Gary Chen, research director for Software Defined Compute at IDC, deploying AI workloads is a complex task requiring orchestration across many layers of infrastructure. Anything that can ease that task is likely to appeal to resource-constrained IT departments.
"Turnkey, full-stack AI solutions can greatly simplify deployment and make AI more accessible within the enterprise," Chen said.
The headline feature in the new release is production support for running on VMware vSphere with Tanzu, which Nvidia claims was one of the most requested capabilities among users. With this, developers are able to run AI workloads in both containers and virtual machines within their vSphere environments. As VMware pros will be aware, vSphere with Tanzu is effectively the next generation of vSphere, with native support for Kubernetes and containers across vSphere clusters.
Nvidia is also planning to add the same capability to its Nvidia LaunchPad programme, which provides enterprise customers with access to an environment where they can test and prototype AI workloads at no charge. The environments are hosted at nine Equinix data centre locations around the world and showcase how to develop and manage common AI workloads using Nvidia AI Enterprise.
This latest release is also validated for operations with Domino Data Lab's Enterprise MLOps Platform, which is designed to simplify the automation and management of data science and AI workloads in an enterprise environment.
The combination of the two should make it easier for data science teams to deploy projects such as training an image recognition model, performing textual analysis with Nvidia RAPIDS, or deploying an intelligent chatbot with Triton Inference Server, according to Domino Data Lab.
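On that last point, Triton Inference Server exposes an HTTP endpoint speaking the KServe v2 inference protocol, so a deployed chatbot or image model is queried with a JSON request body. As a rough sketch only — the tensor name `INPUT__0`, the toy data, and the endpoint details here are hypothetical and depend on the actual model deployed — such a request body can be assembled like this:

```python
import json

def build_infer_request(input_name, data, shape, datatype="FP32"):
    """Build a KServe v2 inference request body, the JSON format that
    Triton's HTTP endpoint (POST /v2/models/<model>/infer) accepts."""
    return {
        "inputs": [{
            "name": input_name,    # must match the model's input tensor name
            "shape": shape,        # e.g. [batch, features]
            "datatype": datatype,  # Triton dtype string, e.g. FP32, INT64
            "data": data,          # flattened row-major values
        }]
    }

# Hypothetical input tensor name and toy values, purely for illustration
payload = build_infer_request("INPUT__0", [0.1, 0.2, 0.3, 0.4], [1, 4])
body = json.dumps(payload)
# `body` would then be POSTed to http://<triton-host>:8000/v2/models/<model>/infer
```

The heavy lifting — model loading, batching, GPU scheduling — happens server-side; the client only needs to get this payload shape right.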
For organisations considering use of the AI Enterprise suite, Nvidia has also added the first certified systems from Cisco and Hitachi Vantara to the list of supported hardware. These join certified systems from the usual suspects, including Dell, HPE, Lenovo and Supermicro.
The Cisco UCS C240 M6 rack server with A100 Tensor Core GPUs is a twin-socket 2U server, while Hitachi Vantara's entry is the Advanced Server DS220 G2, also with A100 Tensor Core GPUs.
Nvidia AI Enterprise comprises various AI and data science tools, including TensorFlow, PyTorch, Nvidia's RAPIDS and TensorRT software libraries, and its Triton Inference Server.
Meanwhile, Nvidia's CFO recently told virtual attendees of the Annual Needham Growth Conference that the company is still in the early stages of penetrating the server market with its GPUs for accelerating AI and other applications, and said there was ample opportunity for growth in future. ®