Nvidia gets down with low code in AI Enterprise update

GPU giant promises to make ML accessible to even the most modest biz


Nvidia aims to take the pain out of machine-learning development this week with the latest release of its AI Enterprise suite, which includes a low-code toolkit for machine-learning workloads.

The update also extends support for Red Hat OpenShift, Domino Data Lab’s ML operations platform, and Azure’s NVads A10 v5 series virtual machines.

Introduced last summer, Nvidia bills AI Enterprise as a one-stop shop for developing and deploying enterprise workloads on its GPUs, whether those GPUs live on-prem or in the cloud.

The suite is a collection of tools and frameworks developed or certified by Nvidia to make building AI/ML applications more accessible to enterprises of all sizes. Over the past year, the chipmaker has rolled out support for a variety of popular frameworks and compute platforms, like VMware’s vSphere.

The latest release — version 2.1 — introduces low-code support in the form of Nvidia’s TAO Toolkit.

Low code refers to abstracting away the complexity of manually coding an application, in this case speech and computer-vision workloads, so that little to no code is written in the process. Nvidia's TAO Toolkit, for example, features REST API support, weights import, TensorBoard integration, and several pre-trained models, all designed to simplify the process of assembling an application.
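In practice, a low-code workflow mostly means describing a training job rather than programming it. The sketch below illustrates that idea in Python; the endpoint, field names, and helper function are invented for this example and are not the actual TAO Toolkit API.

```python
import json

# Hypothetical sketch of driving a low-code fine-tuning job over REST.
# The service URL, endpoint path, and JSON schema below are illustrative
# placeholders, not the real TAO Toolkit API.

TAO_API = "http://tao-service.example.com/api/v1"  # placeholder host

def build_training_request(model: str, pretrained_weights: str,
                           epochs: int) -> dict:
    """Assemble the JSON body for a fine-tuning job: pick a pre-trained
    model, point at imported weights, and set a few hyperparameters,
    with no model code written by hand."""
    return {
        "network_arch": model,
        "pretrained_weights": pretrained_weights,
        "train_config": {"epochs": epochs, "tensorboard": True},
    }

body = build_training_request("detectnet_v2", "weights/resnet18.hdf5", 80)
print(json.dumps(body))
# A client would then POST this body to the service, e.g. with
# requests.post(f"{TAO_API}/experiments", json=body)
```

The point of the pattern is that the entire "program" is a declarative payload; the toolkit supplies the architecture, training loop, and TensorBoard hooks.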

Alongside low-code functionality, the release also includes the latest version of Nvidia RAPIDS (22.04) — a suite of open source software libraries and APIs targeted at data-science applications running on GPUs.
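RAPIDS' flagship DataFrame library, cuDF, deliberately mirrors the pandas API, so moving a typical data-prep step onto the GPU is largely a change of import. The sketch below shows the pattern; it assumes the cudf package and an Nvidia GPU, and falls back to pandas on machines without them.

```python
# cuDF (RAPIDS' GPU DataFrame library) mirrors the pandas API, so the
# same aggregation code runs on GPU or CPU depending on the import.
try:
    import cudf as xdf    # GPU path, requires RAPIDS and an Nvidia GPU
except ImportError:
    import pandas as xdf  # CPU fallback for machines without RAPIDS

df = xdf.DataFrame({
    "merchant": ["a", "b", "a", "c", "b", "a"],
    "amount":   [10.0, 250.0, 35.0, 12.5, 99.0, 7.25],
})

# Group transactions by merchant and total the amounts, the kind of
# columnar aggregation RAPIDS accelerates on GPU.
totals = df.groupby("merchant").amount.sum().sort_index()
print(totals.to_dict())  # {'a': 52.25, 'b': 349.0, 'c': 12.5}
```

Keeping the API pandas-shaped is the design choice that makes RAPIDS approachable: existing data-science code ports with minimal rewriting.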

The 2.1 release also sees the chipmaker certify these tools and workloads for use with a variety of software and cloud platforms.

For those migrating to containerized and cloud-native frameworks, the update adds official support for running Nvidia workloads on Red Hat’s popular OpenShift Kubernetes platform in the public cloud.
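Once OpenShift's Kubernetes scheduler knows about the GPUs, typically via Nvidia's GPU Operator, workloads claim them through the standard nvidia.com/gpu resource. A minimal sketch of such a pod follows; the image tag is illustrative.

```yaml
# Minimal sketch of a GPU workload on OpenShift/Kubernetes, assuming
# the Nvidia GPU Operator is installed. The image tag is an example.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:11.6.2-base-ubi8  # NGC base image (example tag)
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1  # standard device-plugin resource name
```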

Red Hat’s container platform is the latest application environment to be certified, and follows VMware’s vSphere integration last year. Domino Data Lab’s MLOps service also received Nvidia’s blessing this week. The company’s platform provides tools for orchestrating AI/ML workloads across GPU-accelerated servers.

And, in what should surprise no one, team green has certified Microsoft Azure’s latest generation of Nvidia-based GPU instances, introduced in March. The instances are powered by the chipmaker’s A10 accelerator, which can be split into up to six fractional GPUs using temporal slicing.
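Temporal slicing shares one physical GPU among several tenants by interleaving their work in time. As an illustration of the same idea in a different setting, Nvidia's Kubernetes device plugin accepts a time-slicing config along these lines, advertising one GPU as six schedulable replicas:

```yaml
# Illustrative time-slicing config for Nvidia's Kubernetes device
# plugin (a different deployment path from Azure's pre-partitioned
# NVads VMs, but the same sharing mechanism): one physical GPU is
# advertised to the scheduler as six replicas.
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 6
```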

In addition to the Nvidia AI Enterprise updates, the company also added three new labs to its LaunchPad service, which provides enterprises with short-term access to its AI/ML software and hardware for proof-of-concept and testing purposes.

The latest labs include multi-node training for image classification on vSphere with Tanzu, VMware’s Kubernetes platform; fraud detection using the XGBoost model and Triton, Nvidia’s inference server; and object detection modeling using the TAO Toolkit and DeepStream, the chipmaker’s streaming analytics service. ®

