Nutanix reckons you can stuff AI into a box – a box it manages, that is
You need resources in lots of places, which is the hybrid cloud taming trick the company exists to perform
Interest in AI workloads has reached the point at which major enterprise vendors are packaging it for easy consumption, with Nutanix the latest to prep its platform for binary brainboxes.
The outfit's explanation for its "GPT-in-a-Box Solution" is that plenty of organizations are both AI-curious and wary of experimenting in the cloud for fear of rapidly running up big bills. Data governance is another concern, according to Thomas Cornely, Nutanix's senior veep of product management, because running AI workloads means working with plenty of data that could have diverse compliance requirements.
Also on users' minds, Cornely reckons, is the possibility that their initial AI forays will have poor return on investment.
What's a user to do? When you're Nutanix, and your "hammer" is a software stack that abstracts on-prem hardware and hyperscale clouds into hybrid clouds and automates management across those platforms, every workload looks like a nail.
GPT-in-a-Box is therefore a cut of Nutanix's stack tuned to the needs of AI workloads. The offering also incorporates Kubeflow, which aims to ease deployment of machine learning workloads, the PyTorch machine learning framework, and Jupyter. Users can also run their preferred generative pre-trained transformers (GPTs), with Llama 2, Falcon, and MosaicML models supported on day one.
Nutanix's AHV hypervisor is already certified to run Nvidia AI Enterprise 4.0, a key AI software suite. Nutanix's stack can also manage GPUs.
The vendor's hybrid cloud cred comes into play with the realization that the data needed to feed a model can be substantial, so may be best located in the cloud, while AI workloads could run on-prem, at the edge, or in a cloud. Nutanix's platform is all about managing such collections of workloads under a single logical construct.
The company's plan is therefore to have customers that already run its stuff consider the platform for their AI workloads, if only because creating a silo just for AI is a silly idea. And also because Nutanix wants more of your stuff on its platform.
It's not alone in that ambition. Dell has already cooked up a similar offering. VMware has teased partnerships with AI players to package their wares for vSphere, likely to be revealed at next week's Explore conference. The big hyperscale clouds already offer packaged AI workloads of their own.
Nutanix is starting with Nvidia GPUs, and Cornely told us support for AMD's accelerators is on the way. We asked about Intel's standalone GPUs, kit that has only recently come to market and is well regarded but yet to achieve notable adoption. Cornely said it's too soon for Nutanix to consider support, but didn't rule out a bright future for Intel GPUs. ®