Simplifying AI deployment for secure workloads

How Nutanix uses GenAI versatility in its GPT-in-a-Box development package

Sponsored Feature Enterprises are busy trying to figure out how they can use GenAI to move themselves ahead in their markets.

They're seeing multiple potential use cases that could benefit from AI, including customer service assistance, IT operations management, predictive maintenance, financial reporting and accounting, recruitment, and more. There seems to be no limit to where and how smart technology can be applied.

GenAI applications are modernizing industries such as media, entertainment, retail, manufacturing, IT, and telecom. In fact, the global GenAI market was valued at $13bn in 2023 and is projected to expand at a compound annual growth rate of 36.5 percent from 2024 to 2030, according to Grand View Research.

Yet for all the potential GenAI offers, there's no getting away from the fact that most organizations find it challenging to get started with the technology, often hampered by a lack of experience and daunted by the complexity of the infrastructure and processes needed to make it a success.

That's especially true when it comes to handling synthetic data, the lifeblood of many GenAI apps. Synthetic data is artificially generated rather than collected from real-world events: it is typically created using algorithms, computer simulations, or other techniques that mimic the statistical properties and patterns of real, historical data.
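
To make the idea concrete, here is a minimal sketch in Python of the simplest form of that technique: fit basic statistics to a small "real" sample, then draw synthetic records that mimic them. The dataset and figures are invented purely for illustration, and real pipelines typically use far richer generators (simulators or trained generative models).

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend this is a sensitive real-world dataset: monthly spend per customer.
real_spend = np.array([120.0, 95.5, 210.0, 87.3, 150.2, 99.9, 175.0, 132.4])

# Learn simple statistical properties of the real data.
mu, sigma = real_spend.mean(), real_spend.std()

# Generate synthetic records that mimic those properties
# without exposing any individual real value.
synthetic_spend = rng.normal(loc=mu, scale=sigma, size=1000)

print(f"real mean/std:      {mu:.1f} / {sigma:.1f}")
print(f"synthetic mean/std: {synthetic_spend.mean():.1f} / {synthetic_spend.std():.1f}")
```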

Meta's latest Llama 3.1 models show that synthetic data is also being used for training. Llama 3.1 was pretrained on roughly 15 trillion tokens of data from publicly available sources, and its fine-tuning data includes publicly available instruction datasets as well as more than 25 million synthetically generated examples.

Synthetic data provides abundant and diverse datasets for training AI/ML models, especially when real data is scarce, expensive to collect, or sensitive. It also allows model performance to be tested under various scenarios, including edge cases and rare events that might not be well represented in real data. Gartner predicts that by 2027, 75 percent of businesses will use GenAI to create synthetic customer data for next-generation applications – up from less than 5 percent in 2023.

But businesses are struggling to operationalize AI and synthetic data to their advantage, often finding that their legacy computing, network, and storage infrastructure can't support their ambitions. Neither can they guarantee that the sensitive information being ingested and processed can be properly protected in accordance with increasingly stringent data privacy regulations. Thus, business process transformation is a clear requirement.

Putting the power of GenAI to work

If an enterprise is going to put the power and potential of GenAI to profitable use in daily production, then it needs a good plan. And that plan should include a versatile data management/development platform that can be utilized by as many employees as possible in various departments.

Why? A piecemeal GenAI initiative (different vendors handling large language models, data libraries, data storage and sharing, the development platform, security, data privacy, and so on) means specialists need to be hired for each of these pieces and numerous licenses have to be juggled constantly. And there's always the worry that the various moving parts may not work together optimally, now or over the long term.

The biggest potential advantage of deploying a name-brand GenAI platform solution, such as Nutanix's GPT-in-a-Box, is that it provides users with a consistent set of tools, data services, and best practices to accelerate development and usage of their AI applications. This can help to reduce operational complexity and costs compared to a multi-vendor approach, where users need to learn and manage multiple tools and platforms.

San Jose, Calif.-based Nutanix aims to give users the ability to run their AI workloads on their own terms, with the security and data protection they require, while avoiding the per-inference costs associated with public cloud deployments, says the company.

"This (current) version of GenAI is transformative and it is disruptive," says Debo Dutta, VP of Engineering and AI at Nutanix. "If used correctly, operated efficiently, and can adapt, it can reduce the bottom line of operational costs."

Dutta continues: "There aren't many companies here in the second half of 2024 that are putting their GenAI initiatives to work on a daily basis. But while it's early in the relatively new GenAI cycle, the tools are already out there, and time is of the essence when it comes to competing in the market. Early adopters that get their GenAI toolboxes up and running quickly are positioning themselves to see profits sooner rather than later."

"Our enterprise users will need to look at a consistent set of tools and best practices/processes to accelerate their GenAI applications," Dutta added. "In order to do that, we feel that we are extremely well-positioned to provide them with an entire set of full-stack solutions and with the right partnerships."

How Nutanix provides its GenAI software and services

Nutanix helped to pioneer hyperconverged infrastructure (HCI) by integrating compute and storage resources into a single, scalable platform, simplifying datacenter management and reducing complexity and costs for enterprises. The secret to Nutanix's platform is its data services, says the company. These were built to offer a radically simple compute and storage foundation for enterprise-class virtualization, one that avoids complex and expensive network storage (SAN or NAS) and scales to manage petabytes of data while simultaneously running millions of containerized microservices and virtual machines.

It also offers integrated data services and security designed to help enterprises maintain the privacy and integrity of the AI/ML workloads and data they store and process. The company has continued its GenAI product development by integrating those capabilities into the Nutanix Cloud Platform, enabling vast scale-out capacity for workloads of any size.

Dutta describes the three pillars supporting Nutanix's GenAI strategy in this way:

- AI on the Nutanix Cloud Platform: Providing a single, unified platform for onboarding customers' generative AI workloads and managing their data on Nutanix architecture.

- AI with Day 2 Operations on Nutanix: Focusing on operating an AI environment's constant rate of change, including updates, upgrades, security, and management. Being prepared to operate at scale ensures you can adapt as things change.

- AI in Nutanix: Using AI and generative AI to make Nutanix's internal processes and business more efficient. These insights also can be shared with users.

All three pillars are embodied in the company's GPT-in-a-Box solution: an entire menu of tools and services in one place, from which users can pick and choose what to deploy first, second, and third according to their own GenAI strategy.

Current use cases for GPT-in-a-Box

Dutta described a few dominant use-case types that Nutanix has identified early in the GPT-in-a-Box era.

"We often see a customer that has their private corpus of data, and they want to be able to search and have a conversational interface to that data," Dutta said. "One of our early customers was a federal agency that had that need, and they also wanted to do fraud detection. So that's what they're using it for now."

Use case number two is enterprises wanting to automate the creation of code: "A code assistant, a code copilot, a private version – not something that runs in the cloud on their code base – on their premises, on their own terms," Dutta said. A third common scenario involves healthcare or manufacturing customers that hold a lot of proprietary, regulated data they want to search and summarize; they are building copilots, support bots, and private data stores to help them do it.

Finally, Nutanix's focus on helping customers build a hybrid multicloud helps ensure that businesses can run AI in production anywhere, from the edge to the public clouds. Once AI models are trained, the inferencing and tuning can, and should, be done entirely under the enterprise's control.

"We see more customers asking about their day 2 operations for generative AI that includes AI in production in a public cloud," Dutta said. "They want to bring it up to private infrastructure to save costs, but also run private and secure inferencing operations." When you bring it [the AI workload] back within your infrastructure that you control you don't pay per inference request, and that is a good cost savings."

Nutanix has also announced partnerships with Hugging Face and Nvidia for its GPT-in-a-Box platform. These partnerships are focused on providing a consistent set of tools and best practices to help enterprise users accelerate their AI applications. For example, GPT-in-a-Box is being developed to integrate with Nvidia NIM inference microservices, part of Nvidia AI Enterprise, and with validated Hugging Face LLMs, enabling customers to deploy and run GenAI workloads more efficiently and with greater confidence.
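
To give a sense of what that integration looks like from the application side, NIM microservices expose an OpenAI-compatible HTTP API. The snippet below is a sketch that assumes a hypothetical locally hosted endpoint and an example model identifier; it is not GPT-in-a-Box-specific code.

```python
import requests

# Hypothetical endpoint and example model name for a locally hosted NIM-style service.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize our data retention policy in two sentences."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```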

The company is also open to supporting GPT-in-a-Box on other cloud platforms beyond its own private infrastructure in the future. More information is available in the following resources:

- What is Artificial Intelligence? A Guide to AI in the Cloud: Nutanix

- Seeing AI's Impact on Enterprises: Nutanix President and CEO Rajiv Ramaswami

- GPT-in-a-Box 2.0 is Here With Four Ways to Get Started with GenAI: Nutanix

Sponsored by Nutanix.
