Snowflake's finding NeMo to train custom AI models
Submerge an Nvidia LLM 20,000 leagues under the data lake
Snowflake Summit Nvidia and cloud data warehouse company Snowflake have teamed up to help organizations build and train their own custom AI models using data they have stored within Snowflake's platform.
Announced at the Snowflake Summit in Las Vegas, the move will see Nvidia's NeMo framework for developing large language models (LLMs) integrated with Snowflake, allowing companies to use the data in their Snowflake accounts to build custom LLMs for generative AI services, with chatbots, search, and summarization listed as potential uses.
One advantage touted by the two companies is that customers can build or customize LLMs without having to move their data, meaning any sensitive information can remain safely stored within the Snowflake platform.
However, Snowflake is provided as a service hosted on a cloud platform of the customer's choice, and because NeMo is designed to accelerate AI processing on Nvidia's GPU hardware, customers will need to ensure that their chosen cloud provider offers GPU-enabled instances.
It also isn't clear if NeMo is being made a standard part of Snowflake, or if the two will have to be licensed as separate packages. We will update this article if we get an answer.
This new partnership is unashamedly jumping on the LLM bandwagon following the surge of interest in generative AI models caused by ChatGPT, dubbed "the iPhone moment of AI" by Nvidia CEO Jensen Huang.
But according to Nvidia VP for Enterprise Computing Manuvir Das, the partnership with Snowflake allows LLMs to be endowed with the skills they need to fulfill their function within an organization.
"A large language model is basically trained with a lot of data from the internet. And then it is endowed with certain skills. And you can really think of that LLM as like a professional employee in a company. And a professional employee has two things at their disposal. One, they have a lot of knowledge that they've acquired, and the other is they have a set of skills, things they know how to do," Das said.
- Small custom AI models are cheap to train and can keep data private, says startup
- Microsoft Azure OpenAI lets enterprises feed corporate secrets to ChatGPT
- Why can't Nvidia boss Jensen Huang escape the Uncanny Valley that makes AI feel icky?
- Get ready, Snowflakes: Azure AI is coming for you with one click
"So when you take an LLM, essentially, it's like having a new hire into your company, a student straight out of Harvard, for example.
"If you think about it from the company's point of view, you would really like to have not just this new hire, but an employee who's got 20 years of experience of working at your company. They know about the business of your company, they know about the customers, previous interactions with customers, they have access to databases, they have all of that knowledge."
Inserting NeMo's model-making engine into Snowflake is intended to let customers take foundation models and fine-tune them with the data they have in their Snowflake Data Cloud so the models gain those skills, or alternatively train a model from scratch, Nvidia said. Either way, customers end up with a model unique to them that is also stored in Snowflake.
The NeMo framework features pre-packaged scripts and reference examples, and also provides a library of foundation models that have been pre-trained by Nvidia, according to Das.
Snowflake chairman and CEO Frank Slootman said in a statement that the partnership brings Nvidia's machine learning capabilities to the vast volumes of proprietary and structured enterprise data stored by Snowflake users, which he described as "a new frontier to bringing unprecedented insights, predictions and prescriptions to the global world of business." ®