
Microsoft rains more machine learning on Azure cloud

No surprise: Nvidia is in the picture

Microsoft made sure to include Azure in the AI-fest that was the Build 2023 developer conference this week.

As enterprises consider experimenting with or deploying generative AI, they may well look to public clouds and similar scalable compute and storage infrastructure to run things like large language models (LLMs).

Microsoft, armed with ChatGPT, GPT-4, and other OpenAI systems, has for months been shoving AI capabilities into every nook and cranny of its empire. Azure is no different – the OpenAI Service is an example – and after its Build conference, Redmond's public cloud has even more AI on offer.

High on the list is an expanded partnership with Nvidia, which is rushing to establish itself as the indispensable AI technology provider, from GPU accelerators to software. This week alone the chipmaker unveiled a host of partnerships, such as with Dell at Dell Technologies World and with supercomputer makers at ISC23.

Bringing Nvidia resources into Azure

Specifically, Microsoft is integrating Nvidia's AI Enterprise suite of software, development tools, frameworks, and pretrained models into Azure Machine Learning, creating what Tina Manghnani, product manager for the machine learning cloud platform, called "the first enterprise-ready, secure, end-to-end cloud platform for developers to build, deploy, and manage AI applications including custom large language models."

The same day, Microsoft made Azure Machine Learning registries – a platform for hosting and sharing machine-learning building blocks such as containers, models, and data, as well as a tool for integrating AI Enterprise into Azure – generally available. AI Enterprise in Azure Machine Learning is also available in limited technical preview.
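
For a sense of what that looks like in practice, here's a minimal sketch using the Azure ML Python SDK v2 (azure-ai-ml) to publish a model to a registry; the registry and model names are hypothetical placeholders, not anything Microsoft has shipped:

```python
# A minimal sketch, assuming the Azure ML Python SDK v2 (azure-ai-ml) and an
# existing registry; "my-org-registry" and the model name are hypothetical.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

# Connect to a registry rather than a workspace, so assets can be
# shared across workspaces, teams, and regions.
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="my-org-registry",  # hypothetical registry name
)

# Register a model in the registry so other teams can consume it.
model = Model(
    name="sentiment-classifier",      # hypothetical model name
    version="1",
    path="./model",                   # local folder containing model files
    description="Shared via an Azure ML registry",
)
registry_client.models.create_or_update(model)
```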

"What this means is that for customers who have existing engagements and relationships with Azure, they can use those relationships – they can consume from the cloud contracts that they already have – to obtain Nvidia AI Enterprise and use it either within Azure ML to get this seamless enterprise-grade experience or separately on instances that they choose to," Manuvir Das, vice president of enterprise computing at Nvidia, told journalists a few days before Build opened.

Isolating networks to protect AI data

Enterprises running AI operations in the cloud want to ensure their data isn't exposed to other companies, and network isolation is a key tool for that. Microsoft already has features such as private-link workspaces, data exfiltration protection, and a no-public-IP option for the compute resources of companies training AI models. At Build, the vendor announced managed network isolation in Azure Machine Learning, letting customers choose the isolation mode that best fits their security policies.
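
For illustration, here's a rough sketch of what setting an isolation mode might look like with the same Python SDK v2; the subscription, resource group, and workspace details are hypothetical, and since the feature is in preview the exact API may differ:

```python
# A rough sketch of enabling managed network isolation on a workspace,
# assuming the Azure ML Python SDK v2 (azure-ai-ml). Subscription, resource
# group, and workspace names are hypothetical; the preview API may differ.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace, ManagedNetwork
from azure.ai.ml.constants import IsolationMode
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # hypothetical
    resource_group_name="<resource-group>",   # hypothetical
)

# Pick the isolation mode that matches the security policy:
# ALLOW_INTERNET_OUTBOUND permits outbound traffic from the managed network,
# while ALLOW_ONLY_APPROVED_OUTBOUND restricts it to approved destinations.
ws = Workspace(
    name="secure-ml-workspace",               # hypothetical
    location="eastus",
    managed_network=ManagedNetwork(
        isolation_mode=IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND,
    ),
)
client.workspaces.begin_create(ws).result()
```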


Unsurprisingly, open-source tools are increasingly coming into the AI space. Microsoft last year partnered with Hugging Face to offer Azure Machine Learning endpoints powered by the open-source company's technology. At Build, the two expanded that relationship.

Hugging Face already offers a curated set of tools and APIs as well as a huge hub of ML models to download and use. Now a collection of thousands of these models will appear in Redmond's Azure Machine Learning catalog so that customers can access and deploy them on managed endpoints in Microsoft's cloud.
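
To give a flavor of it, deploying one of those catalog models to a managed online endpoint might look something like the following SDK v2 sketch; the model asset ID, endpoint name, and VM size are hypothetical placeholders:

```python
# A hedged sketch of deploying a Hugging Face catalog model to a managed
# online endpoint via the Azure ML Python SDK v2. The model asset ID,
# endpoint name, and instance type are hypothetical placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",     # hypothetical
    resource_group_name="<resource-group>",  # hypothetical
    workspace_name="<workspace>",            # hypothetical
)

# Create the managed endpoint that will serve the model.
endpoint = ManagedOnlineEndpoint(name="hf-demo-endpoint", auth_mode="key")
client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy a model straight from the catalog by its asset ID; the exact ID
# format here is an assumption based on how registry assets are referenced.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="hf-demo-endpoint",
    model="azureml://registries/HuggingFace/models/bert-base-uncased/versions/1",
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
client.online_deployments.begin_create_or_update(deployment).result()
```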

More foundation model options

Redmond is also making foundation models in Azure Machine Learning available in public preview. Foundation models are highly capable pretrained models that organizations can customize with their own data for their own purposes and roll out as needed.

Foundation models are becoming quite important, as they can help organizations build non-trivial ML-powered applications, shaped to their specific requirements, without having to spend hundreds of millions of dollars training the models from scratch or offloading processing and sensitive customer data to the cloud.
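
For instance, catalog assets like these are surfaced through a built-in registry. Here's a minimal sketch of looking one up with the SDK v2, assuming the curated "azureml" registry name and a hypothetical model:

```python
# A minimal sketch of looking up a curated foundation model, assuming the
# Azure ML Python SDK v2 and that catalog assets live in the built-in
# "azureml" registry; the model name is a hypothetical example.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Point the client at the shared registry that hosts catalog models.
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="azureml",
)

# Fetch the latest version of a foundation model to fine-tune or deploy.
model = registry_client.models.get(name="gpt2", label="latest")
print(model.id)  # full asset ID usable in jobs and deployments
```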

Nvidia's recently released NeMo framework may be useful in this area, and the chipmaker has partnered along those lines with ServiceNow this month and – this week – with Dell in Project Helix.

"As we've worked with enterprise companies on generative AI in the last few months, what we have learned is that there are a large number of enterprise companies that would like to leverage the power of generative AI, but do it in their own datacenters or do it outside of the public cloud," Nvidia's Das said.

Resources like open-source tools and foundation models promise to reduce complexity and cost, opening generative AI up to more organizations. ®
