Oracle continues GenAI push into enterprise data

Don't leave us for LLM systems in other clouds, says Big Red

Oracle has launched its OCI Generative AI service and introduced new betas to help customers build machine learning models from their own data. All this comes as it tries to ensure the LLM bandwagon does not carry data from its systems to other cloud providers.

Big Red announced enhanced retrieval-augmented generation (RAG) techniques to let customers refine out-of-the-box LLMs with their own data. The so-called OCI Generative AI Agents service with a RAG agent works with enterprise search built on OCI OpenSearch to target results with context from enterprise data.
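For the unfamiliar, the RAG pattern is simple to sketch: fetch the enterprise documents most relevant to a query, then prepend them to the prompt sent to the LLM. The few lines of Python below are illustrative only, using toy keyword scoring rather than Oracle's service, and all names are made up for the example.

```python
# Minimal RAG sketch: retrieve relevant documents, then augment the prompt.
# Toy relevance scoring; a production system would use a search engine or
# vector store. Not Oracle's API -- names here are illustrative.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance)."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user's question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the pattern, and of Oracle's pitch, is that the LLM itself stays generic: the enterprise-specific knowledge arrives at query time, via retrieval, rather than through retraining.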

OCI OpenSearch is currently in beta, with Oracle promising the coming release will support a wider range of data sources for search. This will include access to systems such as Oracle Database 23c with AI Vector Search and MySQL HeatWave with Vector Store. The vector part is intended to play well with LLM retrieval, which finds relevant content by comparing vector embeddings of queries and documents.
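What a vector store actually does is nearest-neighbor search over embeddings. The sketch below shows the core operation, cosine similarity, with hand-made three-dimensional vectors; a real deployment would get high-dimensional embeddings from a model, and the names here are assumptions for illustration, not Oracle's interfaces.

```python
import math

# Toy nearest-neighbor search over vector embeddings -- the operation a
# vector store such as AI Vector Search is built to accelerate. Vectors
# are hand-made for illustration; real embeddings come from a model.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query: list[float], store: dict[str, list[float]]) -> str:
    """Return the key of the stored vector most similar to the query."""
    return max(store, key=lambda k: cosine(query, store[k]))
```

Semantically similar text ends up with nearby embeddings, so the nearest stored vector points at the most relevant document even when no keywords match.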

Oracle adds it will also offer GenAI-boosted SaaS applications, including Oracle Fusion Cloud Applications Suite, Oracle NetSuite, and industry applications such as Oracle Health.

In addition, general availability of OCI Generative AI was confirmed today. This is a managed service designed to integrate LLMs from Cohere and Meta's Llama 2 and put them to productive use in business. Big Red has added multilingual capabilities that support over 100 languages, an improved GPU cluster management experience, and flexible fine-tuning options.

In a statement, Greg Pavlik, senior OCI veep, said: "Instead of providing a tool kit that requires assembling, we are offering a powerful suite of pre-built generative AI services and features that work together to help customers solve business problems smarter and faster."

Oracle has already announced some of these features in applications. In June 2023, for example, it announced Oracle Cloud HCM generative AI services on an OCI Supercluster, relying on bare metal compute with Nvidia GPUs. The setup aims to accelerate LLM training with the highest performance at the lowest cost, Oracle said.

IDC group vice president Ritu Jyoti pointed out that Oracle's GenAI play was designed to avoid asking customers to move their data to a separate vector database.

"With a common architecture for generative AI that is being integrated across the Oracle ecosystem from its Autonomous Database to Fusion SaaS applications, Oracle is bringing generative AI to where exabytes of customer data already reside, both in cloud datacenters and on-premises environments."

She said this "simplifies the process for organizations to deploy generative AI with their existing business operations."

But what may make sense in terms of simplicity and cost – let's not talk about Oracle licensing right now – may not be best in terms of customer choice.

Those already building an enterprise-wide AI/data analytics strategy based on platforms from Microsoft, Google, AWS, and Databricks, for example, might not want to keep their data in OCI.

On this point, Oracle's deal with Microsoft Azure could become relevant.

In September last year, Oracle said that hardware used to run its databases would sit in Microsoft's Azure datacenters, under an expanded tie-up between the two. The move would let Microsoft offer an Oracle database service within Azure under the Oracle Database@Azure branding, but Oracle would operate and manage the service behind the scenes.

The agreement built on a deal to allow Oracle applications to run in OCI but be "close" to products on Azure via an interconnect service launched in 2022.

Heavy Oracle users looking elsewhere for an ML stack might be tempted by Microsoft Azure, which launched a revamped Fabric product late last year. Wouldn't it be canny of Microsoft to try to play both sides? ®
