While it waited for Broadcom, VMware set out to do to data services what it did to storage

And decided Intel's GPUs are worthy of on-prem AI action

VMware hasn't been sitting on its hands while waiting for Broadcom to buy it: it has spent the past couple of years planning a move on the data services market.

As explained to The Register by Christos Karamanolis, a VMware fellow and boss of anything to do with data and storage, Virtzilla has observed growing diversity of the data-centric workloads run on its platform. While relational databases still dominate, object stores, caches, data warehouses, data lakes, and document stores have all proliferated. Karamanolis professed he is "surprised by how much Kafka" – the streaming data platform – runs on VMware.

He's also observed that developers expect to be able to provision data services without having to raise a ticket to IT or otherwise slow CI/CD pipelines.

The virty giant started to deliver that last year with a product called "VMware Data Services Manager" but kept it quiet – and still won a dozen customers.

The product is a control plane that manages multiple data services – initially MySQL and PostgreSQL. The tool is integrated into VMware Cloud Foundation, a cloud-in-a-box suite, and allows those with access to deploy data services as and when needed – as is already the case for deploying VMs or Kubernetes resources.
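To make that self-service pitch concrete, here's a minimal sketch of what a developer-facing provisioning call could look like, assuming a REST-style control plane with bearer-token auth. The endpoint, payload fields, and response shape are invented for illustration – they are not VMware's actual Data Services Manager API:

import requests

# Hypothetical control-plane endpoint and token - VMware's real API will differ
DSM_URL = "https://dsm.example.internal/api/v1/databases"
TOKEN = "REDACTED"

# Ask the control plane for a small PostgreSQL instance: no IT ticket,
# just an authenticated, policy-checked API call
payload = {
    "engine": "postgresql",
    "version": "15",
    "name": "orders-db",
    "storage_gb": 50,
    "replicas": 1,
}

resp = requests.post(DSM_URL, json=payload,
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     timeout=30)
resp.raise_for_status()
print("Provisioned:", resp.json()["connection_string"])

The point is the workflow, not the schema: the developer gets a database without leaving the CI/CD pipeline, while the control plane enforces whatever policy IT has set.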

At the VMware Explore Europe conference in Barcelona today, the under-offer outfit announced version 2.0 of the product. Karamanolis thinks that's a misnomer and this version really represents the product reaching general availability status.

We'll leave that fight to VMware's product marketing people.

For users, the update means Google's AlloyDB Omni – the run-anywhere version of the software first offered as Google Cloud's homebrew database – and MinIO's object store can now be deployed using Data Services Manager. Karamanolis said VMware hopes to name more software partners for the platform – perhaps a vector database, given those tools have become a hot workload thanks to their role in many AI rigs. Over time, Karamanolis hopes to move up the stack and add to the Manager data-centric applications that sit atop those data services.
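For readers wondering why a vector database would make the list: such tools store embeddings – numeric representations of text, images, or other data – and answer nearest-neighbor queries over them, which is how many AI rigs retrieve relevant context for a model. A toy sketch of the core operation, using plain NumPy rather than any particular product:

import numpy as np

# Toy corpus: 1,000 documents as 384-dim embeddings, the sort of vectors
# a sentence-embedding model would produce, normalized to unit length
rng = np.random.default_rng(42)
corpus = rng.normal(size=(1000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def top_k(query, k=5):
    """Indices of the k most similar documents by cosine similarity."""
    query = query / np.linalg.norm(query)
    scores = corpus @ query  # dot product == cosine similarity for unit vectors
    return np.argsort(scores)[::-1][:k]

print(top_k(rng.normal(size=384).astype(np.float32)))

A real vector database adds approximate indexes, filtering, and persistence so this works at billions of vectors – which is the part worth paying for.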

One limitation of the Data Services Manager is that it can only deploy resources into Cloud Foundation implementations. Since AWS, Azure, Google Cloud, and Oracle Cloud all host Cloud Foundation-based services, they're all targets, as are the 4,000-plus smaller clouds run by VMware partners. That's plenty of choice, even if it's not as wide as spawning databases into whatever public or private cloud takes your fancy. VMware, as ever, fancies its overlay of consistent policy and security makes its involvement worthwhile.

Speaking of those 4,000 smaller VMware-powered clouds, many are now operated as sovereign clouds – rigs tuned to ensure that all resources run in a single jurisdiction, satisfying some users' requirements for data residency or otherwise keeping their info free of extraterritorial entanglements.

VMware's sovereign cloud program has added the ability to manage deployment of data services – namely MongoDB, Kafka, and Greenplum. Again, the aim is to ensure developers can get self-service access to tools they want, as VMware doesn't want sovereign clouds to be less flexible than border-spanning affairs.

Another way VMware has addressed that concern is by allowing self-service Kubernetes deployment into sovereign clouds. It has a SaaS tool to do that – Tanzu Mission Control – but has created an on-prem version to satisfy sovereign requirements. That's a rare example of SaaS descending from the cloud!

Bring Your Own Key tools have also been brought to VMware's sovereign clouds.

Private AI

Also announced in Barcelona was the addition of IBM's WatsonX to VMware's Private AI offering – the vendor's scheme to package AI workloads so they're easier to deploy across hybrid clouds. WatsonX will run on Cloud Foundation, with Red Hat OpenShift as the runtime. VMware's Tanzu competes with OpenShift, but when needs must, VMware sees Red Hat's tool as just another workload to be virtualized and managed.

Another addition to VMware's Private AI scheme is an alliance with Intel concerning its GPUs and CPUs.

VMware thinks the latter – especially fourth-gen Xeons – can handle some AI workloads all by themselves, and the Intel collab will show how to make that happen. It's a handy option, given the price and scarcity of GPUs. Virtzilla is keen on Intel's GPUs too – a fillip for Chipzilla, whose accelerators remain immature and are largely ignored in conversations about AI workloads thanks to Nvidia's dominance of the field.
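As a rough illustration of GPU-free inference, here's a minimal PyTorch sketch of running a model on the CPU in bfloat16 – on fourth-gen Xeons, PyTorch's oneDNN backend can route that math through the chips' AMX matrix extensions. The model is a stand-in, not a real workload, and actual speedups depend on the model and the build:

import torch

# Stand-in model - swap in a real network to benchmark anything meaningful
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).eval()

x = torch.randn(8, 1024)

# CPU-only inference; autocast runs the matrix math in bfloat16,
# which AMX-equipped Xeons can accelerate
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

print(out.dtype, out.shape)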

+Comment: This stuff is why Broadcom wants VMware

In conversation with The Register, Karamanolis explained that the data services push resembles VMware's plan for virtual storage – which integrates storage into VMware's platform instead of forcing IT shops to manage silos of storage and compute with different tools and different people.

Karamanolis wants to do the same for database administrators, whom he hopes can be left to exercise their special skills instead of having to trouble themselves with RAID configurations or other infrastructure-centric matters.

That's the kind of play only a platform company can make. And VMware remains very much a platform company – albeit one that is yet to demonstrate it's a great platform for the complex challenges of building and operating hybrid clouds.

Ensuring VMware creates that platform is the job Broadcom has set itself, once it sorts out whatever it is that's pushed closure of the deal past the desired date. ®
