IBM launches Watsonx to help enterprises streamline workers out the door

Let's face it, Big Blue has plenty of experience in that area

IBM has made no secret that it believes AI will supplant workers, and during its annual Think event this week, the IT titan revealed how it plans to help enterprise managers do just that.

At the heart of this strategy is IBM's newly unveiled Watsonx product suite — essentially a collection of ML tools, hardware, models, data storage, and consulting services stitched together in a way Big Blue claims will make it easier for customers to integrate machine learning within their existing product stacks.

IBM touts the platform as a "full technology stack" for training, tuning, and deploying AI models, including foundation and large language models, while ensuring tight data governance controls.

IBM: We can be the foundation of your AI ouster

In its current form, Watsonx is really a collection of three product offerings:,, and Watsonx.governance. As we all know, Watson has great brand recognition these days. is arguably the most important of the three, as it pairs a suite of pre-trained foundation models curated by IBM with access to Hugging Face's library of open source models.

Depending on the scale of the model, training can be an incredibly time-consuming and expensive proposition. These foundation models are essentially a starting point from which a model can be customized and tuned, IBM explained.
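The transfer-learning idea IBM is describing can be sketched in a few lines of pure Python: a model "pre-trained" on one task is reused as the starting point for a related one, so far fewer training steps are needed than starting from scratch. This toy linear-regression example is our own illustration of the principle and involves no IBM or Hugging Face APIs.

```python
def train(xs, ys, w=0.0, b=0.0, lr=0.05, steps=1000):
    """Fit y ~ w*x + b by gradient descent on mean squared error,
    starting from the supplied weights (the 'foundation')."""
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pre-training": learn y = 2x on broad, generic data.
base_x = [0, 1, 2, 3, 4]
base_y = [2 * x for x in base_x]
w0, b0 = train(base_x, base_y)

# "Fine-tuning": adapt to a related task (y = 2x + 1) in a
# fraction of the steps by starting from the pre-trained weights.
task_x = [0, 1, 2, 3, 4]
task_y = [2 * x + 1 for x in task_x]
w1, b1 = train(task_x, task_y, w=w0, b=b0, steps=200)
# w1 and b1 end up close to 2.0 and 1.0 respectively
```

Real foundation models differ in scale rather than in kind: the pre-trained weights encode general structure, and customers tune only as much as their narrow task requires.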

"Foundation models make deploying AI significantly more scalable, affordable, and efficient," said IBM CEO Arvind Krishna. "Clients can quickly train and deploy custom AI capabilities across their entire business."

When launches in July, the available foundation models will include: fm.code, an AI model designed to generate code snippets and automate IT tasks; fm.NLP, a natural language model that can be tuned for industry-specific domains; and fm.geospatial, which uses climate and remote sensing data to help enterprises understand and plan for weather-related disruptions.

And for anything that doesn't fit neatly into those boxes, IBM has, as we said, partnered with Hugging Face to offer its library of open source models.

As for the hardware on which these services will be built and served, IBM says it will be making "new" GPU instances available to customers. But beyond the fact that they'll use Nvidia accelerators, we don't know much about them.

The Register reached out to IBM and Nvidia for comment on the architectures underpinning Watsonx; we'll let you know if we hear anything back.

Given what we know about IBM's recently revealed Vela AI supercomputer, we'll hazard a guess that Big Blue is using Nvidia's three-year-old A100s for training, though H100s aren't out of the question either. Our sister site The Next Platform recently took a deep dive into IBM's Vela system if you're interested in learning more about what makes it tick.

Having said that, it wouldn't surprise us if IBM is being intentionally vague and using software abstraction to move workloads to more efficient architectures as they become available. Given the number of AI accelerators making their way onto the market from Intel, Nvidia, AMD, and others, this would help IBM avoid getting locked into a walled garden.

In addition to training and refining AI models, Watsonx also includes a data store service built on an open lakehouse — a kind of architecture designed to support analytics on both structured and unstructured data — that IBM says has been tuned with artificial intelligence in mind. However, the service won't arrive for a few months, and details were particularly thin in IBM's launch post.

Rounding out the trio, Watsonx.governance provides tools for mitigating the risk of applying AI models to sensitive customer data. IBM says the offering can "proactively" detect model bias and drift so as to avoid ethics conflicts arising from the use of AI. But, like IBM's service, we'll have to wait until later this year for that piece of the puzzle to make its way to the public.

IBM eats its own dog food

In what should come as a surprise to no one, Big Blue is working to build many of these tools into its existing product portfolio.

In its announcement, IBM identified four software products and services that it plans to integrate with Watsonx. The first of these is Watson Code Assistant — a tool for automatically generating code snippets based on user input. The service appears to be a jab at Microsoft's GitHub Copilot X offering, which is built atop OpenAI's GPT family of large language models.

IBM also plans to use NLP models to speed up its AIOps Insights platform, using machine learning to pinpoint anomalies in IT processes and facilitate faster and more effective mitigation by support teams.

Watson Assistant and Watson Orchestrate will also be getting an AI tune-up at some point in the near future. The former is an AI chatbot designed to interact with customers and workers, while the latter is an automation toolkit.

Finally, IBM plans to integrate its fm.geospatial foundation model into its Environmental Intelligence Suite in order to identify environmental risks before they disrupt operations.

While IBM describes many of these tools as helping employees work more efficiently, it's hard not to see how they might also be used to "streamline" workers out of jobs — an idea we'll note IBM has been rather up front about.

Earlier this month, Krishna told Bloomberg that up to 30 percent of Big Blue's back-office jobs – around 7,800 roles – could be replaced by AI, and that the IT giant would likely slow hiring in those roles over the next five years. ®
