AI to be bigger than IaaS and PaaS combined by 2025

$500bn a year to be spent on electro-brain and supporting tech vs $400bn on cloud infrastructure


Analyst firm IDC has predicted that by 2025 more money will be spent on artificial intelligence software and services than on infrastructure-as-a-service and platform-as-a-service.

The firm on Wednesday published details of its Worldwide Semi-annual Artificial Intelligence Tracker, which predicted global spending of $341.8 billion this year – representing 15.2 per cent year-over-year growth.

Growth will accelerate to 18.8 per cent in 2022, leading IDC to predict the market is on track to pass $500 billion annual spend by 2024.
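The arithmetic holds up: compounding IDC's 2021 figure at the 18.8 per cent rate it forecasts for 2022 – and assuming, purely for illustration, that the same rate holds in later years, since per-year rates beyond 2022 aren't given here – carries spending past the $500 billion mark during 2024.

```python
# Illustrative sanity check of IDC's trajectory, not IDC's own model.
# Assumption: the 18.8 per cent growth rate forecast for 2022 holds
# in 2023 and 2024 as well.
spend = 341.8   # IDC's 2021 forecast, in $bn
growth = 0.188  # 18.8 per cent year-over-year

for year in (2022, 2023, 2024):
    spend *= 1 + growth
    print(year, round(spend, 1))
# 2022 406.1
# 2023 482.4
# 2024 573.1  -> crosses $500bn during 2024
```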

AI software currently dominates, accounting for 88 per cent of AI expenditure.

For the next couple of years, IDC reckons AI hardware will grow fastest, until AI services take over from 2023. AI services will be a $50 billion market by 2025, IDC opined, with IT services likely to account for 80 per cent of spend and general business services accounting for the rest.

IDC's definition of AI includes applications software that puts AI to work, AI infrastructure software, and AI-related services.

IDC ranks spookware vendor Palantir as the leader by revenue among vendors of AI Platforms, with IBM in third place behind Microsoft. The beast of Redmond is in the top three of IDC's other AI software categories: Applications, System and Infrastructure Software, and AI Application Development & Deployment software. IBM, which led with AI before deciding hybrid clouds are its future, makes just one more appearance in the System and Infrastructure Software slot.

AI System Infrastructure Software is predicted to enjoy five-year CAGR of 14.4 per cent while accounting for roughly 35 per cent of all AI Software revenues. In the AI Applications market, IDC says growth of enterprise risk management wares will outperform CRM over the next five years.

AI Lifecycle Software is where you'll find the hottest action among AI Platform products.

Over in the public cloud, IDC has forecast infrastructure-as-a-service and platform-as-a-service will crack $400 billion annual revenue in 2025, thanks to a 28.8 per cent compound annual growth rate over the coming years.
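IDC doesn't state the base-year revenue that 28.8 per cent CAGR starts from. Assuming, hypothetically, a five-year window ending in 2025, the $400 billion target implies a starting figure of roughly $113 billion:

```python
# Back out the base revenue implied by IDC's cloud forecast.
# The five-year compounding window (2020-2025) is an assumption;
# IDC does not state the base year here.
target = 400.0  # forecast 2025 revenue, $bn
cagr = 0.288    # 28.8 per cent compound annual growth rate
years = 5

base = target / (1 + cagr) ** years
print(round(base, 1))  # ~112.8, i.e. roughly $113bn at the start of the window
```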

Apparently you've done plenty of cloud migration already, and the market has shifted to app modernisation – because that gets you closer to what IDC called "agile application delivery and cloud operations".

PaaS and IaaS growth will also be driven by the need to accommodate data growth without scaling capital expenditure in step.

"By 2022, IDC anticipates that almost half of an enterprise's products and services will be digital or digitally delivered, increasing the business's reliance on infrastructure (compute, storage, networking) to support more than traditional business applications," IDC opined. "Timely access to innovative infrastructure resources – both shared and dedicated – will be imperative to sustain the adaptive, resilient, secure, and compliant digital business models of the future." ®



Biting the hand that feeds IT © 1998–2022