AI revolution driving datacenter network investment surge
Why more companies are recognizing the need to make infrastructure fit for purpose in the brave new world of AI
Sponsored Feature The global Artificial Intelligence (AI) revolution now underway promises to transform commerce, redesign industrial operations and reshape markets.
But, while AI offers huge potential benefits, it's also engendering what is arguably an unprecedented level of technological disruption, including across datacenter networks as organizations rush to upgrade infrastructure to cope with the increased demands associated with AI computing workloads.
The scale of this forthcoming datacenter investment is huge. Recent research from International Data Corporation (IDC), entitled The Global Impact of Artificial Intelligence on the Economy and Jobs, predicts that AI will have a cumulative global economic impact of $19.9 trillion through 2030, driving 3.5 percent of global GDP in that year.
A new McKinsey report - AI power: Expanding data center capacity to meet growing demand - published in October 2024, notes that the "race is on" to build sufficient datacenter capacity to support the current and impending huge accelerations in the use of AI.
"A big chunk of growing demand - about 70 percent at the midpoint of McKinsey's range of possible scenarios - is for datacenters equipped to host advanced-AI workloads," McKinsey predicts. "And the nature of those workloads is rapidly transforming where and how datacenters are being designed and operated."
Challenges creating next-gen datacenters
However, there are substantial challenges associated with creating next-generation datacenters that can maintain reliability, connectivity and uptime in the face of the dramatically increased pressure that AI places on core networks. These workloads are typically much more compute-intensive than those created by traditional applications. In addition, AI routinely involves the exchange of massive volumes of data, combined with the need for very fast processing to deliver acceptable levels of user performance.
To cope with these AI-driven hikes in network and computational demand, Nokia notes that today's datacenter networks need to scale up substantially. However, the networking technology specialist warns that many current solutions have "lost the plot on reliability and simplicity", and cannot deliver the resilience needed to keep AI-supporting infrastructure and services running. Nokia cites a recent ACM SIGCOMM Computer Communication Review study of more than 180,000 switches in datacenters across 130 geographical locations, which found that approximately 32 percent of switch failures are caused by hardware issues and a further 17 percent by software gremlins in vendor switch operating systems.
To minimize downtime and maximize service availability across AI-supporting infrastructures, it's important to implement well-designed back-end and front-end network architectures that can meet the stringent requirements of AI workloads, combining high reliability, high speed, high capacity, low latency and lossless networking. According to Nokia, these networks must maximize the utilization of compute resources to achieve the shortest possible job completion times (JCTs) for effective processing of AI workloads: to meet existing and evolving AI needs, compute nodes must be interconnected by high-speed, lossless and low-latency networking.
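Why losslessness and low latency matter so much for JCT can be sketched with a toy model (an illustration, not Nokia's methodology): in synchronous AI training, each iteration's collective operation finishes only when the slowest GPU-to-GPU flow completes, so a single lossy or congested link inflates the completion time of the whole job.

```python
# Toy model: job completion time (JCT) in synchronous distributed training.
# Each iteration waits for ALL parallel flows of its collective (e.g. an
# all-reduce) to finish, so the slowest flow sets the pace for everyone.

def iteration_time(flow_times_ms):
    """A synchronous collective is gated by its slowest flow."""
    return max(flow_times_ms)

def job_completion_time(iterations, flow_times_ms):
    """Total training time across many identical iterations."""
    return iterations * iteration_time(flow_times_ms)

# Eight parallel flows; one straggler (e.g. packet loss forcing retransmits)
healthy = [10.0] * 8
with_straggler = [10.0] * 7 + [25.0]

print(job_completion_time(1000, healthy))         # 10000.0 ms
print(job_completion_time(1000, with_straggler))  # 25000.0 ms
```

One degraded flow out of eight thus stretches the whole job by 2.5x, which is why AI back-end fabrics aim for lossless forwarding and tight tail latency rather than just high average throughput.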
In terms of datacenter infrastructure design, the back-end network is used for interconnecting high-value Graphics Processing Units (GPUs) required for compute-intensive AI training, AI inference and other high-performance computing (HPC) workloads, often at very high scale. Meanwhile, the front-end needs to support connectivity for AI workloads, general-purpose workloads (non-AI compute) and the management of AI workloads. No less important is the capacity to support reliable, high-performance network interconnectivity between datacenters that implement AI and HPC workloads across geographically-dispersed locations, says the company.
Switches, gateways and interconnects through datacenters and clouds
In light of these prerequisites Nokia's datacenter network solutions are designed for the new demands of AI workloads using high-performance switches (the datacenter fabric), gateways and interconnects to, from, and through datacenters and clouds. Nokia explains that its datacenter networks enable connectivity and high performance to support mission-critical applications and services: "We leverage intelligent automation to optimize traffic flow, enhance performance, and predict issues before they impact your operations," states the company.
Nokia's datacenter fabric comprises a network of leaf and spine switches that work together to provide a resilient and scalable infrastructure for connecting traditional and AI-based applications installed on servers. Each of these leaf and spine switches includes hardware and software. The hardware provides the physical networking interfaces, switching matrix, control complex, fans and other physical components. High-performance hardware platforms enable enterprises to implement modern, massively scalable and reliable datacenter switching architectures for such leaf, spine, super-spine and management top-of-rack (TOR) applications. These include the fixed-configuration Nokia 7220 Interconnect Router (IXR) and Nokia 7215 Interconnect System (IXS), and the Nokia 7250 Interconnect Router (IXR) series, which provides modular and fixed-configuration platforms.
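The arithmetic behind a leaf-spine design like this is straightforward, and a generic sketch (not tied to any specific Nokia platform; the port counts below are hypothetical) shows how port allocation determines both server capacity and oversubscription:

```python
# Generic two-tier leaf-spine (Clos) sizing sketch. Every leaf connects
# to every spine, so capacity follows directly from port counts.
# Assumes uniform port speeds across downlinks and uplinks.

def fabric_capacity(leaves, spines, leaf_ports, uplinks_per_leaf):
    """Return (server_ports_total, oversubscription) for a 2-tier fabric."""
    # Each leaf needs at least one uplink to every spine.
    assert uplinks_per_leaf >= spines, "each leaf must reach every spine"
    server_ports = leaf_ports - uplinks_per_leaf
    servers = leaves * server_ports
    oversub = server_ports / uplinks_per_leaf  # downlink:uplink ratio
    return servers, oversub

# A general-purpose front-end fabric might tolerate oversubscription:
print(fabric_capacity(leaves=16, spines=4, leaf_ports=32,
                      uplinks_per_leaf=4))    # (448, 7.0) -> 7:1

# An AI back-end fabric is typically built non-blocking (1:1):
print(fabric_capacity(leaves=32, spines=16, leaf_ports=64,
                      uplinks_per_leaf=32))   # (1024, 1.0) -> 1:1
```

The contrast illustrates why AI back-end fabrics are so port-hungry: halving each leaf's server ports to dedicate them to uplinks is the price of a non-blocking design.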
Each switch runs network operating system (NOS) software that provides the intelligence, routing capabilities, telemetry and other programmable capabilities required to operate the device. The network is typically managed by datacenter fabric management and automation tools or platforms.
Another significant component of Nokia's datacenter fabric is Nokia Event-Driven Automation (EDA): a modern infrastructure automation platform. It has been designed to simplify datacenter network automation, from small edge clouds to the largest AI fabrics.
"With EDA, you can automate the entire datacenter network lifecycle from Day 0 design, Day 1 deployment to Day 2+ daily operations. The platform abstracts the complexity of multivendor networks, which helps you provision and monitor your network in real time and make sure it always operates as expected," the company states. "EDA builds on the proven Kubernetes platform and taps into its vast open-source ecosystem. This reduces your risks and lowers barriers to entry for users".
Added to that is the Nokia Data Center Gateway, a carrier-grade router that supports advanced networking functions for connecting a datacenter fabric to the outside world. The gateway connects to the spine layer in a mesh topology in a way that's similar to a leaf switch. According to Nokia, effective datacenter interconnect can be achieved by using the Wide Area Network (WAN) to connect multiple datacenter fabrics. However, the transport and service technologies used in the datacenter and the WAN are typically different, so some level of interworking is required.
The Nokia Data Center Gateway allows organizations to support DCI over IP-only, MPLS (LDP, RSVP, SR-MPLS) or SRv6 tunnels in the WAN. It also enables enterprises to scale and simplify the integration between the datacenter fabric and the WAN by collapsing the border leaf and the Provider Edge (PE) router into a single device.
"All of these capabilities enable you to enhance overall network performance and deliver a superior user experience by ensuring that critical applications receive the necessary bandwidth with minimal delay," Nokia states.
Hyperscale customers fitting out AI datacenters
This rapidly rising global demand for AI network capacity has prompted Microsoft to extend a multi-year deal with Nokia for the supply of datacenter routers and switches for deployment across its global Azure network of datacenters. The partnership will grow Nokia's global footprint to over 30 countries and strengthen its role as a strategic supplier for Microsoft's worldwide cloud infrastructures. As part of the expansion, Nokia will supply its 7250 IXR-10e platform to deliver multi-terabit-scale interconnectivity solutions.
A further real-world example of an enterprise proactively preparing its datacenters for AI was highlighted in February 2025, when Maxis, Malaysia's leading integrated telecommunications provider, selected Nokia's datacenter switches and Event-Driven Automation platform to deliver a more scalable, secure, and efficient datacenter architecture. Maxis announced it will be deploying Nokia 7220 IXR datacenter switches and EDA across multiple datacenters.
Nokia also recently announced it has been selected by Nscale, the hyperscaler engineered for AI, to deliver an IP network solution to support AI workloads at Nscale's new datacenter in Stavanger, Norway, which is powered entirely by renewable energy and optimized for energy-efficient cooling. The upgrade will enable cutting-edge AI services, including Graphics Processing Unit as a Service (GPUaaS).
As the momentum behind AI ramps up even further over the coming years, it is clear that investment in cutting-edge technologies to enhance datacenters supporting AI workloads will continue to grow commensurately. A new Goldman Sachs report, AI to drive 165 percent increase in data center power demand by 2030, published in February 2025, describes how the global explosion in generative AI has resulted in a technology "arms race" that will require a surge in construction of high-density datacenters.
It is apparent that, as the battle lines are being drawn in the global drive to maximize the potential of AI, many enterprises are moving to prepare their datacenter infrastructures for the challenges ahead.
Sponsored by Nokia.