Microsoft wants fatter pipes between its AI datacenters, asks Lumen to make light work of it

Is this what the kidz call a glow-up?


Microsoft has tasked network operator Lumen Technologies — formerly CenturyLink — with scaling up its network capacity as the Windows giant looks to grow its burgeoning AI services business, the duo revealed Wednesday.

Specifically, the deal will see Microsoft employ Lumen's Private Connectivity Fabric to bolster both bandwidth and overall capacity between its AI datacenters using a combination of current and fresh fiber lines dedicated to the cause.

The partnership comes as Microsoft looks to capitalize on its early investments in OpenAI by shoehorning the super-lab's generative neural networks into everything from GitHub and Office 365 to the Windows platform as a whole. However, these AI products require the processing and transfer of massive quantities of data, putting pressure on existing datacenter interconnect networks.

Modern large language models, like those being developed in Microsoft datacenters by OpenAI, are often trained on many terabytes – tens if not hundreds of TB – of data. Larger models also demand bigger compute clusters, with tens of thousands of GPUs working in unison, to train such systems in a reasonable amount of time.
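
For a rough sense of why pipe size matters here, consider a back-of-the-envelope sketch. The dataset size and link speeds below are illustrative assumptions on our part, not figures disclosed by Microsoft or Lumen:

```python
# Rough transfer times for shuttling a training dataset between
# datacenters. Dataset size and link capacities are assumptions
# for illustration, not disclosed figures.

DATASET_TB = 100                      # assume ~100 TB of training data
LINK_GBPS = [100, 400, 1600]          # assumed inter-datacenter link speeds

dataset_bits = DATASET_TB * 1e12 * 8  # terabytes -> bits

for gbps in LINK_GBPS:
    hours = dataset_bits / (gbps * 1e9) / 3600
    print(f"{DATASET_TB} TB over {gbps} Gbps: ~{hours:.1f} hours")
```

Even a one-off copy at those speeds takes on the order of hours, and training and serving workloads move data continuously rather than once.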

As Microsoft looks to grow its AI infrastructure business to support fresh clients and model builders, it's not surprising that there are challenges with inter-datacenter capacity.

It's not clear how much additional capacity the partnership will afford Microsoft. We suspect some of this will depend on the intended use case, as the technology and constraints involved differ depending on whether Microsoft is looking to scale its GPU clusters across multiple sites or simply to bolster its inter-datacenter communications.
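
The first of those scenarios is by far the hungrier one. As a hedged sketch – the model size, precision, and link speed below are our assumptions, and real multi-site training setups lean on gradient compression and communication overlap to soften the blow:

```python
# Illustrative per-step traffic if one model were trained across two
# sites with plain data parallelism. All figures are assumptions.

PARAMS = 1e12           # assume a 1-trillion-parameter model
BYTES_PER_PARAM = 2     # bf16 gradients

grad_bits = PARAMS * BYTES_PER_PARAM * 8   # gradient bits exchanged per step
link_gbps = 400                            # assumed cross-site link

print(f"~{grad_bits / 1e12:.0f} Tb of gradient traffic per training step")
print(f"~{grad_bits / (link_gbps * 1e9):.0f} s just to move it at {link_gbps} Gbps")
```

Tens of terabits per step is why stretching a single training cluster across sites demands far fatter links than merely syncing storage or shuttling inference traffic between datacenters.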

We've asked the folks at Redmond for clarification and will let you know if we hear anything back.

As part of the deal, Lumen has committed to migrating at least some of its workloads to run in Microsoft Azure. These efforts will apparently include integrating Microsoft's AI technology into Lumen's products and employing Redmond's Entra identity management and access control platform.

Lumen expects the migration will save it more than $20 million over the next year and improve its customer service experience, which suggests Lumen may be looking to offload some of its customer service roles to Microsoft's OpenAI-based chatbots. ®
