Microsoft wants fatter pipes between its AI datacenters, asks Lumen to make light work of it
Is this what the kidz call a glow-up?
Microsoft has tasked network operator Lumen Technologies — formerly CenturyLink — with scaling up its network capacity as the Windows giant looks to grow its burgeoning AI services business, the duo revealed Wednesday.
Specifically, the deal will see Microsoft employ Lumen's Private Connectivity Fabric to bolster both bandwidth and overall capacity between its AI datacenters using a combination of current and fresh fiber lines dedicated to the cause.
The partnership comes as Microsoft looks to capitalize on its early investments in OpenAI by shoehorning the super-lab's generative neural networks into everything from GitHub and Office 365 to the Windows platform as a whole. However, these AI products require the processing and transfer of massive quantities of data, putting pressure on existing datacenter interconnect networks.
Modern large language models, like those OpenAI is developing in Microsoft's datacenters, are typically trained on tens if not hundreds of terabytes of data. Larger models, meanwhile, demand bigger compute clusters, with tens of thousands of GPUs working in unison, to complete training runs in a meaningful amount of time.
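For a sense of scale, here's a rough back-of-envelope sketch of how long it takes to shove that much data between sites. The dataset sizes, link speeds, and efficiency figure below are illustrative assumptions on our part, not numbers from Microsoft or Lumen.

```python
# Back-of-envelope: time to move a training dataset between datacenters
# over a dedicated fiber link. All figures are illustrative assumptions.

def transfer_time_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours needed to move dataset_tb terabytes over a link_gbps
    link, assuming only a fraction (efficiency) of line rate is usable."""
    dataset_bits = dataset_tb * 1e12 * 8        # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency   # Gbps -> usable bits per second
    return dataset_bits / usable_bps / 3600     # seconds -> hours

if __name__ == "__main__":
    for size_tb in (10, 100, 1000):
        for link_gbps in (100, 400):
            hrs = transfer_time_hours(size_tb, link_gbps)
            print(f"{size_tb:>5} TB over {link_gbps} Gbps: ~{hrs:.1f} hours")
```

On those assumed numbers, a 100 TB dataset takes roughly three hours to cross a 100 Gbps link, and a petabyte-class corpus takes more than a day, which is the kind of arithmetic that makes fatter inter-datacenter pipes attractive.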
As Microsoft looks to grow its AI infrastructure business to support fresh clients and model builders, it's not surprising that there are challenges with inter-datacenter capacity.
It's not clear how much additional capacity the partnership will afford Microsoft. We suspect the answer depends on the intended use case: the technology and constraints differ considerably depending on whether Microsoft wants to stretch a single GPU training cluster across multiple sites, or simply beef up general inter-datacenter communications.
We've asked the folks at Redmond for clarification and will let you know if we hear anything back.
As part of the deal, Lumen has committed to migrating at least some of its workloads to run in Microsoft Azure. These efforts will apparently include integrating Microsoft's AI technology into Lumen's products and employing Redmond's Entra identity management and access control platform.
Lumen expects the migration will save it more than $20 million over the next year and improve its customer service experience, which suggests Lumen may be looking to offload some of its customer service roles to Microsoft's OpenAI-based chatbots. ®