Rethinking storage for the AI era
Huawei outlines how telecom operators may need to redesign data infrastructure as AI workloads grow
Sponsored Post
At MWC Barcelona 2026, Huawei used the stage to argue that the next phase of AI adoption will hinge less on models and more on the infrastructure that feeds them.
Speaking at the event, Yuan Yuan, president of Huawei's data storage product line, said many organizations have embraced AI experimentation but have yet to scale it in production. According to Huawei, more than 90 percent of enterprises have explored AI-driven innovation in the past two years, yet fewer than 10 percent have successfully deployed it at scale.
One of the main reasons is data.
Enterprises still struggle with fragmented data silos, inconsistent data quality, and labor-intensive preparation processes such as collection, cleansing and labeling. These issues make it difficult for AI systems to deliver commercial value and have raised questions about return on investment.
Huawei's view is that the role of storage infrastructure must evolve. Rather than simply archiving information, future platforms will need to support AI systems that continuously access, learn from and update data. As Yuan described it, the shift will move organizations from storing data to storing knowledge and memory.
This change reflects a broader transformation in how AI applications operate. As AI agents become major consumers of data, storage platforms must support new formats including vector, graph and key-value data models. Huawei argues that integrating knowledge storage, memory functions and inference acceleration into a unified data platform could simplify architecture while improving performance.
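To make the idea of a unified data platform concrete, here is a minimal sketch of a store that serves the same records through both a key-value lookup and a vector similarity search. It is a toy illustration of the concept only; the class name, API, and brute-force search are assumptions for demonstration, not Huawei's actual design.

```python
import math

class UnifiedStore:
    """Toy store exposing key-value and vector access over the same records.
    Illustrative only; not any vendor's real API."""

    def __init__(self):
        self.records = {}  # key -> (embedding vector, payload)

    def put(self, key, vector, payload):
        self.records[key] = (vector, payload)

    def get(self, key):
        # plain key-value lookup by exact key
        return self.records[key][1]

    def nearest(self, query):
        # brute-force cosine-similarity search over stored vectors
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        return max(self.records, key=lambda k: cos(self.records[k][0], query))

store = UnifiedStore()
store.put("doc1", [1.0, 0.0], "network fault report")
store.put("doc2", [0.0, 1.0], "billing FAQ")
print(store.nearest([0.9, 0.1]))  # retrieves the semantically closest record
```

A production system would replace the brute-force scan with an approximate nearest-neighbor index, but the point stands: one platform, several access patterns over the same data.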
The company also highlighted the operational pressures telecom carriers face as AI services expand. Inference speed, reliability and cost remain key challenges for operators deploying large models across customer services, internal systems and new digital offerings.
Huawei pointed to an intelligent computing service platform developed with a Chinese carrier that uses large-scale key-value caching to improve efficiency. By reducing repeated computation and coordinating on-chip memory, DRAM and storage, the system is designed to improve throughput while lowering inference costs and response times.
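The tiering idea described above can be sketched in a few lines: a small fast tier (standing in for on-chip memory and DRAM) holds the most recently used cache entries, and least-recently-used entries are demoted to a larger slow tier (standing in for storage) rather than discarded, so repeated requests avoid recomputation. The class name, capacities, and eviction policy are assumptions for illustration, not details of the carrier platform.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier key-value cache with LRU demotion.
    A sketch under stated assumptions, not Huawei's actual design."""

    def __init__(self, fast_capacity=2):
        self.fast = OrderedDict()  # fast tier, kept in LRU order
        self.slow = {}             # slow tier, unbounded in this sketch
        self.fast_capacity = fast_capacity

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)  # mark as recently used
            return self.fast[key]
        if key in self.slow:
            value = self.slow.pop(key)  # promote back to the fast tier
            self.put(key, value)
            return value
        return None                     # miss: caller must recompute

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_capacity:
            old_key, old_value = self.fast.popitem(last=False)
            self.slow[old_key] = old_value  # demote LRU entry instead of dropping it

cache = TieredKVCache(fast_capacity=2)
cache.put("prompt-a", "cached attention state A")
cache.put("prompt-b", "cached attention state B")
cache.put("prompt-c", "cached attention state C")  # demotes prompt-a to the slow tier
print(cache.get("prompt-a"))  # served from the slow tier, no recomputation needed
```

The design choice worth noting is that eviction demotes rather than deletes: a cache miss in fast memory becomes a slower read instead of a full recomputation, which is what drives down inference cost and latency.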
The broader message from Huawei is that AI infrastructure planning must extend beyond GPUs and model training. Storage architecture, data lifecycle management and compute collaboration will increasingly determine how effectively organizations deploy AI.
For telecom operators navigating digital transformation, the implication is clear: data infrastructure may become one of the most critical foundations for making AI practical at scale.
Sponsored by Huawei