Compute Express Link glue that binds processors and accelerators hits spec version 2.0... so, uh, rejoice?

Interconnect tech likely to improve data center operations

The CXL Consortium, a standards body focused on interconnect technology, released the 2.0 version of its Compute Express Link specification today.

Introduced in March 2019, CXL is intended to help connect data center host processors to assorted other devices like accelerators (graphics chips and FPGAs), memory buffers, and smart network interface controllers (NICs), to shuttle data back and forth as efficiently as possible.

The spec is backed by the usual rogues' gallery of US companies, specifically AMD, Cisco, Dell EMC, Facebook, Google, HPE, IBM, Intel, Microchip, Microsoft, and Xilinx, along with Arm in the UK. The group also has support from China-based firms like Alibaba and Huawei.

It's worth noting that Arm, AMD, and Xilinx are among the backers of CCIX, a PCIe-based alternative high-speed CPU-accelerator interconnect, and were not among the CXL founding members.

CXL 2.0 is designed to be backwards compatible with CXL 1.1 and 1.0 while adding various new capabilities.

"The data rate remains the same at 32GT/s, but there are a bunch of new usage models," explained Debendra Das Sharma, an Intel fellow, director of Intel's I/O Technology and Standards Group, and CXL technical task force co-chair, in an interview with The Register.

The first has to do with fan-out support, or expansion. "CXL supports switching infrastructure," he said. "So [a] CXL 2.0 switch is going to enable you to connect multiple devices to the host using one CXL link."
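In other words, a single upstream link from the host can fan out to many downstream devices. A minimal sketch of that topology, in Python with entirely made-up device names (this is an illustration of the idea, not anything from the spec):

```python
# Toy topology model, illustrative only: one host root port reaches
# several devices through a single CXL 2.0 switch, rather than
# consuming one host link per device.

from dataclasses import dataclass, field

@dataclass
class Switch:
    downstream: list = field(default_factory=list)  # devices below the switch

    def attach(self, device: str) -> None:
        self.downstream.append(device)

@dataclass
class Host:
    link: Switch  # the single CXL link from host to switch

# One host link, many devices: the fan-out CXL 2.0 switching enables.
switch = Switch()
for dev in ("gpu0", "fpga0", "memory-expander0", "smart-nic0"):
    switch.attach(dev)

host = Host(link=switch)
print(f"1 host link -> {len(host.link.downstream)} devices: {host.link.downstream}")
```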

Das Sharma said another benefit of CXL 2.0 is memory pooling, which allows resources like GPUs to be pooled across hosts and released when not needed.

"This way you don't have resources in the system that are locked up with a given server," he explained. "Resources are fungible, and you can basically do what we call server-level disaggregation."

"A lot of the persistent memories out there have a slow read-write bandwidth," said Larrie Carr, a Technical Strategy and Architecture fellow at Microchip and a CXL board member, "so aggregating a number of persistent memory modules together to form a fatter pipe would be one application [of this technology]."

Carr said the standardization of the management of persistent memory is an important step for the CXL spec. "I think this is almost equivalent to what we did with NVMe and storage where if a new technology comes along, there is now actually a standard to implement to allow it to seamlessly integrate into this external infrastructure," he said.

There's also a standardized CXL fabric manager for resource allocation.

"Once you are managing resources across different hosts, across different devices, across different domains, there has to be a notion of how do they talk to each other," said Das Sharma. "So we have defined how a fabric manager works, what will the API's look like, all of that comes with CXL 2.0."

The spec also introduces link-level Integrity and Data Encryption (CXL IDE) to keep data safe as it traverses the CXL link.
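IDE lives in the link hardware rather than in software, but the principle, authenticated encryption so traffic can neither be snooped nor silently altered in flight, can be sketched with AES-GCM, the cipher family link-level encryption schemes of this kind typically build on. A conceptual illustration only; key exchange and flit handling on a real link are far more involved.

```python
# Conceptual sketch of authenticated link encryption using AES-GCM
# (via the third-party 'cryptography' package). Not CXL IDE itself,
# just the underlying confidentiality-plus-integrity idea.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)

flit = b"example payload crossing the link"
ciphertext = aead.encrypt(nonce, flit, None)

# Tampering with the ciphertext makes decryption fail outright,
# which supplies the "integrity" half of Integrity and Data Encryption.
assert aead.decrypt(nonce, ciphertext, None) == flit
```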

Asked about examples of companies making use of the technology, Carr declined to name anyone but, as someone who interacts with a lot of IT system architects, insisted that details will emerge before too long.

"There is nothing public right now, but CXL is a new way of connecting into your processor memory hierarchy," he said. "And it is opening up a lot of thinking. There's a lot of whiteboarding going on. And over the next year or two, I think some of these ideas are going to hit at least the proof-of-concept stage." ®
