Big-name tech companies have teamed up to unveil yet another interconnect, this one dubbed Compute Express Link, or CXL, which is aimed at plugging data-center CPUs into accelerator chips.
The technology is built on fifth-generation PCIe protocols and electrical connections, meaning each lane can run at up to 32 billion transfers per second (32GT/s) – which works out to roughly 64GB/s in each direction, or 128GB/s bidirectional, over a full x16 link.
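If you want to sanity-check those headline numbers yourself, here's the back-of-the-envelope arithmetic, sketched in Python. This uses PCIe 5.0's raw signalling rate and ignores the 128b/130b encoding overhead, so real-world throughput will be a touch lower:

```python
# Rough bandwidth math for a PCIe 5.0 / CXL x16 link (raw rate,
# encoding overhead ignored for simplicity).
GT_PER_SEC = 32e9        # 32 billion transfers per second, per lane
LANES = 16               # a full x16 link
BITS_PER_TRANSFER = 1    # one bit moves per transfer, per lane

bits_per_sec_per_dir = GT_PER_SEC * LANES * BITS_PER_TRANSFER
gb_per_sec_per_dir = bits_per_sec_per_dir / 8 / 1e9   # bits -> gigabytes
gb_per_sec_bidir = gb_per_sec_per_dir * 2             # links are full duplex

print(gb_per_sec_per_dir)  # -> 64.0 GB/s each way
print(gb_per_sec_bidir)    # -> 128.0 GB/s total
```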
More details on version 1.0 of the spec are due to be published on computeexpresslink.org. Support for the high-speed interface is set to appear in chipsets in 2020, and in shipping products in 2021, we're told. So this is more of a heads-up than a hardware launch.
It is not expected to replace DDR RAM connections, though it can be used to build tiers of memory, such as hooking non-volatile storage to CPU cores. For now, it's aimed at accelerators: think graphics chips, FPGAs, and ASICs.
Indeed, the techies working on the blueprints hope it will become a future standard for connecting server processors to silicon dedicated to handling AI, network packets, security, and so on, at high speed.
It has three interface methods: an IO mode mainly for sending commands and receiving status updates; a memory protocol, allowing the host processors to efficiently share physical RAM with an accelerator; and a data coherency interface. The key blurb from the CXL Consortium, which is promoting the standard, reads thus:
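For those keeping score, the spec's own names for those three methods are CXL.io, CXL.mem, and CXL.cache. Here's a rough cheat-sheet, sketched as a Python enum purely for reference – the role descriptions are our summary, not spec text:

```python
from enum import Enum

class CxlProtocol(Enum):
    # CXL.io: PCIe-style I/O semantics -- device discovery, configuration,
    # and DMA; the mode used for sending commands and status updates.
    IO = "CXL.io"
    # CXL.mem: lets the host processor access device-attached memory,
    # so CPUs and accelerators can efficiently share physical RAM.
    MEM = "CXL.mem"
    # CXL.cache: lets a device coherently cache host memory --
    # the data coherency interface described above.
    CACHE = "CXL.cache"

for proto in CxlProtocol:
    print(proto.value)
```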
CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators. CXL was designed to be an industry open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.
The specification is available to companies that join the CXL Consortium.
Any organization is permitted to join the club – you pretty much have to in order to see the, quote, open, unquote, specs – and help steer its technical development, it is claimed. The companies initially involved, and keen to use the tech, are the usual clique: Intel, Microsoft, Google, Facebook, HPE, Cisco, and Dell-EMC, plus Huawei and Alibaba. So, no AMD, Nvidia, Xilinx, IBM, nor Arm and its server-class system-on-chip licensees, though as we said, their engineers are said to be welcome to join or implement the spec. We'll have to see if corporate politics or commercial rivalries permit that.
"We want this to be the most open of open specifications," Jim Pappas, director of technology initiatives at Intel, told The Register late last week.
Version 2.0 is, we're assured, already being worked on, and will be backwards compatible with version 1.0. Thus if your CXL 1.0 product comes out just as version 2.0 is published, it will still work when plugged into a version 2.0 interface.
In short, here's another interconnect to look out for in future purchases. It's being driven by Intel and its pals, with certain cloud giants and on-premises equipment makers eagerly waiting to use it to hook their Intel CPUs to supporting devices.
There are, of course, competing incompatible high-speed CPU-accelerator interconnects out there, notably CCIX, which is also PCIe-based and backed primarily by AMD, Arm, Mellanox, Qualcomm, Xilinx, and Huawei. That's pretty much everyone who isn't in the CXL founding gang, with the exception of Huawei. Fancy that! ®