UXL Foundation readying alternative to Nvidia's CUDA for this year
An open standard challenger appears
The UXL Foundation is readying its open standard accelerator programming model, touted by some as an alternative to Nvidia's CUDA platform, for "a spec release in Q4."
Announced last year, the Unified Acceleration (UXL) Foundation is a group of companies operating under the aegis of the Linux Foundation to develop an open standard accelerator programming model for a range of application domains, including, of course, AI.
This would potentially make it a rival to Nvidia's software, notably its CUDA platform, which is long established but designed to work with the company's own GPU accelerator hardware.
Rod Burns, chair of the UXL Foundation Steering Committee, told The Register: "In fact, the specification has been under development for a few years and has been released regularly over that time. This means we already have a mature specification for many of the fundamentals needed."
However, he added: "This work continues through 2024 through refinement and is led by the Specification Working Group within the foundation; we will be aiming to ratify a spec release in Q4."
Beyond this, implementations of the spec have been under development for the past few years and are now being used by some developers to write code and target multiple vendors, Burns explained.
"Our goal this year is to continue to build out the vendor support for the libraries, add new features and follow best practices for open governance to deliver a specification fit for the whole community."
Meanwhile, Reuters reported that the UXL Foundation technical steering committee is preparing to "nail down" technical specifications in the first half of this year, and that these will be refined to a "mature state" by the end of the year.
The project has backing from a number of heavy-hitters, with steering members of the UXL Foundation including Arm, Intel, Google Cloud, Qualcomm, Fujitsu, and VMware.
Intel is the significant one here, as the UXL Foundation effort is effectively an evolution of the oneAPI initiative, its existing unified programming model aimed at providing a common experience for developers across accelerator architectures.
The heterogeneous support is intended to cover not only CPUs and GPUs but also other accelerators such as FPGAs, although recent interest in AI acceleration has largely focused on GPUs.
"The increasing demand for data-intensive workloads has led to proliferation in the use of GPUs, and most recently the emergence of LLM-based AI applications has created an explosion in GPU usage," Burns wrote at the time of UXL's announcement in September.
"The challenge we are facing right now is that, where Linux and GNU transformed the software stack for CPUs using open source and standards-based projects, the GPU software stack is still quite new and standards are in some areas, especially AI, still being defined," he added.
Burns is also VP of Ecosystem at Codeplay Software, which Intel acquired in 2022 for its expertise in SYCL, a cross-platform abstraction layer used in oneAPI that lets developers program heterogeneous architectures in standard C++.
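To give a flavour of the single-source model SYCL and oneAPI promote, here is a minimal vector-add sketch, not taken from the article, that compiles from one C++ source and runs on whichever device the runtime selects. The kernel, selector, and variable names are illustrative; it assumes a SYCL 2020 toolchain such as Intel's DPC++ compiler (icpx -fsycl).

```cpp
// Minimal SYCL 2020 sketch (illustrative, not from the article): one C++
// source, dispatched to whatever accelerator the runtime picks - CPU, GPU,
// or other supported device.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // default_selector_v picks the "best" available device; swapping in
    // cpu_selector_v or gpu_selector_v retargets the same code.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers hand the host data to the SYCL runtime for the duration
        // of this scope; results are copied back when the buffers destruct.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
            // The kernel body is plain C++; the same source builds for any
            // backend the toolchain supports.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // device work finishes and results land back in vector c here

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
    return 0;
}
```

The point of the sketch is the retargeting story: nothing in the kernel mentions a vendor, so the choice of CPU, GPU, or other accelerator is a runtime or build-time decision rather than a rewrite.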
Of course, challenging Nvidia's dominance may not be so easy. UXL eventually aims to support Nvidia hardware and code, yet many customers have already invested large sums of money in projects based on Nvidia's software stack and may see little reason to change.
AMD, for example, was said to be working to bring binary compatibility with Nvidia's CUDA APIs to its own ROCm software so that applications written for Nvidia would run on its hardware without modification. According to Phoronix, however, AMD has not released it as a product and has now discontinued funding the project. ®