Intel: Here, have some AI reference kits ... now please buy our silicon

With 34 designs to pick from, you can choose your own ML adventure

To encourage techies and engineers to try out its AI acceleration hardware, Intel has put together a bunch of software reference kits it claims will reduce the time and resources required to deploy machine learning systems on its silicon.

As you'd expect, the 34 open source reference kits address a variety of common AI/ML workloads – ranging from large language models used to power chatbots and other generative AI, to more mundane tasks like object detection, voice generation, and financial risk prediction.

Intel says each kit, developed in collaboration with Accenture, contains the model code, training data, libraries, oneAPI components, and instructions necessary to implement it on Intel hardware. We're told the reference kits will be updated periodically based on community feedback.

To be clear, these kits appear to be purely software: you provide the (Intel inside) hardware, and then use the given kits to build applications on it.
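To give a flavor of how such oneAPI-based software typically plugs in, here's a minimal sketch assuming a kit leans on Intel Extension for Scikit-learn (the `scikit-learn-intelex` package) – the package and function names below come from that library, not from Intel's announcement, so treat this as illustrative rather than a quote from any particular kit:

```python
# Hypothetical sketch: Intel Extension for Scikit-learn monkey-patches
# stock scikit-learn so that supported estimators use oneDAL-accelerated
# implementations on Intel hardware. Package and API names are from the
# scikit-learn-intelex library, not from the reference kits themselves.
try:
    from sklearnex import patch_sklearn  # ships with scikit-learn-intelex
    patch_sklearn()  # subsequent sklearn imports resolve to accelerated code
    accelerated = True
except ImportError:
    # Without the extension installed, you simply fall back to stock sklearn
    accelerated = False

print("oneAPI acceleration active:", accelerated)
```

The appeal of this pattern is that existing scikit-learn code needs no rewriting – the patch call swaps implementations underneath the same API.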

Intel has quite a few accelerators and GPUs capable of running these sorts of AI apps – including its Habana Gaudi2 training processors, Ponte Vecchio GPUs, and the Advanced Matrix Extensions baked into Intel's Sapphire Rapids Xeon Scalable processors.

Despite all the hype surrounding generative AI, though, Intel's accelerators haven't enjoyed the public attention and widespread adoption that Nvidia's GPUs have. But given the large clusters of GPU nodes required to train the largest and most impressive models – the major cloud providers are deploying tens of thousands of GPUs and accelerators for this very reason – Intel may end up scoring wins simply because customers can't get their hands on Nvidia's cards in adequate volumes or at reasonable prices.

As reported by our sibling site The Next Platform, Nvidia's H100 PCIe cards – not even the most powerful version of the GPU – have been spotted selling for as much as $40,000 a pop on eBay.

So if Intel can lower the barrier to deploying AI workloads on its accelerators, it stands to reason the x86 titan should have an easier time convincing customers to buy its parts – particularly the more expensive ones.

Of course, Intel isn't alone in this strategy. Nvidia has already found great success in developing and commercializing software accelerated by its GPUs. Last year CFO Colette Kress highlighted the importance of such subscription software in driving the acceleration champ toward what it reckons is a $1 trillion market opportunity.

AMD has also become more aggressive about pushing its own GPUs and accelerators for AI. In June it detailed its Instinct MI300 APUs and GPUs, which are designed to compete head-on with Nvidia in both the HPC and AI/ML arenas. Alongside the new silicon, the chipmaker also announced a strategic partnership with Hugging Face, which develops tools for building ML apps, to optimize popular AI models for AMD's Instinct GPUs, Alveo FPGAs, and Epyc and Ryzen CPUs. ®
