What links AMD CPU guru Jim Keller, an AI chip upstart, and SiFive? This vector-crunching 64-bit RISC-V processor
Stressing the ex in x86
Canadian AI chip startup Tenstorrent, which is headed by former top AMD engineers, has picked one of SiFive's latest RISC-V CPU designs for its unconventional machine-learning processors.
Specifically, Tenstorrent will license SiFive's Intelligence X280 processor cores to slot them into its homegrown AI training and inference chips alongside its own Tensix cores.
The X280 is a 64-bit multi-core-capable RISC-V CPU design that supports the open-source instruction set architecture's vector math extension. That extension is expected to prove useful in accelerating machine-learning applications.
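To get a sense of why a vector ISA matters here, consider the kind of inner loop that dominates neural-network inference: a dot product over long arrays. The sketch below is purely our own illustration, not SiFive or Tenstorrent code; the function name and shape are arbitrary, and the idea is simply that a compiler targeting the RISC-V vector extension (for instance, clang or GCC with -march=rv64gcv) can in principle turn a loop like this into vector instructions that process many elements per iteration.

    /* Toy dot-product kernel, the sort of loop at the heart of
     * neural-network inference. Shown only as an illustration of
     * the work a vector unit can chew through; not vendor code. */
    #include <stddef.h>

    float dot_product(const float *a, const float *b, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++) {
            acc += a[i] * b[i];   /* multiply-accumulate, vectorizable */
        }
        return acc;
    }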
SiFive's Chris Lattner, best known as the creator of the LLVM compiler infrastructure, is due to give a talk tomorrow at the Linley Spring Processor Conference on the X280's features and the Intelligence family of designs, which includes the VIU75 launched late last year.
Tenstorrent's Tensix cores are described here by our sister title, The Next Platform. The startup hopes to provide specialized processors – indeed, a full hardware and software stack – that can handle exponentially growing neural networks, primarily by avoiding the usual route of stuffing dies with matrix-math units and instead taking a packet processing approach.
One remarkable thing about Tenstorrent is that its CEO is Ljubisa Bajic, a former AMD and Nvidia chip architect, and its CTO and president is Jim Keller, the AMD and Apple CPU doyen who has had stints at DEC, Intel, Tesla, and other places. In fact, most of Tenstorrent's top team seems to have put in time at AMD and Intel, among others.
Now they're building AI accelerators around conditional computation, and tapping up RISC-V CPU cores from SiFive. To us, it sounds as though Tenstorrent wants to build a processor that does it all, heterogeneously: Tensix for major neural network routines, and X280 for application code and anything that needs vector math. One of those leans toward training, and the other toward inference.
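For a flavor of what "conditional computation" means in practice, here's a toy sketch of our own devising – in no way Tenstorrent's actual scheme, and the block size, threshold, and function name are all made up for illustration. It performs a matrix-vector multiply but first checks each block of input activations, and simply skips the blocks that are effectively zero, so arithmetic is only spent where there's signal.

    /* Toy "conditional computation": y = W * x, skipping activation
     * blocks that are effectively all zero. Illustration only. */
    #include <math.h>
    #include <stddef.h>
    #include <stdlib.h>

    #define BLOCK 64   /* arbitrary block size for this example */

    void cond_matvec(const float *w,  /* dense weights, rows x cols */
                     const float *x,  /* input activations, length cols */
                     float *y,        /* output, length rows */
                     size_t rows, size_t cols, float eps) {
        size_t nblocks = (cols + BLOCK - 1) / BLOCK;
        unsigned char *active = malloc(nblocks);
        if (!active) return;

        /* Pass 1: cheaply decide which activation blocks carry any signal. */
        for (size_t b = 0; b < nblocks; b++) {
            size_t end = (b + 1) * BLOCK < cols ? (b + 1) * BLOCK : cols;
            float mag = 0.0f;
            for (size_t c = b * BLOCK; c < end; c++) mag += fabsf(x[c]);
            active[b] = (mag > eps);
        }

        /* Pass 2: do the multiply-accumulate work only for active blocks. */
        for (size_t r = 0; r < rows; r++) {
            float acc = 0.0f;
            for (size_t b = 0; b < nblocks; b++) {
                if (!active[b]) continue;   /* the conditional skip */
                size_t end = (b + 1) * BLOCK < cols ? (b + 1) * BLOCK : cols;
                for (size_t c = b * BLOCK; c < end; c++)
                    acc += w[r * cols + c] * x[c];
            }
            y[r] = acc;
        }
        free(active);
    }

The point of the exercise is that the amount of arithmetic scales with how much of the input actually matters, rather than with the raw size of the network – the property Tenstorrent is betting on as models keep ballooning.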
Meanwhile, SiFive and Renesas yesterday announced they're designing chips together for vehicles, supply chains be damned. ®
PS: AI chip biz Cerebras has come up with another ridiculously large "waferscale" part, the CS-2, detailed here.