Can Larrabee Lazarus stunt Nvidia's Tesla?
Our pal TPM writes here about how Intel is re-targeting the Larrabee (or a Larrabee-like) processor from being a graphics card replacement to providing HPC processing power in a tidy package.
It would act as a co-processor a la Nvidia's Tesla or AMD's GPUs or even FPGAs. The key difference between Larrabee and other solutions is that Larrabee uses the ubiquitous x86 architecture and instruction set, and the others don't.
That difference matters: the need for custom coding has been the biggest hurdle to GPU adoption, because developers and users must rework their applications to take advantage of the much speedier number-crunching provided by GPU accelerators.
Nvidia may have thought it had smooth sailing when Intel dropped Larrabee as a video device, and AMD may have figured it had more runway to get its products off the ground. But now they both will have to battle the chip behemoth in the HPC space.
The latest TOP500 list has been released (here), and we already know from the press releases that the largest systems are using thousands of GPUs to pump up performance.
Hitting the numbers
The TOP500 guys don't break out hybrid systems or provide GPU counts for the boxes on their list, but we know that the systems at the tip top of the chart are using at least 4,000 GPUs to hit their numbers.
I think the burgeoning trend toward greater use of analytics will spur growth in GPUs and other accelerators. These products, essentially repackaged high-end consumer video cards, carry much higher margins when sold as HPC accelerators and bring happy smiles to the faces of product managers.
So how well will Intel do in this market? It isn't a slam dunk: we don't know how its final product is going to perform, and bang-for-the-buck performance is, of course, very important in an accelerator. However, the development environment and ecosystem are even more important.
If Intel had made this move two or three years ago, it could have conceivably killed off Nvidia's CUDA operating environment. But it didn't, and there is now critical mass behind CUDA and Nvidia, with AMD and the OpenCL development environment gaining some followers as well. Dominating this market won't be a cakewalk for Intel, but this announcement certainly will garner attention and get some customers lined up to kick the tires when the silicon is released.
One thing is certain: the game has become quite a bit more interesting for Nvidia and AMD - and I mean 'interesting' in the old Chinese curse sense of the word...