'Bigger is better' is back for hardware – without any obvious benefits

Software was supposed to eat the world, but it's scarcely snacking on today's monstrous silicon

Column When I first saw an image of the 'wafer-scale engine' from AI hardware startup Cerebras, my mind rejected it. The company's current product is about the size of an iPad and packs 2.6 trillion transistors into more than 850,000 cores.

I felt it was impossible. Not a chip – not even a bunch of chiplets – just a monstrous silicon wafer.

Cerebras's secret sauce allows it to treat that massive wafer as a single computing device, by wiring together the cores that survived the manufacturing process and routing around the (inevitably numerous) ones that didn't.

That approach resembles the 17-year cicada's strategy, applied to computing: turn up in such overwhelming numbers that it simply doesn't matter if thirty per cent of capacity disappears into the belly of every bird and lizard within a hundred kilometers. Cerebras says sure – we'll take the whole wafer, we'll make as many cores as we can, and if a percentage of them don't work, no biggie.
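To make the cicada analogy concrete, here's a toy sketch in Python of yield-tolerant mapping: a hypothetical grid of cores, an assumed defect rate, and a logical device built from whatever survives. The grid size, defect rate, and function names are illustrative assumptions, not Cerebras's actual interconnect scheme.

```python
# Toy sketch of yield-tolerant mapping: fabricate a grid of cores, accept that
# some fraction are defective, and expose only the working ones as one logical
# device. Illustrative only; not Cerebras's actual process or interconnect.
import random

GRID = 100          # hypothetical 100 x 100 grid of cores on one wafer
DEFECT_RATE = 0.3   # assume ~30 per cent of cores are lost, cicada-style

def fabricate(grid=GRID, defect_rate=DEFECT_RATE, seed=42):
    """Return the set of (row, col) cores that survived manufacturing."""
    rng = random.Random(seed)
    return {
        (r, c)
        for r in range(grid)
        for c in range(grid)
        if rng.random() > defect_rate
    }

good_cores = fabricate()
print(f"Usable cores: {len(good_cores)} of {GRID * GRID}")
# The 'device' is simply whatever survived: schedule work across good_cores
# and ignore everything else.
```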

At first it seemed that Cerebras was simply an outlier. But this year we've learned it was actually a forerunner. When Apple introduced its Mac Studio – powered by its latest M1 Ultra silicon – we got a look at what happens when a mainstream computer hardware manufacturer decides to go all in: 114 billion transistors spread across nearly 1,000mm² of silicon, two M1 Max dies fused into a single package. To cool all of that silicon, Apple had to strap a huge copper heatsink to the top of its chip. This monster breathes fire.

Curiously, the kind of monstrous computing power offered by the M1 Ultra – and new designs from Intel, AMD, and Arm – has not inspired a new class of applications that could exploit the vastly increased capabilities of the hardware. We might have expected some next-level artificial intelligence, augmented reality, or other game-changer. All Apple offered was the opportunity to edit 4K video more efficiently.

Yawn.

It feels as though our capacity to etch transistors onto a bit of silicon has outgrown our ability to imagine how they could be put to work. The supercomputer of just 20 years ago has shrunk into a tidy desktop workstation, but where are the supercomputer-class applications?

Back in March, Nvidia announced its latest GPU architecture – Hopper – and previewed its fused Grace 'superchips': 144 Arm cores with integrated RAM in one package, plus a variant that pairs a Grace CPU with a Hopper GPU. Together they squeeze a lot of datacenter functionality into a few massive pieces of silicon, creatively bonded to operate efficiently. It's another monstrous bit of hardware – but at least it offers a price-performance benefit for existing datacenters.

On the desktop and within the datacenter, the pendulum has begun to swing back toward 'bigger is better'. Moore's Law may still apply – until we exhaust the periodic table – but my iPhone 13 is significantly chunkier than my iPhone X, and my next MacBook Pro will be bigger and heavier than my 2015 model. Something has shifted in computing. While it makes sense that the datacenter should see such advances, it remains unclear what that means for the desktop and personal computing.

When I started my career in IT, mainframes had given way to minicomputers, and those minicomputers would soon be overwhelmed by microcomputers. We've lived in the micro era ever since. Now we're witnessing the dawn of a new era: monstrocomputers – devices that blend trillions of transistors with kilowatts of power.

But to what end? Raw capacity has never been the point of computing. These monsters need to be tamed, trained, and put to work. ®
