Zombie Moore's Law shows hardware is eating software

Customised CPUs are doing things software just can't do on commodity kit

After being pronounced dead this past February - in Nature, no less - Moore’s Law seems to be having a very weird afterlife. Within the space of the last thirty days we've seen:

  1. Intel announce some next-generation CPUs that aren’t very much faster than the last generation of CPUs;
  2. Intel delay, again, the release of some of its 10nm process CPUs; and
  3. Apple’s new A10 chip, powering the iPhone 7, rank as one of the fastest mobile CPUs ever made.

Intel hasn’t lost the plot. In fact, most of the problems with Moore’s Law stem from Intel’s slavish devotion to a single storyline: more, smaller transistors are what everyone needs. That push toward ‘general purpose computing’ gave us thirty years of Wintel, but it no longer looks to be the main game. The CPU is all grown up.

Meanwhile, in the five years between iPhone 4S and iPhone 7, Apple has written its obsessive-compulsive desire for complete control into silicon. Every twelve months another A-series System-on-a-Chip makes its way into the Apple product line, and every time performance increases enormously.

You might think that’s to be expected - after all, those kinds of performance improvements are what Moore’s Law guarantees. But the bulk of the speed gains in the A-series (about a factor of twelve over the last five years) don’t come from making more, smaller transistors. Instead, they come from Apple’s focus on using only those transistors needed for their smartphones and tablets.

Although the new A10 hosts a four-core ARM big.LITTLE CPU, every aspect of Apple’s chip is highly tuned to both workload and iOS kernel-level task management. It’s getting hard to tell where Apple’s silicon ends and its software begins.

And that’s exactly the point.

The cheap and easy gains of the last fifty years of Moore’s Law gave birth to a global technology industry. The decades ahead - somewhere between twenty and fifty years out - will be dominated by a transition from software into hardware, a blending of the two so complete it will become impossible to say where the boundary between them lies.

Apple isn’t alone. NVIDIA has been driving its GPUs through the same semiconductor manufacturing nodes that Intel pioneers, growing more, smaller transistors to draw pretty pictures on displays, while adding custom circuitry that moves work previously done in software - such as rendering stereo pairs for virtual reality displays - into hardware. A process that used to cost twice the compute for every display frame now comes essentially for free.

Longtime watchers of the technology sector will note that this migration from software into hardware has been a feature of computing for the last fifty years. But for most of that time, the cheap gains of ever-faster CPUs, set against the hard work of designing and debugging silicon circuitry, meant only the most important or time-critical tasks made the trip into silicon.

Now that Moore’s Law has given up the ghost, that migration from software into hardware is accelerating, wringing every last bit of capacity out of the transistor.

This transition is already well underway. Last month The Register revealed that Microsoft had designed a custom processor for its HoloLens augmented reality headset. This surprisingly sophisticated 24-core DSP handles all of the data flowing in from the HoloLens’ many spatial sensors, taking a huge processing burden off its rather wimpy Atom CPU - and does the job two hundred times faster.

It took a specialised team of silicon designers to create the HoloLens DSP, because designing chips is hard work, fraught with trial and error and hampered by poor tools. It’s an elite field requiring highly specialised skills.

Pretty much where software was thirty years ago.

Now that the drive into hardware is well and truly on, we can expect a new generation of tools - many backed by machine learning and artificial intelligence capabilities - to make chip design significantly easier. Whether it ever becomes as easy as writing code is an open question - but between FPGAs today and nanoscale 3D printing tomorrow there’s every reason to suspect the ‘build’ phase in the late 2020s will be precisely that: building a chip.

It’s just this that makes the Mystorm project so interesting. Sitting somewhere between the friendly hackability of the Arduino and the deep power of the Raspberry Pi, Mystorm aims to make FPGA design accessible and cheap: the board is designed to sit on the Raspberry Pi’s 40-pin connector, and its makers are targeting a stripped-bare retail price of around $30 - the same as a Raspberry Pi.
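To give a flavour of what ‘designing hardware’ actually involves, here is a minimal sketch in plain Python - purely illustrative, not part of Mystorm’s toolchain, which (like real FPGA flows) uses hardware description languages rather than Python. The point is the style of thinking: a half adder described structurally, as wired-together gates, rather than as arithmetic.

```python
# A half adder described as gates rather than as arithmetic.
# On an FPGA, each gate instance below would become a physical
# lookup table; this plain-Python version only mimics that
# structural, gate-level style of description.

def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two 1-bit inputs."""
    return xor_gate(a, b), and_gate(a, b)

# Exhaustively verify the truth table, much as a hardware
# testbench exercises every input combination before synthesis.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert s == (a + b) % 2 and c == (a + b) // 2
```

The key difference from ordinary programming is that a synthesiser turns a description like this into parallel physical circuitry, not instructions executed one after another - which is exactly why debugging it has historically been so much harder.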

But hardware is only half the battle here. If we want 11-year-olds designing custom hardware (and we very much do want that) we’ll need to give them the kinds of tools, support, and endless YouTube videos they already have whenever they tackle an Arduino or Raspberry Pi project - resources just as useful for a generation of programmers in their 20s who will spend much of the rest of their careers cozying up to the silicon. Then Moore’s Law will live on, long after we’ve reached its physical constraints, as we explore the limits of creativity and imagination. ®
