Big bad boxes drive explosive growth for HPC in Q3

For supercomputers, they FIND the money

The server market as a whole is having its issues, with both virtualization and the jittery global economy holding down physical box counts – and therefore revenues – more than they otherwise would be. But the supercomputer market is chugging right along. Networks keep getting faster, virtualization has yet to touch its boxes, and software is getting better at scaling across larger systems – and it's all leading to increased demand.

The third quarter was a killer for supercomputer makers, say the box counters at IDC. Chirag Dekate, an HPC analyst at the market researcher, tells El Reg that revenues for supercomputers of all sizes – from departmental machines with a few nodes all the way up to petaflops-class capacity machines that dominate the headlines and the architectural definitions of supercomputers – in aggregate rose by 28.7 per cent to $3.35bn.

The number of systems sold in the quarter actually fell by 19 per cent – to 23,659 machines, according to Dekate – but he warns against reading too much into that number. The aggregate node count across all of those machines rose by a little more than 37 per cent year-on-year. When you do the math, the nodes being used are slightly less expensive, on average, but the systems that companies are building are much more powerful and considerably more costly.
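That per-node math is easy to reproduce. A back-of-envelope sketch using the two growth figures quoted above (the ratio itself is our arithmetic, not an IDC number):

```python
# Q3 HPC revenue grew 28.7 per cent year-on-year, while the aggregate
# node count grew about 37 per cent - so average revenue per node fell.
revenue_growth = 1.287   # revenue relative to the year-ago quarter
node_growth = 1.37       # node count relative to the year-ago quarter

per_node_ratio = revenue_growth / node_growth
print(f"average revenue per node: {per_node_ratio:.2f}x the year-ago figure")
# i.e. nodes are roughly 6 per cent cheaper on average
```

Fewer, bigger, denser systems: the node count climbs even as the box count falls.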

The big capacity-class supers that were accepted in the third quarter, just ahead of the November Top500 supercomputer ranking, helped drive revenues. On a sequential basis from the second quarter, revenues in the high-end supercomputer segment – machines that cost in excess of $500,000 – exploded 80.6 per cent to $2.09bn and comprised 62.4 per cent of worldwide sales in the third quarter. (IDC did not provide year-on-year comparisons in its publicly available data since it has to make a living, too.)

Meanwhile, divisional HPC systems (which cost between $250,000 and $499,000) accounted for $341m, or 10.2 per cent of the HPC system pie, while departmental HPC machines (which cost between $100,000 and $249,000) did a bit better at $646m (19.3 per cent of the pie). The low-end workgroup segment of the HPC systems space has been shrinking all year and was down 8.8 per cent in the quarter (on a year-on-year basis) to $271m.
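As a sanity check, the four segments quoted above do add back up to the quarter's $3.35bn total. A quick sketch using the article's figures (the workgroup share is derived here, not quoted by IDC):

```python
# Q3 HPC segment revenues, in billions of dollars, as quoted above.
segments = {
    "high-end (> $500k)":         2.090,
    "divisional ($250k-$499k)":   0.341,
    "departmental ($100k-$249k)": 0.646,
    "workgroup (< $100k)":        0.271,
}

total = sum(segments.values())
print(f"total: ${total:.2f}bn")          # lands on the $3.35bn quarterly figure
for name, rev in segments.items():
    print(f"{name}: {rev / total:.1%}")  # matches the quoted 62.4/10.2/19.3 shares
```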

It will be interesting to see how the advent of HPC on the Amazon, Rackspace and SoftLayer clouds – as well as their various competitors – will affect sales of these relatively modest clusters. If you buy a supercomputer, it is yours, and that is great. But you have to care for and feed it to justify its cost.

By vendor, IBM and Hewlett-Packard are the big peddlers of HPC systems, with Big Blue accounting for $1.11bn in Q3 (33.1 per cent of total revenues) and HP (which needs a nickname) getting $860m (25.7 per cent) from its technical computing sales. Fujitsu's revenues were driven by the acceptance of the K super, and the company had $539m, or 16.1 per cent of the pie.

You might be tempted to do a little math and compare the worldwide server revenue and shipment numbers that IDC reported three weeks ago to these HPC system revenue and shipment figures, backing the supercomputer sales out of the overall server data to see how the server market is doing without these monster machines and their siblings among the propellerati. You can't do that – and believe me, El Reg was tempted.

The practice in the HPC industry is to book revenues on acceptance, and the HPC figures cited above follow that convention. So, for instance, a portion of the K supercomputer built by Fujitsu for the Japanese government is in the high-end supercomputer revenues above. But that machine has actually been installed for more than a year at the RIKEN lab in Kobe. More importantly, it has already created a bubble in the regular IDC server revenue and shipment data that comes out each quarter from its Server Tracker service. That service tracks vendor shipments and the revenue they generate at the factory (rather than through the channel), not after the servers have been accepted by customers.

Therefore, the generic server and HPC system numbers are apples and oranges. It is a shame, that. Because instinctively I think we all know that if you take HPC out of the picture, the overall server market is actually quite a bit softer than anyone wants to talk about.

For now, supercomputers are selling, and reasonably well despite tight government and academic budgets. And the prognosis is good looking ahead, too.

"HPC technical servers, especially supercomputers, have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness," explained Earl Joseph, the vice president at IDC in charge of technical computing research. "For this reason, nations and regions across the world are increasing their investments in supercomputing even in today's challenging economic conditions. We expect the global race for HPC leadership in the petascale-exascale era to continue heating up during this decade."

IDC projects that HPC system sales will grow by 7 per cent this year to $11bn. It forecasts that sales will grow at a compound annual growth rate of 7.3 per cent between 2012 and 2016, reaching $14bn by the end of that period. ®
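The compound-growth arithmetic is easy to check. A sketch computing the growth rate implied by the two endpoints (this assumes $11bn as the 2012 base and four compounding steps to 2016; IDC's public figures are rounded, so the implied rate comes out somewhat under the quoted 7.3 per cent):

```python
# Implied compound annual growth rate (CAGR) from $11bn to $14bn
# over four years - a rounded sanity check, not an IDC figure.
start, end, years = 11.0, 14.0, 4

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")
```

The gap between the implied and quoted rates is what you would expect when both endpoints are rounded to the nearest billion.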
