HPC bar goes lower and wider

Parallel computing pitches at the mainstream


The more things stay the same, the more they are likely to change, and clear evidence of that could be seen today at the announcement of the latest Top500 supercomputer league tables at the International Supercomputing Conference in Dresden.

The tables, compiled every six months, rank the fastest-performing systems installed anywhere in the world, based on the Linpack benchmark, which measures performance in floating point operations per second (Flop/s). The most notable feature of the latest results, however, was not which systems won the World, Europe and Asia categories. In fact, nothing changed there: the NEC/Sun/ClearSpeed/Voltaire collaborative system, Tsubame, still holds the top slot in Asia; IBM’s Barcelona-based MareNostrum system sits atop the European table; and Blue Gene/L, IBM’s long-reigning world leader tucked away at the Lawrence Livermore National Laboratory in the USA, still tops the world rankings with a performance rated at 280.6 TFlop/s.
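For a rough sense of what Linpack measures, the Python sketch below – an illustrative stand-in, not the official benchmark code – times a dense solve of Ax = b and converts the conventional Linpack operation count, 2n³/3 + 2n², into Flop/s. The matrix size and use of NumPy’s solver are assumptions made for the example.

    import time
    import numpy as np

    def linpack_style_flops(n=2000, seed=0):
        """Time a dense solve of Ax = b and report Flop/s.

        Uses the conventional Linpack operation count for an
        LU-based solve: (2/3)*n**3 + 2*n**2.
        """
        rng = np.random.default_rng(seed)
        a = rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        start = time.perf_counter()
        np.linalg.solve(a, b)          # LU factorisation plus triangular solves
        elapsed = time.perf_counter() - start

        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
        return flops / elapsed         # Flop/s achieved

    if __name__ == "__main__":
        print(f"~{linpack_style_flops() / 1e9:.1f} GFlop/s")

The real benchmark works the same way in principle, just at vastly larger problem sizes distributed across thousands of processors.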

This will no doubt be displaced by Blue Gene/P, just announced by IBM, in the next listing. The more interesting results, though, lie further down the list and in the surrounding statistics. What they point to is that High Performance Computing (HPC) has reached a tipping point: it is ceasing to be an esoteric corner occupied by scientists and propeller-heads and is moving towards the mainstream of computing.

For example, IBM has dominated supercomputing for years, and still does at the high end, but for the first time HP has installed more of the Top500 systems, with 40.6 per cent to IBM’s 38.4 per cent. Yet HP does not appear in the Top 50 at all, and while IBM has supplied 41 per cent of the cumulative performance of the Top500, HP manages only 24.3 per cent.

One way of interpreting these figures is that HP can’t hack it as an HPC vendor, except that the statistics show a related trend: 59 per cent of the Top500 systems use dual-core x86 processors, mainly Intel Woodcrest devices, with 18 per cent coming from AMD’s Opteron. Intel’s Itanium managed 5 per cent of the total, while IBM’s Power4, Power5 and PowerPC parts together managed 12.2 per cent.

The trend, widely acknowledged at the conference, is that as HPC systems come to be built from commodity hardware, the technology moves down and out into the mainstream of computing. It is already being used in financial circles for tasks such as risk analysis, and mainstream companies such as Microsoft are now showing a distinct interest in the area. The company has already got itself into the Top500 with its Compute Cluster Server, used by Mitsubishi UFJ Securities in Japan on a 448-node IBM BladeCenter HS21 cluster. Yet according to Kyril Faenov, Microsoft’s general manager of high performance computing, Compute Cluster Server is targeted as much at low-end, mainstream applications as at appearances in the traditional HPC league tables. To him, a ‘cluster’ is anything with more than one node, and a node is typically a dual multicore-processor server – or as he put it, “that which is managed by a single memory controller.”

HPC technology is indeed heading for a much wider user base and more mainstream ground, and the arrival of the multicore processor is the primary force driving what many at the conference see as a fundamental shift in the core computing paradigm. Burton Smith, an HPC veteran and former chief scientist of Cray, now a Technical Fellow at Microsoft charged with investigating and developing parallel computing, described it in a keynote presentation as the beginning of the end for the assumptions surrounding the single-threaded von Neumann architecture.
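To make the contrast concrete, here is a minimal Python sketch – nothing presented at the conference, and the heavy() workload is invented for illustration – of the same CPU-bound job done the single-threaded way and then spread across cores with a process pool:

    import math
    import time
    from multiprocessing import Pool

    def heavy(n):
        # Stand-in for a CPU-bound kernel; invented for illustration
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        work = [200_000] * 16

        start = time.perf_counter()
        serial = [heavy(n) for n in work]     # the single-threaded assumption
        t1 = time.perf_counter() - start

        start = time.perf_counter()
        with Pool() as pool:                  # one worker process per core by default
            parallel = pool.map(heavy, work)
        t2 = time.perf_counter() - start

        assert serial == parallel
        print(f"serial {t1:.2f}s, parallel {t2:.2f}s")

On a single-core machine the two timings are about the same; on a multicore box the pooled version pulls ahead, which is precisely why the old single-threaded assumptions no longer hold.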

And multicore devices are arriving from several directions. Indeed, there is already a new class of them, according to Dr Erich Strohmaier of the Lawrence Berkeley National Laboratory in the USA, who announced the Top500 results: many-core devices with 100 or more cores. Their cores are simpler than those of the x86 architecture, but offer more flexibility because of that, especially for anyone with parallel programming skills.
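Those skills matter because of Amdahl’s law: the serial fraction of a program caps the achievable speedup however many cores are thrown at it. A quick back-of-the-envelope sketch, with an illustrative 95 per cent parallel fraction chosen purely for the example:

    def amdahl_speedup(p, n):
        """Amdahl's law: speedup = 1 / ((1 - p) + p / n)
        for parallel fraction p running on n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95 per cent of a program parallelised, 100 simple
    # cores deliver only about a 17x speedup, and no core count
    # can push it past 20x.
    print(amdahl_speedup(0.95, 100))   # ~16.8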

Names already in the many-core frame are Intel’s Polaris, with 80 cores per processor, ClearSpeed with 96 cores, Nvidia’s G80 with 128 cores, and Cisco’s Metro, with 188 cores.®

