HPC

High performance for the masses

Commoditisation at the high-end


Reg reader survey High-Performance Computing (HPC) has traditionally been seen as the domain of the über-specialist.

It’s as close as the IT industry will ever get to “2 Fast 2 Furious” – gangs of highly technical experts pushing their custom-built computers to the limit, aiming to win that ultimate prize: a place in the world supercomputing rankings. No doubt there will be some blue neon thrown in too.

Meanwhile, of course, mainstream application requirements are becoming equally demanding – in high-end business intelligence, for example. As a consequence, the use of higher-performance computing platforms is becoming more important in ‘routine’ business operations. To determine whether the gap is closing between traditional HPC and more run-of-the-mill high-end computing, we ran a Reg survey and gathered information from 254 respondents, the majority of them IT professionals and systems architects from a mix of industries and company sizes.

The first thing we wanted to establish was what’s out there in terms of technology. As the figure illustrates, the results highlight the high degree of confidence that traditional enterprise server operating systems enjoy as high-performance workhorses: UNIX, Linux and z/OS are all viewed positively for the majority of requirements, high-performance or otherwise.

Microsoft’s Windows is a comparatively new entrant, but it is already seen as potentially useful by almost half of respondents, giving organisations another option when assessing which platform to deploy for such workloads. We say ‘potentially’ because it isn’t without its issues, as illustrated – but it still has more of a footprint than either mainframe or ‘specialist’ supercomputer equipment.

This nod towards more commodity operating systems is also evident when we look at the chipsets. Commercially available chipsets now dominate the HPC market, displacing specialist suppliers, as Intel and AMD x86 offerings have taken over in terms of perceived suitability for such workloads. IBM’s Power chipset is also recognised as a suitable platform by just over 40 percent of respondents, again reflecting its long history in this space; the Sun/Fujitsu (or should that be Oracle/Fujitsu?) SPARC is also well regarded.

While the trend may be towards commodity chipsets and operating systems, HPC still brings a number of facilities to the party – not least in how systems are architected, using symmetric multi-processing for example, or by bringing in specialist compilers and hardware acceleration features. Respondents were generally positive about such features, but every option was also reported to pose challenges. It is unlikely that specialist skill sets will diminish any time soon at the leading edge of HPC, as reflected by the respondents who said that HPC requirements will continue to demand systems tuned explicitly for compute-intensive workloads.
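To make the point concrete, here is a minimal sketch of how a commodity toolchain exposes SMP parallelism: a compute-heavy loop parallelised with OpenMP, which ships with stock GCC on an ordinary x86 box. The compiler and workload here are our illustration, not something the survey itself specified.

    /* dot.c - illustrative only: a compute-heavy reduction spread across
     * SMP cores via OpenMP. Build with: gcc -O2 -fopenmp dot.c -o dot */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const size_t n = 50000000;              /* ~50M elements */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        if (!a || !b) return 1;

        for (size_t i = 0; i < n; i++) {
            a[i] = (double)i;
            b[i] = 1.0 / (double)(i + 1);
        }

        double sum = 0.0;
        /* Each core accumulates a private partial sum; OpenMP combines
         * them when the parallel region ends. */
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];

        printf("dot product = %f on %d threads\n", sum, omp_get_max_threads());
        free(a);
        free(b);
        return 0;
    }

Nothing here is exotic: the same source scales with whatever core count the commodity server provides, which is exactly the kind of familiarity respondents cited – while the leading edge still reaches for tuned compilers and accelerators where that is not enough.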

Despite the confirmed ongoing need for more specialist additions, there is a clear drive towards commodity-based high-performance computing for more mainstream applications. The use of ‘commodity’ equipment is seen as the best approach to building HPC systems, with three out of four respondents favouring such equipment over ‘specialist’ kit. Of the remaining respondents, some 14 percent are either open to the idea of using commodity equipment or would like to do so but would need guidance on whether it would work for them. Only one organisation in eight actively favours using specialist equipment in its HPC operations.

What’s driving this commoditisation?

Familiarity, cost and choice emerge as the reasons – the jury is out on which matters most, but the general responses come down to ‘the devil you know’. Commodity chipsets have clearly crossed a threshold where they are powerful enough, and configurable enough, to render less relevant any additional benefits that might come from depending on more specialist platforms. There is also evidence (not shown) of interest in using these ‘commodity’-based systems and generic operating systems together with virtualisation techniques, allowing equipment to be switched back and forth between HPC and non-HPC activity, with obvious cost benefits.

With the potential business demand for higher-performance computing platforms likely to grow, it is probable that increasing attention will be focused on commodity-based platforms as time goes on, with particular emphasis on ease of deployment and operation, and the associated total cost of ownership (TCO). This suggests a virtuous circle: as more use is made of such platforms, we expect high-performance computing to become increasingly accessible and more straightforward to deploy, which can only benefit the business community in general. For more information on this research, you can download the full report here. ®

