HPC CIOs drifting into clouds

Not just for webish apps


Platform Computing - a pioneer in grid-style supercomputing - is trying to figure out how to make a living in this cloud-computing racket, so it's asking potential customers what they're up to, cloud-wise.

The company is keen to stay relevant as computer architectures continue to evolve, not just inside the high performance computing arena but in the IT space at large. To do so, it's aiming its code and expertise at the cloud.

To become attractively cloudy, Platform Computing has to figure out what IT shops think cloud computing means and what they are planning to do with cloud-style architectures. To that end, the company conducted an informal survey of chief information officers and IT managers at June's International Supercomputing Conference in Dresden, Germany.

The results of that survey were announced today. Of the 103 IT executives polled - arguably not a large number and not necessarily a statistically significant pool of respondents - 28 per cent said that they would deploy a private cloud of some sort this year.

The use of private-cloud infrastructure is on the rise, says Platform Computing, for the same reason that everyone in IT thinks they can make money on this metered, sometimes virtualized, always abstracted, extremely automated approach to doling out processing, memory, storage, and I/O capacity. To wit: workloads are getting more complex and demanding at the same time that IT shops in corporations, governments, and academia are all being told to cut costs.

If you'll remember, Platform Computing and a bunch of other minor players were poised to get rich on grid computing a decade ago. However, the grid was not web-enabled, and it was far from easy to configure and reconfigure capacity to support ever-changing workloads. Grids enabled sharing of server nodes in an HPC cluster, or adding spare capacity on PCs to a cluster running supercomputer applications, but they were far from malleable.

Given the pool of respondents attending ISC, it's no surprise that among those IT execs who are planning to build private clouds, 67 per cent said they intend to use them to run simulation and modeling applications.

Personally, I'd love to see a before-and-after stack of hardware and software to learn how the HPCers' new private clouds differ from the parallel supercomputers and grids they're already running. We may be merely witnessing a buzzword shift.

A further 32 per cent of respondents to the ISC survey said they would use private clouds to support web services, while 18 per cent said they would use cloudy iron to run business analytics.

Not at all ironically, the IT execs polled at ISC say that one of the biggest hurdles to deploying private clouds to support HPC workloads - or any other kind of workload - is that "they do not feel that business decision-makers understand the potential of private clouds."

While this may be true, it's also probably true that business decision-makers think they've shelled out enough cash to get an IT utility built in the basement already, and they're a bit perplexed as to why IT is so calcified.

Anyway, 26 per cent of those polled said that the complexity of managing a private cloud was a barrier to adoption, and 21 per cent said security was an issue - although this is an odd response, considering that in a private cloud your security is as good as anything else you have plunked behind the firewall.

Only 8 per cent cited upfront cost as a barrier to building a cloud, and an equal number called software licenses a roadblock to private-cloud adoption. ®

