Optimizer Rescale recommends Rescale's optimization recommender
Say that three times in a row. Better yet, train a model with Nvidia GPUs to say it
Cloud-sim platform Rescale believes its forthcoming Compute Recommendation Engine can cut the time needed to optimize AI/ML and high-performance computing (HPC) workloads, giving people more time to actually run the software.
Announced at the company's Big Compute event this week, the latest addition to Rescale's HPC-as-a-Service platform is designed to analyze customer workloads and suggest where and how to deploy them to achieve the greatest performance or lowest cost. The company claims this can drastically reduce the time and cost associated with deploying HPC workloads on its platform.
Founded in 2011, Rescale's platform is really more of an abstraction layer that automates the complex and time-consuming process of deploying HPC workloads. Despite billing itself as an HPC cloud, Rescale doesn't own or operate its own equipment. Instead, it runs atop other cloud platforms, including Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure.
"Companies, as they start using more computationally intensive approaches to science and engineering discovery, are seeing that they have to make a lot of different decisions," Edward Hsu, chief product officer at Rescale, said during a press conference last week.
The decisions may have to do with which architecture to run a model on, or which geographies might offer the lowest cost or greatest capacity, or whether there may be data sovereignty requirements that limit how and where data can be handled, he explained.
Rescale's Compute Recommendation Engine applies machine learning to metadata gathered from years of compute jobs to identify which switches to flip or knobs to turn to achieve a customer's desired outcome.
Rescale builds Nvidia AI Enterprise into HPC cloud
The service launches alongside tighter integrations with Nvidia's software ecosystem. Rescale plans to add Nvidia's AI Enterprise suite — a marketplace of pretrained models and AI frameworks intended to streamline the process of deploying applications on the chipmaker's hardware — to its HPC cloud.
The suite "includes a lot of our end-to-end solutions like Nvidia Riva for speech AI, Nvidia Merlin for recommenders, as well as Nvidia Clara for medical imaging," Dion Harris, head of datacenter product marketing at Nvidia, said during the press conference.
Given that the vast majority of GPU-accelerated compute available on the public cloud today runs on Nvidia hardware, Rescale's decision to lean into Nvidia's home-grown AI and HPC software ecosystem is hardly surprising.
One of the first Nvidia frameworks available on Rescale's platform will be Modulus — a framework for near real-time simulations. Siemens Energy, for example, is using Modulus "to simulate precisely how steam and water flowed through their pipes, which allowed them to understand and predict the aggregate effects of corrosion in real time," Harris said.
Additionally, Rescale will adopt Nvidia's Base Command, allowing customers already running workloads on Nvidia DGX systems to extend them into Rescale's cloud.
Moving beyond intuition
Hsu believes these integrations will open the door to a new kind of engineering simulation influenced by AI and machine learning rather than human intuition.
Increasingly, Hsu says, engineers are taking advantage of platforms like Rescale to run large batches of simulations on multiple designs — airfoils for example — to narrow down which characteristics offer the best performance.
"What hasn't happened until fairly recently, is taking all the outputs from all those different designs and all those different simulations and applying ML to the entire set of inputs and outputs," he said.
But once you've done that, "you can ask that ML model 'hey, if I want an airfoil that has minimal shockwaves at these transition points, or has maximum lift, what should it look like?' And that ML model, once it's been trained with all these design elements, can actually tell you that answer." ®
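The workflow Hsu describes is essentially surrogate modeling: run a batch of simulations across candidate designs, fit an ML model to the input/output pairs, then query that model for the design that maximizes the objective. A minimal sketch of the idea, with a quadratic fit standing in for a neural network and an entirely made-up "simulation" in place of a real CFD solver (nothing here reflects Rescale's or Nvidia's actual APIs):

```python
import random
import numpy as np

random.seed(0)

# Stand-in for an expensive CFD run: maps an airfoil camber to a lift
# value, peaking near camber = 0.04, with a little simulated noise.
def run_simulation(camber):
    return 1.0 - 250.0 * (camber - 0.04) ** 2 + random.gauss(0, 0.01)

# 1. Run a batch of simulations across candidate designs.
designs = [i * 0.005 for i in range(20)]      # camber values 0.0 .. 0.095
lifts = [run_simulation(c) for c in designs]

# 2. Train a surrogate on the full set of inputs and outputs —
#    here a quadratic least-squares fit rather than a deep model.
a, b, c = np.polyfit(designs, lifts, 2)

# 3. "Ask" the surrogate which design maximizes lift: the vertex
#    of the fitted parabola, found analytically at x = -b / (2a).
best_camber = -b / (2 * a)
print(f"surrogate's suggested camber: {best_camber:.4f}")
```

A production system would train on many design parameters and full simulation fields, but the loop is the same: the surrogate answers "what should the design look like?" in milliseconds instead of re-running the solver.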