
Ever-growing volumes of data mean computational storage is becoming crucial for HPC, say boffins at Dell's tech chinwag

Data at rest should remain at rest

Dell believes that technologies such as computational storage will soon play a part in high-performance computing (HPC) in response to ever-growing volumes of data. It doesn't see the general-purpose CPU disappearing any time soon, but says it will be complemented by specialised processors for specific tasks, with composability seen as both an opportunity and a problem.

Dell Technologies holds regular online sessions for its HPC Community, and the latest saw experts from the company discussing how HPC and mainstream enterprise IT feed off each other and where this may lead, with advances in one sector often being picked up by the other.

Most modern HPC systems are now built from clusters of commodity servers, for example, while the use of GPUs for accelerating complex workloads started in HPC research labs and is now filtering into enterprise data centres.

Onur Celebioglu, Dell senior director of Engineering for HPC and Emerging Workloads, said that expanding volumes of data are a big part of HPC today, and that this has led to the rise of specialised silicon for dedicated tasks such as AI and machine learning.

"One of the ways to analyse data and get intelligence out of all that data is by utilising statistical analysis techniques and AI techniques. We're seeing that trend continuing, and AI is one of the great ways to be able to get meaning out of the data, so many businesses are investing in the opportunity to get that intelligence faster and quicker," he said.

When optimising for just one thing, such as deep learning training, it is possible to fine-tune the architecture to get the best performance per watt, using a special ASIC for example, so Celebioglu sees a lot of investment going into such silicon in pursuit of ever-faster analytics. But that does not take away the role of the CPU: HPC covers a diverse range of problems, and tackling them all requires the ability to handle general-purpose processing.

"When you are doing general-purpose HPC, you will be running a wide variety of algorithms, and different applications. If you look at molecular dynamics the algorithms there are very different than what's used for genomics or oil and gas, so you need a general purpose machine like a processor to cater for that wide variety of applications," he said.

When it comes to new technologies, the discussion turned to composability: techniques that allow a bunch of systems to be viewed as a pool of resources at the rack level (or even larger). The aim is to let any application run, even one that needs more memory or other resources than would fit inside a single node, by composing a big enough system from resources in other nodes.
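
As a sketch of what that composing step amounts to (the Node and compose_system names below are invented for illustration, not any real composability API), a scheduler might stitch a logical system together from free memory spread across several physical nodes when no single node is big enough:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_mem_gb: int

def compose_system(nodes: list[Node], mem_needed_gb: int) -> dict[str, int]:
    """Greedily borrow free memory from pool nodes until the request is met.

    Returns a {node_name: gb_borrowed} map describing the composed system,
    or raises if even the rack-level pool cannot satisfy the request.
    """
    allocation: dict[str, int] = {}
    remaining = mem_needed_gb
    for node in sorted(nodes, key=lambda n: n.free_mem_gb, reverse=True):
        if remaining <= 0:
            break
        take = min(node.free_mem_gb, remaining)
        if take > 0:
            allocation[node.name] = take
            remaining -= take
    if remaining > 0:
        raise RuntimeError("pool cannot satisfy request")
    return allocation

# A 3 TB job that no single 1 TB node can hold gets composed across three nodes.
pool = [Node("node1", 1024), Node("node2", 1024), Node("node3", 1024), Node("node4", 512)]
print(compose_system(pool, 3072))  # {'node1': 1024, 'node2': 1024, 'node3': 1024}
```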

A key technology here is set to be Compute Express Link (CXL), said Garima Kochhar, Distinguished Engineer at Dell. "I think that HPC will be the first adopters of CXL, and once we figure that out it will be extremely important, useful and relevant for mainstream IT as well as for the optimised infrastructure," she said.

However, she warned that there would likely be performance trade-offs in creating larger systems from distributed bits and pieces this way.

"I do think that a big piece of this composable thing is going to be a conversation between latency and any overhead versus the value that you get from being able to disaggregate," Kochhar said. She added that it could still prove useful for areas of HPC where having a larger memory space was more critical than latency, and the technology would also improve over time.

Celebioglu said that growing volumes of data would call for broader use of technology such as computational storage, which allows data to be processed where it sits, avoiding many of the penalties of moving huge amounts of data around.

"The growth in data and how we move data, how do you handle that if you have very large volumes of data? That's going to continue being a challenge, not just for HPC, but for traditional IT as well," he said. Some of the challenges can be alleviated by building faster networks and giving systems larger memories, but the sheer volume of data will still risk creating a bottleneck.

"If we can analyse data where it is, that's going to be one way to shift the paradigm, and I think computational memory, computational storage technologies are going to start playing a bigger role, both in HPC and general IT," he opined.

Computational storage covers a range of architectural models, but the most familiar embeds a CPU or ASIC into the controller of an SSD, giving it high-bandwidth, low-latency access to the data stored there.
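
A toy simulation makes the appeal plain (the ComputationalDrive class below is hypothetical, standing in for a vendor's offload API): when the filter runs on the device, only matching records cross the interconnect rather than the whole dataset.

```python
# 100,000 small records, of which only 1 in 1,000 matches the query.
records = [f"row-{i},{'ERROR' if i % 1000 == 0 else 'OK'}".encode()
           for i in range(100_000)]

# Host-side scan: every byte crosses the interconnect before being filtered.
bytes_moved_host = sum(len(r) for r in records)

class ComputationalDrive:
    """Hypothetical device with a processor embedded in the SSD controller."""
    def __init__(self, data):
        self.data = data

    def filter(self, predicate):
        # The predicate runs on the drive; only matches leave the device.
        return [r for r in self.data if predicate(r)]

drive = ComputationalDrive(records)
matches = drive.filter(lambda r: b"ERROR" in r)
bytes_moved_device = sum(len(r) for r in matches)

print(f"host-side scan moved    {bytes_moved_host:,} bytes")    # ~1.2 MB
print(f"in-storage filter moved {bytes_moved_device:,} bytes")  # ~1.5 kB
```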

Jimmy Pike, a senior VP and senior fellow at Dell, agreed, saying that even as far back as the days of Gene Amdahl there was a saying that data at rest should remain at rest.

"What he was talking about is if you're going to do analysis on data, for god's sake do it where the data is rather than moving it around all over the place," Pike said.

Another topic touched on was SmartNICs, with Celebioglu expecting these enhanced network adapters to crop up more and more, offloading functions that would otherwise eat valuable compute resources on the host: security processing, for example, or serving as a control point for remote management and bare-metal provisioning.

This is an area where hyperscalers such as AWS are already ahead, with Nitro cards inside its servers fulfilling similar functions to SmartNICs. VMware has also floated plans to use SmartNICs inside the physical hosts of its virtualised infrastructure, taking on functions such as firewalls at first, but with an eye to offloading software-defined networking tasks as well.
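
A minimal sketch of that division of labour (the port allow-list and packet format are invented for illustration): the filtering function stands for work running on the SmartNIC's embedded cores, so rejected traffic never consumes host CPU cycles.

```python
from typing import Iterator

ALLOWED_PORTS = {22, 80, 443}  # invented allow-list for illustration

def smartnic_firewall(packets: Iterator[tuple[int, bytes]]) -> Iterator[bytes]:
    """Stands in for filtering on the NIC's embedded cores: in an offload
    deployment, only packets that pass the rules ever reach the host CPU."""
    for dst_port, payload in packets:
        if dst_port in ALLOWED_PORTS:
            yield payload

def host_application(payloads: Iterator[bytes]) -> int:
    """The host spends cycles only on traffic the NIC has already vetted."""
    return sum(len(p) for p in payloads)

traffic = [(22, b"ssh"), (9999, b"scan"), (443, b"tls"), (23, b"telnet")]
print(host_application(smartnic_firewall(iter(traffic))))  # 6: scan and telnet dropped on the NIC
```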

Finally, on the subject of cloud versus on-premises for HPC infrastructure, Pike pointed out that the ecosystem was developing pretty much along the lines of enterprise cloud services.

"Do I think the cloud guys will continue to invest and drive HPC? You bet. Do I think that the on-prem version of that, where people are going to own and drive their own HPC will continue? You bet. Do I think there will be HPC stuff for rent, which is what the cloud is about? There is no doubt about it," he said.

Both Dell and HPE have developed HPC as-a-service offerings under their own brands, APEX and GreenLake respectively. ®
