If combining HPC and AI workloads is a challenge, wait until you try to converge flash and HDD storage

How to bust the bottlenecks without breaking the bank

Webcast We all know HPC and AI datasets are going to be massive. But they’re also very different, the former being focused around large, sequential files, while the latter are much more random.

The underlying storage architectures are quite different too: HDD and InfiniBand for the former; GbE and NVMe flash for the latter. But with workloads increasingly converging, organisations face a big problem. Tailor systems towards flash, and prices will go through the roof. Focus on HDD, and you could be building in a bottleneck that will throttle the arrays of CPUs and GPUs you have crunching through your most pressing problems.

It’s a fundamental problem, and perfectly capable of stopping you solving much harder and more important problems. So, whether you’re pondering juicing up your HPC workloads with a little machine learning, or want to work some modelling into your GPU-powered AI work, you should join our Regcast, “Spend Less on HPC/AI Storage”, on June 17, at 0800 PT (1100 ET, 1600 BST).

Our broadcast expert Tim Phillips will be converging with HPE’s Uli Plechschmidt, who will explain why you should be spending less on HPC/AI storage – and more on CPU/GPU compute.

They’ll pick through what sticking with your existing architectures could cost – both in terms of cold hard cash, and in innovation. And they’ll explain exactly what parallel HPC/AI storage could mean for you and your workloads, and how to build infrastructure that meets your needs and doesn’t cost the earth.

Joining this session is a model of simplicity. Just drop your details in here, and we’ll update your calendar and nudge you on the day. In the meantime, just relax, knowing that help with your storage conundrums is on its way.

Sponsored by HPE
