Think combining HPC and AI workloads is a challenge? Wait until you try to converge flash and HDD

How to bust the bottlenecks without breaking the bank


Webcast We all know HPC and AI datasets are going to be massive. But they’re also very different — the former built from large, sequential files, the latter from much smaller, randomly accessed data.

The underlying storage architectures are quite different too — HDD and InfiniBand for the former, GbE and NVMe flash for the latter.

But with workloads increasingly converging, organisations face a big problem. Tailor systems towards flash, and prices will go through the roof. Focus on HDD, and you could be building in a bottleneck that will throttle the arrays of CPUs and GPUs you have crunching through your most pressing problems.

It’s a fundamental problem, and perfectly capable of stopping you from solving much harder and more important problems.

So, whether you’re pondering juicing up your HPC workloads with a little machine learning, or want to add some modelling to your GPU-powered AI work, you should join our webcast, Spend Less on HPC/AI Storage, on June 17 at 0800 PDT (1100 EDT, 1600 BST).

Our broadcast expert Tim Phillips will be conversing with HPE’s Uli Plechschmidt, who will explain why you should be spending less on HPC/AI storage — and more on CPU/GPU compute.

They’ll pick through what sticking with your existing architectures could cost — in terms of both cold hard cash, and in innovation. And they’ll explain exactly what parallel HPC/AI storage could mean for you and your workloads, and how to build infrastructure that meets your needs and doesn’t cost the Earth.

Joining this session is a model of simplicity. Just drop your details in here, and we’ll update your calendar and nudge you on the day. In the meantime, just relax, knowing that help with your storage conundrums is on its way.

Sponsored by HPE

