
HPC 2.0: The monster mash-up

It's storage size and speed that counts...

Blog When Big Data gets big, data centers should get nervous – Part 3

In the first two parts of this trilogy, we set the stage with a discussion of how HPC and the Big Data trend, combined with the increasing use of business analytics, are presenting existing data centers with some pretty big challenges. The first article is here, the second is here, and you’re reading the third (and final) story.

The genesis of these articles was an IBM HPC analyst event in New York last month and, more specifically, the discussion of these topics by IBM’s VP of Deep Computing, Dave Turek.

In the second installment, we covered the way that analytics processes (and HPC too) often consist of several different workloads, each of which uses a massive amount of data and relies on output from some other application.

Traditionally, this would mean moving data to the systems that will be doing the processing. But today's disk and network technology, although fast, is still too slow to feed the analytics beast at a rate that meets depth-of-analysis and time-to-solution requirements.

So if the usual data center workflow arrangements won't work, what will? Turek spoke about a 'Workflow Optimized System' which, to me, looks and sounds a lot like either a big system or a cluster with virtualization in a MapReduce wrapper. In a workflow-optimized infrastructure, bulk movement of data over the network and to and from disk storage is minimized to the greatest extent possible.

Data transfers take place over the system interconnect, which could be 40GB/sec to 100GB/sec, or even 400GB/sec at the high end. Even at 40GB/sec, you'd need 320 spinning hard drives running at 128MB/sec to equal the transfer speed of the system interconnect – and that assumes you have a network that can provide the bandwidth.
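For the curious, here's a quick back-of-the-envelope sketch of that arithmetic (the 128MB/sec per-drive streaming rate is the article's round number; real drives and real interconnects will vary):

```python
# Rough disk-vs-interconnect maths from the paragraph above.
# Assumes 128MB/sec of sustained streaming per spinning drive (a round number).

DRIVE_MB_PER_SEC = 128

def drives_to_match(interconnect_gb_per_sec: float) -> int:
    """How many drives it takes to equal a given interconnect bandwidth."""
    return round(interconnect_gb_per_sec * 1024 / DRIVE_MB_PER_SEC)

for gb_per_sec in (40, 100, 400):
    print(f"{gb_per_sec}GB/sec interconnect ~= {drives_to_match(gb_per_sec)} drives")
# Prints 320, 800 and 3,200 drives respectively.
```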

Combining different workloads on the same system or cluster presents some challenges, because each workload needs a different mix of hardware resources.

Traditional HPC typically needs fast interconnect and high core counts – disk access speed and network bandwidth aren’t all that important. But velocity analytics depends on high network flows, a fast interconnect, and storage – not so much on core count. With volume analytics, storage size and speed and core counts are most important.
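To make the contrast a bit more concrete, here's one way to jot those profiles down. The low/medium/high weightings for anything the article doesn't spell out are my own shorthand, not anything IBM presented:

```python
# Rough resource profiles for the three workload types described above.
# Ratings are the author's shorthand; entries not stated in the article
# (marked 'medium') are guesses for illustration only.

WORKLOAD_PROFILES = {
    "traditional_hpc": {
        "cores": "high", "interconnect": "high",
        "storage_speed": "low", "network_bandwidth": "low",
        "storage_capacity": "medium",
    },
    "velocity_analytics": {
        "cores": "low", "interconnect": "high",
        "storage_speed": "high", "network_bandwidth": "high",
        "storage_capacity": "medium",
    },
    "volume_analytics": {
        "cores": "high", "interconnect": "medium",
        "storage_speed": "high", "network_bandwidth": "medium",
        "storage_capacity": "high",
    },
}
```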

In order to maximise processing efficiency, a workflow optimized analytics infrastructure will be able to adapt on the fly to bring the right set of processing resources to the workload. While you can get this kind of resource granularity and workload management in a large SMP system – like a mainframe or commercial Unix box – these capabilities aren’t quite there yet for clusters.
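As a thought experiment, here's a minimal (and entirely hypothetical) sketch of what 'bringing the right resources to the workload' might look like on a cluster: score each node against what a workload says it needs and place the work on the best fit. The node attributes and the scoring rule are mine, not Turek's:

```python
# A toy placement sketch: match a workload's declared needs to node hardware.
# Node attributes and scoring weights are illustrative only.

NODES = [
    {"name": "fat-node-01", "cores": 64, "local_ssd": False, "fast_fabric": True},
    {"name": "io-node-07",  "cores": 16, "local_ssd": True,  "fast_fabric": True},
    {"name": "cpu-node-12", "cores": 48, "local_ssd": False, "fast_fabric": False},
]

def fit_score(node: dict, needs: dict) -> int:
    """Crude fit score: reward nodes that supply what the workload cares about."""
    score = 0
    if needs.get("cores") == "high":
        score += node["cores"]
    if needs.get("storage_speed") == "high" and node["local_ssd"]:
        score += 100
    if needs.get("interconnect") == "high" and node["fast_fabric"]:
        score += 50
    return score

# Volume analytics, per the article: storage size/speed and core counts matter most.
volume_analytics = {"cores": "high", "storage_speed": "high"}
best = max(NODES, key=lambda n: fit_score(n, volume_analytics))
print(f"Place the volume-analytics stage on {best['name']}")
```

A real workflow-optimized scheduler would weigh these factors far more finely, but the shape of the problem – matching dissimilar stages to dissimilar hardware without shuffling the data around – is the same.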

There’s a lot of work happening along these lines, and some products available today – like ScaleMP’s memory aggregation – already get us part of the way there. New system architectures with extensible bus mechanisms should take us farther down the road. ®
