
Dell gets busy with GPUs

Surprisingly innovative

GTC Video Blog Okay, let’s put it on the table: when the conversation turns to cutting-edge x86 server design and innovation, the name “Dell” doesn’t come up all that often. The company made its reputation on delivering decent products quickly and at a low cost. I see that opinion throughout our x86 customer research - it’s something even Dell employees will cop to.

That said, two of the most innovative and cutting-edge designs on the GPU Technology Conference show floor were sitting in the Dell booth, and that’s the topic of this video blog.

First of all, the video is going to look a bit strange and washed out. Dell and the other exhibitors on that part of the floor were living in a land of darkness - the lights above them were either dimmed or totally off. To compensate, I cranked up the brightness in the finished product, which helps a bit.

They were also positioned close to an auto racing simulator – so there was the constant sound of screaming engines competing with Carol in the Dell booth as she showed me their stuff. But all of these concerns faded into the background as I got a look at their gear and considered the possibilities it presents.

The first product we discussed was their PowerEdge M610x blade. This is a two-socket blade with 12 DIMM slots and two Gen2 x16 PCIe slots, backed by a power supply hefty enough to feed either two add-in 250-watt devices or one 300-watt device.

As you can see in the video, we discussed how customers can spec out a system with one of these slots filled by a Fermi-based NVIDIA Tesla GPU card and the other by a 640GB Fusion-io SSD, which should give plenty of performance for almost any application. It’s a good product and a nice design – so Dell does get some innovation points for having it on the show floor and available for purchase.

It’s the second product that really captured my interest. Their PowerEdge C410x is a 3U PCIe expansion chassis that can hold up to 16 PCIe devices and connect to up to eight servers over Gen2 x16 PCIe cables. Customers can use it to host NVIDIA Fermi GPU cards, SSDs, InfiniBand adapters, or any other PCIe device their heart desires. What made my motor run was the possibility of cramming it full of Fermi cards and then using it as an enterprise shared device – NAC: Network Attached Compute.
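To make the shared-chassis idea a bit more concrete, here’s a minimal CUDA sketch - my illustration, not anything from Dell’s materials - assuming the C410x presents whichever GPUs it has mapped to a host as ordinary local PCIe devices. Under that assumption, nothing about the application changes: a standard enumeration loop simply reports however many Fermi cards that particular host has been assigned.

```
// Minimal CUDA sketch: list whatever GPUs this host currently sees.
// Assumption (mine, not Dell's): GPUs mapped through the C410x appear
// to the host exactly like locally installed PCIe cards.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    printf("Visible CUDA devices: %d\n", count);
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  [%d] %s, %d multiprocessors, %.1f GB of memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```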

I think that the next big wave in enterprise computing is going to be driven by the need for companies to take full advantage of their data in order to compete more effectively and streamline operations. To me, this means analytics with an eye toward predicting the future. These workloads can be very demanding from a computational standpoint – looking more like HPC than anything else. I think that enterprises will be looking towards GPUs sooner rather than later to turbo-charge these apps and get results quickly at a reasonable cost. I also think that most enterprises are going to dip their toes in the predictive analytics pool rather than dive right in by purchasing a dedicated cluster of optimized systems. With this in mind, products like GPU extension blades are hitting the market at the right time.

The Dell folks staffing the booth weren’t on the product team for the C410x, so they understandably didn’t have definitive answers to every question. However, we did learn that the box has an internal switch, so a GPU, for example, can be reassigned from one server connection to another without anyone touching the cables. But it’s not yet to the point where we’re talking about GPUs as a virtualized resource.
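For what that distinction means in practice, here’s another hedged sketch under the same assumption: a host simply owns whichever devices the chassis currently maps to it, so an application pins work to a device index in the usual way, and if that slot has since been switched over to another server, the device just isn’t there any more. The device index below is purely hypothetical.

```
// Hedged sketch: pin work to a GPU the chassis has (we hope) mapped to
// this host. If the C410x has reassigned that slot to another server,
// the index no longer exists here and cudaSetDevice reports an error.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const int wanted_device = 2;  /* hypothetical slot mapped to this host */

    cudaError_t err = cudaSetDevice(wanted_device);
    if (err != cudaSuccess) {
        fprintf(stderr, "GPU %d is not visible to this host: %s\n",
                wanted_device, cudaGetErrorString(err));
        return 1;
    }

    printf("GPU %d is attached to this host; safe to launch kernels.\n",
           wanted_device);
    return 0;
}
```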

Still, Dell deserves kudos for putting out this box. It’s a step ahead of what HP and IBM are currently offering, and it moves the ball forward toward an NAC future. While it doesn’t quite get there today, it’s a start – and a good one at that. In an upcoming video blog, I’ll take a look at NextI/O and their PCIe expansion product. ®
