Peer 1 Hosting, an IT service provider that does traditional hosting as well as virtual, cloudy infrastructure, is claiming to be the first to fluff up a CPU-GPU hybrid cloud that supports supercomputing workloads.
The company announced its GPU Cloud service at the annual Siggraph International Conference in Los Angeles along with its partner Nvidia, which is supplying the cloud's GPUs. Nvidia's Mental Images subsidiary has loaded its RealityServer Web application, a 3D rendering and animation package that works remotely over the Internet, onto the GPU Cloud to demonstrate the concept of remote ceepie-geepie hybrid computing.
According to Robert Miggins, senior vice president of business development at Peer 1 Hosting, the company has been working on the GPU Cloud for the past six months or so, and has taken input from 20 customers who expressed interest in having GPU-accelerated capacity available with utility-style access and pricing.
To accommodate its customers' desires, Peer 1 Hosting bought some two-socket x64 Dell PowerEdge servers and hooked them up to Tesla S1070 units, each packing four GPUs; more recently, the hoster bought some two-socket, 1U x64 servers from Super Micro, its other server supplier, and slapped in two of the M2050 embedded GPUs, which were announced in May.
Those M2050s are not off-the-shelf GPU accelerators, but rather come in special fanless, passively cooled packaging that assumes they are being slapped into dense servers: the server does the cooling rather than the GPU itself. The M2050's number-crunching power is rated at 515 gigaflops at double precision and 1.03 teraflops at single precision.
Peer 1 Hosting is starting out modestly, with only 128 GPUs inside server clusters in two of its data centers: one in Toronto, the other in London. The machines are linked by unimpressive Gigabit Ethernet. But for now, this is enough for Peer 1 Hosting and its eager HPC customers; Miggins says the company was seeing "tremendous demand" even before today's GPU Cloud announcement.
Peer 1 Hosting is using the same homegrown kickstarting software it uses in its managed-hosting business to create the CPU-GPU software images, with some manual work done to tweak the images to speak CUDA, Nvidia's development environment for GPU coprocessors.
Miggins says that the company is looking at boosting its HPC networks to InfiniBand for its low latency and high bandwidth, and using CUDA-friendly provisioning tools (perhaps those from Bright Computing, Platform Computing, or Adaptive Computing, just to name three) rather than manual processes to quickly shift CPU-GPU workloads around.
For some customers, a CPU-GPU cloud will be used to offload work or to add incremental capacity to their existing ceepie-geepie clusters. Others will do their preliminary programming on a beefed-up workstation with GPUs (what people are calling a personal supercomputer) and then deploy the job to run on a GPU Cloud like the one Peer 1 Hosting has built.
Peer 1 Hosting is selling capacity on its GPU Cloud on a monthly basis, with an S1070 sold in blocks of four GPUs (that's one PowerEdge server with eight x64 cores) for $2,000, or $500 per GPU. The smaller Super Micro x64 server with two GPUs (rated at much higher flops per GPU) costs $1,000 per GPU. The cost-per-flops is a little lower on the older GPUs, but the newer machine probably gives more oomph where the compiler meets the GPU. ®
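Back-of-the-envelope, that price gap works out like this. The M2050 price and flops ratings come from the figures above; the per-GPU ratings for the older S1070 box (roughly 1.04 teraflops single precision and 86 gigaflops double precision per GPU, a quarter of Nvidia's published figures for the whole four-GPU unit) are an assumption on our part, so treat this as a rough sketch rather than gospel:

```python
# Rough monthly cost-per-teraflops comparison for Peer 1 Hosting's GPU Cloud.
# M2050 figures are as stated above; the per-GPU S1070 ratings are assumed
# (one quarter of Nvidia's published specs for the four-GPU 1U unit).

def dollars_per_teraflops(monthly_price, teraflops):
    """Monthly rental cost per teraflops of rated peak throughput."""
    return monthly_price / teraflops

# (monthly price per GPU, single-precision TF, double-precision TF)
s1070_gpu = (500.0, 1.04, 0.086)   # older GPU; per-GPU ratings assumed
m2050_gpu = (1000.0, 1.03, 0.515)  # figures from the article

for name, (price, sp, dp) in (("S1070 (per GPU)", s1070_gpu),
                              ("M2050", m2050_gpu)):
    print(f"{name}: ${dollars_per_teraflops(price, sp):,.0f}/TF single, "
          f"${dollars_per_teraflops(price, dp):,.0f}/TF double")
```

On those numbers the older GPUs come out a little cheaper per single-precision flop, while the M2050 wins handily on double precision.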