Nvidia gets into the server biz with Visual Computing Appliance

Tier one server makers embrace Grid for their own appliances

GTC 2013 If you want to virtualize workstations and run them on shared infrastructure, or build a giant renderfarm to make the next blockbuster movie or video game, then Nvidia has a server for you. It's called the Visual Computing Appliance, and as the name suggests, it is aimed at important workloads such as running Crysis, at least when the bosses are not looking, and at running various kinds of GPU-intensive applications when they are.

Nvidia is in the ARM processor business, and obviously makes its own GPU chips. But the company's ARM chips don't have enough oomph to run workstation or server workloads, not yet anyway, and the kind of software that companies run to do graphics-intensive work generally runs on Windows or Linux.

If you want to support Windows on a server or workstation, that means using a Xeon chip from Intel or an Opteron chip from Advanced Micro Devices. With both companies being rivals of Nvidia, the obvious choice is to go with the most popular chip in servers, and right now that means Intel's Xeon E5. So it is no surprise that the VCA is based on a two-socket Xeon E5 server motherboard.

One scenario where Nvidia sees VCAs being used is a shared office where no one has a workstation

As many have been saying for a couple of years now, Jen-Hsun Huang, co-founder and CEO at Nvidia, said in his opening keynote at the GPU Technology Conference in San Jose yesterday that the client/server model, where we have PCs connected to central file servers, is past its prime if not dead.

"The network is becoming completely heterogeneous and the IT departments are going absolutely berserk," said Huang. "We also know that the architecture of the personal computer and the way that we set up our corporate network is related to the type of work that we were doing. It made more sense to copy a file from the server to the client, store it in local swap space, process it locally, and when you are done with the result, save it back to the file server.

"You have a copy of that data, I have a copy of that data, we all have a copy of that data. It is the antithesis of staying in sync. Well, today, the data is too big. We have customers who are copying so much data that it literally takes 24 hours for them to copy it to another site so they can dink on it, just to copy it back for another 24 hours. By the time they have copied it back, the team locally has already changed the data. Data has become so large that it makes no sense to copy your data to your PC. Now, you want to copy your PC to your data."

That, in a nutshell, is what the Visual Computing Appliance is all about. It is in essence a shared workstation that an office of designers or engineers can use as computing capacity for their Windows or Linux applications, and do so regardless of whether those applications run on their particular personal computing device.

A lot of the software that people use does not run on Mac OS, but by virtualizing the workstation instance on a VCA, that ceases to matter. So long as the PC can run a Grid client program that hooks it back to a slice of the VCA, which Mac OS can, the local operating system is irrelevant.

And Huang says that the experience on the VCA is so good that users will swear they have their own workstation under their own desk, even if they are really working off a slice of a server humming away in a closet down the hall.

A rack of Nvidia VCAs

The VCA comes in a 4U rack enclosure built around a two-socket Xeon E5 motherboard. The base machine has one processor, with eight cores and sixteen threads fired up, plus 192GB of main memory.

The machine has four Grid-capable GPU adapters, each carrying two Kepler GPUs with 4GB frame buffers. It supports up to eight concurrent users. If you need more CPU oomph for your applications, you can add another processor, jack the main memory up to 384GB, and add four more GPU adapters. This configuration can support a maximum of sixteen virtual workstation users at the same time.
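As a quick sanity check, the two configurations line up neatly at one virtual workstation user per GPU. A minimal sketch (the variable names and the one-user-per-GPU framing are mine; the adapter, GPU, and user counts come from the specs above):

```python
# Nvidia VCA configurations as described: each Grid adapter carries two Kepler GPUs,
# and the supported user counts work out to one virtual workstation per GPU.
configs = {
    "base": {"cpus": 1, "gpu_adapters": 4, "ram_gb": 192},
    "full": {"cpus": 2, "gpu_adapters": 8, "ram_gb": 384},
}

GPUS_PER_ADAPTER = 2

for name, cfg in configs.items():
    gpus = cfg["gpu_adapters"] * GPUS_PER_ADAPTER
    users = gpus  # eight users on the base box, sixteen on the full box
    print(f"{name}: {gpus} GPUs -> up to {users} concurrent users")
```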

Grid is the brand name for GPU adapters that have Nvidia's Virtual Graphics Extension (VGX) features fired up, allowing the GPUs to be virtualized along with the CPUs, main memory, and I/O in the Xeon server to create a virtual workstation.

Nvidia showed off the prototype of this Grid setup at last year's GTC, with the XenServer hypervisor from Citrix Systems able to carve up a virtualized Kepler GPU, allocating CUDA cores, memory, and frame buffers just as it can dice and slice the CPU cores and main memory. And, if no one else is working on the machine, the combination of XenServer and VGX allows you to hog the whole machine, which is something of a motivation to work late, we suppose. Probably to play Crysis...

Otoy, which had teamed up with Super Micro and AMD back in March 2011 to launch a cloud for the rendering of 3D images, was trotted out on stage to show off a new render cloud based on the VCAs that was running in its data center on the outskirts of Los Angeles.

They did some work on a snippet of Transformers, showing Starscream smashing a van, working from MacBooks with nothing but a network link between them and the cluster in Los Angeles, and it really did a remarkable job with the rendering. It is not clear how Otoy's software scales across multiple VCAs, but presumably it can, to boost the speed of rendering and the resolution of the images processed.

"The wait time and the cost of making small changes goes away, and we will be able to make better movies," said Huang. One could hope.

Nvidia is selling two configurations of the VCA

The VCA is not cheap, but then again, neither is buying a big fat workstation for all of the people in the office who need one. The base VCA, with one Xeon processor, 192GB of memory, and four dual-GPU Quadro-class Grid adapters, costs $29,400, plus another $2,400 per year in subscription fees paid to Nvidia for the Grid VGX software stack.

If you assume a three-year total cost, then the hardware plus the software runs you about $4,575 per virtual workstation user. The full-on VCA costs $39,900 for sixteen cores and sixteen GPUs, with 64GB of GPU memory and 384GB of CPU memory, plus a $4,800 annual subscription for the Grid VGX software. For sixteen workstation users, this works out to $3,394 per user.
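The per-seat figures above come from straightforward arithmetic: hardware price plus three years of subscription, divided by the number of concurrent users. A quick sketch using the article's numbers (the function name is mine):

```python
# Three-year cost per virtual workstation seat: hardware purchase price plus
# annual Grid VGX subscription over the amortization period, split across users.
def cost_per_user(hardware, annual_sub, users, years=3):
    total = hardware + annual_sub * years
    return total / users

base = cost_per_user(29_400, 2_400, users=8)    # -> 4575.0 per seat
full = cost_per_user(39_900, 4_800, users=16)   # -> 3393.75, i.e. the ~$3,394 quoted
```

Note that the full configuration's per-seat math only works out to $3,394 if the $4,800 subscription is an annual fee, which is why the three-year total matters.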

What this VCA does not have is much in the way of local storage, so you are probably going to have to buy a network-attached storage array or a SAS JBOD box for your applications. This will obviously add to your workstation costs.

The Visual Computing Appliance is in beta now, and Nvidia will be peddling it through its reseller channel starting in May.

While also jumping into the server appliance racket itself, Nvidia is still keen on getting other hypervisor makers and other server makers to create appliances of their own. Huang said that there are over 75 large-scale proofs of concept with various Grid implementations running today.

Microsoft and VMware now have their respective Hyper-V and ESXi hypervisors speaking VGX, and server makers Hewlett-Packard, Dell, IBM, and Cisco Systems have all tweaked specific servers to support the Grid setup. Specifically, the Grid K1 and K2 adapters are now supported on Dell's PowerEdge R720, HP's ProLiant WS460c Gen8 and SL250 Gen8, and IBM's iDataPlex dx360 M4.

This being the case, you might be wondering why Nvidia even bothered with the VCA in the first place. Clearly, Nvidia wants adoption to accelerate here, and sees the possibility of making some money from its own channel partners. The channel firms, meanwhile, might not want to go through the trouble of becoming a server distributor of one of the four vendors mentioned above. That is a lot of grief for the pinpoint application of selling workstation replacements. ®
