Amazon lashes Nvidia's GRID GPU to its cloud: But can it run Crysis?

Jeff's Virtualization Palace undercuts its own graphics prices


Amazon has plugged Nvidia's new virtualized GPU technology into its cloud to spin up a new class of rentable instances for 3D visualization and other graphics-heavy applications.

The "G2" instances, announced by Amazon on Tuesday, are another nod by a major provider to the value of Nvidia's GRID GPU adapters, which launched in 2012.

These GRID boards provide hardware virtualization of Nvidia's Kepler architecture GPUs, which include an H.264 video encoding engine. The instances will give developers 1,536 parallel processing cores to play with for video creation, graphics-intensive streaming, and "other server-side graphics workloads requiring massive parallel processing power," according to the PR bumf.

The instances also support DirectX, OpenGL, CUDA, and OpenCL applications, demonstrating Amazon's lack of allegiance to any particular technology in its quest to tear as much money away from on-prem spend as possible.

Nvidia worked with server vendors including Cisco, Dell, HP, IBM, and Supermicro to make sure the cards are reliable. Amazon Web Services will be another stress-testing ground for the tech, so we'll be watching the cloud's status page closely for any hiccups.

The new GRID G2 instances will cost $0.650 per hour for Linux and $0.767 per hour for Windows. Now compare that to the $2.10-per-hour price tag for Amazon's CG1 instances, which are backed by two Intel Xeon X5570 quad-core CPUs with hyper-threading, along with two Nvidia Tesla M2050 GPUs.
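Taken at face value, those hourly rates make for a straightforward back-of-the-envelope comparison. A minimal sketch, using the per-hour prices quoted above (the 730-hour month is our assumption, not an AWS figure):

```python
# Back-of-the-envelope cost comparison for the hourly rates quoted above.
# Assumes a 730-hour month of continuous use; rates are USD per instance-hour.
HOURS_PER_MONTH = 730

rates = {
    "g2_linux": 0.650,    # new G2 instance, Linux
    "g2_windows": 0.767,  # new G2 instance, Windows
    "cg1": 2.10,          # older CG1 instance (2x Xeon X5570, 2x Tesla M2050)
}

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Cost of running one instance non-stop for the given number of hours."""
    return round(hourly_rate * hours, 2)

for name, rate in rates.items():
    print(f"{name}: ${monthly_cost(rate):,.2f}/month")

# A G2 Linux box comes in at under a third of the CG1 hourly price.
print(f"G2 Linux vs CG1: {rates['g2_linux'] / rates['cg1']:.0%} of the cost")
```

In other words, a month of round-the-clock G2 Linux time runs to roughly $475, against about $1,533 for a CG1 at the older rate.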

We asked Amazon for details on how it plans to burn in the GRID instances to assure stable performance and no software hiccups. A spokesperson told us: "Like the rest of Amazon EC2, we manage the complexity of designing, deploying and maintaining hardware on behalf of our customers, allowing them to focus on their core business."

At the time of writing, the company had not told us whether it was using the GRID K1 or GRID K2 GPU boards, or just lashing together spare G2 Kepler capacity through Nvidia's vGPU manager.

The GRID instances are initially available in the US East (N. Virginia), US West (N. California and Oregon), and EU (Ireland) regions, and will be made available in other AWS regions "in the coming months". They can be bought in on-demand, reserved, and spot formats. ®

