Inspur jacks up huge heat sink-sporting beast
New GPU’riffic box vies for dominance
HPC Blog At trade shows, I’m always attracted by the sight of huge heat sinks bunched together on a system board. Big and powerful hardware is a weakness of mine. The sight of them pulls me to the booth like a giant tractor beam.
That’s exactly what happened when I wandered by the Inspur booth at GTC17. Its new AGX-2 server is quite the system. Into a single 2U server, it has packed 8 GPUs, dual CPUs, and 16 DIMM slots. Now that’ll run your Crysis for you…
Better yet, the GPUs are attached to the system board via the newest NVLink 2.0 interface. For the uninitiated, NVLink 1.0 was a collaboration between IBM and NVIDIA, with the goal of providing high-speed, direct, and dedicated CPU-GPU and GPU-GPU connections.
The first version of NVLink offered up 80GB/s of bandwidth, more than double the 35GB/s you’d get from attaching the GPUs via PCIe. The newest NVLink, the aptly named NVLink 2.0, provides a mind-blowing 300GB/s of bandwidth between CPUs and GPUs and from GPU to GPU.
The AGX-2 offers up two M.2 PCIe drive slots on the motherboard, which provide close to 4x the speed of SATA 3. Users can also host up to eight 2.5-inch drives in the 2U chassis.
Inspur is touting this box for extreme AI deep learning and HPC applications. It’s certainly the most powerful single server your correspondent saw at GTC17, with the possible exception of NVIDIA’s own DGX-1.
The AGX-2 offers both air-only and hybrid air/liquid cooling options for the GPUs (which generate the most heat). The box has some super-powerful fans, meaning that with either cooling option you can run it flat out and not be slowed by thermal limitations.
There was no pricing info available.