When it comes to AI, Pure twists FlashBlade in NetApp's A700 guts
You see, it all becomes clear when you provide numbers
The two suppliers have both designed tech using Nvidia DGX-1 GPU servers and their own storage arrays to provide storage-server systems for "AI" applications such as deep learning.
Pure's deliverable hardware-software system is called AIRI (AI-Ready Infrastructure), while NetApp and Nvidia have a deep learning reference architecture (DL RA). NetApp published Resnet-152 and Resnet-50 performance numbers and bar charts with 1, 2, 4 and 8 GPUs for its A700 all-flash array (not the top-of-the-line A800) and one DGX-1.
Pure contacted us and supplied the numbers for its Resnet runs, allowing a direct comparison. Resnet-152 first:
| Resnet-152 | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
The batch size is 64. Resnet-50 with the same batch size next:
| Resnet-50 | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
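Benchmarks such as these report throughput in images per second at a fixed batch size. As a minimal sketch of how that figure is derived (hypothetical helper names; a dummy step function stands in for a real ResNet forward/backward pass on a batch of 64 images):

```python
import time

def measure_throughput(step_fn, batch_size, warmup=2, iters=10):
    """Return training throughput in images per second.

    Warm-up iterations are excluded from timing, since the first
    steps pay one-off costs (memory allocation, kernel compilation).
    """
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    elapsed = time.perf_counter() - start
    return iters * batch_size / elapsed

# Stand-in for one training step; in a real run this would be a
# forward + backward pass of e.g. Resnet-50 on one batch of images.
def fake_step():
    time.sleep(0.01)

images_per_sec = measure_throughput(fake_step, batch_size=64)
```

The published bar charts plot exactly this quantity, once per GPU count, so the storage array's job is to keep every `step_fn` fed with data fast enough that the GPUs, not the I/O path, set the ceiling.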
Pure's AIRI beats the NetApp/Nvidia DL RA at all GPU levels. Charting the numbers makes it plain:
[Chart: Pure AIRI vs NetApp/Nvidia DL RA]
AI system choices are going to need more than two simple benchmark runs; indeed, a Pure spokesperson described the Resnet benchmarks themselves as "incredibly complex".
But here we do see a direct comparison between Pure's FlashBlade and NetApp's A700 – and the FlashBlade is ahead. ®