DataCore scores fastest ever SPC-1 response times. Yep, a benchmark

But here's why that actually means something in the real world

+Comment DataCore has recorded the fastest ever SPC-1 response times, measured in microseconds and three times faster than those of the all-flash EMC VNX. Together with DDN, it demonstrates that parallel IO processing using multiple cores is set to become the new norm.

An industry source contacted El Reg to provide contextual background to the DataCore SPC-1 benchmark result.

Our source said that, of all the benchmarks in our industry, SPC-1 and SPC-2 are the most relevant and best designed. One point they wanted to highlight is the importance of the response time measurements captured by the SPC-1 methodology.

Input/Output Operations Per Second (IOPS) are meaningless without response times. Response times are meaningless without IOPS. You have to say both numbers at the same time.

Probably the biggest single factor that prevents vendors “cheating” the SPC-1 is the requirement that response times must be recorded and published with the IOPS results.

Plenty of vendors publish claims of millions of IOPS and then, when you dig under the covers, you find there was no constraint on response times. Those “million IOPS” require deep queues and come with response times in the tens or hundreds of milliseconds, making them essentially useless for performance-sensitive applications. This is one reason why SPC-1 is genuinely meaningful.
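To put numbers on that trade-off, here's a back-of-the-envelope sketch using Little's Law (outstanding IOs ≈ IOPS × average response time). The queue depths and the million-IOPS figure below are illustrative assumptions of ours, not taken from any vendor's published result.

```python
# Illustrative only: Little's Law for a storage queue says
#   outstanding IOs (queue depth) = IOPS x average response time.
# A headline IOPS number can therefore be "bought" with deep queues,
# at the cost of response time. The figures below are made-up examples.

def implied_response_time_ms(iops: float, queue_depth: float) -> float:
    """Average response time in milliseconds implied by an IOPS figure at a given queue depth."""
    return queue_depth / iops * 1000.0

print(implied_response_time_ms(1_000_000, 1_024))   # ~1.0 ms
print(implied_response_time_ms(1_000_000, 65_536))  # ~65.5 ms - well into "useless" territory
```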

The primary significance of the DataCore SPC-1 result, our contact said, is this: the microsecond response times it achieved are the fastest ever measured in SPC-1, by very wide margins.

At 10 per cent IOPS load:

  • 40 microseconds for writes
  • 140 microseconds for reads
  • 80 microseconds average

At 100 per cent IOPS load:

  • 160 microseconds for writes
  • 580 microseconds for reads
  • 320 microseconds average
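
As a quick sanity check on those averages: assuming SPC-1's write-heavy mix of roughly 60 per cent writes to 40 per cent reads (our approximation, not a figure from the DataCore report), the blended numbers line up with the averages quoted above.

```python
# Rough consistency check, assuming a ~60:40 write:read mix for SPC-1
# (our approximation; the exact mix comes from the SPC-1 spec, not this article).

def blended_latency_us(write_us: float, read_us: float, write_frac: float = 0.6) -> float:
    """Weighted average response time in microseconds for a given write fraction."""
    return write_frac * write_us + (1.0 - write_frac) * read_us

print(blended_latency_us(40, 140))   # 80.0  - matches the 80 microsecond average at 10% load
print(blended_latency_us(160, 580))  # 328.0 - close to the 320 microsecond average at 100% load
```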

For reads, these numbers are three to five times faster than the fastest systems ever measured on SPC-1. For writes, these numbers are six to ten times faster than any previous top performers on SPC-1.

DataCore CEO George Teixeira and Chief Scientist Nick Connolly talking to El Reg about parallel, multi-core IO

For perspective, consider that EMC's All-Flash VNX8000 configuration achieved similar IOPS numbers, but the average response times were three times longer.

FYI, this is not the first time DataCore has blown everyone away on an SPC-1. Back in 2003 it published (PDF) what were then the highest IOPS, fastest IOPS and lowest cost per IOPS ever recorded, again by wide margins.

Comment

Both DataCore and DataDirect Networks, with its IME (Wolf Creek) burst buffer, have cracked the technology trick of using multiple cores in a multi-core processor to handle IOs in parallel and so speed up IO processing phenomenally.
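
Neither vendor publishes its IO engine, but the basic idea of fanning IO handling out across cores rather than funnelling everything through a single IO path can be sketched in a few lines of Python. The worker count and the fake device latency below are arbitrary assumptions for illustration, not anything DataCore or DDN actually ships.

```python
# Minimal sketch of parallel IO handling: a pool of workers, ideally one per
# core, drains a shared request stream instead of a single IO thread doing all
# the work. A toy illustration only, not DataCore's or DDN's implementation.
import os
import time
from concurrent.futures import ThreadPoolExecutor

DEVICE_LATENCY_S = 0.0005  # pretend each IO takes 500 microseconds on the "device"

def handle_io(request_id: int) -> int:
    """Stand-in for servicing one IO request (cache lookup, mapping, media access)."""
    time.sleep(DEVICE_LATENCY_S)
    return request_id

def run(requests: int, workers: int) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_io, range(requests)))
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 2000
    serial = run(n, workers=1)                      # everything through one IO path
    parallel = run(n, workers=os.cpu_count() or 4)  # fan out across the cores
    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```

The simulated IOs merely sleep, so they parallelise trivially; the point is simply that IO-bound work scales with the number of workers draining the queue rather than with the speed of any one of them.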

DDN's Mark Canepa, VP of worldwide presales, service and support, talked about this to us in June 2015 during an IT Press Tour.

What this shows, in our view, is that single-core IO handling in multi-core processors is insufficient for IO-bound applications, where IO-bound means waiting for an external array to complete queued data accesses.

It seems to us that server and physical/virtual storage array operating systems and hypervisors will have to add this technology to their feature sets or face being left behind by much faster competitors that have done so. If, for example, VMware’s VSAN does this but neither Nutanix nor Simplivity does, then VMware will have a significant performance advantage in hyper-converged infrastructure IO processing. And vice versa, of course.

If, however, servers and virtual SANs move to bypassing the IO stack through RDMA or NVMe-over-fabrics-style IO, then the point is moot, we think. ®
