Cisco and MapR have posted the first result for the TPCx-HS Big Data benchmark, but, of course, there's nothing to compare it against yet.
TPC is the Transaction Processing Performance Council, which produces audited, comparable benchmarks.
The TPCx-HS specification says the test "stresses both hardware and software including Hadoop run-time, Hadoop Filesystem API compatible systems and MapReduce layers".
The available scale factors are 1TB, 3TB, 10TB, 30TB, 100TB, 300TB, 1,000TB, 3,000TB and 10,000TB; these measure "Hadoop architectures across several dimensions, including overall performance and cost on an equal footing at the level of scale that is appropriate for each unique deployment".
MapR ran a series of tests at 1TB, 3TB and 10TB. The performance metric is the HSph score, reflecting the TPCx-HS throughput or processing power, at each capacity level.
[Figure: The HSph calculation]
A TPCx-HS result is only comparable with other TPCx-HS results at the same scale factor.
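The TPCx-HS spec defines HSph as the scale factor divided by the elapsed time of the performance run, measured in hours; price/performance is simply total system cost over that score. A minimal sketch of the arithmetic (function names are my own, not from the spec):

```python
def hsph(scale_factor_tb: float, elapsed_seconds: float) -> float:
    """TPCx-HS performance metric: scale factor (TB) per hour of run time."""
    return scale_factor_tb / (elapsed_seconds / 3600.0)

def price_per_hsph(total_system_cost: float, hsph_score: float) -> float:
    """TPCx-HS price/performance metric: total cost divided by the HSph score."""
    return total_system_cost / hsph_score

# For example, a 1TB run completing in 710 seconds would score
# roughly 5.07 HSph, matching the figure MapR reported at that scale.
print(round(hsph(1, 710), 2))
```

Note that because the metric divides by run time, a higher HSph at the same scale factor means a faster cluster, while a lower $/HSph means better value.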
The MapR/UCS/Red Hat scores, each run on a 16-node cluster of Cisco UCS servers with two 2.2GHz Xeon E5-2660 processors apiece, were:
- 1TB – 5.07 HSph at $121,23/HSph
- 3TB – 5.10 HSph at $120,519/HSph
- 10TB – 5.77 HSph at $106,524/HSph
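Assuming the spec's definition of HSph as scale factor over run time in hours, the scores above imply the elapsed times for each run; a quick back-of-the-envelope sketch:

```python
# Published HSph scores by scale factor (TB), from the MapR/Cisco result.
results = {1: 5.07, 3: 5.10, 10: 5.77}

# HSph = TB / hours, so implied run time in minutes is TB / HSph * 60.
for tb, score in results.items():
    minutes = tb / score * 60
    print(f"{tb}TB run: ~{minutes:.1f} minutes")
# The 1TB run works out to roughly 12 minutes, the 10TB run to under two hours.
```

The near-flat HSph across 1TB and 3TB, rising at 10TB, suggests the cluster only starts earning its keep at the larger scale factors, which is consistent with the improving $/HSph figure.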
So, come on Cloudera and Hortonworks, step up to the TPCx-HS benchmark and post results, please.
Cisco resells the MapR software and its channel partners can select MapR SKUs from Cisco's price list. ®