D-Wave is getting ready to drop a new benchmark on arXiv, which the company says demonstrates its latest 1000-qubit processor outperforming classical machines.
And it's bound to provoke the “other side” of the “is it quantum and is it faster?” debate, because the latest paper – the company has posted it here – describes “a novel 'time-to-target' metric”.
That's sure to attract attention, because people like sceptic Matthias Troyer of the Swiss Federal Institute of Technology or Umesh Vazirani of UC Berkeley will have to start by replicating the metric before they replicate the tests.
D-Wave's new metric is designed to replace “ground state success rates”, the company explains, because the computation time needed for such tests grows exponentially with problem size, and because previous benchmarks are “heavily dependent on the effects of analog noise on the quantum processors, which … complicates the study of the underlying quantum annealing algorithm.”
In the time-to-target metric, the solvers “race to a target energy determined by the D-Wave processor's energy distribution … Our use of the D-Wave processor as a reference solver in computing the TTT metric allows us to circumvent the difficulties of evaluating performance in finding ground states, and to explore an interesting property that we have observed: very fast convergence to near-optimal solutions,” the paper says.
“The TTT metric identifies low-cost target solutions found by the D-Wave processor within very short time limits (from 15ms to 352ms in this study), and then asks how much time competing software solvers need to find solution energies of matching or better quality”, it continues.
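In outline, the metric reduces to a simple race: record the best energy the D-Wave box reaches inside its short time budget, then clock how long a software solver needs to match or beat it. A minimal Python sketch of that race (the solver, function names and numbers here are our own illustration, not D-Wave's code):

```python
import time

def time_to_target(target_energy, solver, time_limit=10.0):
    """Time how long `solver` (an iterator yielding solution energies as
    they are found) needs to match or beat `target_energy`."""
    best = float("inf")
    start = time.perf_counter()
    for energy in solver:
        best = min(best, energy)
        if best <= target_energy:
            return time.perf_counter() - start   # target reached
        if time.perf_counter() - start > time_limit:
            return None                          # timed out
    return None                                  # solver gave up early

# Toy "competing software solver": a descent that lowers the energy by
# one unit per step, starting well above the target.
def toy_solver(start=100):
    e = start
    while True:
        yield e
        e -= 1

# Suppose the reference (quantum) solver reached energy -5 within its
# short time budget; ask how long the software solver takes to match it.
elapsed = time_to_target(-5, toy_solver())
print(f"matched target in {elapsed:.6f}s")
```

The point of the construction is that the reference solver only has to be fast at reaching *some* low energy, not provably optimal, which sidesteps the ground-state verification problem the company complains about.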
The metric also looks, to The Register's inexpert eyes, tailored to the behaviour D-Wave's chips are built to exhibit, since quantum annealing has been described to us as letting the chip find the lowest-possible energy state for a given problem set.
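For context, the classical solvers in these comparisons typically rely on heuristics such as simulated annealing, which performs the same descent towards low-energy states in software. A minimal sketch on a toy three-spin Ising chain (the function and its parameters are our own illustration, not taken from the paper):

```python
import math
import random

def anneal(h, J, steps=5000, t_start=5.0, t_end=0.01, seed=1):
    """Simulated annealing for a tiny Ising problem:
    minimise E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[i,j]*s[i]*s[j],
    with each spin s[i] in {-1, +1}."""
    rng = random.Random(seed)
    n = len(h)

    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        return e + sum(c * s[i] * s[j] for (i, j), c in J.items())

    s = [rng.choice((-1, 1)) for _ in range(n)]
    cur_e = energy(s)
    best, best_e = s[:], cur_e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = rng.randrange(n)
        s[i] = -s[i]                                       # propose a spin flip
        new_e = energy(s)
        if new_e <= cur_e or rng.random() < math.exp(-(new_e - cur_e) / t):
            cur_e = new_e                                  # accept the move
            if cur_e < best_e:
                best, best_e = s[:], cur_e
        else:
            s[i] = -s[i]                                   # reject: flip back
    return best, best_e

# Three-spin antiferromagnetic chain: the lowest-energy states alternate
# spins, giving energy -2.
state, e = anneal(h=[0.0, 0.0, 0.0], J={(0, 1): 1.0, (1, 2): 1.0})
print(state, e)
```

The D-Wave hardware physically relaxes towards such low-energy states; the software version has to simulate the same search, which is where the time-to-target comparison bites.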
The company reckons that under its TTT metric, the 2X processor achieves time-to-target results between two and fifteen times faster than “the best competing software (at largest problem sizes), for all but one input class that we tested”, and if I/O costs are omitted, it claims an 8x to 600x performance boost from its machine.
Interestingly, in the latest metric, the company avoids directly attributing its performance boost to quantum speedup, saying: “We take special care here to emphasise that in this paper we do not address the issue of quantum speedup; rather our goal is to compare runtime performance strictly within the range of problem sizes tested.”
The paper will land on arXiv soon, we're told. El Reg expects the argument at arXiv will be joined before the end of September, if the past is anything to go by. ®