Japanese boffins try 'token passing' to scale quantum calculations

If you liked it, then you shoulda put a ring on it

Apart from actually performing computations, one of the most difficult quantum computing challenges is getting qubits to scale.

A Japanese team has published what it believes is a solution to the problem of scale. Quantum gates are complex creatures with many more components than their classical equivalents, so instead of trying to cram enough gates into a small space to perform calculations, the University of Tokyo proposal is to send photons around in a ring, re-using one gate to act on different photons in turn.
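The time-multiplexing idea — one physical gate reused on different pulses as they circulate the loop — can be sketched in plain code. This is purely an illustration of the scheduling concept: the `run_loop` function and the use of plain numbers to stand in for photonic pulses are our own simplifications, not the paper's formalism.

```python
# Toy sketch of time-multiplexed gate reuse (illustrative only; the
# real scheme manipulates photonic quantum states, not numbers).
# Assumption: each "pulse" is modelled as a plain value and each
# programmed gate step as a (pulse_index, function) pair.

from typing import Callable

def run_loop(pulses: list[float],
             program: list[tuple[int, Callable[[float], float]]]) -> list[float]:
    """Apply a programmed sequence of operations with a single gate.

    Pulses circulate the loop; on each pass, the one gate acts on
    whichever pulse the (electrically switched) program selects,
    while the others travel around unchanged.
    """
    for target, gate in program:          # one physical gate, reused
        pulses[target] = gate(pulses[target])
    return pulses

# Example: three circulating pulses, one gate reused three times
result = run_loop([1.0, 2.0, 3.0],
                  [(0, lambda x: x + 1),   # gate acts on pulse 0
                   (2, lambda x: x * 2),   # then on pulse 2
                   (0, lambda x: -x)])     # then on pulse 0 again
print(result)  # [-2.0, 2.0, 6.0]
```

The point of the sketch is only that one gate, switched by a programmable controller, can service arbitrarily many circulating pulses in sequence — which is what lets the scheme scale without multiplying hardware.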

(By now, greybearded and defeated Token Ring partisans are wiping away nostalgic tears and thinking “we were too soon”.)

According to professor Akira Furusawa and assistant professor Shuntaro Takeda, who came up with the scheme, the light pulses can, if need be, travel around the loop indefinitely without losing the quantum information they carry.

Because of this, the pair make a fairly bold claim:

“This approach potentially enables scalable, universal, and fault tolerant quantum computing, which is hard to achieve by either qubit or CV [continuous variable – El Reg] scheme alone.”

The Furusawa/Takeda loop-based optical quantum gate architecture

The paper, published in Physical Review Letters and also available at arXiv (PDF), notes that the scheme is compatible with existing quantum error-correction techniques.

In a media release (here), Professor Furusawa says his team is working on automating the error-correction process. The release adds that his previous optics-based quantum computing system occupied 6.3 m², required 500 mirrors and lenses, and could handle only a single pulse at a time.

Furusawa's paper notes that the gate sequence is electrically programmed, making it fast, and that "all the basic building blocks of our architecture are already available", meaning the work should be replicable.

Furusawa said in the canned statement: "We’ll start work to develop the hardware, now that we’ve resolved all problems except how to make a scheme that automatically corrects a calculation error." Good luck with that. ®
