Spooky entanglement revealed between quantum AI and the BBC
QC: Still not actually useful, but it's increasingly intriguing
Opinion The UK's national broadcaster, the BBC, its R&D team, and its entire 100-year, 15-million-item archive are part of a new consortium investigating Quantum Natural Language Processing (QNLP), with the ultimate aim of automating the extraction of meaning from humanity's babble.
"The most incomprehensible thing about the universe is that it's comprehensible," is one of those rare Einstein quotes that Einstein actually said. We don't know what he might have said about Monty Python's Flying Circus as he died 14 years before its first transmission. But it is fascinating to wonder what he, as one of the founders of quantum physics, might have made of the idea of quantum computing signposting why the universe is comprehensible in the first place.
The consortium, announced on November 25, receives funding from the Royal Academy of Engineering, and will build on work on quantum mechanics and linguistics by Professor Bob Coecke, chief scientist at UK QC company Quantinuum; Professor Stephen Clark, head of AI at Cambridge Quantum; and Professor Mehrnoosh Sadrzadeh of the Computer Science department at University College London. Two geeks in a garage it is not.
Long-term followers of quantum computing news will know that every story about QC exists mostly in the future tense: the technology is more promise than product. It is limited by the current state of the art, noisy intermediate-scale quantum, or NISQ: today's systems are too noisy and too small to be useful. Much of current QC research is in developing techniques and algorithms that will be world-beating once we're out of NISQ and into fault-tolerant, large-scale systems. QNLP is no different.
What makes it interesting is where it's come from. The professorial collaborators and their teams have 15 years of research into analyzing language under their belts. One result is the splendidly named DisCoCat (DIStributional COmpositional CATegorical) framework, which turns groups of sentences into a data set that can be analysed on a quantum system. The intriguing part is that DisCoCat produces a tensor network that maps very closely onto how quantum logic naturally works; the project says language is an inherently good fit to quantum mechanics. But very few standard computing tasks are, so why would the meaning encoded in language be?
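To make the tensor-network idea concrete, here is a deliberately simplified sketch of the DisCoCat recipe in classical NumPy: nouns live in a vector space, a transitive verb is an order-3 tensor, and a sentence's meaning falls out of contracting them together – the same "wiring" that maps onto a quantum circuit. The vectors and names here are made up for illustration; they are not the project's data or code.

```python
import numpy as np

# Toy DisCoCat-style sketch. Nouns are vectors in a noun space of
# dimension N; a transitive verb is an order-3 tensor over
# noun x sentence x noun; a sentence's meaning is the contraction
# of the verb tensor with its subject and object vectors.
N, S = 4, 2                      # noun- and sentence-space dimensions (arbitrary)
rng = np.random.default_rng(0)

alice = rng.random(N)            # hypothetical noun vectors
code = rng.random(N)
writes = rng.random((N, S, N))   # hypothetical transitive-verb tensor

# "alice writes code": wire subject and object into the verb tensor
sentence = np.einsum('i,isj,j->s', alice, writes, code)
print(sentence.shape)            # a vector in sentence space, shape (S,)
```

The contraction pattern – boxes joined by wires – is exactly the kind of diagram that translates naturally into quantum gates, which is why the framework is billed as quantum-native.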
The answer, say the researchers, is category theory. This is a mathematical approach to systems analysis, first mooted in the middle of the 20th century, which holds that you can learn a great deal about a system by ignoring the internal details of each component and concentrating on how the components interact. By mapping behaviors rather than mechanisms, category theory can reveal patterns that can't easily be derived by breaking down individual components – which makes it a natural fit for, among much else, quantum mechanics. Categorical quantum mechanics, a recent field of study, concentrates on pattern and process at the quantum level.
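The stance is easy to demonstrate in miniature. This toy example – ours, not the consortium's formalism – treats two quite different "systems" purely through how their arrows compose, and checks that both obey the same identity and associativity laws, with no reference to what the objects are made of:

```python
# Category-theoretic stance in miniature: ignore the objects' internals,
# study only how arrows (morphisms) between them compose.

def compose(g, f):
    """Arrow composition: (g after f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    return x

# Arrows in a "category" of strings...
shout = str.upper
exclaim = lambda s: s + "!"

# ...and arrows in a "category" of numbers.
double = lambda n: n * 2
succ = lambda n: n + 1

for f, g, x in [(shout, exclaim, "hi"), (double, succ, 3)]:
    # identity law: id after f == f == f after id
    assert compose(identity, f)(x) == f(x) == compose(f, identity)(x)
    # associativity: g after (f after id) == (g after f) after id
    assert compose(g, compose(f, identity))(x) == compose(compose(g, f), identity)(x)

print(compose(exclaim, shout)("hi"))  # HI!
```

Strings and numbers share nothing internally, yet the compositional pattern is identical – the same trick category theory plays when it lines up grammar with quantum processes.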
Category theory is also a good match to linguistic analysis, producing maps of meaning that include information about the relationships between grammar and semiotics – the structure of how meaning is encoded. This is both intensely useful and, to AI researchers and philosophers of mind alike, a very tempting path for conceptual exploration.
The kicker, however, is category theory's ability to find similar patterns in apparently disparate systems. This is how much of mathematics and physics advances: knowledge of one system yields insight into another. What the consortium researchers say is that the quantum nature of their linguistic analysis comes from language following patterns similar to those of quantum mechanics. Hence QC will be staggeringly good at language – when it works.
This connection has been known in theory for a while, but only demonstrated in classical computer simulations. Now there is evidence that reality is prepared to comply, with recent experiments asking small questions of small sentence sets on IBM's Quantum Experience platform. These involved just a couple of tests: one asked which of around a hundred sentences were about food and which about IT; the other plucked out noun phrases. Classical simulations were run alongside the quantum tests to show what you could win once fault-tolerant, large-scale systems come along.
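The flavour of the food-vs-IT test can be conveyed with a purely classical toy. This sketch – a bag-of-words stand-in with an invented vocabulary, nothing like the actual quantum circuits or the experiment's sentence sets – labels each sentence by which topic centroid its word vector aligns with:

```python
import numpy as np

# Hypothetical topic vocabularies for illustration only.
food_vocab = {"chef", "cooks", "dinner", "tasty", "sauce"}
it_vocab = {"coder", "debugs", "program", "software", "bug"}
vocab = sorted(food_vocab | it_vocab)

def vec(sentence):
    """Bag-of-words vector: count each vocabulary word in the sentence."""
    words = sentence.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

food_centroid = vec(" ".join(food_vocab))
it_centroid = vec(" ".join(it_vocab))

def classify(sentence):
    """Label a sentence by the topic centroid it overlaps with most."""
    v = vec(sentence)
    return "food" if v @ food_centroid >= v @ it_centroid else "IT"

print(classify("the chef cooks a tasty dinner"))  # food
print(classify("the coder debugs the program"))   # IT
```

In the real experiments, the sentence representations came from DisCoCat diagrams compiled to quantum circuits rather than word counts – but the task, deciding which of two topics a sentence belongs to, is the same shape.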
In this respect, this is as good as QC gets. But in the sense that a fundamental tool of mathematics and information science is making explicit connections with the deep structure of language and the way quantum mechanics works, it's a highly intriguing pointer to how quantum computing is as interesting to philosophers of cognition as it is to physicists, businesses and computer scientists. Language is a function, perhaps the defining function, of how we categorize ourselves as intelligent, and language processing an intrinsic and unique part of human cognition and human society. To find it obeying rules that other physical systems exhibit doesn't mean that consciousness is any more quantum than any other classical macro system; nature replicates patterns at all scales, after all.
But it may help explain how we can find so much of physics comprehensible; it follows patterns we are configured to exploit. Finding a potential answer to something that baffled Einstein is no mean feat. And who knows, when a future post-NISQ AI has digested all of the BBC's output, we may even be able to ask it not only what the Parrot Sketch means, but what the point is of daytime television at all. Perhaps that's a philosophical question too far. ®