The AI arms race could give us the cool without the cruel
War is obscene but it's also responsible for many technological advances
Opinion Every week, the stories are about very different things, but they share a common theme.
Take the Jolly Roger Phone Company. Even the name warms the heart of right-thinking people. What it does is even better – deploying armies of audio chatbots to confuse, tie up and generally ruin the day of the telemarketers whose job it is to ruin yours. It's a cheap service where users get to choose and monitor their preferred passive-aggressive AI army. Technology in the service of humankind gets no better than this.
Telemarketing is itself a highly adaptable technology, one that operates at the cutting edge of what is technically and economically sustainable. It has to make a lot of calls to land a sale, in a highly antagonistic environment: everyone hates telemarketers – regulators, consumer rights advocates, people trying to eat their dinner in peace. If Jolly Roger succeeds, it will be a direct threat to this frankly loathsome business model. Telemarketing will fight back, using AI to detect and shut down chatbot spoofers quickly.
Detection could come from spectral or linguistic analysis, or some other system signal nobody has thought of yet, but it would give the telemarketers their calls-per-minute metric back. Until Jolly Roger – or, one hopes, a brand new industry devoted to protecting us from unwelcome intrusion – works out countermeasures, round and round we shall go, until one party runs out of resources to keep fighting, as the Soviet Union did when Reagan pulled his Star Wars stunt. It's an arms race.
Arms races are endemic in human affairs; they're just most obvious where technology is involved. As well as Jolly Roger, we're also being subjected of late to the Red Hat versus Rocky Linux turf war, in which Red Hat is trying to close off RHEL's source code from Rocky Linux's rebuilding efforts. It's messy, involving a rather sour mix of hidden repos and twisted licensing terms on one hand, and new ways to pull sources without triggering lawyers on the other. Each move by one side is met by another: an arms race of ideas. We're going to learn a lot about the robustness and limits of open source as a result, and the industry will recalibrate to keep the things it thinks most important.
Arms races can generate hyper-rapid evolution, but they get a bad rap because they tend to generate victims too. In actual conflict like the Second World War, that meant a body count in the tens of millions amid abject cruelty, in exchange for computers, jet engines, radar, atomic power, digital communications and much more. The Cold War arms race gave us the supreme achievement of Voyager at the edge of interstellar space, put there by the same technology that to this day keeps thousands of missiles desperate to fry us in cleansing nuclear fire.
Even Jolly Roger and RHEL versus Rocky have their victims: the former, the workers on exploitative wages forced through the dehumanizing psychological sandpaper of dealing with trickster AIs; the latter, the whole open source ethos it puts at risk.
What if there were a way to harness the turbocharged transformative power of arms races without the victims? We're already doing it with generative adversarial networks, or GANs, where one neural network tries to synthesize data that looks like real training data while another tries to tell the two apart. In an iterative zero-sum game, the two systems refine each other's capabilities in a relentless arms race where nobody loses.
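That adversarial loop can be sketched at toy scale. Here a one-line generator and a logistic discriminator chase each other over a hypothetical one-dimensional Gaussian "dataset" – the distribution, learning rates, and hand-derived gradients are all illustrative choices, not any particular framework's API:

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.25   # hypothetical "training data" distribution

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The smallest possible GAN: generator g(z) = a*z + b,
# discriminator d(x) = sigmoid(w*x + c), gradients done by hand.
a, b = 1.0, 0.0                   # generator parameters
w, c = 0.0, 0.0                   # discriminator parameters
lr, batch = 0.02, 64

fake_start = a * rng.standard_normal(1000) + b   # samples before training

for _ in range(3000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.standard_normal(batch)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator step (non-saturating objective): push d(fake) toward 1
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake_end = a * rng.standard_normal(1000) + b     # samples after training
```

Neither side "wins": the generator's samples drift toward the real distribution precisely because the discriminator keeps finding them out.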
The idea is not new. The 18th century essayist and lexicographer Dr Samuel Johnson had severe mood swings – he used his manic periods to write, his depressive times to edit. Yet for the first time, the data analysis and generative potential of machine learning means we can start to think of arms races in AI not as Google versus Microsoft versus whoever's got the best PR this month, but as a way to automate a productively combative environment.
What might that look like? Let's get chunky: climate change isn't going to be slowed in the nick of time by magic new technology – we already have most of what we need. Instead, existing interests are trying to slow change with claims of economic necessity. Against that, set models of what happens to agriculture, industry and populations under different carbon emission scenarios. That can be framed as an arms race between strategies within an economic model, the winner being whichever reaches a sustainable target in time.
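That strategic back-and-forth can be caricatured as a minimax game: a planner picks an emissions cut, an opposing interest picks the worst-case economic shock, and the winning strategy is the one that holds up even against the adversary's best move. The payoff function and parameters below are invented purely for illustration:

```python
import numpy as np

def payoff(cut, shock):
    """Hypothetical toy cost model: climate damage falls with deeper
    emission cuts; economic cost rises with cuts, amplified by an
    adversarially chosen shock. Lower total is better for the planner."""
    climate_damage = (1.0 - cut) ** 2
    economic_cost = shock * cut ** 2
    return climate_damage + economic_cost

cuts = np.linspace(0.0, 1.0, 101)     # planner's strategies: fraction cut
shocks = np.linspace(0.0, 1.0, 101)   # adversary's strategies: drag factor

# Arms race as minimax: for each cut, assume the adversary answers with
# its most damaging shock, then pick the cut that is best even so.
worst_case = np.array([max(payoff(c, s) for s in shocks) for c in cuts])
best_cut = cuts[np.argmin(worst_case)]
```

In this toy the adversary's best reply is always the maximal shock, and the robust strategy lands at a 50 percent cut – the point where deeper cuts cost more than the climate damage they avoid. The interesting move is replacing `payoff` with learned models of agriculture, industry and population outcomes, and letting both sides search at machine speed.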
Then we could look at the resources required: the opportunity cost of arms races is so often their most contentious aspect. The ability of machine learning to create and critique models at scale makes it a potentially groundbreaking way to treat arms racing as a discipline in its own right.
Not that we're knocking using AI to confound and discourage telemarketers. Give that Jolly Roger a Nobel, we say. It's just that this thinking could also save the world. ®