
Ghosts in the machine learning pipeline will be impossible to exorcise

Who wants to live forever? It could be the offer you can't refuse

Opinion Tech rarely touches the soul. It enrages when social media pours petrol on hotheads. It inspires when Hubble comes back to life and delivers more cosmic awe. It pays our bills when we work in it, and it empties our pockets when we drunk-eBay that vintage console game collection. But when it brings back a dead lover?

That's no longer science fiction. A bored indie game dev in lockdown decided to use OpenAI's GPT-3 to build a state-of-the-art chatbot based on data from a movie AI character. He opened it up to general access, and got minimal interest – until a chap called Joshua input decade-old conversations with his late fiancée, Jessica, so he could kinda, sorta talk to her again. That got into the papers because it's a touching and unsettling story, and up came the traffic.

It all ended uneventfully. Joshua moved on, while OpenAI, alerted to the project by the press, found it didn't match its ethical and safety criteria. The original developer decided they couldn't comply with those, so the project – and Jessica, left in suspended animation – was closed down.

Except, of course, it hasn't ended at all. There have long been questions about what happens to people's digital footprint after they die. Is it an asset to pass on? Should it be deleted when accounts close? But with social media and messaging systems now carrying so much of our conversation, what happens if someone harvests all the public stuff and runs it through natural language machine learning to reanimate someone? You can make a case that, for close relatives, it should be their choice. So much is public, though, that complete strangers could build a virtual you. Every year the software gets better, the compute gets a little cheaper, and the barriers to doing this come down.

There is, of course, no law explicitly covering this. You can't copyright, trademark or patent a real-life personality. Impersonating the living is a valid career, free of licensing requirements. And the law seems reluctant to be inventive just because there's technology involved. Last week in the US, a judge decided that AIs cannot patent their inventions, much as monkey selfies can't be copyrighted, so whatever an AI's output is, it's not going to be easy to control legally.

OpenAI knows this, which is why they and anyone else with half a sense of foreboding are so hot on developing AI ethics. That's not law, that's a code of conduct. The commercial potential of reanimating people is obvious; pick your favourite dead celebrity and hang out all evening online with them.

It gets weirder. A chatbot with a cryptocurrency wallet could carry on your political or cultural ambitions after you die, whether anyone else wants it to or not. Rich autocrats yearn for immortality to continue their dominance, but until now have had to settle for trying to set up a dynasty – a task that, thankfully, usually fails when the family falls apart. Now they have an option to keep meddling after their demise. Death limits dictatorial dystopias, but for how long?

Getting back to reality, there's no doubt that things like celebrity chatbots will be part of our world soon. If Abba can reform virtually and send creepy Abbatars out to holographically prance across the stages of the world, why not program a Bjornbot or two to engage the fans when they get back home? Such things will be uncontroversial, follow good ethics, and may even go beyond novelty over time. They will be filtered, monitored, and designed to be harmless – as far as they can be.

OpenAI's ethical considerations, like those of many researchers and regulators, are based around avoiding harm. That may not be possible, even with the best intentions and foresight. When spiritualism was popular in the latter half of the 19th century, there was no shortage of sober investigators looking for scientific explanations, many of which excluded the supernatural – visions and voices came from within a person's mind. Others were good at debunking charlatans. But many people didn't want to know – then as now, they chose to believe and be influenced by the interpretation they found most compelling. Then as now, this was a channel for fraud. But when someone just needs the comfort of contact, is that a harm? It is impossible to generalise. We'll have to suck it and see.

Famously, the very first chatbot, Eliza, surprised its creator, MIT researcher Joseph Weizenbaum, by its power to deceive. Programmed in the mid-1960s, it used list processing to find keywords in the user input and build simple replies, mimicking psychotherapy of the time. Users were enthralled and often convinced of its true intelligence, with Weizenbaum's secretary asking him to leave the room so she could have a private conversation. "I had not realized," he said, "that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
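The trick is easy to see in miniature. Here's a minimal sketch of Eliza's keyword-and-template approach, in Python rather than the original MAD-SLIP; the rules below are invented for illustration and are not Weizenbaum's actual DOCTOR script.

```python
import re
import random

# Illustrative keyword -> response-template rules, loosely in the style of
# Weizenbaum's DOCTOR script. These patterns are made up for this sketch.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]

# Fallbacks when no keyword matches - the dodge that keeps the illusion going.
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reply(user_input: str) -> str:
    """Scan the input for the first matching keyword rule and fill a template."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(reply("I am lonely"))  # e.g. "How long have you been lonely?"
```

Weizenbaum's original also reflected pronouns – "my" became "your", "I" became "you" – which this sketch omits. Even so, a handful of rules like these is enough to show why users read intelligence into simple pattern-matching.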

We are more than 50 years on from that. A single developer can now build a system good enough to engage the emotions of a yearning young man with a simulacrum of his lost love. The potential exists to engineer in psychological dependence, or to create a mix of disinformation and charismatic persuasion, at scale. These are not new concerns; what is new is the ubiquity of the systems that let us indulge them.

It will be hard to forget the image of Joshua, having rebuilt his fiancée, quietly saying goodbye again and closing the session down for the last time – leaving just enough credit on the system to bring her back but never doing so. Different to going through old photos, but no less human, no more harmful. A new kind of immortality is now within reach, and if the history of our culture is anything to go by, we will find the temptation irresistible. ®
