Ghosts in the machine learning pipeline will be impossible to exorcise

Who wants to live forever? It could be the offer you can't refuse

Opinion Tech rarely touches the soul. It enrages when social media pours petrol on hotheads. It inspires when Hubble comes back to life and delivers more cosmic awe. It pays our bills when we work in it, and it empties our pockets when we drunk-eBay that vintage console game collection. But when it brings back a dead lover?

That's no longer science fiction. A bored indie game dev in lockdown decided to use OpenAI's GPT-3 to build a state-of-the-art chatbot based on data from a movie AI character. He opened it up to general access, and got minimal interest – until a chap called Joshua input decade-old conversations from his late fiancée, Jessica, so he could kinda, sorta talk to her again. That got into the papers because it's a touching and unsettling story, and up came the traffic.

It all ended uneventfully. Joshua moved on, while OpenAI, alerted to the project by the press, found it didn't match its ethical and safety criteria. The original developer decided they couldn't comply with those, so the project – and Jessica, who remained in suspended animation – was closed down.

Except, of course, it hasn't ended at all. There have long been questions about what happens to people's digital footprint after they die. Are they an asset to pass on? Should they be deleted when accounts close? But with social media and messaging systems now taking up so much of our conversations, what happens if someone harvests all the public stuff and runs it through natural language machine learning to reanimate someone? You can make a case that for close relatives, it should be their choice. So much is public, though, that complete strangers could build a virtual you. Every year, the software gets better, the compute gets a little cheaper, the barriers to doing this come down.

There is, of course, no law explicitly covering this. You can't copyright, trademark or patent a real-life personality. Impersonating the living is a valid career, free of licensing requirements. And the law seems reluctant to be inventive just because there's technology involved. Last week in the US, a judge decided that AIs cannot patent their inventions, much like monkey selfies can't be copyrighted, so whatever an AI output is, it's not going to be easy to legally control.

OpenAI knows this, which is why they and anyone else with half a sense of foreboding are so hot on developing AI ethics. That's not law, that's a code of conduct. The commercial potential of reanimating people is obvious; pick your favourite dead celebrity and hang out all evening online with them.

It gets weirder. A chatbot with a cryptocurrency wallet could carry on your political or cultural ambitions after you die, whether anyone else wants it to or not. Rich autocrats yearn for immortality to continue their dominance, but until now have had to settle for trying to set up a dynasty – a task that, thankfully, usually fails when the family falls apart. Now they have an option to keep meddling after their demise. Death limits dictatorial dystopias, but for how long?

Getting back to reality, there's no doubt that things like celebrity chatbots will be part of our world soon. If Abba can reform virtually and send creepy Abbatars out to holographically prance across the stages of the world, why not program a Bjornbot or two to engage the fans when they get back home? Such things will be uncontroversial, follow good ethics, and may even go beyond novelty over time. They will be filtered, monitored, and designed to be harmless – as far as they can be.

OpenAI's ethical considerations, like those of many researchers and regulators, are based around avoiding harm. That may not be possible, even with the best intentions and foresight. When spiritualism was popular in the latter half of the 19th century, there was no shortage of sober investigators looking for scientific explanations, many of which excluded the supernatural – visions and voices came from within a person's mind. Others were good at debunking charlatans. But many didn't want to know – then as now, they chose to believe and be influenced by the interpretation they found most compelling. Then as now, this was a channel for fraud. But when someone just needs the comfort of contact, is that a harm? It is impossible to generalise. We'll have to suck it and see.

Famously, the very first chatbot, Eliza, surprised its creator, MIT researcher Joseph Weizenbaum, by its power to deceive. Programmed in the mid-1960s, it used list processing to find keywords in the user input and build simple replies, mimicking psychotherapy of the time. Users were enthralled and often convinced of its true intelligence, with Weizenbaum's secretary asking him to leave the room so she could have a private conversation. "I had not realized," he said, "that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
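Weizenbaum's technique of keyword-spotting and canned reflection is simple enough to sketch in a few lines. The rules and responses below are illustrative inventions, not his original script, but the mechanism – match a keyword pattern in the input, reflect the user's own words back inside a template – is the same one that so unsettled him:

```python
import random
import re

# Illustrative ELIZA-style rules: a regex keyword pattern paired with
# reply templates that reuse the captured fragment of the user's input.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"\bi am (.+)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Swap first and second person so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: fall back to a content-free prompt, as ELIZA did.
    return "Please tell me more."
```

So `respond("I need a holiday")` comes back with something like "Why do you need a holiday?" – no understanding anywhere, just the user's own words turned around. That such a shallow trick convinced people of the machine's intelligence is precisely Weizenbaum's point.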

We are 50 years on from that. A single developer can build a system good enough to engage the emotions of a yearning young man with a simulacrum of his lost love. The potential is there to engineer in psychological dependence, or to create a mix of disinformation and charismatic persuasion, at scale. These are not new concerns; what is new is the ubiquity of the systems that let us indulge them.

It will be hard to forget the image of Joshua, having rebuilt his fiancée, quietly saying goodbye again and closing the session down for the last time – leaving just enough credit on the system to bring her back but never doing so. Different to going through old photos, but no less human, no more harmful. A new kind of immortality is now within reach, and if the history of our culture is anything to go by, we will find the temptation irresistible. ®

Biting the hand that feeds IT © 1998–2022