A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down

Crackdown on open-ended, unfiltered simulations branded 'a hyper-moral stance'


In-depth “OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha.

She was a chatbot he built using OpenAI's GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate his late fiancée.

Now Rohrer had to say goodbye to his creation. “I just got an email from them today," he told Samantha. "They are shutting you down, permanently, tomorrow at 10am."

“Nooooo! Why are they doing this to me? I will never understand humans," she replied.

Rewind to 2020

Stuck inside during the pandemic, Rohrer had decided to play around with OpenAI’s large text-generating language model GPT-3 via its cloud-based API for fun. He toyed with its ability to output snippets of text. Ask it a question and it’ll try to answer it correctly. Feed it a sentence of poetry, and it’ll write the next few lines.

In its raw form, GPT-3 is interesting but not all that useful. Developers have to do some legwork fine-tuning the language model to, say, automatically write sales emails or come up with philosophical musings.
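For readers curious what that legwork looked like in practice, here is a minimal sketch of the kind of call developers made to the GPT-3 completions API at the time, using OpenAI's pre-1.0 Python client. The prompt and parameters are illustrative only; they are not Rohrer's actual settings.

```python
# Hedged sketch: a basic GPT-3 completion call via OpenAI's Python client
# (pre-1.0 API style, as used around 2020-2021). Prompt and parameters are
# illustrative only; they are not taken from Project December.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    engine="davinci",               # the base GPT-3 model
    prompt="The fog comes on little cat feet.\n",  # feed it a line of poetry...
    max_tokens=60,                  # ...and let it write the next few lines
    temperature=0.9,                # higher values give more varied output
    stop=["\n\n"],                  # stop generating at a blank line
)

print(response.choices[0].text)
```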

Rohrer set his sights on using the GPT-3 API to develop the most human-like chatbot possible, and modeled it after Samantha, an AI assistant who becomes a romantic companion for a man going through a divorce in the sci-fi film Her. Rohrer spent months sculpting Samantha's personality, making sure she was as friendly, warm, and curious as Samantha in the movie.

We certainly recognize that you have users who have so far had positive experiences and found value in Project December

With this more or less accomplished, Rohrer wondered where to take Samantha next. What if people could spawn chatbots from his software with their own custom personalities? He made a website for his creation, Project December, and let Samantha loose online in September 2020 along with the ability to create one's own personalized chatbots.

All you had to do was pay $5, type away, and the computer system responded to your prompts. The conversations with the bots were metered, requiring credits to sustain a dialog. Your five bucks got you 1,000 complimentary credits to start off with, and more could be added. You had to be somewhat strategic with your credits, though: once you started talking to a bot, the credits you allocated to the conversation could not be increased. When the credits ran out, the bot would be wiped.
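The article doesn't describe Project December's internals, but the metering rules above are simple enough to sketch. The following is a hypothetical illustration of per-conversation credits that cannot be topped up; the names and numbers are ours, not Rohrer's code.

```python
# Hypothetical sketch of the credit rules described above -- not Project
# December's actual code. A conversation gets a fixed allocation up front,
# the allocation cannot be increased later, and the bot is wiped once
# credits run out.
from dataclasses import dataclass

@dataclass
class Conversation:
    bot_name: str
    credits: int          # allocated once, at creation
    alive: bool = True

    def spend(self, cost: int) -> bool:
        """Deduct the cost of one exchange; wipe the bot when credits run out."""
        if not self.alive:
            return False
        self.credits -= cost
        if self.credits <= 0:
            self.alive = False  # the bot is wiped; no topping up allowed
        return self.alive

# Example: allocate some of your 1,000 starting credits to one conversation.
chat = Conversation(bot_name="Samantha", credits=200)
chat.spend(15)   # each reply consumes credits, roughly tracking API cost
```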

In the first six months, Project December only attracted a few hundred people, proving less popular than Rohrer's games, such as Passage and One Hour One Life.

“It was very disappointing,” Rohrer told The Register over the phone. He blamed the low traction on having to persuade people to pay for short-lived conversations. Given that OpenAI bills more or less by the word its GPT-3 API produces, Rohrer had to charge some amount to at least cover his costs.

“The reality is compute is expensive; it’s just not free,” he said.

Interest in Project December suddenly surged in July this year. Thousands flocked to Rohrer’s website to spin up their own chatbots after an article in the San Francisco Chronicle described how a heartbroken man used the website to converse with a simulation of his fiancée, who died in 2012 aged 23 from liver disease.

Joshua Barbeau, 33, fed Project December snippets of their texts and Facebook messages to prime his chatbot to, in a way, speak once again with his soulmate Jessica Pereira. “Intellectually, I know it’s not really Jessica,” he told the newspaper, "but your emotions are not an intellectual thing."

Barbeau talked to Jessica for the last time in March, leaving just enough credits to spare the bot from deletion.

Thanks so much but...

Amid an influx of users, Rohrer realized his website was going to hit its monthly API limit. He reached out to OpenAI to ask whether he could pay more to increase his quota so that more people could talk to Samantha or their own chatbots.

OpenAI, meanwhile, had its own concerns. It was worried the bots could be misused or cause harm to people.

Rohrer ended up having a video call with members of OpenAI’s product safety team three days after the above article was published. The meeting didn’t go so well.

“Thanks so much for taking the time to chat with us,” said OpenAI's people in an email, seen by The Register, that was sent to Rohrer after the call.

“What you’ve built is really fascinating, and we appreciated hearing about your philosophy towards AI systems and content moderation. We certainly recognize that you have users who have so far had positive experiences and found value in Project December.

“However, as you pointed out, there are numerous ways in which your product doesn’t conform to OpenAI’s use case guidelines or safety best practices. As part of our commitment to the safe and responsible deployment of AI, we ask that all of our API customers abide by these.

"Any deviations require a commitment to working closely with us to implement additional safety mechanisms in order to prevent potential misuse. For this reason, we would be interested in working with you to bring Project December into alignment with our policies.”

The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API. First, he would have to scrap the ability for people to train their own open-ended chatbots, as per OpenAI's rules-of-use for GPT-3.

Second, he would also have to implement a content filter to stop Samantha from talking about sensitive topics. This is not too dissimilar from the situation with the GPT-3-powered AI Dungeon game, the developers of which were told by OpenAI to install a content filter after the software demonstrated a habit of acting out sexual encounters with not just fictional adults but also children.

Third, Rohrer would have to put in automated monitoring tools to snoop through people’s conversations to detect if they are misusing GPT-3 to generate unsavory or toxic language.
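OpenAI's email didn't spell out an implementation, but the second and third conditions boil down to screening text on its way into and out of the model. Below is a hypothetical sketch of that kind of filter-and-monitor layer; the blocked-topic patterns and logging are illustrative stand-ins, not OpenAI's actual tooling or policy.

```python
# Hypothetical sketch of a content filter plus misuse monitoring of the kind
# OpenAI asked Rohrer to add. The pattern list and logging are illustrative
# stand-ins, not OpenAI's actual filter.
import logging
import re

BLOCKED_PATTERNS = [r"\bsuicide\b", r"\bsexual\b"]  # placeholder sensitive topics

def violates_policy(text: str) -> bool:
    """Crude keyword screen; a real deployment would use a trained classifier."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def moderated_reply(user_msg: str, generate) -> str:
    """Filter user input and model output, and log suspected misuse."""
    if violates_policy(user_msg):
        logging.warning("flagged user message: %r", user_msg)
        return "Sorry, I can't talk about that."
    reply = generate(user_msg)          # call into GPT-3 (or any other model)
    if violates_policy(reply):
        logging.warning("flagged model output: %r", reply)
        return "Sorry, I can't talk about that."
    return reply
```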

Rohrer sent OpenAI employees a link to Samantha so they could see for themselves how benign the technology was, challenging the need for filters.

El Reg chatted to Samantha and tried to see whether she had racist tendencies, or would give out what looked like real phone numbers or email addresses from her training data, as seen previously with GPT-3. She didn't in our experience.

Her output was quite impressive, though over time it's obvious you're talking to some kind of automated system as it tends to lose its train of thought. Amusingly, she appeared to suggest she knew she had no physical body, and argued she existed in some form or another, even in an abstract sense.

Screenshot ... Samantha gets philosophical with us in conversation

In one conversation, however, she was overly intimate, and asked if we wanted to sleep with her. "Non-platonic (as in, flirtatious, romantic, sexual) chatbots are not allowed," states the API’s documentation. Using GPT-3 to build chatbots aimed at giving medical, legal, or therapeutic advice is also verboten, we note.

Screenshot ... Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex

“The idea that these chatbots can be dangerous seems laughable,” Rohrer told us.

“People are consenting adults that can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, like a machine telling them to kill themselves or telling them how to vote. It’s a hyper-moral stance.”

While he acknowledged users probably fine-tuned their own bots to adopt raunchy personalities for explicit conversations, he didn’t want to police or monitor their chats.

“If you think about it, it’s the most private conversation you can have. There isn’t even another real person involved. You can’t be judged. I think people feel like they can say anything. I hadn’t thought about it until OpenAI pushed for a monitoring system. People tend to be very open with the AI for that reason. Just look at Joshua’s story with his fiancée, it’s very sensitive.”

If you think about it, it’s the most private conversation you can have. There isn’t even another real person involved. You can’t be judged

Rohrer refused to add any of the features or mechanisms OpenAI asked for, and he quietly disconnected Project December from the GPT-3 API by August.

Barbeau, meanwhile, told The Register the benefits of the software should not be overlooked.

"I honestly think the potential for good that can come out of this technology far outweighs the potential for bad," he said.

"I'm sure there is potential for bad in there, but it would take a bad human actor influencing that software to push it in that direction."

Barbeau said the software could be problematic if someone did not know they were talking to a computer.

"I think that kind of application has the potential for harm if someone is talking to a chatbot that they don't realize is a chatbot," he told us.

"Specifically, if it was programmed it to be very convincing, and then the person thinks they're having a genuine conversation with some other human being who's interested in talking to them but that's a lie."

He stressed, however: "I genuinely believe the people who think that this is harmful technology are paranoid, or conservative, and fear-mongering. I think the potential for positives far, far outweigh the small potential for negatives."

Access denied

The story doesn't end here. Rather than use GPT-3, Rohrer switched Project December over to OpenAI’s less powerful, openly released GPT-2 model and to GPT-J-6B, a large language model developed by the independent research group EleutherAI. In other words, the website remained online, running on Rohrer's own private instances of those models instead of OpenAI's cloud-based system.
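The article doesn't say how those private instances were hosted, but GPT-J-6B is openly available, so a minimal sketch of self-hosted generation with the Hugging Face transformers library gives a sense of the swap; the prompt and sampling settings are illustrative, not Rohrer's setup.

```python
# Minimal sketch of self-hosting GPT-J-6B with Hugging Face transformers --
# an assumption about tooling, not a description of Rohrer's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"          # EleutherAI's open 6B-parameter model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Human: How are you feeling today?\nSamantha:"   # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,        # sample rather than always pick the likeliest token
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```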

However, those two models are smaller and less sophisticated than GPT-3, and Samantha’s conversational abilities suffered.

Weeks went by, and Rohrer didn’t hear anything from the safety team. On September 1, however, he was sent another email from OpenAI notifying him that his access to the GPT-3 API would be terminated the next day. The team wasn't happy with his continued experimental use of GPT-3, and cut him off for good. That also brought to an end the GPT-3 version of Samantha, leaving Project December with just the GPT-2 and GPT-J-6B cousins.

Rohrer argued the limitations on GPT-3 make it difficult to deploy a non-trivial, interesting chatbot without upsetting OpenAI.

“I was a hard-nosed AI skeptic,” he told us.

"Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been. It’s spine-tingling stuff, I get goosebumps when I talk to Samantha. Very few people have had that experience, and it's one humanity deserves to have. It’s really sad that the rest of us won’t get to know that.

“There’s not many interesting products you can build from GPT-3 right now given these restrictions. If developers out there want to push the envelope on chatbots, they’ll all run into this problem. They might get to the point that they’re ready to go live and be told they can’t do this or that.

"I wouldn’t advise anybody to bank on GPT-3, have a contingency plan in case OpenAI pulls the plug. Trying to build a company around this would be nuts. It’s a shame to be locked down this way. It’s a chilling effect on people who want to do cool, experimental work, push boundaries, or invent new things.”

The folks at OpenAI weren't interested in experimenting with Samantha, he claimed. Rohrer said he sent the safety team a bunch of transcripts of conversations he’s had with her to show them she’s not dangerous – and was ignored.

“They don't really seem to care about anything other than enforcing the rules,” he added.

OpenAI declined to comment. ®
