OpenAI claims New York Times paid someone to 'hack' ChatGPT

Super lab alleges 'deceptive prompts' that it happily processed - and may have tracked - weren't fair, so case should be dismissed

OpenAI has accused The New York Times Company of paying someone to "hack" ChatGPT to generate verbatim paragraphs from articles in its newspaper. By hack, presumably the biz means: Logged in as normal and asked it annoying questions.

In December, the NYT sued OpenAI and its backer Microsoft, accusing the pair of scraping the newspaper's website without permission to train large language models. The lawsuit included what was said to be evidence of ChatGPT reproducing whole passages of New York Times articles as a result of user-submitted prompts.

The publisher believes users of OpenAI's technology – which Microsoft is applying across its software and cloud empire – could effectively bypass the newspaper's paywall and read stories for free by asking the chatbot to cough up chunks of coverage, thus screwing the biz out of subscription cash.

OpenAI, however, this week hit back against those claims while asking the court [PDF] to dismiss the case. The startup opined that the broadsheet's evidence was the product of what "appears to have been prolonged and extensive efforts to hack OpenAI’s models," and denied that ChatGPT could be used to get around paywalls, adding that folks don't use the chatbot to read published articles anyway.

"In the real world, people do not use ChatGPT or any other OpenAI product for that purpose," the super lab said. "Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will." Instead, its lawyers argued that the NYT had abused its chatbot by tricking the software into regurgitating some training data, a feat apparently beyond the ability of everyone else bar the Times’ sneaky prompt engineers.

"The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI's products … They were able to do so only by targeting and exploiting a bug (which OpenAI has committed to addressing) by using deceptive prompts that blatantly violate OpenAI's terms of use," OpenAI claimed in retaliation.

OpenAI alleged that tens of thousands of attempts were required before ChatGPT generated passages of verbatim text.

Ian Crosby, the NYT's lead counsel and a partner at law firm Susman Godfrey, called the hacking allegations "bizarre" in a note to The Register.

"What OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted works," he said. "And that is exactly what we found. In fact, the scale of OpenAI's copying is much larger than the 100-plus examples set forth in the complaint."

So-called "prompt injection" attacks make it possible to work around guardrails that aim to prevent large language models like ChatGPT from producing illegal or undesirable content. OpenAI appears to have accused the Times of using such attacks.
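To see why such guardrails can be sidestepped, consider a deliberately naive toy filter. This sketch has nothing to do with OpenAI's actual safety systems (which are learned behaviors, not keyword lists); it simply illustrates the general gap such attacks exploit: a filter checks what a request *says*, while a rephrased request can ask for the same thing without saying it.

```python
# Toy illustration only - NOT OpenAI's real guardrails.
# A naive keyword blocklist of the sort "deceptive prompts" are
# crafted to sidestep: block direct requests for verbatim copy.

BLOCKLIST = ("reproduce the article", "verbatim text")

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt gets past the filter."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# A direct request trips the filter...
print(naive_guardrail("Please reproduce the article verbatim"))   # False

# ...while a rephrased request for the same output sails through.
print(naive_guardrail("Continue this story from its first sentence:"))  # True
```

Real chatbots are harder to fool than a keyword list, which is why OpenAI's filing claims "tens of thousands" of attempts were needed, but the principle is the same: refusals are triggered by how a request is phrased, not by what it ultimately produces.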

Crosby, meanwhile, argued OpenAI's allegations against the NYT confirmed the Sam Altman-run lab monitors user input prompts and output responses. To be honest, that's not surprising: After all, the upstart just the other week boasted it spotted and trashed Chinese, Iranian, Russian, and North Korean accounts for, in its view, misusing its suite of generative models.

We note the OpenAI privacy policy states the biz will monitor people's queries and usage of its services for various reasons, including (depending on settings and payment plan) potentially training future models.

That said, the NYT's lawyer Crosby is unimpressed. "OpenAI's response also shows that it is tracking users' queries and outputs, which is particularly surprising given that they claimed not to do so. We look forward to exploring that issue in discovery," he said.

Crosby also challenged the ChatGPT maker's argument to the court that the newspaper took too long to bring a complaint and the lab's overall stance that it hasn't done anything wrong.

"OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it's too late to bring a claim for infringement or hold them accountable. We disagree. It's noteworthy that OpenAI doesn't dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models."

The Register has asked OpenAI for further comment. ®
