Texas judge demands lawyers declare AI-generated docs
After that New York case in which an attorney cited ChatGPT-hallucinated proceedings as real
After a New York attorney admitted last week to citing non-existent court cases that had been hallucinated by OpenAI's ChatGPT software, a Texas judge has directed attorneys in his court to certify either that they have not used artificial intelligence to prepare their legal documents or, if they have, that a human has verified the output.
"These platforms in their current states are prone to hallucinations and bias," reads a note from Judge Brantley Starr of the Northern District of Texas. "On hallucinations, they make stuff up – even quotes and citations.
"Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.
"As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why."
The judge's note goes on to explain that his court will strike any filing from an attorney who fails to submit the required certification attesting that these terms have been read and understood.
Judge Starr was not available for comment.
Not an uncommon problem
The note was published on Tuesday, May 30. It follows a New York judge's damning May 26 Order to Show Cause [PDF] directing attorneys for a plaintiff suing Colombian airline Avianca S.A. to explain why they should not be sanctioned for "the citation of non-existent cases."
The plaintiff's attorneys had admitted in an affidavit to the submission "to the court of copies of non-existent judicial opinions" and to the "use of a false and fraudulent notarization."
Essentially, the attorneys used ChatGPT to look up previous cases, the chat-bot invented those cases, and the lawyers quoted them in filings as if they were real. The legal team was then called out by the defense, and admitted its error.
Judge Kevin Castel, as you can imagine, was not impressed. He directed his ire at the plaintiff's attorneys in Mata v. Avianca, Inc. (1:22-cv-01461), who opposed a defense effort to dismiss the personal injury claim by citing Varghese v. China Southern Airlines Co Ltd, Shaboon v. Egyptair, and Petersen v. Iran Air, among other machine-imagined court cases.
The Register asked for comment from the plaintiff's law firm, Levidow, Levidow &amp; Oberman, and our call was not returned.
We've previously written about ChatGPT inventing obituaries for living people, going so far as to craft URLs to non-existent newspaper obits, so it's no surprise the text-generating bot makes up court cases. OpenAI warns its system "may occasionally generate incorrect information," which seems mild given the model's readiness to confidently put out wrong info.
In that affidavit [PDF], the offending attorney said he "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity."
While the New York case appears to be the first AI-based litigation misfire to attract public attention, there are claims that other fake case citations have made it into court filings.
Stephen Wu, shareholder in Silicon Valley Law Group and chair of the American Bar Association's Artificial Intelligence and Robotics National Institute, told The Register that while he's unaware of instances of AI confabulation beyond the New York case, he expects there are others.
Rules are rules
Asked whether lawyers should know better or whether the novelty of generative AI might somehow mitigate the situation, Wu was unequivocal.
"There is no excuse in my book," said Wu. "There are rules of professional conduct and they say that if you're going to delegate any responsibility for a legal matter, that you have a duty to supervise the people to whom you are delegating the matter."
Wu said those rules were written with paralegals and assistants in mind, but argued they should apply just as much to generative AI.
"I read that, in an age of generative artificial intelligence, to say if you're delegating something to artificial intelligence, you need to supervise that artificial intelligence and check its results to make sure that it meets the standard of competence that lawyers are required to meet," he explained.
Wu said there's a role for AI tools in the law, but that role has to be subordinate to a person. He said he expects generative AI will prove more useful for tasks like generating contracts, where there's more room for varied wording and interpretation. In court filings, he said, "the stakes are somewhat higher because you've got to dot every 'i' and cross every 't'."
He also believes the Texas judge’s certification requirement could be adopted more broadly.
"The judge’s standing order about certifying not relying solely on generative AI may become a local rule in specific courts," Wu said.
"And it seems to me that the Federal Rules advisory committee and state equivalents may want to change the rules of procedure. For instance, the Texas judge’s standing order concept of certifying not using generative AI or at least checking against real sources could fit within Federal Rule of Civil Procedure 11(b)."
The American Bar Association at its midyear meeting in February adopted a resolution [PDF] calling on organizations developing, designing, and deploying AI systems to ensure that people oversee those models, that individuals and organizations are held accountable for harms done by neural networks and the like, and that developers of these systems implement them in a way that's transparent and auditable while also respecting intellectual property. Good luck.
Wu said the need to address how technological changes affect the law came up about a decade ago in the ABA's Ethics 20/20 project. "Among other things, it talked about how lawyers' duty to competence requires competence in understanding technology, and how it relates to the practice of law and when using technology for the practice of law," he said.
Wu expects further discussion on the role of generative AI in the law. "I anticipate that we're going to have to have a little bit of committee work on that, to try to maybe come up with changes to the ABA model rules," he said. ®