LLM aka Large Legal Mess: Judge wants lawyer fined $15K for using AI slop in filing
Plus: Anthropic rolls out Claude 3.7 Sonnet
A federal magistrate judge has recommended $15,000 in sanctions be imposed on an attorney who cited non-existent court cases concocted by an AI chatbot.
In a report [PDF] filed last week, Mark J. Dinsmore, US Magistrate Judge for the Southern District of Indiana, recommends that attorney Rafael Ramirez, of Rio Hondo, Texas, be "sanctioned $15,000 for his violations in this case – $5,000 for each of the three briefs filed by Mr Ramirez where he failed to appropriately verify the validity and accuracy of the case law he cited to the court and opposing counsel."
Back on October 29, 2024, Ramirez cited three non-existent cases in a brief.
Dinsmore in December ordered [PDF] Ramirez to appear in court and show cause why he should not be sanctioned.
... did not know that AI was capable of generating fictitious cases and citations
"Transposing numbers in a citation, getting the date wrong, or misspelling a party's name is an error," the judge wrote.
"Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it."
That indeed was the explanation offered during a January 3, 2025 hearing on the matter.
"Mr Ramirez admitted that he had relied on programs utilizing generative artificial intelligence ('AI') to draft the briefs," Dinsmore's report states. "Mr Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations."
While Ramirez insisted he did not act in bad faith, he did acknowledge not fully complying with Federal Rule of Civil Procedure 11, which requires attorneys presenting material to the court to certify it is accurate to the best of their knowledge.
Ramirez did not immediately respond to a request for comment.
Dinsmore noted that while the penalty he has recommended is "at the higher end of the sanctions that have previously been imposed for similar conduct, Mr Ramirez's professed ignorance of the propensity of the AI tools he was using to 'hallucinate' citations is evidence that those lesser sanctions have been insufficient to deter the conduct."
Indeed, ample recent reporting about artificial intelligence's tendency to make things up has not deterred lawyers from using chatbots to help draft documents. We reported a similar incident in Wyoming earlier this month, and another in New York. And these are just the court proceedings; who knows how models are being used in marketing and accountancy, for instance, let alone in legal work and programming.
AI-generated falsehoods have even tripped up Minnesota Attorney General Keith Ellison in the case of Kohls et al v. Ellison et al.
Defending a Minnesota ban on the dissemination of "deepfakes" designed to injure a political candidate or influence an election from a First Amendment free speech challenge, Ellison submitted two expert reports from professors offering "background about artificial intelligence ('AI'), deepfakes, and the dangers of deepfakes to free speech and democracy."
This proved problematic for Ellison because a declaration [PDF] from Jeff Hancock, professor of communication at Stanford University and director of the Stanford Social Media Lab, included two references to academic articles that did not exist.
This did not please the court.
... fallen victim to the siren call of relying too heavily on AI
"After reviewing plaintiffs’ motion to exclude, Attorney General Ellison’s office contacted Professor Hancock, who subsequently admitted [PDF] that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article," wrote US District Judge Laura M. Provinzino in a January 10, 2025 order [PDF] granting a motion to exclude Hancock's report. "These errors apparently originated from Professor Hancock’s use of GPT-4o – a generative AI tool – in drafting his declaration."
Hancock did not immediately respond to a request for comment.
Judge Provinzino noted the irony that Hancock, "a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI – in a case that revolves around the dangers of AI, no less." ®