Lawyers face judge's wrath after AI cites made-up cases in fiery hoverboard lawsuit

Talk about court red-handed

Demonstrating yet again that uncritically trusting the output of generative AI is dangerous, attorneys involved in a product liability lawsuit have apologized to the presiding judge for submitting documents that cite non-existent legal cases.

The lawsuit began with a complaint filed in June 2023 against Walmart and Jetson Electric Bikes over a fire allegedly caused by a hoverboard [PDF]. The blaze destroyed the plaintiffs' house and caused serious burns to family members, it is said.

Last week, Wyoming District Judge Kelly Rankin issued an order to show cause [PDF] that directs the plaintiffs' attorneys to explain why they should not be sanctioned for citing eight cases that do not exist in a January 22, 2025 filing.

The citations were offered in support of an argument the lawyers hoped would keep certain evidence from being presented to the jury.

That argument was delivered in a motion in limine [PDF] – a special type of motion that applies for certain evidence to be excluded at trial and which is considered without a jury being present.

The document cites nine cases in support of its arguments, among them Meyer v. City of Cheyenne, 2017 WL 3461055 (D. Wyo. 2017).

As identified in a subsequent filing [PDF], the case was hallucinated by OpenAI's ChatGPT.

[Screenshot] Witness ChatGPT hallucinating a legal case: what happens when the fictitious 2017 WL 3461055 citation is put to the OpenAI chatbot

It gets worse: the case number attached to another of the imagined proceedings dredged up by ChatGPT for the under-fire attorneys, 2:16-cv-00246-SWS, is real but belongs to a different matter, better known as 2:16-cv-00246-NDF [PDF] – a 2016 case, American Wild Horse Preservation Campaign et al. v. United States Department of the Interior Secretary et al., heard by a different judge: Nancy D. Freudenthal (NDF) rather than the errantly cited Scott W. Skavdahl (SWS).

As noted by Judge Rankin, eight of the nine citations in the January motion were pulled from thin air or lead to cases with different names. Pointing to past instances in which AI chatbots have hallucinated in legal proceedings – Mata v. Avianca, Inc., United States v. Hayes, and United States v. Cohen – the judge's order asks the attorneys who signed the filing to explain why they should not be punished.

Two of the attorneys – Taly Goody and T. Michael Morgan – on Monday filed a joint response [PDF] acknowledging the error. Their filing includes the following:

Our internal artificial intelligence platform "hallucinated" the cases in question while assisting our attorney in drafting the motion in limine.

This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm.

This serves as a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence.

In a modest effort to prevent this from happening again, the law firm Morgan & Morgan, of which T. Michael Morgan is an attorney, on Monday "added a click box to our AI platform that requires acknowledgement of the limitations of artificial intelligence and the obligations of the attorneys when using our artificial intelligence platform."

On Thursday, the third attorney involved – Rudwin Ayala – shouldered the blame in a response [PDF] that clears his co-counsel of involvement in the drafting of the dodgy document.

"Part of my preparation of said motions in limine included use of an internal AI tool for purposes of providing additional case support for the arguments I set forth in the Motions," the attorney explains. "After uploading my draft of the Motion to the system’s AI tool, the relevant queries I made with the tool included 'add to this motion in limine Federal Case law from Wyoming setting forth requirements for motions in limine', with an additional query of 'add more case law regarding motions in limine'.

"Another query made was 'Add a paragraph to this motion in limine that evidence or commentary regarding an improperly discarded cigarette starting the fire must be precluded because there is no actual evidence of this, and that amounts to an impermissible stacking of inferences and pure speculation. Include case law from federal court in Wyoming to support exclusion of this type of evidence.'

"There were a few other inquiries made requesting the addition of case law to support exclusion of evidence, all similar in nature. This was the first time in my career that I ever used AI for queries of this nature."

It may also be the last time. After thoroughly explaining the SNAFU, the attorney threw himself at the mercy of the court.

"With a repentant heart, I sincerely apologize to this court, to my firm, and colleagues representing defendants for this mistake and any embarrassment I may have caused. The last week has been very humbling for me professionally, and personally, one that I can guarantee shall not ever repeat itself." ®
