  The most prestigious law school admissions discussion board in the world.

Lawyers Using AI Keep Citing Fake Cases In Court (WaPo)







Date: June 3rd, 2025 10:51 PM
Author: scholarship

https://archive.is/waUtF

Lawyers using AI keep citing fake cases in court. Judges aren’t happy.

Failing to vet AI output before using it in court documents could violate attorneys’ duty to provide competent representation, the American Bar Association has said.

June 3, 2025

Courts across the country are facing a deluge of filings from attorneys and litigants that back their arguments with nonexistent research hallucinated by generative artificial intelligence, prompting judges to fight back with fines and reprimands.

The problem reflects well-known issues with AI tools, which are prone to fabricate facts, or in these cases, citations. Soon after AI tools such as ChatGPT began to circulate, attorneys made headlines for submitting error-ridden memos after failing to check AI-assisted work.

But mistakes and embarrassed mea culpas have continued to pile up. Damien Charlotin, a Paris-based legal researcher who maintains a database of cases of AI hallucinations filed in court, said he’s found 95 such instances in the United States since June 2023 — including 58 this year. They include cases where attorneys or other participants admitted to using AI in filings that contained errors, or judges reported references to nonexistent cases or quotations.

“It has been accelerating,” Charlotin said.

Judges aren’t happy. In May, a Utah appeals court ordered an attorney to pay $1,000 to a Utah legal aid foundation for submitting a brief with references to nonexistent cases that the attorney later attributed to AI, according to court documents. The same month, a federal court in Indiana fined an attorney $6,000 and a California special master ordered two law firms to pay $31,100 to opposing attorneys in an insurance dispute for submitting briefs with similar mistakes.

“Plaintiff’s use of AI affirmatively misled me,” Michael Wilner, the California special master, wrote in his order. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn’t exist. That’s scary.”

The errors mirror those appearing in school essays, academic journals and government documents as writers increasingly adopt AI. The White House made headlines last week when its sweeping “MAHA Report” on children’s health included citations to studies that did not exist and evidence that AI had been used in its preparation, The Washington Post and others reported.

Legal experts say they’re concerned that high-profile errors haven’t led attorneys to use AI with caution.

“I thought that after the first such incident made national news, there would be no more,” said Stephen Gillers, a law professor at New York University. “But apparently the temptation is too great.”

The allure of AI is evident in a field that demands attorneys quickly produce arguments backed by research from volumes of case history. When properly vetted, AI tools can help save time in that research, Gillers said.

Lisa Lerman, a law professor at the Catholic University of America, said that lawyers taking shortcuts when pressed for time is not a new issue. Attorneys have been known to plagiarize previous briefs or cite irrelevant cases.

But AI tools can confidently fabricate cases and citations an attorney might fail to double check. That presents “dangers of an entirely different order,” Gillers said. He argued that filing court documents without checking the veracity of citations is “the classic definition of malpractice.”

Legal associations seem to agree, but formal AI policies are still taking shape. The American Bar Association wrote in a July opinion that failing to review AI output “could violate the duty to provide competent representation.” State bar associations that have adopted AI policies generally require attorneys to verify the accuracy of AI research, though they are split on whether attorneys must disclose AI use up front.

Meanwhile, references to fake cases keep coming up in court.

In the past year, errors attributed to AI have appeared in states including California and Wyoming. They have snarled high-profile federal cases like the prosecution of Timothy Burke, a Florida media consultant accused of hacking to obtain and publish unaired Fox News footage (Burke denies the footage was improperly obtained), and a defamation lawsuit against MyPillow CEO Mike Lindell.

And they’ve been met with scathing criticism. An attorney for Burke submitted a motion to dismiss charges that fabricated quotes for opinions from several other federal courts on standards for innocence, a Florida judge wrote in May.

The judge ordered Burke, if he refiled, to explain “how these unprofessional misrepresentations of legal citations occurred and what counsel will do to avoid filing any similarly unacceptable motions again.”

Lindell’s legal team submitted a brief “replete with … fundamental errors” and nearly 30 erroneous citations including misquotes and fabricated cases, and later said the brief was created with AI, a Colorado judge wrote in April.

Burke’s attorneys did not respond to a request for comment; they told the court that one attorney erroneously included unvetted research from ChatGPT in a court motion. Lindell’s attorneys did not respond to a request for comment but said in court that a prior draft was mistakenly filed in court and that “there is nothing wrong with using AI when used properly.”

Both filings were struck, but the attorneys have not faced further sanction.

Even those presumably familiar with AI tools have been caught making mistakes. In January, a Minnesota judge tossed expert testimony on AI-generated deepfakes from Jeff Hancock, a Stanford communications professor, after he admitted that his declaration mistakenly included references to two nonexistent academic articles that ChatGPT generated, according to court documents.

Attorneys representing the AI company Anthropic in a copyright case against several music labels apologized in May for submitting a brief in California federal court with an inaccurate citation created using Anthropic’s own Claude AI after opposing attorneys accused them of fabricating research. The research was genuine, Anthropic’s attorneys said, but Claude misstated a research paper’s author and title when prompted to create a properly formatted citation.

Hancock, Anthropic and Latham & Watkins, the firm representing Anthropic, did not respond to requests for comment.

As state bars grapple with AI policy, judges are figuring out in real time how best to discourage the practice. Several judges have ordered fines paid to the court, opposing attorneys or nonprofits. Wilner, the California special master, ordered two law firms to pay $31,100 to opposing counsel, among the largest penalties issued so far as a sanction for AI errors.

“Strong deterrence is needed to make sure that attorneys don’t succumb to this easy shortcut,” Wilner wrote.

(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985157)




Date: June 3rd, 2025 10:53 PM
Author: Dave Prole

Didn't one of the lawyers claim that Lexis AI hallucinated cases? That seems unfair

(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985162)




Date: June 3rd, 2025 10:54 PM
Author: ...,,..;...,,..,..,...,,,;..,


as i said before, the best thing for the profession would be extremely harsh sanctions (from judges and/or state bars) for fucking up AI shit. lawyers should be scared shitless to dabble in AI. would be great for protectionism of the profession to keep it outdated.

(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985165)




Date: June 3rd, 2025 10:55 PM
Author: https://imgur.com/a/o2g8xYK


It's a problem that should solve itself in an adversarial system.

(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985168)




Date: June 4th, 2025 1:26 AM
Author: Melted Crayon



(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985366)




Date: June 4th, 2025 12:28 AM
Author: .,..,..,.,,..,..,,,,,,,,..,...,.,.,.,


so far every publicized case has involved citing cases that don't actually exist or are real but have nothing to do with the issue. all of this is avoidable through simple cite checking, which most competent lawyers (and incompetent lawyers at biglaw firms) do anyway; a rough sketch of that kind of check is appended after this post.

if they start coming down on plagiarism-type stuff then that will actually have an impact.

(http://www.autoadmit.com/thread.php?thread_id=5733138&forum_id=2Elisa#48985252)
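Purely as an illustration of the cite-checking point in the post above: a minimal sketch, in Python, of what an automated first pass over a draft brief could look like. The reporter-citation regex, the lookup_citation() resolver, and the sample citations are assumptions made for the sketch rather than any real citator's API, and a citation that resolves still has to be read by a human to confirm it supports the argument.

import re

# Rough pattern for common federal reporter citations, e.g. "598 U.S. 175" or "123 F.3d 456".
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?: ?2d| ?3d| ?4th)?)\s+\d{1,4}\b"
)

def lookup_citation(cite: str) -> bool:
    """Hypothetical resolver: True if the citation exists in a real reporter database.

    Stubbed with placeholder data so the sketch runs on its own; a real check would
    query an actual citator or research platform here.
    """
    known = {"598 U.S. 175"}  # placeholder data for the demo, not a vetted citation
    return cite in known

def flag_unverified(brief_text: str) -> list[str]:
    """Return every citation found in the brief that the resolver could not verify."""
    found = set(CITATION_RE.findall(brief_text))
    return sorted(c for c in found if not lookup_citation(c))

if __name__ == "__main__":
    # Made-up draft text for the demo.
    draft = "Compare 598 U.S. 175 with the purported holding of 123 F.3d 456."
    for cite in flag_unverified(draft):
        print(f"UNVERIFIED: {cite}")

Run as-is, this prints "UNVERIFIED: 123 F.3d 456", since only the placeholder citation is in the stub's known set; the point is only that the mechanical part of cite checking is cheap compared with the sanctions described in the article above.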