Global AI Hallucination Cases: Lessons from Sanctioned Lawyers
Court sanctions for fabricated AI citations now span at least seven cases globally, establishing clear patterns for prevention.
Courts worldwide have sanctioned or disciplined attorneys in at least seven documented cases involving AI-generated fabricated citations over the past two years. These cases provide concrete guidance on what triggers sanctions and how firms can prevent similar failures.
The pattern is consistent: lawyers input research queries into generative AI, receive plausible-sounding but non-existent case citations, fail to verify them against primary sources, and face sanctions when opposing counsel or the court discovers the fabrications.
Mata v. Avianca (S.D.N.Y., June 2023)
The Facts
In February 2022, Roberto Mata filed a personal injury lawsuit against Avianca Airlines, alleging injury from a metal serving cart during an international flight. His attorneys, Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman P.C., submitted a filing opposing Avianca's motion to dismiss that cited numerous fake precedents generated by ChatGPT.
Avianca's attorneys informed the court they could not locate many authorities cited in the opposition filing. The court conducted its own search and confirmed the citations were fabricated.
The Sanctions
On June 22, 2023, Judge P. Kevin Castel issued a written decision fining the two lawyers and their firm $5,000. He also required the lawyers to write letters to their client and to the judges whose names appeared as authors of the fake AI-generated opinions.
Judge Castel held that Mata's lawyers acted with "subjective bad faith" sufficient for sanctions under Federal Rule of Civil Procedure 11. The critical factor: the lawyers continued to advocate for the fake cases and legal arguments even after being informed by opposing counsel that the citations could not be found.
Key Lessons
Schwartz testified that he was "operating under the false perception that ChatGPT could not possibly be fabricating cases on its own." This mistaken assumption about AI capabilities is common among lawyers unfamiliar with LLM limitations.
The case triggered widespread professional education efforts; by the time the decision issued, ChatGPT's tendency to fabricate legal authorities was on most lawyers' radar.
Ko v. Li (Ontario Superior Court, 2025)
The Facts
In this Canadian matrimonial case, the lawyer's factum cited cases that could not be found on CanLII, Westlaw, Quicklaw, or Google. Two hyperlinks led to entirely different, unrelated cases, another returned a 404 error, and one cited case reached a conclusion opposite to the lawyer's own submission.
The lawyer delivered a letter to the court explaining that her factum had been prepared "in part with the use of generative AI, namely ChatGPT." She acknowledged being "not comfortable with technology such as generative AI" and was "shocked" when she discovered the cases could not be found.
The Outcome
The Ontario Superior Court declined to order sanctions against the lawyer, despite clear violations. The judge noted: "There had to be someone who was going to be the first lawyer to file AI hallucinations here."
While US courts have typically imposed monetary sanctions of around $5,000 in similar cases, the judge observed that Canadian courts play a different role in lawyer regulation. The lawyer had no disciplinary history over 30 years of practice, which influenced the discretionary outcome.
Key Lessons
In 2024, Ontario enacted Rule 4.06.1 (2.1), which requires factums to include a signed statement certifying the authenticity of every authority cited. On its facts, Ko v. Li would now trigger a clear violation of this specific rule.
The absence of sanctions does not mean the conduct was acceptable. The lawyer escaped contempt consequences but faced professional embarrassment and potential future regulatory scrutiny.
Morgan & Morgan (D. Wyoming, February 2025)
The Facts
In the products liability case Wadsworth v. Walmart Inc. and Jetson Electric Bikes, LLC, Morgan & Morgan attorneys submitted motions in limine citing eight non-existent cases. Attorney Rudwin Ayala used his firm's in-house AI platform, MX2.law, to generate case law.
His prompts included instructions to "add to this Motion in Limine Federal Case law from Wyoming setting forth requirements for motions in limine" and "add more case law regarding motions in limine." Ayala included the AI-generated citations in the filings without verifying their accuracy.
The motions bore signatures from Ayala, supervising attorney T. Michael Morgan, and local counsel Taly Goody, though neither Morgan nor Goody was involved in creating the motion or saw it before filing.
The Sanctions
Judge Kelly H. Rankin ordered:
- Rudwin Ayala: $3,000 fine and pro hac vice admission revoked
- T. Michael Morgan: $1,000 fine
- Taly Goody: $1,000 fine
Morgan & Morgan as a firm was not sanctioned because it had already implemented new training protocols requiring independent verification of AI-generated information. The firm issued a company-wide reminder that AI can generate fictitious case law and that using such information in filings can result in termination.
Key Lessons
Supervisory exposure extends to lawyers whose signatures appear on filings even when they were not involved in drafting. Judge Rankin stated that lawyers "still have an ethical duty to check the cites used in their legal filings."
Morgan & Morgan was at the time the 42nd-largest law firm in the United States by headcount. Size and resources do not prevent AI-related failures; governance and verification processes do.
Zhang v. Chen (B.C. Supreme Court, 2024)
The Facts
In Zhang v. Chen, 2024 BCSC 285, the applicant's counsel cited two non-existent authorities. She ultimately admitted that the citations came from ChatGPT and that she had not verified them.
The Outcome
The British Columbia Supreme Court considered imposing special costs against the lawyer. While the court ultimately declined to award special costs, the lawyer was held personally liable for costs.
Key Lessons
Canadian courts have shown willingness to impose cost consequences even without formal sanctions. Personal cost liability creates meaningful financial exposure for AI-related failures.
R (Ayinde) v. London Borough of Haringey (UK High Court, June 2025)
The Facts
This was the first UK High Court case addressing AI misuse by lawyers. In two unrelated matters, barrister Sarah Forey and solicitor Abid Hussain submitted citations to cases that do not exist.
The President of the King's Bench Division, Dame Victoria Sharp, issued what the Bar Council termed "a wake-up call to the profession."
The Outcome
Both practitioners were referred to their professional regulators. This referral mechanism differs from US monetary sanctions but carries potentially more significant long-term consequences for practice rights.
Key Lessons
UK courts will not tolerate AI-generated fabrications in submissions. The regulatory referral path means disciplinary proceedings may follow, with outcomes ranging from warnings to practice restrictions.
Prevention Patterns
Analysis of these cases reveals consistent prevention strategies:
Mandatory Verification Protocols
Every sanctioned case involved a failure to verify AI-generated citations against primary sources. Effective protocols require the following steps (a minimal automation sketch appears after the list):
- Running all cited cases through legal databases (Westlaw, Lexis, CanLII) before submission
- Confirming the legal proposition attributed to each case is accurately stated
- Checking that cases remain good law
- Documenting the verification process
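Part of this checklist can be automated. Below is a minimal sketch in Python, assuming plain-text drafts and a handful of common US reporter formats; the regex and checklist format are illustrative only, and the existence check itself must still be performed by a person against Westlaw, Lexis, or CanLII, since no lookup API is assumed here.

```python
import re

# Minimal sketch: pull citation-like strings out of a draft filing and
# emit a human verification checklist. The pattern covers only a few
# common US reporter formats and is illustrative, not exhaustive.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.\d?d|F\. Supp\. \d?d?)\s+\d{1,5}\b"
)

def extract_citations(text: str) -> list[str]:
    """Return unique citation-like strings found in the draft."""
    return sorted(set(CITATION_PATTERN.findall(text)))

def verification_checklist(draft: str) -> None:
    """Print one checklist line per citation for a human verifier."""
    cites = extract_citations(draft)
    if not cites:
        print("No citations detected -- review the extraction pattern.")
    for cite in cites:
        print(f"[ ] {cite} -- exists? proposition accurate? still good law?")

if __name__ == "__main__":
    # "Varghese" is one of the fabricated citations from Mata v. Avianca.
    draft = "Plaintiff relies on Varghese v. China S. Airlines, 925 F.3d 1339."
    verification_checklist(draft)
```

A script like this can only surface what must be checked; it cannot confirm that a case exists, which is exactly the step every sanctioned lawyer skipped.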
AI Tool Training
Lawyers using AI tools must understand that LLMs generate statistically probable text, not verified legal research. Training should address the following (a toy illustration follows the list):
- How hallucinations occur
- Why AI-generated citations appear authoritative
- The limitations of AI for legal research versus other tasks
- Firm-specific verification requirements
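To make the first two points concrete, here is a toy sketch, not a model of how an LLM actually works internally, that assembles a fluent-looking citation from plausible parts; the party and defendant names are drawn from the fabricated cases in Mata v. Avianca, and the point is that surface fluency carries no guarantee of existence.

```python
import random

# Toy illustration (not an LLM): fabricate a fluent-looking citation from
# statistically plausible parts. Real models do something analogous at the
# token level, which is why hallucinated cites look authoritative.
PARTIES = ["Varghese", "Martinez", "Shaboon", "Petersen"]
DEFENDANTS = ["China Southern Airlines", "Delta Airlines", "Egypt Air"]
REPORTERS = ["F.3d", "F. Supp. 2d", "U.S."]

def fake_citation() -> str:
    """Assemble a plausible but entirely ungrounded citation string."""
    return (f"{random.choice(PARTIES)} v. {random.choice(DEFENDANTS)}, "
            f"{random.randint(100, 999)} {random.choice(REPORTERS)} "
            f"{random.randint(1, 1500)}")

print(fake_citation())  # fluent in form, grounded in no database
```

The output is indistinguishable in form from a real citation, which is why verification against a primary database, not plausibility review, is the only reliable check.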
Supervisory Oversight
The Morgan & Morgan case demonstrates that supervisory lawyers face exposure for filings they sign but do not draft. Firms should consider the following (a sketch of a verification log record follows the list):
- Review workflows that include verification before supervisory signature
- Clear delegation of verification responsibility
- Documentation of who performed verification on each filing
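One way to implement the documentation point is a structured verification record tied to each filing. The sketch below is hypothetical; the class and field names (CitationCheck, FilingVerificationLog, ready_for_signature) are invented for illustration and drawn from no case or product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationCheck:
    """Record that a named person verified one citation."""
    citation: str
    database: str          # e.g. "Westlaw", "Lexis", "CanLII"
    verified_by: str       # who confirmed the case exists
    proposition_ok: bool   # the cited holding matches the brief's claim
    checked_on: date

@dataclass
class FilingVerificationLog:
    """Per-filing log tying every citation to a named verifier."""
    filing_name: str
    drafter: str
    supervisor: str
    checks: list[CitationCheck] = field(default_factory=list)

    def ready_for_signature(self) -> bool:
        # Supervisor signs only if every citation passed human review.
        return bool(self.checks) and all(c.proposition_ok for c in self.checks)
```

Gating the supervisory signature on a complete log addresses the Morgan & Morgan failure mode directly: Morgan and Goody signed a filing that no one had verified.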
Disclosure Policies
Over 200 federal judges have issued standing orders requiring AI disclosure. Proactive disclosure may mitigate sanctions exposure when problems are discovered.
Key Takeaways
- Sanctions typically range from $1,000 to $5,000 in US courts; UK and Canadian courts use regulatory referrals and cost orders
- All sanctioned cases involved failure to verify AI-generated citations against primary legal databases
- Supervisory lawyers face exposure even when not involved in drafting, as demonstrated in Morgan & Morgan
- Ontario now requires signed certification of citation authenticity in factums; US courts increasingly require AI disclosure
- No sanctioned lawyer admitted to knowingly submitting false citations; all claimed to have believed the AI outputs were accurate
Sources
[Mata v. Avianca - Wikipedia]
Comprehensive overview of the landmark case, including background facts, legal proceedings, and the sanctions imposed on the attorneys.
[AI Hallucination Cases Database - Damien Charlotin]
Maintained database tracking AI hallucination cases in courts globally, including case details and outcomes.
[Ko v. Li: AI Legal Ethics Case Analysis]
McCarthy Tétrault analysis of the Ontario Superior Court case and its implications for lawyers using generative AI.
[Morgan & Morgan Lawyers Fined for Hallucinated AI Citations]
Bloomberg Law coverage of the February 2025 sanctions against Morgan & Morgan attorneys, including detailed analysis of the court's reasoning.
[Federal Judge Sanctions Morgan & Morgan for AI-Generated Fake Cases]
LawNext analysis of the Wadsworth v. Walmart ruling and its implications for law firm AI governance.

