
AI Opinion

By Ms. Anna Hubbard posted 07-11-2025 10:13 AM

Here is an article that recently came to the attention of the Ark Bar AI Task Force, and we thought you might find it interesting and informative.

 

There are many benefits to the proper use of AI, but improper use can lead to BIG problems. Be aware!

Trial Court Decides Case Based On AI-Hallucinated Caselaw - Above the Law

The Short Story is:

  • An attorney cited fake cases.
  • Those fake cases made it into the trial court's order, which may also have been prepared by the same attorney.
  • On appeal, the attorney cited more hallucinated cases and requested attorney's fees based on one of the hallucinated cases.
  • The attorney was sanctioned, and the court was not happy.

 

We get this fun footnote in the order:

 

The percentage of bogus citations (73 percent of the 15 citations in the brief, or 83 percent if the two bogus citations in the superior court's opinion and the five additional bogus citations in Husband's response to Wife's petition to reopen Case are included) is consistent with the use of a general purpose large language model. According to a 2024 law review article: In attempting to find answers behind the phenomenon of the "hallucinations" to which generative AI seems prone, researchers at Stanford decided to test the technology. They measured more than 200,000 legal questions on OpenAI's ChatGPT 3.5, Google's PaLM 2, and Meta's Llama 2 (all general purpose large-language models not built specifically for legal use). The researchers found that these large-language models hallucinate at least seventy-five percent of the time when answering questions about a court's core ruling.

 

So the moral of the story is: double-check everything generated by AI.

 

AI Task Force Members: Meredith Lowry, John Cook, Devin Bates, Will Gruber, Jacob White, Judge Margaret Dobson, and Justice Rhonda Wood


#News