When AI Hallucinates to the Court: A Lesson from the Arizona Court of Appeals

The Arizona Court of Appeals’ decision in Washburn v. Houston offers a stark warning about the risks of relying on AI tools in legal practice, a mistake made not only by self-represented parties but also by attorneys who should know better. While the case involved a family law relocation dispute, its most striking aspect appears near the end of the opinion: the court identified hallucinated case citations and quotations in an attorney’s appellate brief.

It should be noted that this decision was not published as an opinion, meaning it generally cannot be cited as precedent in other cases, although there are limited exceptions permitting the use of an unpublished decision in future proceedings. The unpublished status does not diminish the importance of the court’s findings for the parties and attorneys in this case, however.

The court found that counsel for Father cited cases in the appellate brief for legal propositions those cases did not support, and the brief included quotations that simply do not exist in the cited authorities. Not surprisingly, the misstatements consistently favored the attorney’s argument. The court emphasized that, regardless of the source of the errors, counsel has a nondelegable duty of candor to the tribunal: accuracy is not optional, and blaming technology is not a defense. If AI produced the fabricated quotes and cases, counsel still had an absolute duty to verify every case and citation before filing the brief with the court.

As a result, the court took the serious step of referring the matter to the State Bar of Arizona to determine whether professional conduct rules were violated. This referral underscores that AI-assisted research does not reduce ethical obligations; it heightens them. Lawyers must independently verify every citation, quotation, and factual assertion before filing.

The lesson from Washburn is clear: AI can be a powerful tool, but it is not a substitute for legal judgment. AI tools routinely produce made-up cases and made-up quotes, and this case is far from the first to expose an attorney’s (or litigant’s) failure to double-check AI output. Used carelessly, AI can expose lawyers to sanctions, reputational harm, and disciplinary review. In short, AI belongs in the workflow, but it should never be counted on to be completely accurate.