Artificial intelligence has been creeping into almost every profession, including law. In Canada, a few recent cases show what happens when lawyers or self-represented individuals rely on AI to make their arguments.

Examples of AI “Hallucinations”

In Zhang v. Chen, 2024 BCSC 285, the lawyer representing Mr. Chen was ordered to pay costs after submitting a brief that quoted two made-up cases. Ms. Zhang’s legal team spent substantial effort trying to locate the fabricated case law, even going so far as to hire a legal researcher. It eventually came to light that the lawyer representing Mr. Chen had used ChatGPT, which generated the fictitious cases. The Court further ordered the lawyer to review all of her files to determine whether ChatGPT had been used and, if so, to advise the opposing parties and the Court immediately. In the Court’s final comments, the Honourable Justice Masuhara stated:

[46]      As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.  Competence in the selection and use of any technology tools, including those powered by AI, is critical.  The integrity of the justice system requires no less.  

In Ko v. Li, 2025 ONSC 2985, another lawyer faced potential discipline after relying on three cases that did not exist and one whose facts were misstated. The lawyer cited these cases in written and oral submissions in open court and was ordered to provide submissions on why she should not be held in contempt. The Court stated:

[14]    Irrespective of issues concerning artificial intelligence, counsel who misrepresent the law, submit fake case precedents, or who utterly misrepresent the holdings of cases cited as precedents, violate their duties to the court.

[16]     A court decision that is based on fake laws would be an outrageous miscarriage of justice to the parties and would reflect very poorly on the court and the civil justice system.

In Specter Aviation Limited c. Laprade, 2025 QCCS 352, a self-represented individual was fined $5,000.00 for presenting the court with “jurisprudence” invented by AI. The Court noted in its decision eight instances of non-existent citations, decisions that were never rendered, irrelevant references, and inconsistent conclusions. Mr. Laprade was also ordered to pay all legal costs.

All these incidents have one thing in common: the Courts expect humans, not algorithms, to be responsible for what gets filed and argued. The consequences when Judges make decisions based on fake or faulty case law are severe.

Why This Matters in Criminal Law

In criminal cases, lives and liberty are on the line. When lawyers or self-represented people rely on AI to analyze the facts, identify the relevant law, or find Charter breaches, they risk serious mistakes.

AI doesn’t understand the nuances of each individual case or the human context behind it when attempting to apply the Canadian Charter of Rights and Freedoms. It might confuse U.S. legal rules with Canadian ones, misquote a judgment, or overlook the fine details that determine whether someone’s rights were violated. If those errors make their way into a Judge’s decision, they can threaten a person’s right to a fair trial.

The Bigger Picture

AI tools can make legal research faster, especially for people who can’t afford a lawyer. But without careful human oversight, a Judge may rely on incorrect information in reaching a decision, which would be a significant miscarriage of justice.

The takeaway is that AI can help people find legal information, but it cannot replace real legal decision-making and the expert analysis lawyers develop through years of practical experience. AI also makes mistakes: sometimes it retrieves incorrect or irrelevant legal information, and in the worst-case scenario, it invents its own.

At Roulston Urquhart, we understand that technology is changing how information is shared and accessed. But no technology can replace the depth of understanding, critical thinking, and judgment that come from years of legal experience.

Our lawyers research every case carefully, verify every source, and apply the law with precision. We recognize that shortcuts, whether through AI or otherwise, can lead to real harm for clients. That’s why we remain committed to providing work that reflects the skill, professionalism, and diligence earned through years at the bar, and why we take no shortcuts in our representation.