After Ko v. Li: Why Ontario Lawyers Must Verify AI Citations
The Ko v. Li decision exposed the dangers of AI-generated hallucinated citations in Ontario courtrooms. Learn what the case means for your practice and how to protect your firm from similar liability.
A Cautionary Tale from the Ontario Superior Court
On June 23, 2025, the Ontario Superior Court of Justice delivered a ruling that sent shockwaves through the legal community. In Ko v. Li, 2025 ONSC 2766, Justice Frederick Myers found that counsel for the plaintiff had relied on artificial intelligence to draft submissions that included fabricated case citations — cases that simply did not exist. The decision was not another routine procedural order. It was a vivid demonstration of a problem that had been theoretical for many Ontario lawyers until that moment.
The underlying matter was a civil dispute in which plaintiff's counsel filed written submissions containing multiple citations to decisions that, on the court's review, could not be located in any Canadian legal database. When the court asked counsel to produce the cited authorities, the lawyer was unable to do so. The explanation: generative AI had produced plausible-sounding case names, neutral citations, and even summaries that appeared authoritative but were entirely fabricated, the phenomenon the technology industry calls "hallucinations."
Justice Myers did not treat this as a harmless error. The court characterized the submission of non-existent authorities as a serious breach of counsel's duty to the court, one that undermines the administration of justice. For the lawyer involved, the consequences were immediate and severe — including a referral to the Law Society of Ontario for investigation and an order for costs personally payable by counsel.
What Happened in Ko v. Li
The facts of the case are straightforward and deeply instructive. The plaintiff's counsel used a generative AI tool to assist in drafting legal submissions. The tool produced text that included references to multiple Ontario court decisions, complete with properly formatted neutral citations (year, court abbreviation, and decision number), party names, and brief parenthetical summaries of the supposed holdings.
None of these cases existed. The AI had generated text that was structurally indistinguishable from genuine legal citations but entirely fictional. Counsel filed these submissions without verifying the citations against any legal database: not CanLII, not Westlaw, not LexisNexis. The opposing party, perhaps sensing something was off, challenged the authorities, and the court confirmed they were fabricated.
Justice Myers's endorsement is notable for its directness. The court drew an unambiguous line: lawyers cannot delegate their professional judgment to machines, and the failure to verify AI-generated content is not a technological issue but a professional responsibility issue. Citing the longstanding principle that counsel certify the accuracy of their submissions to the court, the decision makes clear that the duty of verification falls squarely on the lawyer, regardless of how the content was produced.
The LSO's Position on AI and Professional Responsibility
The Law Society of Ontario has been developing its guidance on AI use in legal practice. While the LSO has not banned the use of generative AI, it has been unequivocal that existing professional obligations apply fully to AI-assisted work. This means:
- Competence (Rule 3.1-1): Lawyers must understand the tools they use, including their limitations. Using an AI tool without understanding that it can fabricate citations fails the competence standard.
- Quality of Service (Rule 3.2-1): Delivering work product containing fabricated authorities does not meet the standard of service clients are entitled to expect.
- Candour with the Tribunal (Rules 5.1-1 and 5.1-2): While the intent in Ko v. Li was not to deceive, the practical effect of filing fabricated citations is to mislead the court and opposing parties.
- Supervision (Rule 6.1-1): Lawyers who delegate AI-assisted research to junior lawyers or staff must supervise the output with the same rigour they would apply to any other work product.
The LSO's Convocation adopted a resolution in late 2024 encouraging lawyers to adopt AI tools cautiously and to implement quality assurance measures when using generative AI. The society has also indicated that technology competency will increasingly be part of what constitutes competent practice under the Rules of Professional Conduct.
Why Citation Hallucinations Are a Systemic Risk
AI hallucination is not a bug that will be fixed in the next software update. It is a structural feature of how large language models work. These models generate text by predicting the most probable next token based on patterns in their training data. When asked to cite legal authority, they produce text that looks like a citation because they have seen many citations — not because they have accessed a legal database. The result is text that is formally correct but substantively fictional.
This is particularly dangerous in legal practice for several reasons:
- Legal citations have a specific format that is easy for AI to reproduce convincingly: year, court abbreviation, decision number, and party names all follow predictable patterns (see the sketch after this list).
- The volume of Canadian case law means that even experienced lawyers may not immediately recognize a fabricated citation, particularly from a lower court or a recent year.
- The adversarial system depends on counsel being able to trust — and verify — each other's authorities. Fabricated citations erode this foundational trust.
- Unlike a factual error that can be corrected, a fabricated citation calls into question the entirety of the lawyer's work product on that matter.
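To make the format point concrete, here is a minimal sketch showing that a fabricated citation passes a format check just as easily as a real one. The regular expression is an illustrative assumption about the neutral-citation pattern, not an official validation standard, and the second citation below is invented for this example.

```python
import re

# A Canadian neutral citation follows a rigid pattern:
# year, tribunal identifier, decision number (e.g. "2025 ONSC 2766").
NEUTRAL_CITATION = re.compile(r"^(19|20)\d{2}\s+[A-Z]{2,6}\s+\d{1,6}$")

# One real citation (Ko v. Li itself) and one invented for illustration.
for citation in ("2025 ONSC 2766", "2021 ONSC 99914"):
    ok = bool(NEUTRAL_CITATION.match(citation))
    print(f"{citation!r} is well-formed: {ok}")
```

Both citations print True. A format check can catch typos, but it cannot tell you whether a decision exists; only a database lookup can do that.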
Practical Steps to Protect Your Firm
Every Ontario law firm that uses — or is considering using — generative AI tools needs a verification protocol. Here is a practical framework:
1. Mandatory Citation Verification
Never file or submit any AI-generated citation without independently verifying it against a trusted legal database. Check the neutral citation, party names, and the holding as characterized in your submission. CanLII is free and accessible; there is no excuse for skipping this step.
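For firms that want an automated first pass, CanLII publishes a REST API (a free key is available on request). The sketch below is a minimal existence check only; the caseBrowse URL shape and the case-ID construction are assumptions drawn from CanLII's API documentation and should be confirmed against the current docs before anything depends on them.

```python
import requests

API_KEY = "YOUR_CANLII_API_KEY"  # CanLII issues keys on request

def citation_exists(database_id: str, case_id: str) -> bool:
    """First-pass existence check against CanLII's REST API.

    Assumption: the caseBrowse URL shape below is taken from CanLII's
    published API documentation; confirm it before relying on it.
    """
    url = f"https://api.canlii.org/v1/caseBrowse/en/{database_id}/{case_id}/"
    resp = requests.get(url, params={"api_key": API_KEY}, timeout=10)
    return resp.status_code == 200

# Ko v. Li's own citation, "2025 ONSC 2766", would be checked roughly as:
if not citation_exists("onsc", "2025onsc2766"):
    print("UNVERIFIED: pull the decision and confirm by hand before filing")
```

Note the limit of automation: a successful lookup proves only that the case exists. Whether it stands for the proposition in your submission is something only a lawyer reading the decision can confirm.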
2. Implement a Two-Person Rule for AI-Assisted Drafting
Any document produced with AI assistance that will be filed with a court or tribunal should be reviewed by a second lawyer who did not use the AI tool for that draft. Fresh eyes are more likely to catch AI-generated content that seems plausible but is fabricated.
3. Adopt Firm-Wide AI Use Policies
Written policies should specify: which AI tools are approved for use, which tasks they may be used for, what verification is required before AI-generated content leaves the firm, and who is responsible for verification. The LSO expects firms to have these policies. Courts will increasingly ask whether they exist.
4. Train All Firm Members
Hallucination risk is not limited to junior lawyers. Any member of the firm who uses AI tools — from students to senior partners — needs to understand that AI can fabricate content with confidence and that verification is non-negotiable.
5. Document Your Verification Process
If something goes wrong, you want to demonstrate that you took reasonable steps. Keep records showing that citations were verified, by whom, and when. This documentation may be critical in a Law Society investigation or a negligence claim.
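What a record can look like in practice: the sketch below appends one line per verified citation to a simple log file. The field names and the JSON Lines format are illustrative assumptions, not a format prescribed by the LSO or the courts.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record structure; field names are illustrative only.
@dataclass
class CitationCheck:
    citation: str     # e.g. "2025 ONSC 2766"
    matter: str       # internal file number
    verified_by: str  # person responsible for the check
    database: str     # e.g. "CanLII", "Westlaw", "LexisNexis"
    found: bool       # did the citation resolve to a real decision?
    checked_at: str   # UTC timestamp of the check

record = CitationCheck(
    citation="2025 ONSC 2766",
    matter="FILE-0042",
    verified_by="A. Lawyer",
    database="CanLII",
    found=True,
    checked_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only log that can be produced in an investigation or audit.
with open("citation_checks.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```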
The Broader Implications for Ontario Legal Practice
Ko v. Li is not an isolated incident. Reports of AI hallucination in legal filings have emerged in the United States, the United Kingdom, and now Canada. As AI tools become more widely adopted, the frequency of these incidents will increase unless the profession takes proactive action.
For Ontario lawyers, the case raises several strategic considerations. First, firms that invest in robust AI governance will be better positioned to withstand regulatory scrutiny and to demonstrate professional responsibility. Second, firms that fail to adopt verification protocols face growing liability exposure — both professional discipline and civil negligence claims from clients who suffer adverse outcomes due to fabricated authorities. Third, courts are likely to develop stricter expectations around AI use, and lawyers who cannot demonstrate compliance will be at a disadvantage.
The decision also has competitive implications. Clients are becoming more sophisticated about AI risks and will increasingly ask their lawyers about the firm's AI policies. Being able to articulate a thoughtful, responsible approach to AI adoption is a differentiator.
How LexIntake Helps
LexIntake was built with the risks of AI in legal practice front of mind. Our Citation Verifier tool cross-references every AI-generated citation against CanLII and other Canadian legal databases in real time, flagging any citation that cannot be verified before it leaves your desk. Our Legal Assistant is designed to support — never replace — lawyer judgment, providing research starting points that are clearly marked as AI-generated and requiring lawyer verification before any citation is included in work product. These tools are purpose-built for Ontario law firms that want to harness AI efficiency without sacrificing professional responsibility.
LexIntake Editorial Team
Legal Technology Insights
The LexIntake Editorial Team publishes practical guidance for Ontario law firms navigating AI adoption, compliance, and growth.