
AI in Legal Practice: LSO's Guidelines Every Ontario Lawyer Must Follow

The LSO has taken a principles-based approach to AI regulation in legal practice. Here's what every Ontario lawyer needs to understand about their professional obligations when using AI tools.

LexIntake Editorial Team · Legal Technology Insights · Thursday, September 25, 2025 · 6 min read

The LSO's Principles-Based Approach

Unlike some jurisdictions that have rushed to enact AI-specific rules, the Law Society of Ontario has adopted a principles-based approach that applies existing professional obligations to AI use. The LSO's position is that lawyers' core duties — competence, confidentiality, supervision, and honesty — do not change because a new technology enters the picture. What changes is the context in which those duties must be fulfilled, and the risks that must be managed.

This approach has important implications. It means that there is no "AI exemption" and no separate standard of care for AI-assisted work. It also means that lawyers cannot argue that they did not understand their obligations — the duties are the same ones they have always had. The only question is whether they are fulfilling them in the context of AI tools.

The Core Obligations Applied to AI

Competence (Rule 3.1-1)

Competence requires lawyers to have the knowledge, skill, and thoroughness reasonably necessary for the matters they handle. In the AI context, this means:

  • Understanding the capabilities and limitations of any AI tool used in practice — including its potential for errors, omissions, and fabrication.
  • Knowing enough about the tool's operation to evaluate the reliability of its output, rather than treating AI-generated content as presumptively accurate.
  • Recognizing that competence increasingly includes technology competence. The LSO has signalled through its technology guidance that ignorance of AI limitations is not a defence against the consequences of AI errors.

Confidentiality (Rule 3.3-1)

Confidentiality obligations apply directly and forcefully to AI use. When a lawyer inputs client information into an AI tool, they must ensure that the information is protected:

  • Cloud-based AI tools may store, process, or learn from input data. Lawyers must understand what happens to data they enter into a tool and whether it is used for model training, stored on third-party servers, or accessible to the tool provider.
  • PIPEDA and applicable provincial privacy legislation impose obligations regarding the collection, use, and disclosure of personal information. Entering client data into an AI tool without understanding the tool's data handling practices may constitute a privacy breach.
  • Where AI tools are used for transcription, document review, or research involving client files, firms must conduct due diligence on the tool provider's security practices, data residency, and retention policies.
  • Informed consent may be required before client information is processed by certain AI tools, particularly those that involve third-party access or cross-border data transfers.

Supervision (Rules 5.1-1 through 5.1-3)

Supervision obligations extend to AI tools just as they extend to human assistants. Lawyers who delegate work to AI-assisted processes must:

  • Review AI-generated output before it is used or filed. The level of review must be proportionate to the significance of the output and the risk of error.
  • Ensure that any person under their supervision who uses AI tools is trained on the tool's limitations and on the firm's verification protocols.
  • Maintain ultimate responsibility for the quality and accuracy of all work product, regardless of whether AI was involved in its production.

Candour and Honesty (Rule 3.2-2)

Lawyers must not mislead courts, tribunals, or other participants in the legal process. AI-specific applications of this duty include:

  • Never filing or relying on AI-generated content that has not been verified. The Ko v. Li decision made clear that filing unverified AI output containing fabricated citations breaches the duty of candour.
  • Disclosing AI use where required. While there is currently no general obligation to disclose that AI was used in drafting, courts may develop specific disclosure requirements. Lawyers should monitor this developing area.
  • Being transparent with clients about how AI is used in their matter. This is a matter of both professional responsibility and client relationship management.

The LSO's Specific AI Guidance

In its technology guidance and through Convocation resolutions, the LSO has identified several areas of particular concern regarding AI:

  • AI Hallucination: The LSO is acutely aware of the risk that AI tools can generate fabricated information, including fictitious case citations, statutory references, and factual assertions. Verification is not optional.
  • Bias and Fairness: AI tools trained on historical data may reproduce or amplify biases present in that data. Lawyers must be alert to the possibility that AI-generated analysis may reflect systemic biases, particularly in areas like criminal law, immigration, and child protection.
  • Data Security: The LSO expects lawyers to conduct due diligence on the security and privacy practices of any AI tool provider before entrusting client data to that tool.
  • Transparency with Clients: The LSO encourages lawyers to discuss their use of AI with clients, including what tools are being used, what data is being processed, and what safeguards are in place.
  • Ongoing Monitoring: The LSO recognizes that AI technology is evolving rapidly and that guidance may need to be updated. Lawyers have an obligation to stay current with both technological developments and regulatory changes.

What a Compliant AI Policy Looks Like

Every Ontario law firm that uses AI should have a written AI use policy. At a minimum, the policy should address:

  • Approved Tools: Which AI tools are approved for use in the firm, and for what purposes. Not every AI tool is appropriate for every task.
  • Prohibited Uses: A clear statement of activities that must never be performed by AI without direct lawyer oversight — for example, filing documents with a court, providing legal opinions to clients, or generating citations for legal submissions.
  • Verification Protocols: Specific procedures for verifying AI-generated content before it is used, filed, or communicated. This includes who is responsible for verification and what standards apply.
  • Data Handling: Rules for what client information may be entered into AI tools, what consent is required, and what due diligence must be conducted on tool providers.
  • Training Requirements: What training all firm members must complete before using AI tools, including ongoing education as tools and guidance evolve.
  • Incident Response: What happens when an AI error is discovered — who is notified, how the error is corrected, and how the incident is documented.

The Competitive Advantage of Responsible AI Use

Firms that adopt AI thoughtfully and responsibly — with robust policies, verification protocols, and transparent client communication — are better positioned than those that either avoid AI entirely or adopt it recklessly. Responsible AI use delivers efficiency gains while maintaining the quality and reliability that clients and courts expect. In a market where clients increasingly ask about AI practices, demonstrable competence in AI governance is a competitive differentiator.

How LexIntake Helps

LexIntake's platform is designed with LSO compliance at its core. Our Compliance Dashboard monitors your firm's AI use against LSO guidelines and alerts you to emerging regulatory requirements. The Citation Verifier ensures that no AI-generated citation is filed without independent verification against Canadian legal databases. Every LexIntake tool operates under the principle that AI supports lawyer judgment — it never replaces it. Our platform includes built-in audit trails, data handling safeguards aligned with PIPEDA, and transparency features that make it easy to communicate your AI practices to clients and regulators.

LexIntake Editorial Team

Legal Technology Insights

The LexIntake Editorial Team publishes practical guidance for Ontario law firms navigating AI adoption, compliance, and growth.