Understanding the risks of using AI: And how an AI policy can help protect law firms

Artificial intelligence (AI) offers incredible potential for the legal sector. So much so that it’s already becoming part of daily practice. But as adoption grows, so too do the risks. Without clear oversight, AI can expose law firms to serious legal, ethical and regulatory issues. As such, an AI policy isn’t just a box-ticking exercise – it’s a critical safeguard for your firm, your clients, and your reputation. Here’s why.

Errors, hallucinations and inaccuracies

One of the biggest risks of using AI is that it can produce convincing but completely false information – commonly called “hallucinations.” Even ChatGPT states: “While I strive to provide accurate and helpful information, I cannot guarantee the accuracy or reliability of my responses.”

 Inaccurate or invented legal references could lead to:

  • Clients paying for flawed legal work that doesn’t achieve its intended purpose
  • Firms inadvertently misleading the court or regulators
  • Breaches of professional conduct rules and potential reputational damage.

This isn’t just theoretical. In 2023, two New York lawyers blamed AI for tricking them into including fictitious legal research in a court filing. The lawyers had used ChatGPT to assist with their legal research but failed to verify the accuracy of the output. ChatGPT had fabricated entirely non-existent cases that appeared legitimate, complete with invented judges and docket numbers. The court found the lawyers acted in bad faith by not verifying their sources before submission.

Closer to home, in May 2023, a litigant in person unknowingly submitted fictitious case law in a Manchester civil court after relying on ChatGPT to find supportive authorities. The AI had generated false citations, which the court recognised as such.

A recent article notes a troubling trend: as AI becomes more popular, especially among time-pressured or resource-strapped legal teams, some users rely too heavily on these tools without validating the results. The article warns that, while AI tools can support legal research and drafting, human oversight and ethical responsibility remain critical.

An AI policy ensures that all outputs are verified and supervised – and never relied upon without human review.

Confidentiality, privacy and data protection

AI systems process vast volumes of data – often sensitive, confidential, and subject to legal privilege. If not managed carefully, this creates significant risks around data protection, client confidentiality, and professional conduct.

The Information Commissioner’s Office (ICO) has made it clear that individuals’ data protection rights continue to apply when personal data is used in the development, training, or deployment of AI systems. Under the UK GDPR, this means firms remain fully accountable for how data is collected, stored, and processed, even if it’s handled by a third-party AI provider.

Key risks include:

  • Client data being input into public AI tools (e.g. ChatGPT): A well-meaning fee earner might copy and paste a client query into a generative AI tool to speed up a response, without realising that public tools often retain inputs for system training.
  • Vendor vulnerabilities and system bugs: Even if your firm uses AI responsibly, you could still be exposed through system failures beyond your control. For example, in 2023, a bug in ChatGPT allowed users to see the personal and billing information of other users.
  • Training data and unintended reuse: Some AI tools are trained on real-world data, and if that data includes client matter details, there’s a risk the model may “remember” and reproduce sensitive phrases or names in future outputs. This could have serious consequences if details from one client matter appear in an unrelated case or document.
  • Legal privilege and compliance with professional standards: AI tools that are not carefully governed may compromise privileged communications or result in breaches of the SRA Principles, particularly around confidentiality, integrity, and supervision.

An AI policy helps establish rules on acceptable use, secure systems, and how to protect sensitive information.

Intellectual property and plagiarism

AI systems often generate content by remixing existing material, raising complex questions about intellectual property (IP) rights. The UK Supreme Court has confirmed that AI systems cannot own or be assigned patent rights. However, the broader legal landscape is still evolving, particularly around:

  • Who owns AI-generated content
  • What constitutes infringement
  • The risk of unintended plagiarism.

Firms using AI to draft legal documents or marketing content need clear guidelines to avoid crossing IP boundaries. An AI policy helps mitigate this risk through appropriate training, usage protocols, and audit trails.

Transparency and explainability

Unlike human decision-making, AI systems can often provide detailed records of how decisions are reached – if configured correctly. This offers opportunities for greater transparency in legal reasoning, particularly when using AI to analyse case law or assist in litigation strategy.

However, many AI models still operate as “black boxes,” meaning their internal logic is not visible or auditable. Without clear governance, this lack of explainability can:

  • Undermine client trust
  • Breach regulatory obligations
  • Make it difficult to justify decisions in court or to regulators.

A good AI policy ensures your firm only uses tools with appropriate transparency controls, clearly defining when and how human oversight is required.

Deepfakes and cybercrime

AI is also powering a darker trend: the creation of deepfakes – hyper-realistic digital content that impersonates individuals in video, image, or audio formats. For example, in early 2024, criminals used deepfake technology to impersonate a company CFO in a video call, successfully scamming the business out of $25 million.

For law firms, the risk is twofold:

  1. Cybersecurity breaches via social engineering attacks: Sophisticated deepfakes could allow bad actors to impersonate senior partners, IT staff, or even clients, tricking employees into sharing credentials, authorising fund transfers, or disclosing sensitive case information.
  2. Evidence tampering and impersonation: The use of false or manipulated media as evidence or communication presents a new type of risk. For firms handling criminal, regulatory, or commercial litigation, this raises serious questions about the chain of custody, evidential reliability, and due diligence. If not identified early, deepfake content could compromise a case or a client’s defence.

A robust AI policy helps prepare your firm for this next threat frontier by setting clear controls around identity verification and cyber protocols.

Rapid adoption of AI in the legal sector

AI is not a future concept; it’s already being adopted across the profession. Indeed, a LexisNexis UK survey (2024) of over 1,200 legal professionals found:

  • 26% are using generative AI tools at least once a month (up from 11% in July 2023)
  • Large firms are the most likely to adopt AI
  • 35% plan to adopt AI tools soon (up from 28%)
  • In-house lawyers are leading the charge with 42% revealing AI adoption plans
  • Only 39% now say they have no plans to adopt AI, down from 61% in 2023.

The direction of travel is clear – AI is becoming part of standard legal practice. The question is not whether to use it, but how to use it safely. The SRA is keenly aware of the implications of AI and has produced updated guidance to help address the unique challenges posed by the technology.

 Why your law firm needs an AI policy

With increasing use and increasing risk, an AI policy is no longer optional – it’s essential. A tailored policy will help your firm:

  • Use AI tools safely, ethically, and in line with SRA, CLC, CILEx and ICO guidance
  • Protect confidential data, legal privilege, and client trust
  • Ensure all AI outputs are subject to proper human oversight
  • Avoid regulatory breaches, reputational damage, or litigation
  • Provide clear internal guidance to staff using AI tools.

How Legal Eye can help

Our AI Policy for Law Firms is designed by compliance professionals, for legal professionals. We help you set boundaries, manage risks, and safely unlock the benefits of AI. Our AI Policy:

  • Is aligned with SRA principles and data protection law
  • Is customisable to your firm’s size, tools, and risk appetite
  • Includes templates, training, and implementation support.

Let us help you stay ahead of the curve while staying compliant. Speak to one of our consultants about our AI Policy for Law Firms today. Email: [email protected] or call: 020 3051 2049
