Operations Management
 

How to Overcome Lawyer Concerns About AI ‘Hallucinations’

As AI adoption is pushed from all sides, legal administrators must help attorneys understand what AI hallucinations are and how to spot and avoid them.
By Hilary Goldman
October 2025
 

Legal administrators seeking to identify and implement AI solutions in their law firms may still face obstacles from lawyers concerned about accuracy and ‘hallucinations,’ AI-generated outputs that aren’t rooted in relevant source material.

While the recent ABA Legal Technology Survey shows an increase in AI adoption by lawyers, the greatest barrier to entry, according to the survey, is the belief that AI is not fully accurate. Three-quarters of respondents cited concerns about AI-generated hallucinations as the reason they have been hesitant to implement it.

At the same time, it’s hard to reject AI outright because of these concerns. Your competition is likely using AI, and the technology is constantly evolving. Late adopters risk falling behind the lawyers who are putting AI to work right now.

So how can legal administrators support lawyers who want to use this powerful technology but aren’t confident it can be used ethically and effectively? Here are some steps you can take to educate and support your team so lawyers can use AI confidently.

Evaluate Your Solutions and Support Ongoing Education  

Legal teams don’t trust tools they don’t understand. Lawyers have an ethical responsibility to understand the benefits and risks of using the latest technology, including AI, as well as a duty to properly supervise the use of AI tools.   

High-profile examples of AI hallucinations making their way into court filings, with serious consequences for the lawyers involved, amplify feelings of mistrust and uncertainty. But legal administrators can help their teams understand how AI works, how hallucinations happen, how to spot them and how to avoid them. This starts by looking “under the hood” of solutions and educating lawyers about ways to verify outputs to prevent AI-generated inaccuracies from making their way into work product.

First, when evaluating AI solutions, legal administrators need to demand transparency from providers. When AI delivers an answer, your team needs to understand why it was generated and trace the result back to the source. Whether through linked citations, clear context or editable suggestions, AI should illuminate reasoning, not obscure it.  
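To make that traceability concrete, here is a minimal sketch in Python, with entirely hypothetical names, of the check that “linked citations” imply: every claim in an AI-generated answer should resolve to an actual passage in the matter’s source documents, and anything that doesn’t gets flagged for human review.

    from dataclasses import dataclass, field

    @dataclass
    class Citation:
        doc_id: str    # identifier of the cited source document
        passage: str   # the quoted or paraphrased support

    @dataclass
    class Claim:
        text: str
        citations: list[Citation] = field(default_factory=list)

    def flag_unsupported(claims: list[Claim],
                         sources: dict[str, str]) -> list[Claim]:
        """Return claims whose citations don't resolve to a real passage.

        A citation "resolves" only if the cited document exists and
        actually contains the cited passage -- a crude stand-in for
        the verification a reviewing lawyer performs.
        """
        return [
            c for c in claims
            if not any(cite.doc_id in sources
                       and cite.passage in sources[cite.doc_id]
                       for cite in c.citations)
        ]

    sources = {"dep-042": "The witness testified she left the office at 5 p.m."}
    claims = [
        Claim("The witness left at 5 p.m.",
              [Citation("dep-042", "left the office at 5 p.m.")]),
        Claim("The witness then drove home."),  # no citation: flag it
    ]
    flag_unsupported(claims, sources)  # returns only the uncited claim

A reviewing lawyer still reads every flagged claim; the point is that a transparent tool makes unsupported statements easy to find rather than leaving them buried in polished prose.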

Transparent, or “white box,” AI solutions enable users to examine, validate and even adjust the AI’s behavior to improve accuracy, fairness and ethical compliance. Other AI solutions operate like a sealed container: they generate predictions or results but keep the decision-making processes behind them hidden. This is called “black box” AI because it’s unclear how the model arrives at its outputs.

Second, educate your team on the capabilities, risks and limitations of AI. Selecting a transparent AI solution is a good first step in reducing the risk of inaccuracies, but ongoing education and training is critical for fostering informed and ethical use of AI technologies.

Consider incorporating these elements into your AI approach:  

  • Encourage the team to select continuing legal education (CLE) courses focused on the ethical use of AI.  
  • Hold short, demo sessions on tools with AI capabilities to highlight valuable use cases and discuss best practices for managing risk.  
  • Create an internal AI knowledge hub with usage guidelines, tips for prompt engineering and links to vendor usage guides that empower self-guided exploration.

Between firm training, CLE and vendor training, your practitioners can obtain different perspectives on the functionality and governance of AI.

Establish Boundaries for Data Usage  

Another way legal administrators can support their team’s AI implementation is to select the right tools. The first and most important step in this process is scouting and vetting AI tools with confidentiality in mind: Client confidentiality isn’t optional, it’s foundational. Any AI solution you bring into your firm must uphold your data protection standards without compromise.

Next, ensure that your AI solution does not share data between clients or matters. In addition, ensure that the solution doesn’t use client data to train its AI model. These steps protect not only client confidentiality but also accuracy. For example, if a member of your team asks AI to analyze a set of documents related to a case, you don’t want the AI to generate a response that references documents from another, unrelated case that you also uploaded to the solution.
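To illustrate what that isolation means in practice, here is a minimal sketch in Python, using hypothetical names rather than any vendor’s actual interface, of a matter-scoped document store: every document is filed under a matter ID, and a query can only ever search within the single matter it names.

    from collections import defaultdict

    class MatterScopedStore:
        """Toy document store that hard-partitions documents by matter.

        Illustrative only: real products typically enforce this with
        per-matter indexes or metadata filters, but the principle is
        the same -- a query for one matter never touches another
        matter's files.
        """

        def __init__(self):
            self._docs = defaultdict(list)  # matter_id -> documents

        def add(self, matter_id: str, text: str) -> None:
            self._docs[matter_id].append(text)

        def search(self, matter_id: str, term: str) -> list[str]:
            # Only the named matter's partition is consulted.
            return [d for d in self._docs[matter_id]
                    if term.lower() in d.lower()]

    store = MatterScopedStore()
    store.add("matter-A", "Settlement agreement draft, confidential.")
    store.add("matter-B", "Employment dispute intake notes.")
    store.search("matter-A", "settlement")   # finds the matter-A draft
    store.search("matter-A", "employment")   # [] -- matter-B is invisible

The same principle applies at any scale: whether the walls are separate indexes, encryption keys or metadata filters, the test is that a query scoped to one matter can never surface another matter’s documents.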

Whatever AI platform you implement should give you control over data access and usage, with clear boundaries and auditability. By taking these steps up front to ensure your law firm controls what data the AI uses and how, you give your lawyers a solid foundation to draw from when they communicate with clients and answer questions about AI usage. Clear client communication is critical to ensure transparency, including disclosing the role of AI in legal processes and obtaining informed client consent when necessary.
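To make “auditability” concrete as well, here is a minimal sketch, with illustrative rather than standard field names, of the record a well-governed AI platform should leave behind for every interaction: who asked what, against which matter, and which documents the system touched.

    import json
    from datetime import datetime, timezone

    def log_ai_query(audit_path: str, user: str, matter_id: str,
                     prompt: str, docs_accessed: list[str]) -> None:
        """Append one audit record per AI interaction (illustrative fields)."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "matter_id": matter_id,
            "prompt": prompt,
            "docs_accessed": docs_accessed,
        }
        # One JSON line per event; append-only so the history can't
        # be silently rewritten.
        with open(audit_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

A log like this lets the firm answer a client’s questions about AI usage with evidence rather than assurances.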

Remind Lawyers They Are in Control  

No matter what AI solutions you select and implement, ultimately, lawyers and legal professionals remain accountable for the final results and how they’re used. Legal administrators can help reduce the risk of overreliance or lack of oversight by scheduling AI check-ins to explore use cases and reinforce best practices when collaborating with AI.  

Smart AI design doesn’t just allow for lawyer involvement — it requires it. The best tools suggest, not decide. They highlight patterns, surface evidence and generate first drafts, but always leave room for legal professionals to review, validate and determine the best way to leverage outputs.

By thoroughly evaluating AI solutions for transparency, providing comprehensive education and enablement to legal teams, and establishing clear boundaries for data usage, legal administrators can build confidence in AI and allay concerns about hallucinations.

Regularly reinforcing the lawyer's indispensable role in reviewing and validating AI-generated content ensures that AI serves as a powerful tool to enhance, not replace, human expertise. Through these strategic steps, law firms can embrace AI's potential while upholding their commitments to accuracy, client confidentiality and professional responsibility.
