While the recent ABA Legal Technology Survey shows an increase in AI adoption by lawyers, the greatest barrier to entry, according to the survey, is doubt about AI's accuracy: three-quarters of respondents said concerns about AI-generated hallucinations have made them hesitant to implement the technology.
At the same time, these concerns don't justify rejecting AI outright. Your competition is likely already using AI, and the technology is evolving quickly; late adopters risk falling behind the lawyers who are putting it to work right now.
So how can legal administrators support lawyers who want to use this powerful technology but aren't confident it can be used ethically and effectively? Here are some steps you can take to educate your team and help it use AI confidently.
Evaluate Your Solutions and Support Ongoing Education
Legal teams don’t trust tools they don’t understand. Lawyers have an ethical responsibility to understand the benefits and risks of using the latest technology, including AI, as well as a duty to properly supervise the use of AI tools.
High-profile examples of AI hallucinations making their way into court filings, with serious consequences, amplify that mistrust and uncertainty. But legal administrators can help their teams understand how AI works, how hallucinations happen, how to spot them and how to avoid them. This starts by looking "under the hood" of solutions and educating lawyers about ways to verify outputs so that AI-generated inaccuracies never make their way into work product.
First, when evaluating AI solutions, legal administrators need to demand transparency from providers. When AI delivers an answer, your team needs to understand why it was generated and trace the result back to the source. Whether through linked citations, clear context or editable suggestions, AI should illuminate reasoning, not obscure it.
Transparent, or "white box," AI solutions enable users to examine, validate and even adjust the AI's behavior to improve accuracy, fairness and ethical compliance. "Black box" AI solutions, by contrast, operate like a sealed container: they generate predictions or results without revealing how the model arrived at them, so users have no way to trace or verify the reasoning behind an output.

Second, educate your team on the capabilities, risks and limitations of AI. Selecting a transparent AI solution is a good first step in reducing the risk of inaccuracies, but ongoing education and training are critical for fostering informed and ethical use of AI technologies.
Consider incorporating these elements into your AI approach:
- Encourage the team to select continuing legal education (CLE) courses focused on the ethical use of AI.
- Hold short demo sessions on tools with AI capabilities to highlight valuable use cases and discuss best practices for managing risk.
- Create an internal AI knowledge hub with usage guidelines, tips for prompt engineering and links to solution guides that empower self-guided exploration.
Between firm training, CLE and vendor training, your practitioners can gain multiple perspectives on the functionality and governance of AI.