Start in the finance department. Could the deployment of generative AI help law firms accelerate the trend toward more flexible or creative billing models?
What about the HR side of the house? Can generative AI play a role in onboarding activities and ongoing talent development initiatives, helping to make sure junior associates continually learn and develop?
What if the IT team could roll out a brand-new practice management system or document management system and then lean on a generative-AI-powered chatbot to help provide product and behavioral change support and answer questions that new lawyers might have about the system?
These are heady possibilities. But while this newest flavor of AI holds the promise of transforming many different areas of “the business of law,” there are issues around training, security and risk mitigation that must be addressed before legal administrators can confidently embrace it.
INFORMATION ARCHITECTURE BEFORE ARTIFICIAL INTELLIGENCE
Legal administrators should begin their generative AI journey by evaluating their information architecture. Why? Because the large language models (LLMs) that underpin generative AI require extensive training on data to provide accurate answers and generate useful content.
Picture our lawyers above querying a tech support chatbot and getting wildly inaccurate answers about how to use their new practice management system. Alternatively, imagine the financial analyst who is trying to come up with an ideal billing model for a new client by using a generative AI tool that was trained on matter and billing data from 10 or even 20 years ago.
The bottom line is that the outputs provided by generative AI will be skewed in the wrong direction if improper data is used to train it. So, how best to get around these types of disasters-in-waiting and shore up the information architecture?
FEEDING TIME
The first step is to identify the trusted data sets within the organization and where exactly that data lives.
Organizations that already have a document management system (DMS) are one step ahead of the game, as their content is already organized around matters and tagged with key matter-centric attributes like practice area or region. Even so, the firm’s data will require some additional attention to find “the good stuff.”
Fortunately, administrative teams will know where the data is strong and where it is weaker. For example, a key business function may be run out of spreadsheets stored in the DMS; identify those files and tag them distinctly. Alternatively, find where the practice teams “stash” their shared knowledge and best examples, and flag those matters the same way.
“No one should think of generative AI as a ‘plug and play’ technology that doesn’t require any groundwork. But that doesn’t mean that generative AI isn’t worth exploring — it just needs to be done in a thoughtful and considered manner.”
This filtering is key. Providing the LLM with access to all the files within the DMS can overwhelm the model with too much “noise” and not enough “signal.” A better approach is to train the LLM on a small subset of data within the DMS, like the final approved versions of documents from within a specific time range. An internal knowledge curation team charged with determining what “good content” looks like for any given legal administration workflow is essential. Simply put, what you feed your LLM matters.
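For technically minded readers, the curation rule described above (keep only final, recent documents) can be sketched in a few lines. This is purely illustrative: the field names (“status,” “modified”) and the cutoff date are hypothetical, and a real DMS export would look different.

```python
from datetime import date

# Hypothetical document metadata, as might be exported from a DMS.
# Field names and values are illustrative only.
documents = [
    {"name": "engagement_letter_v7.docx", "status": "final", "modified": date(2023, 4, 12)},
    {"name": "engagement_letter_v3.docx", "status": "draft", "modified": date(2023, 2, 1)},
    {"name": "billing_model_memo.docx",   "status": "final", "modified": date(2012, 6, 30)},
]

CUTOFF = date(2020, 1, 1)  # assumed cutoff: exclude stale content from the training set

def curated_subset(docs):
    """Keep only final, recently modified documents -- the 'signal', not the 'noise'."""
    return [d for d in docs if d["status"] == "final" and d["modified"] >= CUTOFF]

print([d["name"] for d in curated_subset(documents)])  # only the recent final version survives
```

The point is not the code itself but the discipline it represents: the curation team decides the rules, and only documents that pass those rules ever reach the model.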
Likewise, firms will want to ground the outputs that a generative AI tool is able to provide. In the case of the IT professional looking to create chatbot support for the latest platform rollout, that might mean making sure the tool is pulling answers from vetted content, like an official support portal, rather than the entirety of the open web. For the finance professional, it might mean making sure that outputs are grounded in specific areas of the DMS, practice management system or billing system.
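That grounding step can also be sketched simply: before any retrieved passage is handed to the model, check it against an allowlist of vetted sources. Again, this is a minimal sketch under assumed names; “support-portal” and the other source labels are hypothetical stand-ins for a firm’s approved repositories.

```python
# Hypothetical allowlist of vetted content sources.
VETTED_SOURCES = {"support-portal", "billing-system", "dms-final"}

def grounded_passages(retrieved):
    """Drop any retrieved passage whose source is not vetted,
    so the model can only draw on approved content."""
    return [p for p in retrieved if p["source"] in VETTED_SOURCES]

retrieved = [
    {"source": "support-portal", "text": "To open a matter, choose File > New Matter."},
    {"source": "open-web",       "text": "Unverified forum advice about the platform."},
]

print(grounded_passages(retrieved))  # only the support-portal passage remains
```

The design choice here is deliberate: the filter sits outside the model, so administrators, not the AI vendor, control what counts as a trusted source.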