Tips and Trends: Industry Advice and Developments

Legal Administration Meets Generative AI: A Roadmap for Success

Within the sphere of legal administration, generative artificial intelligence (AI) holds considerable allure, beckoning practitioners to consider its potential benefits.

Alex Smith

Start in the finance department. Could the deployment of generative AI help law firms accelerate the trend toward more flexible or creative billing models? 

What about the HR side of the house? Can generative AI play a role in onboarding activities and ongoing talent development initiatives, helping to make sure junior associates continually learn and grow?

What if the IT team could roll out a brand-new practice management system or document management system, then lean on a generative-AI-powered chatbot to provide product support, ease behavioral change and answer the questions new lawyers might have about the system?

These are heady possibilities. But while this newest flavor of AI holds the promise of transforming many different areas of “the business of law,” there are issues around training, security and risk mitigation that must be addressed before legal administrators can confidently embrace it. 


Legal administrators should begin their generative AI journey by evaluating their information architecture. Why? Because the large language models (LLMs) that underpin generative AI require extensive training on data to provide accurate answers and generate useful content.  

Picture our lawyers above querying a tech support chatbot and getting wildly inaccurate answers about how to use their new practice management system. Alternatively, imagine the financial analyst who is trying to come up with an ideal billing model for a new client by using a generative AI tool that was trained on matter and billing data from 10 or even 20 years ago.  

The bottom line is that generative AI’s outputs will be skewed in the wrong direction if improper data is used to train it. So, how can firms avoid these disasters-in-waiting and shore up their information architecture?


The first step is to identify the trusted data sets within the organization and where exactly that data lives. 

Organizations that already have a document management system (DMS) are one step ahead of the game, as they have already captured the context of matters and key matter-centric attributes like practice area or region. But the firm’s data will still require some additional attention to find “the good stuff.”

Fortunately, the administrative teams will know where the data is strong and where it is weaker. For example, a key business function may be run in spreadsheets stored in the DMS; identify them and tag them distinctly. Alternatively, find where the practice teams “stash” shared knowledge and best examples, and flag those matters as unique.


This filtering is key. Giving the LLM access to every file in the DMS can overwhelm the model with too much “noise” and not enough “signal.” A better approach is to train the LLM on a small subset of data within the DMS, such as the final approved versions of documents from a specific time range. An internal knowledge curation team charged with determining what “good content” looks like for any particular legal administration workflow is essential. Simply put, what you feed your LLM matters.
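To make the curation step concrete, here is a minimal sketch of the kind of filter a knowledge team might run over a DMS export before any training or grounding happens. The field names (`status`, `modified`) and the sample records are hypothetical; a real DMS would expose its own metadata.

```python
from datetime import date

# Hypothetical DMS export: each record carries the metadata a real DMS
# would provide (document status, practice area, last-modified date).
documents = [
    {"title": "Engagement Letter v7", "status": "final", "practice": "corporate", "modified": date(2023, 4, 2)},
    {"title": "Engagement Letter v3", "status": "draft", "practice": "corporate", "modified": date(2023, 1, 15)},
    {"title": "Old Fee Memo", "status": "final", "practice": "finance", "modified": date(2009, 6, 30)},
]

def curate(docs, cutoff=date(2018, 1, 1)):
    """Keep only final, recent documents -- the 'signal' -- for the model."""
    return [d for d in docs if d["status"] == "final" and d["modified"] >= cutoff]

curated = curate(documents)
print([d["title"] for d in curated])  # only the recent, approved version survives
```

The point is not the two-line filter itself but where the cutoff and status rules come from: the knowledge curation team decides them, and the pipeline enforces them.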

Likewise, firms will want to ground the outputs that a generative AI tool is able to provide. In the case of the IT professional looking to create chatbot support for the latest platform rollout, that might mean making sure that the tool is pulling answers from vetted content, like an official support portal rather than the entirety of the world wide web. For the finance professional, it might mean making sure that outputs are grounded in specific areas of the DMS, practice management system or billing system. 
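The grounding idea can be sketched as a tiny retrieval step: the chatbot answers only from a vetted corpus (here, two made-up support-portal passages) rather than the open web. Real deployments would use embedding-based search; simple word overlap is used purely to illustrate the restriction.

```python
# Hypothetical vetted corpus: the chatbot may only draw answers from here,
# e.g., the official support portal, never from the open web.
VETTED_CORPUS = {
    "create-matter": "To create a new matter, choose File > New Matter and assign a client number.",
    "time-entry": "Record time from the Timesheet tab; entries sync to billing overnight.",
}

def retrieve(question: str) -> str:
    """Return the vetted passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(
        VETTED_CORPUS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
    )

print(retrieve("How do I create a new matter?"))
```

Whatever the retrieval method, the design choice is the same: the model's answer is assembled from a curated source the firm controls, which is what keeps a support chatbot from inventing instructions.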



In the quest to feed generative AI models good content and deliver optimum outputs, legal administrators should ensure that they’re not accidentally stepping on any security or confidentiality landmines. After all, law firms traffic in highly privileged material.  

Some matters and files will be fully locked down and inaccessible, depending on how “open” or “closed” a security model the organization has in place. This raises the possibility that responses from generative AI will vary from user to user rather than being uniform.

For instance, several different HR professionals who all work at the same firm — and who are looking to find the best examples of work product to use as training materials in a talent development seminar — might get totally different results from generative AI, depending on what kind of access they have to the firm’s files. 

To avoid this kind of scenario, firms should consider adopting a slightly different security posture for the knowledge assets and best-practices content used to train the LLMs, so that the answers generative AI provides do not vary wildly from one user to the next.
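One common pattern here is permission-aware retrieval: filter candidate documents against the user's entitlements before anything reaches the model. The sketch below uses hypothetical group-based access lists; a real firm would pull these from its DMS or identity system.

```python
# Hypothetical documents with group-based access-control lists (ACLs).
DOCS = [
    {"id": "kb-001", "text": "Approved onboarding checklist", "groups": {"hr", "knowledge"}},
    {"id": "m-882", "text": "Locked-down client matter memo", "groups": {"matter-882-team"}},
]

def accessible(user_groups: set, docs=DOCS):
    """Return only the documents the user is entitled to see."""
    return [d for d in docs if d["groups"] & user_groups]

hr_view = accessible({"hr"})        # HR staff see the shared knowledge asset
assistant_view = accessible({"staff"})  # a user with no matching group sees nothing
print(len(hr_view), len(assistant_view))
```

Filtering this way makes any variability in answers a deliberate security decision rather than an accident, and a shared "knowledge" group for curated best-practices content gives every intended user the same grounding set.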

Remember, however, that some business data is highly confidential: billing, salary, bonuses and employee data (like working styles). Treat it as if it were client-confidential material, and expose only abstracted data that serves your AI needs.


No one should think of generative AI as a “plug and play” technology that doesn’t require any groundwork. But that doesn’t mean that generative AI isn’t worth exploring — it just needs to be done in a thoughtful and considered manner. If legal administrators address these key areas around information architecture, training, security and consistency, they’ll find themselves with a realistic and practical deployment roadmap to help ensure generative AI success.

If you want more on generative AI, look no further than our latest white paper. Generative Artificial Intelligence: Benefits and Risks to Law Firms provides a fundamental, nontechnical discussion of AI and generative AI concepts. It investigates the relationship between ethics and the rapidly (and sometimes unchecked) evolving world of generative AI. Finally, it identifies specific benefits and risks relevant to law firms that use generative AI in their activities, and it aims to improve and elevate the reader’s overall AI literacy.