Innovations: Fresh Thoughts for Managing
 

AI on Trial: Cybersecurity Risks for Law Firms Are Only Increasing

How AI tools are powering the next wave of cybercrime and what legal teams must do now.
By Matt Durrin
September 2025
 

For today’s law firms, artificial intelligence (AI) is both a transformational tool and a growing cyber threat. Generative AI tools have evolved into powerful engines for both business innovation and criminal exploitation. While law firms and their clients are experimenting with AI to streamline research, drafting and administrative tasks, attackers are using similar tools to launch faster, more convincing and more damaging cyberattacks.

For legal administrators, the stakes couldn’t be higher. Client data, whether tied to confidential business deals, intellectual property or deeply personal matters, is a digital goldmine for cybercriminals. And now, with AI giving attackers unprecedented speed, precision and scale, the threat landscape is shifting faster than ever. Law firms aren’t just defending files; they’re safeguarding client trust, reputation and, in some cases, the integrity of justice itself. 

The Dark Side of Generative AI 

At LMG Security, we’ve conducted hands-on research into AI-assisted hacking. Underground tools like WormGPT and FraudGPT, which we like to call “Evil AI” because they are designed for cybercrime, have emerged on dark web marketplaces. These programs provide attackers with everything from pre-packaged phishing campaigns to automated code that can exploit vulnerabilities in popular software. 

What’s most concerning is how quickly AI lowers the barrier to entry for cybercrime. A rookie hacker no longer needs advanced coding skills to craft ransomware or write malware; they can simply prompt an “Evil AI” system to do it for them. In testing, we’ve seen how attackers can generate tailored phishing messages that mimic a client’s style of communication — complete with accurate grammar, tone and formatting. Combined with deepfake audio or video impersonations of executives, these attacks are nearly indistinguishable from real communications. 

Why Law Firms Are Prime Targets 

Law firms sit at the crossroads of some of the world’s most sensitive information. A single breach can unravel mergers and acquisitions, expose intellectual property or compromise privileged communications, and this data is worth exponentially more than stolen credit cards on the black market. The danger is magnified in today’s always-connected environment, where attorneys and staff rely on laptops, mobile devices and an expanding ecosystem of third-party vendors. Every vendor and every endpoint becomes a potential entry point.

With AI enabling attackers to probe and exploit weaknesses at lightning speed, legal administrators simply can’t afford “good enough” defenses. They must rigorously re-examine their safeguards before adversaries do it for them. 

Key Risks in the Age of AI 

What are today’s top risks?  

  1. Executive Impersonation: AI-generated voice clones and deepfakes make it easier for attackers to impersonate partners, clients or regulators. What sounds like a legitimate voicemail from a client requesting immediate document access could be from a criminal. 
  2. Accelerated Exploit Development: Attackers are using AI to scan code, identify vulnerabilities and write working exploits in record time. This means the window between when a software vulnerability is discovered and when it is exploited has shrunk from days to hours. 
  3. AI-Powered Social Engineering: Generative AI enables attackers to craft hyper-personalized phishing messages. They can scrape public filings, LinkedIn updates or even past email threads to create convincing communications. 
  4. Third-Party Risks: Your security is only as strong as your weakest vendor — or their vendors. If a partner is using AI tools without strong safeguards or entering sensitive information into online AI programs, your data could be exposed.

What Legal Administrators Must Do Now 

AI-driven threats require AI-aware defenses. Here are steps your firm can take today: 

  • Raise Awareness Firmwide: Train attorneys and staff to recognize deepfakes, voice-cloned calls and AI-powered phishing. Use simulations to demonstrate how convincing these attacks can be. 
  • Strengthen Vendor Due Diligence: Implement a vendor risk management program to evaluate and monitor vendor security. Ask vendors how they’re securing their own AI tools and data pipelines, and update your firm’s contracts with third-party providers to include specific requirements for proactive security measures, breach notification timelines and AI usage disclosures.  
  • Accelerate Patch Management: Because AI can generate exploits so quickly, firms must shrink their patching timelines. Don't wait weeks to update vulnerable systems. Establish a rapid patching protocol. 
  • Update Incident Response Plans: Ensure your protocols account for AI-generated content. For example, your response team should know how to handle a situation where a deepfake audio clip appears online. 
  • Adopt Strong Authentication: Multi-factor authentication (MFA) remains one of the most effective defenses, but attackers are now targeting MFA codes directly. Consider phishing-resistant MFA solutions like hardware tokens or passkeys. 

Preparing for What’s Next 

Generative AI will continue to evolve, becoming more powerful, accessible and integrated into daily business operations. For law firms, this presents both opportunities and serious risks. Legal administrators have a unique responsibility: protecting not only the firm’s data but also the sensitive information entrusted to them by clients. By understanding how AI tools are fueling a new wave of cybercrime, you can take proactive steps to safeguard your organization.
