Forget BYOD; today’s corporate nightmare acronym might more accurately be BYOLLM: bring your own large language model. From junior associates to senior executives, workers are using generative AI and similar tools in ways their employers may not even be aware of. Even corporate legal teams are guilty of using unapproved AI tools, warns Camilo Artiga-Purcell, general counsel of Kiteworks, a compliance software provider.
Picture a senior associate at a Fortune 500 legal department, racing against a deadline. They copy confidential merger documents into their personal ChatGPT account, paste litigation strategies into a free AI tool they found online and upload trade secrets to a system with zero data controls. Their company’s IT department has no idea. Their general counsel doesn’t know. But somewhere, on servers in unknown jurisdictions, that sensitive data now exists outside any corporate control.
What I’ve described isn’t hypothetical; events like these are happening thousands of times daily across corporate America. New research reveals that legal departments, the very teams responsible for managing corporate risk, are running what amounts to the largest uncontrolled data experiment in business history.
A survey conducted by Axiom Law, involving 300 in-house counsel at companies with revenues over $50 million, highlights a notable gap between AI adoption and governance in legal departments. The findings show that 83% of legal teams are using AI tools that were not provided by their company and 81% acknowledge using tools that have not been formally approved. Additionally, 47% of respondents reported having no policies in place to guide AI use, and 46% of those without AI training are using these tools to draft legal contracts.
Importantly, the numbers involved suggest these aren’t junior employees casually testing new technologies; these are seasoned legal professionals at major corporations, entrusted with some of the most sensitive data imaginable — M&A strategies, litigation tactics, intellectual property and trade secrets — using tools that offer little to no control over where that data goes or how it’s used.
A decade ago, shadow IT, the practice of workers bringing personal devices and unauthorized software into the workplace, was one of infosec leaders’ biggest worries; now the concern is shadow AI. But the difference between shadow IT and shadow AI is profound, and when legal teams are the ones using shadow AI, the risks are even higher.
When legal teams engage in shadow AI, they’re not simply risking operational data. They’re potentially exposing attorney-client privileged communications; merger and acquisition strategies worth billions; trade secrets and proprietary information; litigation strategies and settlement positions; regulatory compliance documentation; and personal data subject to GDPR, CCPA and other privacy laws.
What can happen when legal AI goes wrong
Imagine if a legal team uploaded confidential merger documents to an AI tool for analysis. In this scenario, that data could potentially become part of the AI’s training set. What might happen months later? A competitor’s legal team, using the same AI service, could receive surprisingly specific insights about M&A strategies that mirror the confidential deal — learned from the original company’s data.
Let’s explore another possibility: What if, during a regulatory investigation, authorities discovered a legal department had been analyzing and processing personal data through unapproved AI tools? This could violate GDPR requirements, potentially resulting in penalties reaching into the millions. Beyond the financial impact, the reputational damage from a public enforcement action could follow the company for years.
Finally, consider a particularly concerning scenario: attorney-client privileged communications uploaded to a consumer AI tool could potentially lose their protected status. In subsequent litigation, opposing counsel might successfully argue that privilege was waived when the information was shared with a third-party AI service. Years of confidential communications could suddenly become discoverable, fundamentally altering the course of litigation.
These hypothetical situations illustrate why understanding AI tool policies and implementing proper safeguards is crucial for legal departments navigating the intersection of technology and confidentiality.
Why legal departments are particularly vulnerable to shadow AI
Legal departments face a perfect storm of risk factors that make shadow AI particularly dangerous. The extreme time pressure inherent in legal work, where deadlines are often non-negotiable, drives lawyers to use whatever tools can deliver results fastest. And unlike in other departments, every document legal handles could be privileged, confidential or material to the company’s future.
The individual nature of legal work compounds the problem. Lawyers often work independently on matters, making it easier for shadow AI to proliferate without detection. While 99% of legal departments now use AI, according to the Axiom Law survey, only 16% report receiving adequate training. This technology adoption lag creates a dangerous gap between tool usage and understanding of risks.
A generational divide further complicates matters. Younger lawyers who grew up with consumer technology may be comfortable with AI tools but might not fully appreciate the risks to client confidentiality. They see efficiency gains without recognizing that each uploaded document could compromise years of careful confidentiality protection.
Building a governed AI framework for legal departments
The path forward requires immediate action combined with long-term strategic thinking. Legal departments must first conduct a shadow AI audit to understand the scope of ungoverned tool usage. This means surveying all legal staff about their AI tool usage, identifying every AI tool currently in use, assessing what types of data are being processed through each tool and documenting the risks and potential exposures.
Emergency controls must follow discovery. This includes blocking access to unapproved consumer AI tools while providing approved alternatives with proper security. Clear policies with meaningful consequences for violations are essential, as are reporting mechanisms for AI-related incidents. But emergency measures alone won’t eliminate shadow AI risk.
Building sustainable AI governance requires comprehensive policy development that addresses the unique needs of legal work. These policies must explicitly address confidentiality, privilege and data security while providing practical guidelines for different types of legal documents. Regular updates ensure policies evolve with rapidly changing technology.
Training and education form the foundation of any governance program. Mandatory AI training for all legal staff, from senior partners to first-year associates, ensures everyone understands both the power and the perils of AI tools. Regular updates on emerging risks and best practices keep the organization current, while internal champions can guide colleagues through the practical application of policies.
The technology infrastructure itself requires careful attention. Investment in enterprise-grade AI tools with proper controls ensures that efficiency gains don’t come at the cost of security. Similarly, vendor management becomes critical when AI tools handle sensitive legal data. Thoroughly vetting AI vendors for security and compliance, negotiating contracts that protect client data, requiring transparency about data usage and storage, and including audit rights and termination provisions all become non-negotiable requirements rather than nice-to-have features.
The time for action is now
Legal departments stand at a critical juncture. They can continue the current path of ungoverned AI use, gambling with client trust and regulatory compliance. Or they can take decisive action to implement proper controls, turning AI from a risk into a competitive advantage.
The legal profession has always been built on trust. In the age of AI, maintaining that trust requires more than good intentions; it demands robust governance, clear policies and secure technology. The firms and departments that recognize this reality and act on it will thrive. Those that don’t may find themselves cast as cautionary tales in future case law arising from AI governance failures.