A Shadow AI Crisis Is Brewing in the GC’s Office

Legal teams using unauthorized AI are gambling with sensitive information

by Camilo Artiga-Purcell
July 24, 2025
in Opinion, Risk

Forget BYOD; today’s corporate nightmare acronym could more accurately be called BYOLLM — bring your own large language model. From low-level associates to senior-level executives, the average worker is using generative AI and similar tools in ways their employers may not even be aware of. Even corporate legal teams are guilty of using unapproved AI tools, warns Camilo Artiga-Purcell, general counsel of Kiteworks, a compliance software provider.

Picture a senior associate at a Fortune 500 legal department, racing against a deadline. They copy confidential merger documents into their personal ChatGPT account, paste litigation strategies into a free AI tool they found online and upload trade secrets to a system with zero data controls. Their company’s IT department has no idea. Their general counsel doesn’t know. But somewhere, on servers in unknown jurisdictions, that sensitive data now exists outside any corporate control.

What I've described isn't hypothetical; events like this happen thousands of times daily across corporate America. New research reveals that legal departments, the very teams responsible for managing corporate risk, are running what amounts to the largest uncontrolled data experiment in business history.

A survey by Axiom Law of 300 in-house counsel at companies with revenues over $50 million highlights a notable gap between AI adoption and governance in legal departments. According to the findings, 83% of legal teams are using AI tools that were not provided by their company, and 81% acknowledge using tools that have not been formally approved. Additionally, 47% of respondents reported having no policies in place to guide AI use, and 46% of those without AI training are using these tools to draft legal contracts.

Importantly, the numbers involved suggest these aren’t junior employees casually testing new technologies; these are seasoned legal professionals at major corporations, entrusted with some of the most sensitive data imaginable — M&A strategies, litigation tactics, intellectual property and trade secrets — using tools that offer little to no control over where that data goes or how it’s used.

A decade ago, shadow IT, the practice of employees bringing personal devices and unauthorized software into the workplace, was one of infosec leaders' biggest worries; now shadow AI tops their list of concerns. But the difference between shadow IT and shadow AI is profound, and when legal teams are the ones using shadow AI, the risks are even higher.

When legal teams engage in shadow AI, they’re not simply risking operational data — they’re potentially exposing attorney-client privileged communications, merger and acquisition strategies worth billions, trade secrets and proprietary information, litigation strategies and settlement positions, regulatory compliance documentation and personal data subject to GDPR, CCPA and other privacy laws.

What can happen when legal AI goes wrong

Imagine if a legal team uploaded confidential merger documents to an AI tool for analysis. In this scenario, that data could potentially become part of the AI’s training set. What might happen months later? A competitor’s legal team, using the same AI service, could receive surprisingly specific insights about M&A strategies that mirror the confidential deal — learned from the original company’s data.

Let's explore another possibility: What if, during a regulatory investigation, authorities discovered a legal department had been analyzing and processing personal data through unapproved AI tools? This could violate GDPR requirements, which allow fines of up to €20 million or 4% of global annual turnover, whichever is higher. Beyond the financial impact, the reputational damage from a public enforcement action could follow the company for years.

Finally, consider a particularly concerning scenario: attorney-client privileged communications uploaded to a consumer AI tool could potentially lose their protected status. In subsequent litigation, opposing counsel might successfully argue that privilege was waived when the information was shared with a third-party AI service. Years of confidential communications could suddenly become discoverable, fundamentally altering the course of litigation.

These hypothetical situations illustrate why understanding AI tool policies and implementing proper safeguards is crucial for legal departments navigating the intersection of technology and confidentiality.

Why legal departments are particularly vulnerable to shadow AI

Legal departments face a perfect storm of risk factors that make shadow AI particularly dangerous. The extreme time pressure inherent in legal work, where deadlines are often non-negotiable, drives lawyers to use whatever tools can deliver results fastest. And unlike in other departments, every document legal handles could be privileged, confidential or material to the company's future.

The individual nature of legal work compounds the problem. Lawyers often work independently on matters, making it easier for shadow AI to proliferate without detection. While 99% of legal departments now use AI, according to the Axiom Law survey, only 16% report receiving adequate training. This technology adoption lag creates a dangerous gap between tool usage and understanding of risks.

A generational divide further complicates matters. Younger lawyers who grew up with consumer technology may be comfortable with AI tools but might not fully appreciate the risks to client confidentiality. They see efficiency gains without recognizing that each uploaded document could compromise years of careful confidentiality protection.

Building a governed AI framework for legal departments

The path forward requires immediate action combined with long-term strategic thinking. Legal departments must first conduct a shadow AI audit to understand the scope of ungoverned tool usage. This means surveying all legal staff about their AI tool usage, identifying every AI tool currently in use, assessing what types of data are being processed through each tool and documenting the risks and potential exposures.
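
To make that audit concrete, here is a minimal sketch, in Python, of how a department might record each survey response so that unapproved tools and sensitive-data exposure get flagged consistently; the field names, data categories and example entry are hypothetical illustrations, not drawn from the Axiom survey.

# Illustrative sketch only: a minimal inventory record for a shadow AI audit.
# All field names and example values are hypothetical.
from dataclasses import dataclass, field

SENSITIVE_DATA_TYPES = {"privileged", "m&a", "trade_secret", "personal_data"}

@dataclass
class AIToolUsage:
    tool_name: str               # e.g., a consumer chatbot or drafting assistant
    used_by: str                 # team or individual surveyed
    approved: bool               # formally sanctioned by IT / legal ops?
    data_types: set = field(default_factory=set)  # categories of data processed

    def risk_flags(self) -> list:
        """Return simple audit flags: unapproved use and sensitive-data exposure."""
        flags = []
        if not self.approved:
            flags.append("unapproved tool")
        exposed = self.data_types & SENSITIVE_DATA_TYPES
        if exposed:
            flags.append("sensitive data exposed: " + ", ".join(sorted(exposed)))
        return flags

# Example: one survey response documented for the audit report.
entry = AIToolUsage("personal chatbot account", "M&A team", approved=False,
                    data_types={"m&a", "privileged"})
print(entry.risk_flags())  # ['unapproved tool', 'sensitive data exposed: m&a, privileged']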

Emergency controls must follow discovery. This includes blocking access to unapproved consumer AI tools while providing approved alternatives with proper security. Clear policies with meaningful consequences for violations are essential, as are reporting mechanisms for AI-related incidents. But emergency measures alone won’t solve shadow AI risk.
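
As one illustration of what blocking unapproved tools while providing approved alternatives can look like, here is a hedged sketch of an allowlist-style egress check; the domain names are placeholders, and a real deployment would enforce this at the proxy, DNS or secure web gateway layer rather than in application code.

# Illustrative sketch only: an allowlist-based egress decision of the kind a
# proxy or secure web gateway might apply. Domains are placeholders, not real
# vendor endpoints.
APPROVED_AI_DOMAINS = {
    "ai.approved-enterprise-vendor.example",  # hypothetical sanctioned tool
    "llm.internal.example",                   # hypothetical internal deployment
}

def egress_decision(destination_host: str) -> str:
    """Allow traffic only to approved AI endpoints; block everything else."""
    if destination_host in APPROVED_AI_DOMAINS:
        return "ALLOW"
    # In practice the block event would also be logged and surfaced for
    # follow-up training or policy review, not silently dropped.
    return "BLOCK"

print(egress_decision("llm.internal.example"))      # ALLOW
print(egress_decision("free-ai-chat.example.com"))  # BLOCK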

Building sustainable AI governance requires comprehensive policy development that addresses the unique needs of legal work. These policies must explicitly address confidentiality, privilege and data security while providing practical guidelines for different types of legal documents. Regular updates ensure policies evolve with rapidly changing technology.

Training and education form the foundation of any governance program. Mandatory AI training for all legal staff, from senior partners to first-year associates, ensures everyone understands both the power and the perils of AI tools. Regular updates on emerging risks and best practices keep the organization current, while internal champions can guide colleagues through the practical application of policies.

The technology infrastructure itself requires careful attention. Investment in enterprise-grade AI tools with proper controls ensures that efficiency gains don’t come at the cost of security. Similarly, vendor management becomes critical when AI tools handle sensitive legal data. Thorough vetting of AI vendors for security and compliance, negotiating contracts that protect client data, requiring transparency about data usage and storage, and including audit rights and termination provisions all become non-negotiable requirements rather than nice-to-have features.

The time for action is now

Legal departments stand at a critical juncture. They can continue the current path of ungoverned AI use, gambling with client trust and regulatory compliance. Or they can take decisive action to implement proper controls, turning AI from a risk into a competitive advantage.

The legal profession has always been built on trust. In the age of AI, maintaining that trust requires more than good intentions; it demands robust governance, clear policies and secure technology. The firms and departments that recognize this reality and act on it will thrive. Those that don’t may find themselves as cautionary tales in future case law arising from AI governance failures.


Tags: Artificial Intelligence (AI)
Camilo Artiga-Purcell

Camilo Artiga-Purcell is general counsel at Kiteworks, where he leads legal strategy and governance initiatives for secure content communications and collaboration. With extensive experience in data privacy, cybersecurity and emerging technology law, he advises organizations on managing AI-related risks while maintaining competitive advantage.
