Corporate Compliance Insights

‘AI Everywhere’ Mandates Fail Without Credible Use Cases and Human Checkpoints

Secure AI adoption at scale is a leadership and change management challenge, not a purely technical one

by Molly Lebowitz and Anthony Prestia
March 2, 2026
in Compliance

Broad top-down mandates to use AI fail because they’re too vague to act on, while unmanaged employee experimentation can expose sensitive data to unauthorized parties. Molly Lebowitz and Anthony Prestia argue that successful AI adoption requires identifying bona fide use cases and establishing clear human checkpoints — and making it easier for employees to experiment safely rather than trying to shut down experimentation.

Generative AI has moved out of specialist teams and into everyday work, with adoption now spanning finance, marketing, product, operations and people teams. Employees encounter large language models not only through their personal ChatGPT or Claude accounts but also through AI features embedded in the business software they already rely on for email, collaboration and HR.

As usage spreads across the enterprise, urgency for quick results follows close behind. In many cases, AI platform adoption is happening without shared intent, clarity of ownership or alignment to real work.

Adoption is often driven from two directions. From the top, broad mandates tell people to “use AI” in hopes of driving value, whether that be reducing cost, driving efficiency or increasing output. From the bottom, employees experiment with personal LLM accounts and AI-powered features inside sanctioned tools. Each of these scenarios introduces new privacy and security risks, burdensome compliance reviews and employee concerns over what the adoption of AI will mean for their jobs. 

Both approaches can fail for the same reason: lack of intentional design.

Successful AI adoption depends less on the sophistication of the models than on the intentionality of the approach. Organizations need to be deliberate about where large language models create meaningful value today and align safeguards to the risk and impact of each use case. Equally critical is engaging employees in that effort by clearly explaining changes, providing approved tools, sharing concrete examples and listening to the people closest to the work.

When these elements are missing, adoption stalls or introduces risk without return. In practice, secure AI adoption at scale is a leadership and change management challenge, not a purely technical one.

Turn top-down AI mandates into tangible progress

A sweeping order to “use AI everywhere” often fails because it is too broad to act on and doesn’t leverage the technology strategically. Leaders need to focus on outcomes that bring the most business value, which raises a practical question: Which specific tasks can LLMs take on today, and which still require human judgment?

Generative models handle repetitive drafting and pattern-finding across large data sets fairly well. They can organize unstructured material into something workable, but they also hallucinate, producing confident output that is wrong. In most environments, they raise the floor more than the ceiling. In other words, they make the average output better, but they don’t make the best output brilliant. They’re useful for baseline efficiency, but they can’t replace expertise or judgment at the point of use.

Governance must reflect this reality, providing enough structure to manage risk without burying people in process or shutting down learning. With an intentional approach, leaders set expectations early, identify the few use cases that fit the current state and name the checkpoints that remain human — such as final risk classification, regulatory interpretation or decisions that affect customer eligibility. Those checkpoints do not move. 
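
The approach above can be made concrete in something as small as a use-case registry. A minimal sketch in Python, where the use-case names, risk tiers and checkpoint descriptions are hypothetical illustrations rather than anything prescribed by the authors:

```python
# Hypothetical use-case registry: each approved AI use case records its
# risk tier and the human checkpoint that does not move to the model.
from dataclasses import dataclass

@dataclass(frozen=True)
class UseCase:
    name: str
    risk_tier: str          # e.g. "low", "medium", "high"
    human_checkpoint: str   # the decision a person must still make

REGISTRY = [
    UseCase("meeting-notes-summary", "low", "owner reviews before sharing"),
    UseCase("contract-clause-triage", "medium", "counsel makes final risk classification"),
    UseCase("customer-eligibility-support", "high", "analyst makes the eligibility decision"),
]

def checkpoint_for(name: str) -> str:
    """Look up the non-negotiable human checkpoint for an approved use case."""
    for uc in REGISTRY:
        if uc.name == name:
            return uc.human_checkpoint
    raise KeyError(f"{name!r} is not an approved use case")
```

Even a list this simple forces the two decisions the mandate usually skips: which use cases fit the current state, and which checkpoints remain human.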

Change management work then carries those decisions into day-to-day behavior. Teams adjust workflows, receive targeted training and hear consistent messages about how and when to use these tools, so the guardrails show up in practice rather than only on paper.

The message to employees matters as much as the control. When leaders acknowledge limits and frame LLMs as aids to human judgment, employees engage rather than resist. The fundamentals have not changed: Human judgment should remain responsible for decisions and risk, with AI serving as an input rather than a decision-maker.

Mitigate risks associated with “shadow” AI adoption and outside platforms

Unmanaged use of AI — or shadow adoption — can expose sensitive data and lead to security incidents. The risk can be subtle: Drafting an email or announcement with the help of a personal chatbot account may save a few minutes on writing but can also reveal confidential information to an unauthorized third party. Similar risks can surface inside sanctioned tools, like when new, automatically enabled AI features are allowed to train on confidential information or route data outside the enterprise. 

In many of these situations, employees are not trying to circumvent policy; they assume that if a feature appears inside a trusted tool, someone has already vetted its use.

Education is the first control. Employees need a plain explanation of how LLM outputs are produced, where models tend to fail and which data stays off-limits. That kind of awareness turns the workforce into an early line of defense rather than something leaders need to contain.

Vendor discipline is a second control. A short list of approved providers under blanket privacy and security terms gives employees a safer channel for experimentation. Those terms can include a prohibition on model training with company data and clear rules for retention and logging. That step channels experimentation into defined lanes and weakens the pull of shadow tools. Examples like ChatGPT or Gemini can sit on the approved list as options, not as the only route.
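
A short approved-provider list can itself be written down as data. A minimal sketch, where the provider names, retention periods and terms are invented for illustration and not a real vendor assessment:

```python
# Hypothetical approved-provider list: each entry carries the contract
# terms that make it a safe channel for experimentation.
APPROVED_PROVIDERS = {
    "chatgpt-enterprise": {"trains_on_company_data": False, "retention_days": 30, "logging": True},
    "gemini-workspace":   {"trains_on_company_data": False, "retention_days": 30, "logging": True},
}

def is_safe_channel(provider: str) -> bool:
    """A provider is a safe channel only if it is on the list and its
    terms prohibit model training on company data."""
    terms = APPROVED_PROVIDERS.get(provider)
    return terms is not None and not terms["trains_on_company_data"]
```

Anything not on the list, including a personal chatbot account, simply fails the check, which is the point: the safe lane is explicit rather than implied.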

Material decisions still need human ownership. In many sectors, regulation already assumes that, and internal risk standards do as well. Be clear about where a model can help and where a human must make the call, particularly for decisions that meaningfully affect people, such as access to employment, benefits, healthcare, credit or other services. In these cases, generative tools may support analysis or drafting, but accountability for the outcome must remain with a person who can apply judgment, context and responsibility.
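
The same where-can-a-model-act question can be encoded as a simple gate. A hedged sketch, with domain names and role labels invented for illustration:

```python
# Hypothetical decision gate: for domains the article flags as material
# (employment, benefits, healthcare, credit), the model may only support
# analysis and drafting; a named person owns the outcome.
MATERIAL_DOMAINS = {"employment", "benefits", "healthcare", "credit"}

def ai_role_for(domain: str) -> str:
    """What the model is allowed to do in a given decision domain."""
    if domain in MATERIAL_DOMAINS:
        return "support-only: AI drafts and analyzes, a human makes the call"
    return "assist: AI output usable with routine review"
```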

The goal is to make it easier for people to experiment safely, not to shut experimentation down. When guardrails are clear, employees know how far they can go with a tool, when to stop and ask for help, and who has the final say. That keeps adoption moving without taking on risk the organization never agreed to.

Ensure that you’re getting value out of your AI deployment

Getting value out of an AI transformation starts with knowing what “better” looks like. Goals and metrics need definition before work scales; in many cases, the right measures already exist inside the business. When results show up in the same reports leaders already read, measurement becomes part of normal performance management, not a separate dashboard off to the side.
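
Measuring against metrics the business already tracks can be as plain as a before/after comparison. A minimal sketch with illustrative numbers, not real results:

```python
# Hypothetical before/after measurement using metrics leaders already
# read (drafting cycle time, review error rate). Values are invented.
def pct_change(before: float, after: float) -> float:
    """Percentage change; negative is an improvement for time/error metrics."""
    return round((after - before) / before * 100, 1)

baseline = {"draft_cycle_hours": 6.0, "review_error_rate": 0.08}
pilot    = {"draft_cycle_hours": 4.5, "review_error_rate": 0.08}

report = {metric: pct_change(baseline[metric], pilot[metric]) for metric in baseline}
# Drafting time falls while the error rate holds steady: efficiency
# gained without a quality trade-off, stated in existing report terms.
```

Because the figures land in the same reports leaders already read, the measurement needs no new dashboard to be believed.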

The people side decides whether results hold. Explain the “why,” make the risk-reward trade visible, and treat feedback from teams as input on whether the transformation is working. Create simple channels where teams share safe experiments and short examples of time saved, quality improved or friction removed. Over time, those stories and metrics build a culture that treats mistakes as information rather than failure. That kind of culture draws people in and makes changed behavior stick.

Lead AI adoption with intent and people at the center

Even as tools evolve, a company remains a collection of people trying to solve problems and do real work. Generative AI adds a powerful tool to that mix, but leadership isn’t off the hook for deciding where the organization is headed, which risks are acceptable and how people spend their time.

When leaders decide which use cases fit the current risk posture, define where a model should never act alone and bring people into the process, employees hear a clear story about what the organization is trying to achieve, how new tools change their work and why their judgment still matters. Simple, business-facing measures show whether the transformation is doing what it promised, instead of just shifting work from one part of the organization to another.

For compliance, risk and HR leaders, AI adoption is best understood as an acceleration of familiar responsibilities rather than a departure from them. The fundamentals remain the same: shaping behavior, setting boundaries and enabling the organization to move with confidence. 

What has changed is the pace and visibility of those decisions. Organizations that acknowledge this shift and learn through controlled experimentation are better positioned than those that hesitate or rely on blanket restrictions. Treating AI as an extension of existing governance and change practices, rather than a substitute for them, allows new capabilities to take hold without eroding trust or accountability.

Tags: Artificial Intelligence (AI), Corporate Culture
Molly Lebowitz and Anthony Prestia

Molly Lebowitz is the managing director of the tech industry at management consultancy Propeller. She has extensive experience helping technology organizations tackle large-scale, complex operational challenges and transformations. Her experience across software, hardware, media and online travel gives her the expertise and perspective to drive transformative results. She holds a bachelor's degree in engineering from Cornell University.
Anthony Prestia is vice president of privacy at TerraTrue, a data privacy provider.

© 2026 Corporate Compliance Insights
