Corporate Compliance Insights

Effective AI Policy Is Not a Crock-Pot; You Can’t Just Set It and Forget It

Step One: inventory and classify AI use cases by risk level

by Cory McNeley
March 24, 2026
in Opinion, Risk

As businesses continue to adopt AI tools, the well-meaning ones are also contemplating policies to govern usage. But many of these policies lack true enforceability, making them compliance theater rather than proper controls. Cory McNeley, a managing director in UHY’s technology innovation section, explores how companies can evolve their AI policies beyond just a press release.

In the immediate wake of ChatGPT’s rapid rise in popularity, organizations were fearful. As a result, they adopted grandiose, aspirational policies to try to control the sprawl of AI. The problem is that these policies were often vague and unenforceable, and rarely backed by real controls. Let’s be honest: A policy without an understanding of the risk surface and accompanying controls is not a policy. It is simply a false sense of compliance and security.

An AI policy must move beyond basic principles. It must clearly define the who, what, when and how of AI use. This includes basics like who is allowed to use AI, what AI tools they are allowed to use, what the acceptable business uses are and what principles must be followed to help ensure the protection of customers, personnel and shareholders. Reasonable boundaries must be set, implementation standards followed and internal controls implemented to prevent and detect unintended and harmful outcomes. We all thought acceptable-use policies for the internet were tricky. This is a whole other level. Without sound policy, AI becomes a liability instead of a tool.

Overly broad policies don’t work

So why do most AI policies fail? It is quite simple: They are overly broad. They contain loose language, such as “use AI responsibly.”

Your definition of responsible and mine could differ dramatically. Other policies instruct employees to “follow the law,” but regulation and case law lag the real world. For example, if AI is used in lending decisions, Fair Credit Reporting Act (FCRA) requirements may come into play. Without specificity, there is no real direction. A second common issue is policy ownership. Who owns the AI policy? IT? Compliance? Legal? Internal audit? Risk management? A lack of accountability creates control gaps. Without a clearly defined owner, the AI policy and the reality on the ground drift apart quickly.

Enforcement is frequently overlooked. If AI usage is not logged and documented, how do you know whether you are in compliance? How do you know there is not a shadow system where someone is using a personal subscription to AI tools? The reality is even if your organization believes it is not using AI, it likely is. Employees may have personal accounts, and most SaaS platforms — such as Salesforce and Microsoft — now have AI built directly into their systems.
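One way to start surfacing shadow AI is to compare egress logs against a watchlist of known AI tool domains. The sketch below is a minimal illustration of that idea; the domain list, tool names and `(user, host)` log format are hypothetical assumptions, not a real feed from any particular proxy or CASB product.

```python
# Sketch: flag possible shadow-AI traffic in proxy/DNS logs.
# The domain watchlist and log format are illustrative assumptions --
# substitute your own egress logs and a maintained tool list.

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def flag_shadow_ai(log_entries):
    """Each entry is (user, destination_host). Returns hits on AI domains."""
    hits = []
    for user, host in log_entries:
        for domain, tool in AI_TOOL_DOMAINS.items():
            # Match the domain itself or any subdomain of it
            if host == domain or host.endswith("." + domain):
                hits.append((user, tool, host))
    return hits

sample = [
    ("alice", "chat.openai.com"),
    ("bob", "intranet.example.com"),
]
print(flag_shadow_ai(sample))  # [('alice', 'ChatGPT', 'chat.openai.com')]
```

A real program would also reconcile hits against the approved-tool registry and route unexplained usage into the incident escalation process rather than just printing a list.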


True AI governance

If you want defensibility, you need governance and structure.

First, you need a strong governance framework. This should include an AI governance committee that encompasses compliance, legal, HR and other business stakeholders. However, even with a committee, there must be one clearly defined policy owner, someone accountable for maintaining and evolving the framework.

Second, you cannot write this policy and forget about it. AI technology is evolving rapidly. The policy must be reviewed regularly — quarterly, at a minimum — to ensure it reflects the current environment.

The policy must include an escalation process for AI-related incidents. It must also incorporate a data classification framework. On the one hand, there is inherent risk in analyzing financial documents with AI. On the other hand, using AI to develop a team-building activity is extremely low risk. Your policy should reflect those distinctions.
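The data classification framework can be expressed as a simple lookup that gates AI use by sensitivity. The class names and rules below are hypothetical examples for illustration, not a standard taxonomy; the key design choice is that unknown data defaults to prohibited.

```python
# Sketch of a data-classification gate for AI use.
# Class names and rules are hypothetical examples, not a standard.

CLASSIFICATION_RULES = {
    "public":       {"ai_allowed": True},   # e.g., team-building ideas
    "internal":     {"ai_allowed": True},   # non-confidential drafts
    "confidential": {"ai_allowed": False},  # e.g., financial documents
    "restricted":   {"ai_allowed": False},  # PII, PHI, trade secrets
}

def may_use_ai(data_class: str) -> bool:
    """Return whether data of this class may be sent to an AI tool."""
    rule = CLASSIFICATION_RULES.get(data_class)
    if rule is None:
        return False  # unknown or unclassified data defaults to prohibited
    return rule["ai_allowed"]
```

Defaulting to deny is the safer failure mode here: a misclassified document triggers a review rather than a disclosure.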

Approved and prohibited use cases must be clearly defined and publicly documented. Drafting non-confidential internal communications may be acceptable in your environment. Brainstorming and research summarization may also fall into approved categories. However, assisted coding, often referred to as “vibe coding,” may require some scrutiny. While it can significantly increase efficiency, individuals using it must understand the code being generated. It should be a time-saving mechanism, not blind code creation. If you do not understand what the code is doing, you cannot control the outcome or ensure its fitness for purpose.

Other important conversations to have include:

  • Clear prohibitions are equally important. Unless you are operating within a private, secure and compliant environment, confidential client data should never be uploaded into public AI systems. Sensitive information, including personally identifiable information (PII), protected health information (PHI), proprietary data and trade secrets, must be sanitized or excluded entirely.
  • Automated decisions that could negatively affect individuals, such as lending or hiring decisions, should never rely solely on AI. A human must remain in the loop. The same principle applies to financial advice, legal advice or automated client commitments.
  • A centralized registry of approved AI tools is essential. New tools should require formal approval. You must understand how data is stored, how it is transmitted, whether it is used to train models, who owns the data and how it can be deleted.
  • Data classification and privacy controls must also be incorporated to give your policy a solid foundation. Companies need to build in requirements to comply with the laws they are subject to, such as HIPAA and GDPR. Cross-border data transfers can pose hidden liability and should be evaluated carefully; make sure you know where your servers are located and what data is going there. Additionally, existing contractual obligations with clients may restrict how AI tools can be used. Have you updated and communicated your policies to those stakeholders?
  • Human review is critical. AI is not a solution that you can set and forget. It must be monitored for alignment and drift over time. All decisions, outputs or products produced from AI should be reviewed by staff who have been trained to ensure accuracy and completeness. Areas of high impact should require formal validation and documentation of processes.
  • Shadow AI must also be detected and addressed. Organizations should monitor network usage for unauthorized use, review new SaaS feature updates to help ensure continued alignment with policy and require employee acknowledgement in high-risk areas. Version control of tools, prompt review and retention standards can bolster defensibility, but an overall governance program holds it all together.
  • When evaluating risk, categorize use cases into tiers. Low-risk activities, such as internal brainstorming, require basic oversight. Moderate-risk activities, such as those involving marketing content, should require a manager’s review. High- and critical-risk activities, such as financial reporting or employment decisions, require compliance review, validation, testing and comprehensive audit trails.
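The registry and tier ideas above can be sketched together as a small data structure: each approved tool carries a risk tier, and the tier dictates the controls owed. The tool names, tier labels and control lists are illustrative assumptions only.

```python
# Minimal sketch of a centralized AI tool registry with risk tiers.
# Tool names, tier labels and required controls are illustrative.
from dataclasses import dataclass

TIER_CONTROLS = {
    "low":      ["basic oversight"],
    "moderate": ["manager review"],
    "high":     ["compliance review", "validation", "testing", "audit trail"],
}

@dataclass
class AITool:
    name: str
    use_case: str
    risk_tier: str        # "low" | "moderate" | "high"
    approved: bool = False

class Registry:
    def __init__(self):
        self._tools = {}

    def register(self, tool: AITool):
        # Reject entries that don't map to a defined control tier
        if tool.risk_tier not in TIER_CONTROLS:
            raise ValueError(f"unknown risk tier: {tool.risk_tier}")
        self._tools[tool.name] = tool

    def required_controls(self, name: str):
        return TIER_CONTROLS[self._tools[name].risk_tier]

    def is_approved(self, name: str) -> bool:
        tool = self._tools.get(name)
        return tool is not None and tool.approved
```

Usage is straightforward: register a tool once it clears formal approval, then ask the registry which controls apply. Anything not in the registry answers `is_approved` with `False`, which gives you the deny-by-default posture the policy needs.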

Organizations must ask themselves critical questions: Do we know where AI is being used? Can we reproduce a decision if challenged? Who is accountable for ensuring compliance? A practical roadmap begins with inventorying all technologies in use and classifying them by risk. Develop a policy supported by strong operational controls and cross-functional oversight. Train employees regularly. Monitor consistently.

An AI policy is not a press release. It is a control document. Organizations that prioritize operational safeguards will reduce exposure, protect confidential information, enable responsible innovation and strengthen their position in audits, regulatory reviews and client trust.

Tags: Artificial Intelligence (AI), Internal Controls

Cory McNeley

Cory McNeley is a managing director with UHY Consulting and leader of the technology innovation service line. Drawing on more than 20 years of experience, his expertise spans international operations, manufacturing, defense and aerospace, retail, government and service sectors.

© 2026 Corporate Compliance Insights
