As businesses continue to adopt AI tools, the well-meaning ones are also contemplating policies to govern usage. But many of the policies lack any true enforceability, making them more compliance theater than proper controls. Cory McNeley, a managing director in UHY’s technology innovation section, explores how companies can evolve their AI policies beyond just a press release.
In the immediate wake of ChatGPT’s rapid rise in popularity, organizations were fearful. As a result, they adopted grandiose, aspirational policies to try to control the sprawl of AI. The problem is that these policies were often vague and unenforceable, with no real controls behind them. Let’s be honest: A policy without an understanding of the risk surface and accompanying controls is not a policy. It is simply a false sense of compliance and security.
An AI policy must move beyond basic principles. It must clearly define the governance of who, what, when and how AI is used. This includes basics like who is allowed to use AI, which AI tools they may use, what the acceptable business uses are and what principles must be followed to help ensure the protection of customers, personnel and shareholders. Reasonable boundaries must be set, implementation standards followed and internal controls implemented to prevent and detect unintended and harmful outcomes. We all thought acceptable-use policies for the internet were tricky; this is a whole other level. Without sound policy, AI becomes a liability instead of a tool.
Overly broad policies don’t work
So why do most AI policies fail? It is quite simple. They are overly broad. They contain loose language, such as “use AI responsibly.”
Your definition of responsible and mine could differ dramatically. Other policies instruct employees to “follow the law,” but regulation and case law lag the real world. For example, if AI is used in lending decisions, Fair Credit Reporting Act (FCRA) requirements may come into play. Without specificity, there is no real direction. A second common issue is policy ownership. Who owns the AI policy? IT? Compliance? Legal? Internal audit? Risk management? A lack of accountability creates control gaps. Without a clearly defined owner, the AI policy and the reality on the ground drift apart quickly.
Enforcement is frequently overlooked. If AI usage is not logged and documented, how do you know whether you are in compliance? How do you know there is not a shadow system where someone is using a personal subscription to AI tools? The reality is that even if your organization believes it is not using AI, it likely is. Employees may have personal accounts, and most SaaS platforms, such as Salesforce and Microsoft, now have AI built directly into their systems.
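That question becomes answerable only if usage leaves a trace. One lightweight starting point is to compare egress or proxy logs against a watch list of known AI endpoints. The sketch below is an illustration, not a turnkey control: the file name ("proxy_log.csv"), the "user" and "host" column names and the domain list are assumptions to be adapted to your own gateway's export format and kept current.

```python
# Sketch: flag outbound requests to known AI endpoints in a proxy-log export.
# The domain list, file name and column names are hypothetical; adapt them to
# your own gateway's export format and keep the vendor list current.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "gemini.google.com", "copilot.microsoft.com",
}

def find_ai_traffic(log_path: str) -> Counter:
    """Count requests per (user, host) for hosts on the watch list."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'host' columns
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_ai_traffic("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Even a crude report like this turns "we think nobody is using AI" into a checkable claim.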
True AI governance
If you want defensibility, you need governance and structure.
First, you need a strong governance framework. This should include an AI governance committee that encompasses compliance, legal, HR and other business stakeholders. However, even with a committee, there must be one clearly defined policy owner, someone accountable for maintaining and evolving the framework.
Second, you cannot write this policy and forget about it. AI technology is evolving rapidly. The policy must be reviewed regularly — quarterly, at a minimum — to ensure it reflects the current environment.
The policy must include an escalation process for AI-related incidents. It must also incorporate a data classification framework. On the one hand, there is inherent risk in analyzing financial documents with AI. On the other hand, using AI to develop a team-building activity is extremely low risk. Your policy should reflect those distinctions.
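A classification framework only works if it is concrete enough to check a given use against. As a minimal sketch, using illustrative labels and environment names rather than any standard taxonomy, the distinction might look like this:

```python
# Sketch: a data classification framework expressed as a simple policy table.
# Classification labels and environment names are illustrative assumptions;
# map them to your organization's own taxonomy.
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1        # e.g., published marketing copy
    INTERNAL = 2      # e.g., team-building activity ideas
    CONFIDENTIAL = 3  # e.g., financial documents
    RESTRICTED = 4    # e.g., PII, PHI, trade secrets

# Highest classification each environment is approved to handle.
MAX_CLASS_BY_ENVIRONMENT = {
    "public_ai_tool": DataClass.INTERNAL,
    "private_approved_ai": DataClass.CONFIDENTIAL,
}

def is_permitted(data: DataClass, environment: str) -> bool:
    """Allow use only if the environment is registered and rated for the data."""
    ceiling = MAX_CLASS_BY_ENVIRONMENT.get(environment)
    return ceiling is not None and data.value <= ceiling.value

# Financial documents may not go to a public AI tool...
assert not is_permitted(DataClass.CONFIDENTIAL, "public_ai_tool")
# ...but a team-building prompt may.
assert is_permitted(DataClass.INTERNAL, "public_ai_tool")
```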
Approved and prohibited use cases must be clearly defined and publicly documented. Drafting non-confidential internal communications may be acceptable in your environment. Brainstorming and research summarization may also fall into approved categories. However, assisted coding, often referred to as “vibe coding,” may require more scrutiny. While it can significantly increase efficiency, individuals using it must understand the code being generated. It should be a time-saving mechanism, not blind code creation. If you do not understand what the code is doing, you cannot control the outcome or ensure its fitness for purpose.
Other important conversations to have include:
- Clear prohibitions are equally important. Unless you are operating within a private, secure and compliant environment, confidential client data should never be uploaded into public AI systems. Sensitive information, including personally identifiable information (PII), protected health information (PHI), proprietary data and trade secrets, must be sanitized or excluded entirely.
- Automated decisions that could negatively affect individuals, such as lending or hiring decisions, should never rely solely on AI. A human must remain in the loop. The same principle applies to financial advice, legal advice or automated client commitments.
- A centralized registry of approved AI tools is essential. New tools should require formal approval. You must understand how data is stored, how it is transmitted, whether it is used to train models, who owns the data and how it can be deleted. (A simple registry sketch follows this list.)
- Data classification and privacy controls must also be incorporated to give your policy a solid foundation. Companies need to build in requirements to comply with the laws they are subject to, such as HIPAA and GDPR. Cross-border data transfers can pose hidden liability and should be evaluated carefully; make sure you know where your servers are located and what data is going there. Additionally, existing contractual obligations with clients may restrict how AI tools can be used. Have you updated and communicated your policies with those stakeholders?
- Human review is critical. AI is not a solution that you can set and forget. It must be monitored for alignment and drift over time. All decisions, outputs or products generated by AI should be reviewed by staff trained to ensure accuracy and completeness. High-impact areas should require formal validation and documented processes.
- Shadow AI must also be detected and addressed. Organizations should monitor network usage for unauthorized use, review new SaaS feature updates to help ensure continued alignment with policy and require employee acknowledgement in high-risk areas. Version control of tools, prompt review and retention standards can bolster defensibility, but an overall governance program holds it all together.
- When evaluating risk, categorize use cases into tiers. Low-risk activities, such as internal brainstorming, require only basic oversight. Moderate-risk activities, such as those involving marketing content, may need just a manager’s review. High- and critical-risk activities, such as financial reporting or employment decisions, require compliance review, validation, testing and comprehensive audit trails.
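Several of these items, the approved-tool registry and the risk tiers in particular, can be captured in one structured inventory. Below is a minimal sketch; the tool names, owners, fields and tier labels are illustrative assumptions, not a prescribed schema.

```python
# Sketch: a centralized registry of approved AI tools, tying each tool to a
# risk tier and the oversight that tier requires. Entries are hypothetical.
from dataclasses import dataclass, field

OVERSIGHT_BY_TIER = {
    "low": ["basic oversight"],
    "moderate": ["manager review"],
    "high": ["compliance review", "validation and testing", "audit trail"],
    "critical": ["compliance review", "validation and testing",
                 "audit trail", "formal documented sign-off"],
}

@dataclass
class ApprovedTool:
    name: str
    owner: str                # accountable business owner
    risk_tier: str            # one of OVERSIGHT_BY_TIER's keys
    trains_on_our_data: bool  # confirmed with the vendor during approval
    data_locations: list[str] = field(default_factory=list)

    def required_oversight(self) -> list[str]:
        return OVERSIGHT_BY_TIER[self.risk_tier]

registry = [
    ApprovedTool("BrainstormBot", "Marketing", "moderate",
                 trains_on_our_data=False, data_locations=["US"]),
    ApprovedTool("CreditScorer", "Risk Management", "critical",
                 trains_on_our_data=False, data_locations=["US", "EU"]),
]

for tool in registry:
    print(tool.name, "->", ", ".join(tool.required_oversight()))
```

Keeping the tier-to-oversight mapping in one place means the registry, not individual judgment calls, dictates how much review a given use case gets.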
Organizations must ask themselves critical questions: Do we know where AI is being used? Can we reproduce a decision if challenged? Who is accountable for ensuring compliance? A practical roadmap begins with inventorying all technologies in use and classifying them by risk. Develop a policy supported by strong operational controls and cross-functional oversight. Train employees regularly. Monitor consistently.
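Reproducing a decision when challenged requires capturing it at the moment it is made. A minimal sketch of such a record follows, with illustrative field names rather than a prescribed schema; persist records like these in whatever system of record your compliance team already uses.

```python
# Sketch: an audit-trail record that makes an AI-assisted decision
# reproducible later. Field names and values are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    use_case: str       # e.g., "loan pre-screen"
    tool: str           # registry name of the approved tool
    model_version: str  # exact model identifier in effect at the time
    prompt: str         # full input, already sanitized of restricted data
    output: str         # full output as received
    reviewer: str       # human in the loop who approved the outcome
    timestamp: str

    def fingerprint(self) -> str:
        """Stable hash so the record can be verified if challenged."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = AIDecisionRecord(
    use_case="loan pre-screen",
    tool="CreditScorer",
    model_version="v2.1",
    prompt="<sanitized applicant summary>",
    output="<model output>",
    reviewer="j.doe",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.fingerprint())
```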
An AI policy is not a press release. It is a control document. Organizations that prioritize operational safeguards will reduce exposure, protect confidential information, enable responsible innovation and strengthen their position in audits, regulatory reviews and client trust.


Cory McNeley is a managing director with UHY Consulting and leader of the technology innovation service line. He brings more than 20 years of experience, with expertise spanning international operations, manufacturing, defense and aerospace, retail, government and service sectors.