Corporate Compliance Insights
Planning Your AI Policy? Start Here.

Well-designed policies can support innovation while protecting against legal, security and employment risks

by Bradford J. Kelley, Mike Skidgel and Alice Wang
May 7, 2025
in Featured, Risk

Effective AI governance begins with clear policies that establish boundaries for workplace use. Bradford J. Kelley, Mike Skidgel and Alice Wang of Littler Mendelson reveal how well-designed AI policies can help organizations balance innovation with risk management, providing a framework for tool approval, data security, training and vendor management that addresses the complex legal and operational challenges of AI adoption. 

In recent years, many organizations have implemented new policies on AI use to help prevent bias, plagiarism and the use of AI tools that produce inaccurate or misleading information. Meanwhile, many courts and state bars across the country have introduced AI policies to ensure that AI is properly used in the practice of law, including requirements that attorneys certify that generative AI did not draft any portion of a filing.

Employers should consider similar measures, as the widespread use of generative AI programs like ChatGPT heightens the associated risks. Indeed, because AI will play an increasingly significant role throughout the workplace and at every stage of the employment lifecycle, organizations should strongly consider implementing policies to ensure that AI is used properly.

AI usage policies can help minimize legal, business and regulatory risks by ensuring compliance with operative laws and regulations. They are also beneficial amid an evolving regulatory landscape, as they can preemptively establish a framework that helps mitigate risks. Having a policy in place before engaging in high-risk uses of AI (for example, AI systems used in HR processes to evaluate job candidates or make decisions affecting the employment relationship) is critical for businesses to protect themselves from open-ended liability.

In many cases, companies engage third-party vendors that offer AI-powered algorithms to perform HR tasks. Having an AI usage policy can also improve employers’ relationships with third-party software vendors by establishing clear expectations and guidelines.


What to include in an AI usage policy

At the outset, an employer should identify areas where they do not want AI to be used and set clear guidelines accordingly. To accomplish this, it is important to identify potential risks associated with AI usage and tailor the policy to address those particular areas. These risks include AI tools that undermine data security, exhibit bias or generate inaccurate or misleading information. To identify the potential risks, an employer needs to determine what tools will be approved for use and what tasks those tools are capable of performing. Only by understanding what the tools can do can an employer begin to understand the risks that might flow from their use.

In most cases, general AI usage policy templates should be avoided because the specific needs of the employer must be accounted for. Accordingly, employers should consider the following categories while creating tailored policies for their organizations:

Purpose or mission statement

An effective AI usage policy should include a purpose or mission statement that clearly defines the purpose of the policy. This will help promote trust, credibility and a greater awareness and appreciation of the merits of AI systems. The absence of such a statement will likely undermine these benefits.

An effective AI usage policy allows companies to monitor AI use and encourage innovation while ensuring that AI is used only to augment internal work and only with appropriate data. Generally, the basic purpose of an AI policy is to provide clear guidelines for the acceptable use of AI tools, thereby ensuring consistently compliant behavior by all employees.

Define AI and the AI tools covered

Another critical component is a section providing key definitions, including how the employer defines AI for purposes of the policy. Defining AI is often challenging, in large part because of the multitude and ever-growing variety of use cases. However, with AI being widely incorporated into other tools, it is important to delineate what is and isn’t covered by the policy in plain, non-technical language to eliminate any doubts among employees and others. 

This section should also specify which AI tools are approved and covered by the policy. Specific generative AI tools like ChatGPT, Copilot or DALL-E can be included in this section, as applicable. Although generative AI has recently been the star of the AI world, a comprehensive AI policy must address all potential applications of AI. While the policy does not need to specifically identify every tool that is not approved for use, the policy should make clear that any AI system not explicitly approved in the policy is expressly prohibited.
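The default-deny stance described above, under which any tool not explicitly approved is prohibited, maps naturally onto an allowlist check. A minimal sketch in Python, where the tool names and the helper function are illustrative assumptions rather than recommendations from the article:

```python
# Default-deny check against a policy's approved-tools list.
# Tool names here are illustrative placeholders, not endorsements.
APPROVED_AI_TOOLS = {"chatgpt-enterprise", "copilot", "dall-e"}

def is_tool_approved(tool_name: str) -> bool:
    """Return True only if the tool is explicitly approved.

    Anything absent from the list is treated as prohibited,
    mirroring the policy's default-deny stance.
    """
    return tool_name.strip().lower() in APPROVED_AI_TOOLS
```

The key design choice is that the function can only return True for a listed tool; new or unknown tools fail closed until the policy is updated.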

Specify who the AI usage policy applies to

An effective AI usage policy should explain how the policy applies to its workforce, including employees, independent contractors and others. It is critical that an employer have a policy that covers anyone who might have access to the employer’s AI tools or systems.

Scope of the policy

An effective AI usage policy must clearly define the scope of its applicability. A policy may allow for open use or prohibit or limit certain AI use. For example, an AI usage policy may specify that human resources departments may not use AI in recruitment due to the risk of bias that may result and in light of the evolving legal landscape of this area. Or a policy may specify that employees are not to provide customer information to publicly available AI tools due to the data security risks involved.

The scope of the policy may differ based on several factors. For example, different categories of employees with different job roles are likely to need AI for different tasks, or they may need different tools entirely. While some positions may require open-ended access to AI tools, others may only need to use AI tools for specifically delineated job functions. The policy should be properly scoped to appropriately control the potential use by any groups and individuals with access to any AI tools or systems.
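One way to express the role-based scoping described above is a role-to-tools map with an empty default, so a role not named in the policy gets no AI access at all. A hypothetical sketch; the role names and tool names are invented for illustration:

```python
# Map job roles to the AI tools their duties require; roles not
# listed default to no access. All names are hypothetical.
ROLE_TOOL_SCOPE = {
    "engineering": {"copilot", "chatgpt-enterprise"},
    "marketing": {"chatgpt-enterprise", "dall-e"},
    "hr": set(),  # e.g., no AI in recruitment due to bias risk
}

def allowed_tools(role: str) -> set[str]:
    """Tools a role may use; unknown roles get an empty set (deny by default)."""
    return ROLE_TOOL_SCOPE.get(role.strip().lower(), set())
```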

Data security and risk management

It is also important for an AI usage policy to establish guidelines for data collection, storage, processing and deletion. Addressing how AI technologies will handle personal and sensitive information ensures compliance with data protection laws and safeguards against unauthorized access or data breaches. An effective AI usage policy must also address an employer’s sensitive, proprietary and confidential information. For example, employers should consider an AI usage policy that prohibits any sensitive, proprietary and confidential information from being uploaded or used, especially with ChatGPT or other publicly available generative AI programs. Similarly, employers should consider prohibiting AI use related to any company or third-party proprietary information, any personal information or any customer or third-party data as an input. 
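Technical controls can back up the prohibition on uploading sensitive information. As one illustration, a lightweight pre-submission filter could flag text matching obvious sensitive patterns before it reaches a public AI tool; the two patterns below (a US SSN shape and a "confidential" document marking) are examples only, and a real deployment would rely on a dedicated DLP product with far broader coverage:

```python
import re

# Illustrative patterns only; production screening would use a
# dedicated DLP tool, not two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-shaped number
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # document marking
]

def flag_sensitive(text: str) -> bool:
    """Return True if the text matches any sensitive pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)
```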

Employers need to be intimately familiar with the data security guarantees being made by any AI vendors and have a clear understanding of how those guarantees operate with respect to the employer’s data, its employees’ data and any customer data being used. And while it might be outside the scope of the AI usage policy for personnel, employers should take steps to communicate with individuals and customers whose data may be processed to provide notice and secure consent whenever possible.

Training

Employers should also consider addressing training and awareness in their AI usage policies. More specifically, employers should provide training to ensure employees are well-informed about the AI tools they’ll have available, the AI usage policy in general and how the tools impact their roles and responsibilities. Employers should consider training managers on how the individuals they supervise should and shouldn’t be using AI and on how managers can help to monitor for any usage of unapproved AI tools or for any misuse of approved AI tools. 

Training and awareness can help reinforce fairness, transparency and accountability. Training can help ensure that employees remain vigilant about the potential for AI to produce inaccurate or incomplete information or perpetuate or magnify historical biases. Because of the pace of AI-driven technology developments and the evolving legal framework, it is important for organizations to routinely review and update training materials to stay current.

Vendor guidelines

An AI usage policy can also establish guidelines for evaluating and selecting vendors and outline responsibilities for maintaining compliance with the AI usage policy. Some vendors may impose their own limitations on the use of their AI products that may need to be incorporated or otherwise addressed by an employer’s AI usage policy.

Additional guardrails

Employers should also consider including additional guardrails in the AI usage policy. Notably, employers may designate point people who can approve AI use or troubleshoot problems as they arise. Another possible guardrail is a section covering potential disciplinary actions for noncompliance. Employers should also strongly consider whether certain tools need to be blocked by IT at the domain level to prevent employees from accessing them altogether.
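Domain-level blocking is typically configured in a firewall, DNS filter or proxy. As a rough sketch of the idea, the prohibited domains named in a policy could be rendered into a hosts-file-style blocklist; the domains below are placeholders chosen for illustration, not real services:

```python
# Render a hosts-file-style blocklist from domains the policy
# prohibits. Domains are placeholders; real enforcement would
# live in a firewall, DNS filter or web proxy.
BLOCKED_DOMAINS = ["example-chatbot.test", "example-imagegen.test"]

def render_hosts_blocklist(domains: list[str]) -> str:
    """Return hosts-file lines that sinkhole each domain to 0.0.0.0."""
    return "\n".join(f"0.0.0.0 {d}" for d in sorted(domains))
```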

One guardrail that is key to observe at all stages of AI selection, deployment and use is human oversight. Everyone interacting with AI systems needs to appreciate the overwhelming importance of keeping a human in the loop when utilizing these systems at work. An effective AI policy should specify that AI tools, including generative AI tools, cannot be used to make a final decision of any kind without independent human judgment and oversight, including but not limited to any business or employment decision.

Know the rapidly evolving regulatory landscape

Keeping tabs on the landscape allows for the timely placement of safeguards, so that when new laws take effect, employers are already prepared. Because many US states are looking to European and other international standards, it is also important to account for international AI developments, especially the European Union’s AI Act. For instance, the Colorado AI Act was largely modeled on the EU AI Act, and a bill under consideration in Texas was also modeled on it. Absent national AI legislation, state regulations are likely to continue proliferating, leading to further inconsistencies.

Understand the interplay with other applicable policies

Awareness of the inherent risks of AI usage is key to understanding the potential interplay between an AI usage policy and employers’ other policies and ensuring alignment. For example, algorithmic bias, or systemic errors that disadvantage individuals or groups based on a protected characteristic, is often cited as a leading concern for AI tools, especially in the recruiting context. Even generative AI tools designed to create images, videos or music may be alleged to contribute to a hostile work environment. Thus, employers would be well-served to cross-reference other applicable policies (e.g., anti-discrimination/harassment policies) in their AI usage policies.

After the AI usage policy is in place

To ensure the guardrails are maintained, companies can conduct periodic audits of compliance with their AI usage policy. Because AI capabilities are being seamlessly integrated into existing software and devices, including computers and phones, the fact that the underlying technology is AI-driven can be obscured; companies should therefore cultivate awareness of the AI capabilities of the platforms they use in the workplace to avoid inadvertent or unknowing use of AI tools. Employers should also openly communicate how AI is used in the workplace to build trust, enhance credibility and promote a deeper appreciation of its benefits. Without transparency, accountability and clarity, even properly implemented AI may fail to deliver its full advantages.

Finally, employers should regularly review and update their AI usage policy to keep pace with evolving legal requirements and industry best practices. To continuously improve the AI usage policy, employers should strongly encourage feedback.

Conclusion

Properly tuned to an employer’s specific circumstances, the components above provide a strong initial framework for an AI usage policy. Each section needs to be appropriately tailored to address the specific issues that AI tools will present; those issues will depend on the nature of the employer’s business. A clear and effective policy can enable employers to take advantage of the benefits that properly leveraged AI tools can provide while helping to mitigate risks and minimize potential liabilities that can arise from the use of those tools.  

This article was first published on Littler Mendelson’s blog. It is republished here with permission.

Tags: Artificial Intelligence (AI), Risk Assessment

Bradford J. Kelley, Mike Skidgel and Alice Wang

Bradford J. Kelley is a shareholder in the Washington, D.C., office of Littler Mendelson. He has a broad practice representing employers in employment anti-discrimination and wage and hour matters. He focuses on advising clients about emerging technologies, including AI, and their impact on the workplace.
Mike Skidgel is knowledge management counsel at Littler Mendelson, based in the firm’s Kansas City global services center. He focuses his practice on the firm’s strategic approach to generative AI, including leading efforts to evaluate tools and platforms, identifying and testing potential use cases, working with vendors to provide critical feedback resulting in meaningful product improvements and providing generative AI training internally to Littler professionals and demos to internal and external audiences.
Alice H. Wang is a shareholder in Littler Mendelson’s San Francisco office. She advises and represents employers in a wide range of labor and employment law matters arising under state and federal law, including wage and hour, pay practice and worker classification issues, discrimination, harassment and retaliation and wrongful termination.


Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

© 2025 Corporate Compliance Insights
