
Rocky Mountain High on AI: Colorado Emerges as the First Mover on State AI Law

New law set to go into effect in 2026, takes similar approach as EU AI Act

by Vivien F. Peaden
June 4, 2024

Colorado became the first state to cross the comprehensive AI law finish line when its governor signed the Colorado AI Act in May. Baker Donelson’s Vivien F. Peaden explores the details of the new law and what companies need to know.

Colorado has stepped boldly into the difficult area of regulating artificial intelligence (AI) with the enactment of the Colorado AI Act on May 17. Formally Senate Bill 205, the groundbreaking law shares features with the EU AI Act, including a risk-based approach and rules around AI impact assessments.

The law, which takes effect in 2026, will require developers and users of “high-risk AI systems” to adopt compliance measures and protect consumers from the perils of AI bias. Noncompliance with the Colorado AI Act (CAIA) could lead to hefty civil penalties for engaging in deceptive trade practices.

The enactment of an AI law in the Centennial State is the culmination of a nationwide trend in 2024 to regulate the use of AI, with three Cs leading the charge: California, Connecticut and Colorado. While California is making slow progress with its proposed regulations of automated decision-making technology (ADMT), Connecticut’s ambitious AI law (SB 2) was derailed by a veto threat from Gov. Ned Lamont. 

In the end, Colorado’s SB-205 became the lone horse crossing the finish line. Two other states, Utah and Tennessee, also passed state AI-related laws this year, focusing specifically on regulating generative AI and deepfakes. That makes the Colorado AI Act the first comprehensive U.S. state law with rules and guardrails for AI development, use and bias mitigation. 

AI systems regulated under Colorado’s law

The CAIA adopts the broad definition of “artificial intelligence system” nearly verbatim from the EU AI Act, which was adopted in March 2024. As illustrated below, the CAIA takes a technology-neutral stance and purposefully sets a broad definition so that it does not become obsolete as AI rapidly advances:

EU AI Act definition: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

CAIA definition: “Any machine-based system that, for any explicit or implicit objective, infers from inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

Colorado’s law follows the EU AI Act’s risk-based approach but has a narrower focus on the use of “high-risk AI systems” in the private sector. For Colorado residents, an AI system is “high-risk” if it “makes, or is a substantial factor in making, a consequential decision” affecting their access to, or the conditions of receiving, any of the following:

  • Education enrollment or an education opportunity
  • Employment or an employment opportunity
  • A financial or lending service
  • An essential government service
  • Healthcare services
  • Housing
  • Insurance
  • A legal service

Unlike the 2023 Colorado Privacy Act, which exempts employee data and financial institutions subject to the Gramm-Leach-Bliley Act (GLBA), the Colorado AI Act expressly prohibits algorithmic discrimination affecting Colorado residents’ employment opportunities or access to financial or lending services. Further, Colorado’s definition of “high-risk AI systems” excludes a list of low-risk AI tools, such as anti-malware, cybersecurity tools, calculators, spam filtering, web caching and spell-checking, among other low-risk activities.
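For teams triaging an AI inventory against this definition, the inclusion and exclusion tests can be read as a simple screening rule. Below is a minimal sketch in Python under that reading; the field names, category labels and the `is_substantial_factor` flag are assumptions introduced for illustration, not terms drawn verbatim from the statute, and the sketch is no substitute for legal analysis.

```python
# Minimal sketch of a CAIA "high-risk AI system" screen, assuming a simplified
# inventory record. Field and category names are illustrative only.

CONSEQUENTIAL_DECISION_AREAS = {
    "education",                      # education enrollment or opportunity
    "employment",                     # employment or employment opportunity
    "financial_or_lending_service",
    "essential_government_service",
    "healthcare",
    "housing",
    "insurance",
    "legal_service",
}

# Lower-risk technologies the CAIA carves out of "high-risk" (non-exhaustive).
LOW_RISK_TOOLS = {
    "anti_malware", "cybersecurity", "calculator",
    "spam_filtering", "web_caching", "spell_checking",
}

def is_high_risk(system: dict) -> bool:
    """Rough screen: does the system make, or substantially factor into,
    a consequential decision in a covered area, outside the carve-outs?"""
    if system.get("tool_type") in LOW_RISK_TOOLS:
        return False
    decides = system.get("makes_consequential_decision", False)
    substantial = system.get("is_substantial_factor", False)
    covered = system.get("decision_area") in CONSEQUENTIAL_DECISION_AREAS
    return (decides or substantial) and covered

# Example: a resume-screening model that feeds into hiring decisions.
resume_screener = {
    "tool_type": "ml_model",
    "makes_consequential_decision": False,
    "is_substantial_factor": True,
    "decision_area": "employment",
}
print(is_high_risk(resume_screener))  # True under this rough screen
```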

Developers & deployers

While the EU AI Act sets a comprehensive framework to regulate all activities across six key players that develop, use and distribute AI systems, the Colorado AI Act narrows the field down to only two players:

  • AI developer: a legal entity doing business in Colorado that develops, or intentionally and substantially modifies, an AI system
  • AI deployer: a legal entity doing business in Colorado that uses a high-risk AI system

Under the CAIA, both developers and deployers of high-risk AI systems must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. If the Colorado Attorney General’s Office brings an enforcement action, a company is afforded a rebuttable presumption that it used reasonable care if it has complied with its respective developer or deployer obligations.

Upon creation of a new high-risk AI, or intentional and substantial modifications of a high-risk AI, an AI developer must comply with the following requirements:

  • AI instructions: Provide disclosures and documentation to downstream users regarding the intended use and specifics of its high-risk AI systems
  • Impact assessment facilitation: Make available additional documentation or information to facilitate impact assessment by downstream users (aka the AI deployer)
  • Public disclosure: Maintain and post a current public statement on the developer’s website summarizing: (i) what types of high-risk AI it has developed for use and license; and (ii) how it manages risks of algorithmic discrimination
  • Incident reporting: Report to the Colorado Attorney General’s Office upon discovery of any algorithmic discrimination

For AI deployers that are downstream users of high-risk AI, the CAIA imposes similar obligations around public disclosure and incident reporting:

  • Public disclosure: Maintain and post a current public statement on the deployer’s website summarizing its use of high-risk AI
  • Incident reporting: Report to the Colorado Attorney General’s Office upon discovery of algorithmic discrimination

In addition, an AI deployer must comply with the following requirements:

  • Risk management program: Implement a risk-management policy and program that governs high-risk AI uses
  • Impact assessment: Conduct an impact assessment of the current use of high-risk AIs annually and within 90 days after any intentional and substantial modification of high-risk AI (a rough sketch of this cadence follows this list)
  • Pre-use notice to consumers: Notify consumers with a statement disclosing information about the high-risk AI system in use
  • Consumer rights disclosure: Inform Colorado consumers of their rights under the CAIA, including the right to pre-use notice, the right to exercise data privacy rights and the right to an explanation if an adverse decision is made from the use of high-risk AI, among others
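The annual-plus-90-day cadence in the impact assessment requirement above reduces to a simple date calculation. The sketch below assumes the two triggers run independently and the earlier resulting date controls; that reading is an assumption for illustration, not a statement of how the statute or the attorney general will count deadlines.

```python
# Rough sketch of the CAIA impact-assessment cadence for a deployer:
# annually, and within 90 days after any intentional and substantial
# modification. Assumes the earlier of the two dates controls.
from datetime import date, timedelta

def next_assessment_due(last_assessment: date,
                        last_substantial_modification: date | None = None) -> date:
    annual_due = last_assessment + timedelta(days=365)
    if last_substantial_modification is None:
        return annual_due
    modification_due = last_substantial_modification + timedelta(days=90)
    return min(annual_due, modification_due)

# Example: last assessed March 1, 2026; model substantially modified June 15, 2026.
print(next_assessment_due(date(2026, 3, 1), date(2026, 6, 15)))  # 2026-09-13
```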

Exemptions

The CAIA provides an exemption for deployers of high-risk AI that are small or medium-sized enterprises (SMEs) employing 50 or fewer full-time employees and that meet certain conditions. These organizations need not maintain a risk management program, conduct an impact assessment or post a public statement, but they remain subject to a duty of care and must provide the relevant consumer notices.

Enforcement

The CAIA vests the Colorado attorney general with exclusive enforcement authority. Any violation of the CAIA constitutes a deceptive trade practice subject to hefty civil penalties under the Colorado Consumer Protection Act. Section 6-1-112 of that act currently imposes a civil penalty of up to $20,000 per violation, rising to $50,000 per violation when a deceptive trade practice is committed against a resident age 60 or older.
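For a rough sense of scale, those caps can be turned into a back-of-the-envelope exposure figure. How a “violation” is counted (per consumer, per transaction or otherwise) is not settled here, so treating each affected consumer as one violation in the sketch below is purely an assumption for illustration.

```python
# Back-of-the-envelope civil penalty exposure under the Colorado Consumer
# Protection Act caps cited above: $20,000 per violation, $50,000 per
# violation involving a consumer age 60 or older. Counting one violation
# per affected consumer is an illustrative assumption.
STANDARD_CAP = 20_000
ELDERLY_CAP = 50_000

def max_exposure(non_elderly_consumers: int, elderly_consumers: int) -> int:
    return (non_elderly_consumers * STANDARD_CAP
            + elderly_consumers * ELDERLY_CAP)

# Example: a biased lending model reached 1,000 consumers, 100 of them age 60+.
print(f"${max_exposure(900, 100):,}")  # $23,000,000
```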

Conclusion

With the Colorado AI Act set to take effect Feb. 1, 2026, and potentially serving as a blueprint for other states, companies must start planning their AI compliance roadmap, including policy development, AI audits and assessments, and AI vendor contract management. The time to get ready is now, to ensure compliance and mitigate potential regulatory and operational risks.

 



Vivien F. Peaden

Vivien F. Peaden, of counsel in Baker Donelson’s Atlanta office, counsels clients on a variety of data privacy and security issues under international and domestic laws.
