Just as GDPR prompted a wave of state privacy laws in the US, the EU AI Act is catalyzing state-level AI regulations across America. Kevin M. Alvero, chief compliance officer of Integral Ad Science, analyzes the common threads in emerging state AI laws and reveals why organizations need comprehensive governance frameworks that address both regulatory compliance and stakeholder expectations in this rapidly evolving environment.
Corporate leaders should expect that the EU AI Act, which officially entered into force in August, will pave the way for US states to adopt their own AI laws, just as GDPR did less than a decade ago. Indeed, the ball is already rolling.
While some policies regarding AI use in governmental processes have already been implemented, more than 20 US states have AI legislation in various stages of approval whose scope extends to private companies.
State AI laws
Utah
Utah’s Artificial Intelligence Policy Act took effect May 1, 2024. It asserts that the state’s consumer protection laws apply to generative AI applications in the same manner as other business functions.
There are several aspects of this law that give compliance leaders clues about how to prepare their organizations for emerging AI legislation. The first relates to use cases. Utah’s law focuses on generative AI, which it defines as “an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”
Utah’s law was also the first in the US to impose disclosure requirements on private companies that use generative AI when interacting with consumers.
An emphasis on transparency and disclosure, and a linkage between use case and regulatory requirements, are two common threads in existing and emerging US AI legislation; both also appear in the EU AI Act.
Colorado
Colorado’s Artificial Intelligence Act sets out requirements for AI developers and those deploying high-risk AI systems to use reasonable care to protect consumers from risks of algorithmic discrimination, which it defines as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.” Like the Utah law, the Colorado AI Act is use case-specific, but its focus is on “high-risk” AI systems.
Colorado’s law considers AI “high-risk” if it makes, or is a substantial factor in making, a consequential decision, which it defines as one that “has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.”
The EU AI Act also defines high-risk AI systems but does so with a longer list of business environments and AI techniques it considers high risk. Understanding whether your own organization’s use of AI falls into a high-risk category is one of the most important aspects of assessing regulatory risk. Likewise, compliance leaders should be familiar with all high-risk AI definitions and know whether they are relevant to any third-party relationships, as these could be tied to transparency requirements.
Also, because the Colorado law emphasizes the avoidance of unfair treatment, it singles out AI that has the potential to treat individuals or groups of individuals unfairly based on information about them that is known to the AI system. Such a system must be fed data that can identify individuals (directly or indirectly), meaning it must also comply with applicable data privacy laws.
High-risk AI applications and applications with the potential for unfair treatment (i.e., bias) are two more key considerations for compliance leaders, as is the connection to data privacy laws for AI systems that process personal information.
California
California has passed numerous AI-related laws. One of the most significant bills signed into law requires generative AI systems to disclose that the content they create is AI-generated. Beyond that, AB-2013 requires that generative AI providers reveal the sources, uses and key characteristics (e.g. dates, copyright status) of data used to train their models. Other laws provide consumer protections with regard to AI outputs such as robocalls, election deepfakes, deepfake pornography, and the use of the voices and likenesses of actors to create synthetic content.
What do state laws have in common?
State-level AI laws enacted so far include requirements such as:
- Disclosure of AI-generated content
- AI use-case transparency
- Data source transparency
- Special requirements for high-risk applications
- Controls to mitigate unfairness/bias in AI models
- Privacy requirements for AI that processes personal data
While California, Colorado and Utah have been the early leaders in enacting legislation, bills are pending in more than two dozen other states that would limit how the private sector can use AI. The bills vary but often include provisions such as:
- AI governance program and documentation: Policies, procedures or a robust governance or risk-management program and retention of internal assessment and mitigation documentation.
- Assessments: Risk assessments, impact assessments or rights assessments.
- Training: Training staff on AI governance practices and procedures.
- Responsible individual: AI governance officer or other qualified and responsible individual.
- General notice: Post public notice of AI governance policies or general disclosures of system information.
- Explanation/incident reporting: Provide explanations of AI-facilitated decisions or disclose AI incidents to affected consumers or governments. While different, both are post-facto requirements to notify individuals or governments about the behavior of a covered system.
- Labeling/notification: Label consumer-facing AI systems or provide up-front notification about their use.
- Provider documentation: Downstream documentation, such as specific disclosures from developers to deployers.
- Registration: Licensing, proactive predisclosure or registration with a government entity.
- Third-party review: External review of AI systems or governance programs, such as assessments or audits.
- Opt-out/appeal: Alternative to an AI-facilitated decision, respect other opt-out choices or provide a mechanism to appeal.
- Nondiscrimination: Avoid or mitigate discriminatory impacts of AI systems or duties of care to protect individuals from risks of algorithmic discrimination.
Policies, procedures and best practices
These provisions represent a clear case of emerging regulatory risk for companies that use AI and conduct business in the US. Beyond the obvious risk of sanctions and fines for companies that violate the law, organizations are growing more insistent that the third parties they work with (vendors, suppliers, partners, etc.) are compliant, so as not to expose themselves to reputational risk or other forms of guilt by association. In short, there is more to AI regulatory compliance than mitigating legal risk: private companies subject to AI regulation need to comply with the law, and they also need to be able to demonstrate that compliance to key stakeholders who need or want assurance.
An AI certification program is an excellent way to accomplish this two-pronged task. The ISO 42001 certification process, for example, can help companies ensure that their AI-related business practices are current with the leading standards, regulations and governance frameworks, while also demonstrating this to customers, potential customers or the general public. However, not all organizations can commit to ISO certification and the resources/ongoing maintenance it requires.
Whether pursuing certification or not, there are steps every organization using AI should take to align itself (at least fundamentally) with the leading AI governance standards and frameworks while ensuring it is prepared to demonstrate compliance with US AI laws.
AI policy
Companies should consider crafting a set of guidelines, rules and regulations that govern the use of AI technologies throughout the organization. AI policies are designed to ensure that AI is used in a responsible and ethical manner, and that it aligns with legal requirements, organizational values and ethical standards. Some of the key areas that should be covered by an AI policy include:
- Business justification for AI use.
- Who can use AI (access).
- Permitted/prohibited activities.
- Who is responsible for AI (accountability, roles & responsibilities).
- Ethical guidelines and principles (fairness, non-bias, transparency).
- Data privacy, security and intellectual property.
- Processes and procedures for making employees aware of AI risk and company policy.
AI development/deployment policy
This document sets the expected norms for the development and/or procurement of AI technologies and their implementation into use. It should include processes for training AI models, conducting pre-deployment testing, sign-offs/approvals, deployment procedures and ongoing monitoring of outputs (including human oversight) and retraining of models.
AI inventory
Every organization using AI needs a comprehensive listing of the tools, techniques, responsible personnel, third parties, etc. involved in the development, deployment and monitoring of all AI systems across the organization. This document is more technical than the AI policy and may include AI model types, platforms, data sources, outputs, monitoring tools, key points of interaction and other details about the construction of the AI systems. Documentation of AI systems is a fundamental element of the leading AI standards and governance frameworks and is critical to compliance with legislation that imposes transparency requirements.
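For teams that prefer to keep the inventory in machine-readable form rather than a spreadsheet, the sketch below shows what a single inventory entry might capture. The field names and example values are illustrative assumptions, not requirements drawn from any particular law or standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for one AI system (field names are hypothetical)."""
    name: str                      # internal identifier for the system
    use_case: str                  # business purpose the system serves
    model_type: str                # e.g., "generative LLM", "classification model"
    platform: str                  # where it runs (vendor-hosted, cloud, on-prem)
    data_sources: list[str]        # training and input data sources
    processes_personal_data: bool  # True triggers data privacy obligations
    owner: str                     # accountable individual or role
    third_parties: list[str] = field(default_factory=list)  # vendors/partners involved
    monitoring: str = ""           # how outputs are monitored, and by whom

# Example entry for a hypothetical customer-facing chatbot
support_bot = AISystemRecord(
    name="customer-support-assistant",
    use_case="Answer customer questions via chat",
    model_type="generative LLM (third-party API)",
    platform="Vendor-hosted",
    data_sources=["public product documentation", "support ticket history"],
    processes_personal_data=True,
    owner="Head of Customer Support",
    third_parties=["LLM API provider"],
    monitoring="Weekly human review of sampled transcripts",
)
```

However it is stored, the value of the inventory comes from keeping every system's record current as models, data sources and vendors change.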
AI use definitions
Once AI systems are documented, the organization can clearly see and express what its uses of AI are and categorize them under applicable laws, which is critical to compliance with current and emerging legislation.
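As a simple illustration of that categorization step, the sketch below flags whether a documented use case touches one of the consequential-decision domains enumerated in the Colorado law. The domain list paraphrases the statute's categories, but the function and its logic are a simplified triage assumption, not a legal determination.

```python
# Domains the Colorado AI Act treats as "consequential decisions" (paraphrased from the statute)
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial or lending services",
    "essential government services", "health care", "housing",
    "insurance", "legal services",
}

def likely_high_risk(use_case_domains: set[str]) -> bool:
    """Rough screen: does this use case touch a consequential-decision domain?

    This is a simplified triage helper only; actual classification also depends
    on whether the system makes, or is a substantial factor in making, the decision.
    """
    return bool(use_case_domains & CONSEQUENTIAL_DOMAINS)

# Example: a resume-screening tool touches the employment domain
print(likely_high_risk({"employment"}))      # True
print(likely_high_risk({"marketing copy"}))  # False
```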
Disclosures
Organizations that use AI technologies must disclose the nature and purpose of their use to affected parties, which could include people who interact directly with AI outputs, people for whom the AI makes consequential decisions and people whose personal data is input to AI systems. The organization should also be transparent about the legitimate business purposes for which it uses AI and the guiding principles under which it operates.
AI risk & impact assessment
A risk assessment that considers the organization's uses of AI and their potential to cause financial, legal or regulatory impacts to the organization, or to inflict harm on society, is a fundamental requirement of any AI governance program.