Corporate Compliance Insights
Preemption is No Panacea: Congress Must Create a Workable National Framework for American AI Dominance

Even with light-touch regulation as its lodestar, new AI action plan requires authorization and funding for standards development, testing infrastructure and permitting reform

by David Miller and Clarine Nardi Riddle
February 10, 2026
in Opinion

After broad AI preemption proposals drew sharp bipartisan opposition from state attorneys general and legislators, the path forward requires congressional action on specific framework components beyond determining division of authority. Kasowitz attorneys David Miller and Clarine Nardi Riddle argue that the US cannot afford for preemption complexities to derail vital legislative work and detail bipartisan bills on NIST standards, regulatory sandboxes and export controls that could advance the AI action plan’s objectives while preserving necessary state regulatory roles.

Federal preemption of state law has emerged as both an essential tool and a tripwire for lawmakers and the Trump Administration aiming to position the United States as the world leader in AI. After Congress failed to include an unprecedented, broad moratorium on state authority to regulate AI in last year’s budget reconciliation package, the Trump Administration’s AI action plan took a more nuanced approach. Then, after an unsuccessful attempt to include broad AI-related preemption in the National Defense Authorization Act (NDAA) in early December, Trump issued an executive order on Dec. 11, along with an accompanying fact sheet, that builds on the AI action plan by setting out a decidedly more direct and federally preemptive approach to AI policy.

Under the Supremacy Clause of the US Constitution (Article VI, Clause 2), when state and federal law conflict, federal law displaces, or preempts, state law. Preemption applies regardless of whether the conflicting laws come from legislatures, courts, administrative agencies or constitutions. Congress has preempted state regulation in many areas. In some, Congress has allowed federal regulatory agencies to set national minimum standards without preempting more stringent state standards. Where federal laws or regulations do not clearly state whether preemption should apply, the Supreme Court tries to follow lawmakers’ intent and prefers interpretations that avoid preempting state laws.

Regarding AI, proponents of strong preemption argue that preemption is needed to prevent a patchwork of state rules from burdening AI developers and deployers with numerous, expensive and potentially inconsistent compliance obligations. Opponents contend that preserving state authority is essential to address local harms, protect residents and fill gaps left by federal law. 

Critically, the White House and Congress now each appear to believe that any workable national AI governance framework that effectively promotes US interests is going to require congressional action. As White House Office of Science and Technology Policy (OSTP) Director Michael Kratsios said, “the administration can only promote America’s position as the global AI standard-setter with the Legislative Branch’s support.”

Preemption certainly remains a flashpoint. Unfortunately, Congress’s decades-long failure to enact comprehensive privacy legislation shows how preemption disputes can stall otherwise popular tech-focused reforms despite bipartisan support. Moreover, even with deregulation and promoting innovation as its lodestars, the Trump Administration’s AI policy agenda as stated in the AI action plan and executive order will require congressional action to effectuate. Whether Congress enacts broad or narrow preemption language, the precise preemptive scope will also be shaped through future litigation and state enforcement activity, as states test the boundaries of their authority and courts interpret statutory text in line with longstanding doctrines of express and implied preemption.

The US simply cannot afford for this admittedly important debate to derail establishing a workable national AI framework. 

Preemption in the AI context and the urgent need for action

There is near-unanimous consensus that AI is a generational technology with the potential to upend the global order. Control over its future will directly affect American national security and economic vitality while vastly changing the lives of all Americans. The need to “beat” China and other adversaries in a global AI arms race permeates all aspects of federal AI policymaking. This imperative to outcompete China is not hypothetical. According to one study, China has already emerged as the leading global power in creating emerging technologies and is nearly equal to the US in AI development.

The Trump Administration’s AI agenda is grounded in promoting innovation and unleashing American dominance. On Day One of his second term, Trump wholly rescinded the Biden Administration’s expansive, 110-page executive order that sought to create a government-wide AI framework centered on safe, secure and trustworthy AI development. This removal of obligations for federal agencies, along with a lack of congressional consensus, widened the gap between active state legislative and regulatory activity and the absence of federal AI legislation. That gap is now quite wide: according to the National Conference of State Legislatures, all 50 states, the District of Columbia, the US Virgin Islands and Puerto Rico introduced AI-related legislation during the 2025 session, with 38 jurisdictions adopting around 100 AI-related measures.

The House then included a 10-year moratorium on state AI laws in the One Big Beautiful Bill (OBBB) budget reconciliation measure, generally barring states from enforcing laws that target AI or automated systems. The provision quickly drew sharp bipartisan criticism, with a coalition of 40 state attorneys general and 260 state legislators representing all 50 states publicly opposing it. The Senate’s revised language, which attempted to tie the moratorium to Broadband Equity, Access, and Deployment (BEAD) Program funding and to carve out clearer exemptions for certain state laws, failed to gain traction. It was ultimately stripped from the final OBBB by a 99–1 Senate vote. Although the language was never released, in November 2025 similar bipartisan coalitions of 36 state attorneys general and 290 state legislators also vocally opposed inclusion of any broad AI-related state law moratorium in the NDAA.

army of robots
Data Privacy

Decoding Duty of Care in the Agentic AI Era

by Saumitra Das
January 26, 2026

By nature, autonomous agents look for the path of least resistance, which can mean finding ways around existing safeguards

Read moreDetails

The Trump AI EO outlines a federally preemptive approach to AI

Expanding on the Trump AI action plan, the Trump executive order on AI sets forth the administration’s priorities at the federal and state levels to advance a policy “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” Notably, this is a more measured approach than the proposed moratoria. Rather than insisting on eliminating state AI laws, the action plan instructs federal agencies to review state regulatory environments when awarding AI-related funding and to limit support for states deemed too restrictive, which the plan leaves undefined. More specifically, the executive order requires the Commerce Department to lead an effort to evaluate existing state laws and identify “onerous” laws that “conflict with the policy” in the EO within 90 days of Dec. 11, 2025. The Commerce Department must also issue a policy notice within the same period specifying that states with identified “onerous AI laws” be ineligible for BEAD funding to the maximum extent allowed by law. (See Sections 4-5 of the Trump AI EO.)

Perhaps most critically, in Section 1, the executive order makes clear that the “Administration must work with Congress to ensure there is a minimally burdensome national standard.” Section 8 of the order tasks the White House special advisor for AI and crypto and the assistant to the president for science and technology to “jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy” of the executive order. It does not specify whether such a recommendation is limited to preemption legislation, but importantly it does make explicit certain carveouts for lawful state AI laws. Contemplating a specific role for state regulation, those carveouts include child safety protections, AI compute and data center infrastructure (excluding generally applicable permitting reforms), state government use of AI and “other topics” to be determined.

While some of these provisions, including withholding state funds, would likely still raise constitutional or legal challenges, it is important that the Trump action plan and executive order recognize preemption is a complex puzzle that delicately balances an active role for the states rather than an all-or-nothing proposition like the failed moratoria. 

In this sense, the congressional struggle to enact comprehensive data privacy legislation provides lessons for AI, but there are substantial differences. The closest Congress came to setting a federal data privacy standard was the American Data Privacy and Protection Act (ADPPA), introduced in 2022. ADPPA would have broadly preempted state privacy laws but allowed for numerous, heavily negotiated carveouts in areas like data breaches, biometrics, facial recognition, civil rights, law enforcement, criminal laws and specified existing state laws. Despite a 55-2 vote to advance out of committee, the ADPPA stalled amid resistance from California lawmakers, including then-House Speaker Nancy Pelosi, who did not allow a floor vote over concerns about overriding California’s privacy laws and enforcement authorities.

If preemption in legislating data privacy is complicated, creating a national AI framework is a multi-layered labyrinth. AI is not an industry, a discrete technology or a set of rights and obligations enjoyed by individuals or firms. It is an array of general-purpose capabilities, rapidly integrating into every sector of society in myriad and unknown ways, including healthcare, housing, finance, entertainment, education, defense and law enforcement. That makes AI regulation highly context-dependent and inextricably intertwined with areas traditionally governed by state law. Moreover, unlike privacy, the risks of delayed action on AI are more immediate and consequential — from national security to economic competitiveness to the trust required for AI adoption and use.

The Trump AI policy agenda requires congressional action

Merely determining the division of authority between the federal government and the states on AI policy through preemption — by executive action or legislation — does not establish a “uniform federal policy framework for AI” that this or any future administration would seek to enact. The Trump AI action plan, along with the EO, outlines an expansive national strategy, including over 90 federal policy positions centered on three main pillars of action: (1) accelerating innovation; (2) building infrastructure; and (3) ensuring US international leadership. Although arguing for light-touch regulation, the action plan nonetheless addresses technology, infrastructure, telecommunications, trade, national security, cybersecurity, energy, labor, education, the environment, competition, science, finance and more. 

The action plan and executive order are important statements of policy priorities, but neither of them creates any of the new authorities or provides the additional resources that such a vision requires. Congress must therefore provide the legislative scaffolding necessary to authorize, fund and codify necessary elements to build out a national AI framework in line with the Trump AI agenda. 

Below are foundational components of the Trump AI action plan that require congressional action as well as related congressional proposals. Although Congress may appear far from enacting sweeping AI-related legislation, it is, in fact, working on numerous specific bipartisan bills that would take real steps toward enacting the pillars of the Trump Administration plan and begin to flesh out a real federal framework for AI regulation. This Congress alone has introduced hundreds of bills, which build on the more than 150 AI-related bills in the 118th Congress.

Accelerating AI innovation

Standards setting

The Trump AI action plan prioritizes “the development and adoption of national standards for AI systems” throughout. It assigns the National Institute of Standards and Technology (NIST) at the Commerce Department a central role in developing technical standards for AI evaluations, safety and secure-by-design practices. This builds on the Biden-era public-private partnership and voluntary standards approach. NIST’s US AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) but retains its central role leading standards development.

While NIST may already issue voluntary guidance, Congress is crucial for directing such standards and promoting their durability. Notably, the action plan also calls on NIST to revise its AI risk management framework. The bipartisan Future of AI Innovation Act advanced out of the commerce, science and transportation committee last Congress and would have formally enshrined the NIST AI Safety Institute in law and mandated standards development. Then-committee chair and now ranking member Maria Cantwell (D-WA) introduced the bill, which was co-sponsored by Sens. Todd Young (R-IN), John Hickenlooper (D-CO), Marsha Blackburn (R-TN), Kyrsten Sinema (I-AZ), Roger Wicker (R-MS), Ben Ray Lujan (D-NM), Mike Rounds (R-SD) and Chuck Schumer (D-NY). Young has indicated a similar bill is forthcoming this Congress.

Sens. Blackburn and Mark Warner (D-VA) also introduced the Promoting United States Leadership in Standards Act of 2025 that would create a pilot program for grants to fund standards development along with a centralized web portal to help stakeholders navigate and actively engage in international standardization efforts.   

Testing and guidance

AI innovation requires access to compute, datasets and evaluation tools. The Trump AI action plan calls for an “AI evaluations ecosystem,” with semiannual interagency convenings and publication of test results. Specifically, the plan tasks the Department of Energy (DOE), the National Science Foundation (NSF) and NIST with ensuring American leadership on standards, building evaluation testbeds and expanding the National Artificial Intelligence Research Resource (NAIRR) Pilot. Part of NSF, the NAIRR aims to connect researchers and educators to computational, data and training resources to advance AI research. Relatedly, the plan also calls for operationalizing the National Secure Data Service to make federal government data more accessible to AI technologies. Without appropriations and the certainty of statutory permanence, however, DOE, NSF and NIST cannot build the robust testing infrastructure envisioned.

The proposed bipartisan Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act of 2025 would codify the NAIRR as a permanent shared compute and dataset hub open to researchers and others. The proposed Testing and Evaluation Systems for Trustworthy AI (TEST AI) Act of 2025 would direct NIST to create testbeds, expand measurement science and provide the resources and mandate needed to scale evaluations beyond pilot projects.

Together, CREATE AI and TEST AI would go far to supply the testing bedrock that the plan envisions. Additionally, the Future of AI Innovation Act would have also created testbed programs at the National Laboratories through cooperation of NIST, the NSF, DOE and the private sector to develop security risk tools and testing environments for companies to evaluate their systems.

Regulatory sandboxes

The Trump action plan encourages regulatory “sandboxes” that allow experimentation in controlled environments for rapid deployment and testing of AI technologies while sharing data and results. Senate Commerce, Science and Transportation Committee Chairman Ted Cruz (R-TX) introduced a federal AI sandbox proposal that differs from the AI action plan approach. Under his SANDBOX Act, OSTP would administer a system through which companies (and regulators) could apply for relief from existing regulatory requirements. To date, Cruz remains the lone signatory on the bill. The proposed bipartisan Unleashing AI Innovation in Financial Services Act, introduced in both the House and Senate, would establish regulatory sandboxes for regulated financial entities through AI test projects at federal financial regulatory agencies.

Building American AI infrastructure

Permitting reform

Permitting reform has become a major bipartisan priority. The Trump action plan recommends streamlining federal permitting processes for AI-related infrastructure — including data centers, semiconductor fabs and supporting energy and telecom projects — by expanding the PermitAI initiative, using categorical exclusions, and creating “fast lanes” through mechanisms like FAST-41.

PermitAI is an initiative of the Pacific Northwest National Laboratory (PNNL), in collaboration with DOE’s Office of Policy, to improve the speed and quality of federal permitting processes through investments in data, AI and public access. FAST-41 was codified under Title 41 of the 2015 Fixing America’s Surface Transportation (FAST) Act and established new federal agency oversight and coordination procedures for reviewing infrastructure projects. FAST-41 set “covered project” eligibility criteria, including a $200 million investment minimum. The Trump AI plan also calls for ensuring that AI infrastructure remains free of the components of information and communications technology systems (ICTS) from adversaries, directly linking permitting reform to national security.

Notably, only Congress has the power to expand FAST-41 coverage to AI infrastructure projects, enact statutory “shot-clocks” for default approval of environmental reviews or codify restrictions on ICTS in AI infrastructure. 

Energy grid

The Trump AI action plan calls for stabilizing the power grid, optimizing existing assets and prioritizing dispatchable generation. But the Federal Energy Regulatory Commission (FERC) and other regulators cannot act without authorization. Congress could consider directing FERC to issue reliability and resource-adequacy rules tailored to AI load centers, appropriating funding for transmission build-out and grid-enhancing technologies or incentivizing demand-management programs to smooth data center energy peaks.

There has been less bipartisan legislation proposed along these lines than in other areas of the AI action plan. One notable exception is the Guaranteeing Reliability through the Interconnection of Dispatchable Power (GRID Power) Act, which passed the House in September 2025 with five Democratic votes. It directs FERC to reform interconnection queue rules so that dispatchable generation projects — i.e., those capable of providing reliable, forecastable supply — are prioritized in the queue process.

Workforce and skills

People are needed to build and develop AI infrastructure and innovation. The Trump action plan emphasizes both frontier model talent and skilled trades — electricians, advanced HVAC technicians and others essential for data center construction. 

Congress has taken incremental steps through the proposed bipartisan Artificial Intelligence & Critical Tech Workforce Framework Act, introduced in the Senate last year, which directs NIST to define AI workforce skills. Additionally, the OBBB made permanent employer educational benefits under Section 127 of the Internal Revenue Code.

But gaps remain. Congress could, for example, fund apprenticeships tied directly to AI infrastructure projects or expand the OBBB tax incentives for employer-funded AI training, which are currently subject to an annual limit of $5,250, indexed to inflation.

Leading international AI diplomacy and security

Export controls

The Trump AI action plan stresses that protecting US leadership requires preventing adversaries from acquiring advanced AI capabilities, but this requires legislation. The proposed bipartisan Chip Security Act would require location verification and anti-diversion safeguards for advanced chips. This would build on the already enacted Maintaining American Superiority by Improving Export Control Transparency Act, which enhances reporting requirements.

National security risks of frontier AI models

The AI action plan recommends that CAISI lead frontier AI evaluation for national security risks in partnership with developers. Broader legislation is needed to extend assurance standards and classified compute protections across the AI ecosystem, and it would likely present an opportunity for Congress to set some preemption standards around high-risk AI use cases, which have become a frequent target of state AI legislation. Sens. Hickenlooper and Shelley Moore Capito (R-WV) reintroduced the VET AI Act, which would require NIST to work with federal agencies and private stakeholders to develop specifications, guidelines and recommendations for third-party evaluators, enabling robust independent external audits of how AI companies’ systems are developed and tested.

Conclusion

Preemption remains a highly relevant part of the discussion for AI policy. However, the urgent objective must be building the components of a national framework that promotes innovation, protects Americans and supports American leadership in setting global AI standards. Congress simply cannot allow the challenges and complexities of preemption to derail the vital work it must do to help enact a workable AI framework.


David Miller and Clarine Nardi Riddle

David A. Miller serves as special counsel in Kasowitz’s government affairs and strategic counsel group. His practice focuses on matters related to AI, data privacy and security, emerging technologies, and other issues that fall within the jurisdiction of the Senate commerce and House energy & commerce committees.
Clarine Nardi Riddle is senior counsel in the D.C. office of Kasowitz. She serves as chair of the firm’s government affairs and strategic counsel practice group and formerly was chief of staff to US Sen. Joseph Lieberman, drawing on her insider’s perspective of the legislative and judicial systems to provide legal, strategic and policy advice to national and international clients on matters at the intersection of law, business and public policy.
