Corporate Compliance Insights
Negligence & AI: Can the Courts Keep Up?

At this early stage, be cautious in how you talk about your commitment to AI best practices

by Elizabeth Alice “Liz” Och
April 20, 2026

Without a single federal standard governing what harmful use of AI looks like, courts are continuing to take up AI-related cases, establishing in real time the bounds of AI liability. As Elizabeth A. Och of Hogan Lovells writes, two likely upshots are increased fragmentation and inconsistent approaches across jurisdictions.

Hardly a day passes without a headline about a new lawsuit tied to an AI chatbot or other AI-enabled system. In the absence of comprehensive federal legislation and a corresponding private right of action for AI harms, plaintiffs are increasingly turning to common-law tort theories to frame these claims. Negligence has emerged as an attractive vehicle: it is flexible, familiar to courts, available in virtually any forum and adaptable to a wide range of factual scenarios.

These cases are rarely “about” AI in the abstract. Instead, they focus on human judgment: how AI systems were designed, selected, governed, deployed, monitored and constrained, or in some cases, how they were not. As AI models grow more sophisticated and agentic systems move from experimental deployments into everyday use, negligence claims are likely to become more frequent and more consequential.

At the same time, courts move slowly. Cases are often resolved years after the events in question, by which time the underlying technology and industry practices will have evolved. Courts will be asked to assess what was “reasonable” or “foreseeable” at a particular moment in time, using doctrines developed for more stable technologies. Whether courts can adapt traditional negligence principles to this rapidly changing landscape — and do so consistently — will shape the contours of AI liability in the years ahead.

Who can be sued and for what duty?

Negligence claims can target human or corporate actors across the AI lifecycle. Plaintiffs may name developers, model providers, integrators, deployers, platforms or even end users, particularly in high‑stakes professional or commercial settings. Importantly, plaintiffs are not required to identify a single “responsible” actor at the outset. Instead, they can sue broadly and allow discovery to reveal where decision‑making authority and risk control resided.

Early complaints reflect an expansive view of potential duties. Plaintiffs may allege that a defendant (depending on its position in the AI lifecycle) had obligations to design and train systems responsibly; to use appropriate and representative data; to anticipate foreseeable misuse; to warn of known or reasonably knowable limitations; to implement safeguards against risks; to ensure meaningful human oversight; to conduct use-case-specific testing; to train downstream users; to enforce terms of use and safety policies; to select AI tools that were appropriate for the task at hand; and to avoid blind reliance on AI outputs in contexts requiring independent judgment. 

Whether any such duties exist is ultimately a question for the courts. The analysis turns on familiar negligence considerations, such as the nature of the relationship between the parties, the degree of control exercised by defendants and the context in which the conduct occurred. These inquiries often cut across corporate boundaries and contractual layers, complicating early attempts to narrow the case.


Reasonable care without a federal benchmark

The success of a negligence claim frequently hinges on whether the defendant exercised “reasonable care.” In the AI context, that inquiry is complicated by the absence of a single, authoritative federal standard of care. Courts are likely to assemble the reasonable‑care benchmark from a patchwork of sources, including industry practices, internal policies, voluntary frameworks, expert testimony and post‑hoc assessments of what precautions could have been taken.

Defendants may point to this regulatory vacuum as a defense, arguing that no settled industry standards existed at the relevant time, that they followed prevailing practices and that plaintiffs are attempting to impose hindsight‑driven expectations. Evidence of compliance with existing regulatory regimes, such as consumer protection laws, professional standards or sector‑specific safety requirements, may be offered as proof of reasonableness.

Plaintiffs may use the same federal statutory void as a reason to look inward. Internal AI policies, governance frameworks and aspirational public statements may be cited as evidence of the applicable standard of care. Language intended to signal a commitment to best practices can be reframed as a self‑imposed duty that the organization failed to meet. The lesson is not that companies should avoid adopting AI policies but that such policies should be realistic, risk‑tiered and demonstrably implemented rather than purely aspirational.
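The point about realistic, risk-tiered, demonstrably implemented policies can be made concrete. The Python sketch below is purely illustrative (the tier names and control lists are invented for this example, not drawn from any regulatory standard or from the author's guidance): it maps each AI use case to a risk tier and surfaces the controls the policy requires but the organization has not yet implemented, which is exactly the kind of gap a plaintiff could reframe as a self-imposed, unmet duty.

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers and the controls each one requires.
# These names are illustrative choices, not a legal or regulatory standard.
REQUIRED_CONTROLS = {
    "low": {"usage_logging"},
    "medium": {"usage_logging", "pre_deployment_testing", "human_review"},
    "high": {"usage_logging", "pre_deployment_testing", "human_review",
             "incident_response_plan", "periodic_audit"},
}

@dataclass
class AIUseCase:
    name: str
    tier: str  # "low" | "medium" | "high"
    implemented_controls: set = field(default_factory=set)

def policy_gaps(use_case: AIUseCase) -> set:
    """Return the controls the policy requires but the use case lacks."""
    return REQUIRED_CONTROLS[use_case.tier] - use_case.implemented_controls

# A medium-tier use case with only logging in place has two open gaps.
chatbot = AIUseCase("customer chatbot", "medium", {"usage_logging"})
print(sorted(policy_gaps(chatbot)))
```

A periodic report of `policy_gaps` across all use cases is one way to make a policy “demonstrably implemented” rather than aspirational: the record shows gaps being identified and closed over time.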

Known risks, evolving systems

Negligence liability extends only to harms that are foreseeable. Plaintiffs can increasingly point to widely recognized categories of AI risk — such as bias, hallucinations, data drift, misuse and overreliance — as evidence that harm of the general type was foreseeable, even if the precise outcome was not.

For courts, the central question is often not whether a particular outcome could have been predicted in advance but whether reasonable actors should have anticipated the relevant risk category and taken proportionate steps to mitigate it. As a result, documentation matters. Risk assessments, testing protocols, monitoring practices and incident-response procedures may all play a critical role in evaluating foreseeability.

No negligence claim can succeed without proof of causation, and this element may prove the most challenging for plaintiffs in AI cases. Model behavior can be opaque, outputs may vary based on prompts and context, and multiple human actors often intervene between system output and the alleged harm. Model updates, retraining or version changes can further complicate efforts to identify which system caused a particular injury.

Defendants, for their part, may attempt to push liability upstream or downstream, emphasizing their own lack of control, the absence of a direct relationship with the plaintiff or intervening human decisions. Traceability becomes a strategic asset. Version control, audit trails, documentation of human review and records of incident detection and remediation can all influence whether causation arguments are resolved early or survive into costly discovery.

The role of courts & the risk of inconsistent outcomes

As AI systems continue to evolve, courts’ application of negligence principles will develop alongside them. Courts draw on existing precedent in their own jurisdictions (or suitable analogies from similar cases), which may take the law in different directions from one jurisdiction to the next. Some may treat downstream misuse or overreliance as an intervening cause that breaks the causal chain; others may view such conduct as foreseeable, particularly where warnings or safeguards were inadequate. What was unforeseeable one year may be deemed foreseeable the next.

Judicial philosophy and technical familiarity with AI systems will also play a role. Some courts may be reluctant to expand tort liability in the absence of legislative guidance, while others may view tort law as a necessary gap‑filler. At the same time, plaintiffs are unlikely to rely on negligence alone, instead pairing it with claims under state AI laws, consumer protection statutes, product liability theories or other available causes of action. The result will be increased fragmentation, forum shopping and inconsistent approaches across jurisdictions.


Elizabeth Alice “Liz” Och

Elizabeth Alice “Liz” Och is of counsel in the litigation, arbitration and employment practice at Hogan Lovells in Denver. She counsels clients through every stage of litigation, from early case assessment through trial. Her experience spans matters involving federal statutes, tort claims, class actions, government investigations and regulatory compliance.


© 2026 Corporate Compliance Insights
