Corporate Compliance Insights

The Rising Tide of AI-Washing Cases in Securities Fraud Litigation

Opendoor algorithm couldn’t adjust to changing conditions; Upstart model didn’t respond dynamically to macroeconomic changes — both faced fraud claims

by James Christie and Nick Manningham
February 24, 2026
in Opinion, Risk

Public companies have strong incentive to portray themselves as AI leaders, but when the promised AI-driven benefits fail to materialize, stock prices fall and investors bring securities fraud claims. James Christie and Nick Manningham of Labaton Keller Sucharow examine recent cases where companies exaggerated the role AI played in solving problems and detail what transparent AI reporting should look like. 

Over the past few years, the market has been inundated by a wave of claims — some real, some not — by public companies asserting that AI is transforming their industry and that they are uniquely positioned to capitalize on that transformation.

Although AI has enormous potential, it also presents a host of challenges for investors, who often seek out companies that are using AI to improve their operations. But it is frequently unclear which companies are genuinely using AI to improve their business and which are simply riding the hype wave on false promises of AI transformation. 

These exaggerated claims about a company’s AI capabilities have come to be known as AI-washing. What exactly is AI-washing, how can investors tell it apart from legitimate AI use, and how have the SEC and private securities litigation sought to hold companies accountable for it?

What is AI-washing?

Derived from the term “greenwashing,” which describes a public company’s false claims about its supposedly environmentally friendly practices, AI-washing refers to exaggerating or misrepresenting a company’s AI capabilities or the role that AI plays in the company’s products or services. This practice can mislead — and ultimately harm — investors who reasonably rely on a company’s AI representations when making investment decisions.

Indeed, over the past few years, investors have shown a strong interest in companies that claim they use AI to enhance their business operations. As a result, the stock prices of these AI-connected companies have often soared, creating a strong incentive for companies to portray themselves as AI leaders. But when the promised AI-driven benefits fail to materialize, stock prices fall and investors are left holding the bag.

Regulatory risks for public companies

The SEC and DOJ have launched a series of enforcement actions targeting AI-washing, underscoring that AI-washing is a focus for the federal government. Indeed, the SEC has stated plainly that “AI washing hurts investors” because deceptive AI claims can mislead investment decisions, distort risk assessments and improperly attract capital.

The SEC has publicly pursued and settled multiple actions against firms that touted AI capabilities that did not exist. For example, in March 2024, the SEC charged two investment advisers with making false and misleading statements about their purported use of AI, with then-Chair Gary Gensler stating, “if you claim to use AI … you need to ensure that your representations are not false or misleading.” The DOJ has similarly signaled its intent to pursue fraudulent AI-related conduct, including where individuals or entities exploit the allure of AI to induce investments. DOJ officials have framed such AI-washing schemes as not only defrauding investors but also misallocating capital away from legitimate innovation, indicating that AI themes will be embedded in broader securities and fraud prosecutions.

Taken together, the statements and actions from the SEC and DOJ reflect a coordinated enforcement posture in which exaggerated or unsubstantiated AI claims are viewed as modern iterations of well-established fraud principles rather than industry buzzwords exempt from scrutiny.


The emergence of “AI-washing” in private securities fraud class actions

Alongside SEC and DOJ enforcement, investors have also brought private claims against companies that engage in AI-washing in violation of the federal securities laws. Two recent securities fraud cases serve as instructive examples.

Opendoor

The first example occurred in In re Opendoor Technologies Incorporated Securities Litigation (Case No. 2:22-cv-01717-MTL, D. Ariz.). Opendoor is a publicly traded real estate technology company that operates a digital platform and uses a supposedly AI-powered pricing algorithm to make instant offers to purchase homes. The company went public in December 2020. 

According to the plaintiffs, Opendoor’s offering documents contained materially false and misleading statements about the company’s use of AI to gain a competitive advantage. Specifically, the offering documents claimed that Opendoor’s AI-powered pricing algorithm was superior to traditional ways of buying and selling real estate because it priced properties more accurately than traditional human real estate agents and allowed Opendoor to stay profitable across all housing markets by adjusting quickly to fluctuating market conditions. 

The plaintiffs, however, alleged these statements were false because the pricing algorithm could not adjust to changing market conditions and economic cycles; as a result, Opendoor relied on a human-driven process that was neither revolutionary nor unique. The court sustained these allegations, finding falsity adequately pleaded because Opendoor had exaggerated the role AI played in creating its claimed competitive advantage.

Upstart

Another notable example is In re Upstart Holdings, Inc. Securities Litigation (Case No. 22-cv-02935, S.D. Ohio). The complaint contends that Upstart, which markets itself as an AI-driven lending platform, made materially false and misleading statements about its AI credit underwriting models and how they would perform in adverse economic conditions. 

Specifically, Upstart’s public statements promoted its AI underwriting model as being capable of overcoming the pitfalls of traditional credit underwriting, such as FICO scores, because it considered a far greater volume of variables and could respond “dynamically” to changing conditions. There, the court held that statements about the supposed “significant advantage” over FICO scores and its “ability to respond very dynamically to macroeconomic changes” were actionable misstatements in part because the complaint alleged that the AI model “did not provide these verifiable advantages, in general or in times of macroeconomic turbulence.”

Both of these cases highlight that exaggerating AI capabilities can lead to securities fraud liability. In each, the company overstated both the role AI played in solving a previously unsolvable problem and the ability of its AI models to adjust to changing market conditions faster than previous methods. When the purportedly superior AI models failed to react accurately to changing economic conditions, both companies suffered significant stock price declines, and investors bore the losses as the truth about each company’s AI capabilities came to light. 

The lesson for investors is simple: If a company claims that adding AI miraculously solves a previously unsolvable problem, and it sounds too good to be true, it probably is. In these situations, investors should closely monitor all of the company’s disclosures to make an informed decision on whether the AI claims are truthful or exaggerated.

What transparent AI reporting should look like

The SEC has been considering whether to push companies toward more robust, standardized AI-related disclosures. For example, in December 2025, the Investor Advisory Committee (IAC) — a committee created to advise the SEC on regulatory priorities and initiatives to protect investor interests — voted to advance a recommendation that the SEC issue guidance requiring public companies to disclose information about the impact of AI on their businesses. The IAC cited a “lack of consistency” in contemporary AI disclosures, which “can be problematic for investors seeking clear and comparable information.” We agree.

At a minimum, companies should describe with reasonable specificity how AI systems are used in their business, distinguishing between automated decision-making, decision-support tools and processes that remain primarily human-driven. Companies should use precise language and should avoid vague or exaggerated claims that could mislead investors, such as “our AI capabilities give us a significant competitive advantage.” Similarly, public companies should articulate the scope and maturity of their AI deployment, including whether systems are internally developed or licensed from third parties, and avoid conflating experimental initiatives with revenue-generating capabilities. Companies should also have evidence to support claims touting their AI technology, particularly when comparing it to non-AI products performing similar functions.

Moreover, public companies should be transparent about the strengths and limitations of their AI capabilities. Where AI plays a material role in core business functions, companies should also disclose known limitations, assumptions and circumstances under which the technology may perform less effectively, particularly during periods of market stress or atypical conditions. Public companies should also explain to investors what they mean by “AI” to prevent any misunderstanding, as AI can have different meanings in different contexts.

Finally, companies should avoid using boilerplate language in AI-related risk disclosures. Instead, a company’s risk disclosures should be meaningful and tailored to the company’s actual use of AI. This may encompass risks related to data quality, model drift, regulatory uncertainty, human oversight and reliance on historical patterns that may not hold in future environments. Importantly, companies should ensure consistency across all public communications — including earnings calls, investor decks and marketing materials — so that AI-related statements do not present a misleadingly uniform picture of technological certainty.

As regulators and courts have made clear, accurate and balanced AI disclosures not only promote investor confidence but also serve as a critical safeguard against enforcement actions and private litigation alleging AI-washing under the federal securities laws. 

Conclusion

The past several years have seen a marked increase in securities fraud allegations — particularly under Section 10(b) and Rule 10b-5 — grounded in claims of AI-washing. As this technology becomes integral to corporate strategy and investor expectations, plaintiffs and regulators alike are scrutinizing the accuracy and completeness of AI-related disclosures. 

Companies that fail to ground their representations in verifiable facts, make unfounded claims about their technological capabilities or omit material information regarding AI products and strategies increasingly risk robust litigation exposure. The expanding body of case law underscores that while AI is a legitimate driver of market value, representations about that technology must withstand rigorous legal and factual examination to avoid liability under the federal securities laws.


Tags: Artificial Intelligence (AI)

James Christie and Nick Manningham

James Christie is a partner in the New York office of Labaton Keller Sucharow. He focuses on prosecuting complex securities fraud cases on behalf of institutional investors and is currently involved in litigating cases against major US and non-US corporations, such as Estée Lauder, ZoomInfo, Roblox, Lockheed Martin and Regeneron Pharmaceuticals. He is a member of the firm's executive committee and also serves as assistant general counsel and co-chair of the technology committee.
Nick Manningham is of counsel in the New York office of Labaton Keller Sucharow, focusing on litigating securities fraud class actions on behalf of institutional investors. He began his legal career as an assistant corporation counsel in the New York City Law Department, where he represented the City of New York in federal civil rights actions.

Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

© 2026 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2026 Corporate Compliance Insights