Public companies have strong incentive to portray themselves as AI leaders, but when the promised AI-driven benefits fail to materialize, stock prices fall and investors bring securities fraud claims. James Christie and Nick Manningham of Labaton Keller Sucharow examine recent cases where companies exaggerated the role AI played in solving problems and detail what transparent AI reporting should look like.
Over the past few years, the market has been inundated by a wave of claims — some real, some not — by public companies asserting that AI is transforming their industry and that they are uniquely positioned to capitalize on that transformation.
Although AI has enormous potential, it also presents a host of challenges for investors. Investors often seek out companies that are using AI to improve their operations. But it is often unclear to investors which companies are genuinely using AI to improve their business and which are simply riding a hype wave in an attempt to get rich quick on false promises of AI transformation.
These exaggerated claims about a company’s AI capabilities have come to be known as AI-washing. What exactly is AI-washing, how can investors distinguish it from legitimate AI use, and how have the SEC and private securities litigants sought to hold companies accountable for it?
What is AI-washing
Derived from the term “greenwashing,” which describes a public company’s false claims about its supposedly environmentally friendly practices, AI-washing refers to exaggerating or misrepresenting a company’s AI capabilities or the role that AI plays in the company’s products or services. This practice can mislead — and ultimately harm — investors who reasonably rely on a company’s AI representations when making investment decisions.
Indeed, over the past few years, investors have shown a strong interest in companies that claim they use AI to enhance their business operations. As a result, the stock prices of these AI-connected companies have often soared, creating a strong incentive for companies to portray themselves as AI leaders. But when the promised AI-driven benefits fail to materialize, stock prices fall and investors are left holding the bag.
Regulatory risks for public companies
The SEC and DOJ have launched a series of enforcement actions targeting AI-washing, underscoring that the practice is an enforcement priority for the federal government. Indeed, the SEC has stated plainly that “AI washing hurts investors” because deceptive AI claims can mislead investment decisions, distort risk assessments and improperly attract capital.
The SEC has publicly pursued and settled multiple actions against firms that touted AI capabilities that did not exist. For example, in March 2024, the SEC charged two investment advisers for making false and misleading statements about their purported use of AI, with then-Chairman Gary Gensler stating, “if you claim to use AI … you need to ensure that your representations are not false or misleading.” The DOJ has similarly signaled its intent to pursue fraudulent AI-related conduct, including where individuals or entities exploit the allure of AI to induce investments. DOJ officials have framed such AI-washing schemes as not only defrauding investors but also misallocating capital from legitimate innovation, indicating that AI themes will be embedded in broader securities and fraud prosecutions.
Taken together, the statements and actions from the SEC and DOJ reflect a coordinated enforcement posture in which exaggerated or unsubstantiated AI claims are viewed as modern iterations of well-established fraud principles rather than industry buzzwords exempt from scrutiny.
The emergence of “AI-washing” in private securities fraud class actions
Working alongside the SEC and DOJ, investors have also brought private claims against companies that engage in AI-washing in violation of the federal securities laws. Two recent securities fraud cases are instructive.
Opendoor
The first example occurred in In re Opendoor Technologies Incorporated Securities Litigation (Case No. 2:22-cv-01717-MTL, D. Ariz.). Opendoor is a publicly traded real estate technology company that operates a digital platform and uses a supposedly AI-powered pricing algorithm to make instant offers to purchase homes. The company went public in December 2020.
According to the plaintiffs, Opendoor’s offering documents contained materially false and misleading statements about the company’s use of AI to gain a competitive advantage. Specifically, the offering documents claimed that Opendoor’s AI-powered pricing algorithm was superior to traditional ways of buying and selling real estate because it priced properties more accurately than traditional human real estate agents and allowed Opendoor to stay profitable across all housing markets by adjusting quickly to fluctuating market conditions.
The plaintiffs, however, alleged these statements were false because the pricing algorithm could not adjust to changing market conditions and economic cycles, and as a result, Opendoor relied on a human-driven process that was neither revolutionary nor unique. The court sustained these allegations, finding that the challenged AI-related statements were adequately alleged to be false because Opendoor had exaggerated the extent of AI’s role in creating its claimed competitive advantage.
Upstart
Another notable example is In re Upstart Holdings, Inc. Securities Litigation (Case No. 22-cv-02935, S.D. Ohio). The complaint contends that Upstart, which markets itself as an AI-driven lending platform, made materially false and misleading statements about its AI credit underwriting models and how they would perform in adverse economic conditions.
Specifically, Upstart’s public statements promoted its AI underwriting model as capable of overcoming the pitfalls of traditional credit underwriting, such as FICO scores, because it considered a far greater volume of variables and could respond “dynamically” to changing conditions. The court held that statements about the model’s supposed “significant advantage” over FICO scores and its “ability to respond very dynamically to macroeconomic changes” were actionable misstatements in part because the complaint alleged that the AI model “did not provide these verifiable advantages, in general or in times of macroeconomic turbulence.”
Both of these cases highlight that exaggerating AI capabilities can lead to securities fraud liability. In each case, the company overstated the role AI played in solving a previously unsolvable problem and claimed its AI models could adjust to changing market conditions faster than previous methods. Neither claim held up: both companies suffered significant stock price declines when their purportedly superior AI models failed to react accurately to changing economic conditions, and investors incurred losses as the truth about each company’s AI capabilities came to light.
The lesson for investors is simple: If a company claims that adding AI miraculously solves a previously unsolvable problem, and it sounds too good to be true, it probably is. In these situations, investors should closely monitor all of the company’s disclosures to make an informed decision about whether the AI claims are truthful or exaggerated.
What transparent AI reporting should look like
The SEC has been considering whether to push companies toward more robust, standardized AI-related disclosures. For example, in December 2025, the Investor Advisory Committee (IAC) — a committee created to advise the SEC on regulatory priorities and initiatives to protect investor interests — voted to advance a recommendation that the SEC issue guidance requiring public companies to disclose information about the impact of AI on their businesses. The IAC cited a “lack of consistency” in contemporary AI disclosures, which “can be problematic for investors seeking clear and comparable information.” We agree.
At a minimum, companies should describe with reasonable specificity how AI systems are used in their business, distinguishing between automated decision-making, decision-support tools and processes that remain primarily human-driven. Companies should use precise language and should avoid vague or exaggerated claims that could mislead investors, such as “our AI capabilities give us a significant competitive advantage.” Similarly, public companies should articulate the scope and maturity of their AI deployment, including whether systems are internally developed or third-party licensed, and avoid conflating experimental initiatives with revenue-generating capabilities. The company should also have evidence to support claims touting their AI technology, particularly when comparing it to non-AI products performing similar functions.
Moreover, public companies should be transparent about the strengths and limitations of their AI capabilities. Where AI plays a material role in core business functions, companies should also disclose known limitations, assumptions and circumstances under which the technology may perform less effectively, particularly during periods of market stress or atypical conditions. Public companies should also explain to investors what they mean by “AI” to prevent any misunderstanding, as AI can have different meanings in different contexts.
Finally, companies should avoid using boilerplate language in AI-related risk disclosures. Instead, a company’s risk disclosures should be meaningful and tailored to the company’s actual use of AI. This may encompass risks related to data quality, model drift, regulatory uncertainty, human oversight and reliance on historical patterns that may not hold in future environments. Importantly, companies should ensure consistency across all public communications — including earnings calls, investor decks and marketing materials — so that AI-related statements do not present a misleading picture of technological certainty.
As regulators and courts have made clear, accurate and balanced AI disclosures not only promote investor confidence but also serve as a critical safeguard against enforcement actions and private litigation alleging AI-washing under the federal securities laws.
Conclusion
The past several years have seen a marked increase in securities fraud allegations — particularly under Section 10(b) and Rule 10b-5 — grounded in claims of “AI-washing.” As this technology becomes integral to corporate strategy and investor expectations, plaintiffs and regulators alike are scrutinizing the accuracy and completeness of AI-related disclosures.
Companies that fail to ground their representations in verifiable facts, make unfounded claims about their technological capabilities or omit material information regarding AI products and strategies increasingly risk robust litigation exposure. The expanding body of case law underscores that while AI is a legitimate driver of market value, representations about that technology must withstand rigorous legal and factual examination to avoid liability under the federal securities laws.


James Christie
Nick Manningham