Corporate Compliance Insights

Will AI Change FinServ Regulation? Here’s What History Tells Us.

Regulators’ actions concerning AI in financial services are likely to increase in scope and frequency

by Hollie Mason and Ryan Murphy
April 20, 2026
in Financial Services

Securities regulators aren’t creating new rules to govern financial services firms’ use of AI — yet. But, as Hollie Mason and Ryan Murphy of consultancy Stout explain, history shows old rules can still apply to new technology.

The use of emerging AI technologies, such as generative AI, machine learning and large language models (LLMs), is becoming more commonplace in financial services, creating new compliance and operational considerations for both AI buyers and users in the US and abroad.

While adapting to AI has become something of a necessity, securities industry regulators such as FINRA and the SEC are not yet responding with targeted rulemaking. Instead, they are reminding broker-dealers and other industry participants that existing regulations are technology-neutral and still apply.

Regulators’ current approach to governing AI activities is consistent with prior technological advancements in the industry and predictive of compliance and operational challenges to come.

The evolution of AI regulation

Rulemaking

In 2023, the SEC proposed a new rule addressing AI-induced conflicts of interest. To date, this remains the only attempt by securities industry regulators to implement rules specifically governing firms’ use of AI. The proposal has since been withdrawn after drawing mixed reviews from several SEC commissioners and industry commenters.

While the regulation is unlikely to resurface, one takeaway is notable: regulators and industry participants have since taken the position that current rules and regulations adequately address evolving AI compliance and operational questions, even though the SEC’s 2023 proposal seemed to suggest some ambiguity on that point.

State regulators, on the other hand, are taking it upon themselves to enact targeted and comprehensive laws governing AI usage. For example, California, Texas and Colorado have passed comprehensive AI legislation, similar to the European Union’s approach. Numerous other states have proposed or enacted more limited, AI-related legislation focused on consumer privacy, deceptive media, fair use of protected works and general disclosure requirements in instances where consumers interact with AI. While these laws are not specifically tailored to the financial services industry, many may have consumer rights implications in states in which firms do business.

Regulatory actions

A review of past regulatory actions shows a general focus on ensuring that firms have accurately disclosed AI relationships, risks and usage, and that firms using AI tools in place of more manual internal processes maintain adequate human intervention and supervision.

For example, in March 2025, in SEC v. Rimar Capital USA, the SEC claimed the respondents raised funds via false promises about the firm’s use of AI for automated trading. This is one of only a handful of enforcement actions the SEC has brought against companies for what has come to be called “AI washing.” FINRA has also taken some AI-centric disciplinary actions: a 2024 action specifically mentioned AI, involving a broker-dealer’s implementation of a flawed machine learning program designed to assist in its compliance with AML requirements.

As firms continue to leverage AI technology, however, regulatory actions will likely broaden in scope and increase in frequency. In its most recent examination priorities, the SEC indicated increased scrutiny of firm policies and procedures related to using and monitoring AI. Similarly, in its 2026 annual report, FINRA emphasized a focus on AI testing and monitoring. One may speculate about how far this will go in the years ahead, but regulatory leadership, irrespective of party affiliation, has consistently committed to using and overseeing AI, especially as it pertains to the detection and prevention of fraud.

As SEC Chairman Paul Atkins said at an AI roundtable in March, “In short, while the mechanisms of fraud may change, our obligation does not. The commission’s mandate to protect investors is technology neutral. And misconduct remains misconduct, regardless of the medium.”

In other words, AI is likely to remain a regular focus despite any shift in priorities or changing of the guard.

History reveals US securities industry’s approach

As with many other technological advancements in the securities industry, US regulators and trade associations maintain that relevant industry rules are technology-neutral and adequately govern AI activities. History suggests this technology-neutral approach can itself create regulatory risk.

The financial services industry is no stranger to industry-changing technology being met with a technology-neutral regulatory response, complete with reminders and guidance about how existing rules apply. Electronic trading, electronic communications with the public, cloud computing, high-frequency trading, alternative trading platforms and robo-advisers all drew a similar regulatory approach.

Consider the case of electronic communications with the public. When firms and customers began communicating via email, FINRA issued a notice reminding firms that recordkeeping rules were technology agnostic and governed changes in communications. Despite occasional regulatory notices issued throughout the latter half of the 1990s, real clarity concerning how these rules applied to electronic communications and technologies would not arrive until regulators began related examinations and enforcement proceedings.

In the early 2000s, in a joint initiative among the SEC, the National Association of Securities Dealers and the New York Stock Exchange, multiple firms were fined for an array of failures in how they supervised and maintained records related to electronic communications. The SEC reported failures to preserve or maintain electronic communication records and to establish procedures ensuring compliance with these requirements. Firms that did preserve records reported a wide range of methodologies, from disaster recovery tapes to hard drives on personal computers, each of which presented additional risk management concerns involving inadvertent destruction, poor organization and inadequate policies to ensure records were kept in accordance with regulation. Building on prior regulatory notices and reminders, the SEC’s enforcement activities brought clarity to how technology-neutral rules should be applied to new communication technologies.

This regulatory approach is familiar and understandable, given that technology often evolves faster than regulation. It also acknowledges that new technologies are rarely designed solely for, or used exclusively by, the securities industry and may be regulated by other industries or bodies, creating a measure of undefined space between regulatory jurisdiction and the advancement, availability and usage of technology. We are still relatively early in the ramp-up of AI in the securities industry, particularly for public-facing capabilities, but if history is any indication, regulators will soon be identifying problematic behaviors taking place in that undefined and uncharted space.

This “technology-neutral” stance has a long history of fleshing out rules through examination and enforcement activity. With the uptick in AI usage by firms and the mentions of AI in recently released examination priorities, AI-focused enforcement will likely increase soon. Firms that prefer a proactive approach to regulatory compliance should place increased focus on risk-based planning, particularly where AI technology is driven by customers’ identity information or other confidential or restricted information.


Risk management and compliance considerations

Whether a firm is seeking to dip its toe into AI or to expand and innovate using generative AI technology, its processes should begin with input from key stakeholders in related business units, such as cybersecurity professionals, AML officers and data privacy officers. Because true expertise in AI systems may be a limited resource, firms should ensure qualified individuals are in place to opine on implementation and serve in oversight capacities. One thing securities regulators have been clear about when it comes to AI is that human escalation points are necessary for AI-driven processes.

Other risk-based considerations may also apply:

  • Firms should specifically define what they mean by AI. What systems, processes and technology are included in your definition? Involve subject matter experts to ensure policies do not rely on overly broad and potentially incorrect definitions of AI and that policies do not haphazardly include non-AI technology without distinction.
  • Consider identifying when any AI tools can be utilized by employees. Whether firms address employee use of AI by business unit in desktop procedures or written supervisory procedures or by developing an enterprise-wide AI policy document, they should provide specific guidance and examples of relevant systems and circumstances in which AI can or cannot be used.
  • Firms should prioritize training and supervision so that employees are clear about what tools are authorized and for what purpose. Employees should also be clear about how to escalate or report unauthorized uses and be able to explain to regulators how AI tools facilitate their job responsibilities.
  • Firms must consider a control framework that denies access to any AI technology found to be out of compliance with policy or deemed restricted given an employee’s role or function. If a firm deems use of a particular AI provider unacceptable, ensure steps are taken to deny employees access to it at their workstations.
  • Firms should continually conduct and document targeted risk assessments and ensure emerging changes remain part of ongoing audit and compliance testing. This could include oversight and ongoing testing concerning employee access and ongoing utilization of AI, with interest in detecting unsupervised or unapproved use. Firms could also document decisions concerning which employees and teams are permitted access to specific AI tools and for what purpose. Like other systems or data access protocols, these decisions should be reviewed periodically. Records related to permitted users should be made available for inspection. 
  • Firms should document determinations concerning the applicability of AI laws in the other countries where they do business. Be sure to clearly address how the firm evaluates, tests and supervises any third-party AI technology. This includes understanding how new or existing technology works, what data is being targeted, used or stored and who has access to it. As existing vendors begin to incorporate AI solutions, ensure the firm’s vendor risk management program accounts for any such change to products or services.
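As one way to picture the control-framework and audit-trail points above, here is a minimal, hypothetical sketch of a role-based allowlist for AI tools that records every access decision for later inspection. All role names, tool names and the policy table are invented for illustration; a real deployment would integrate with a firm’s identity, entitlement and archival systems rather than an in-memory dictionary.

```python
# Hypothetical role-based allowlist for AI tools. Roles, tools and the
# policy table below are illustrative, not any firm's actual policy.
APPROVED_TOOLS = {
    "research_analyst": {"summarizer", "doc_search"},
    "compliance_officer": {"summarizer", "doc_search", "aml_screening"},
    "retail_broker": {"summarizer"},
}

def is_access_permitted(role: str, tool: str) -> bool:
    """Permit a tool only if it is on the approved list for the role;
    unknown roles default to no access."""
    return tool in APPROVED_TOOLS.get(role, set())

def request_tool(role: str, tool: str, audit_log: list) -> bool:
    """Gate access and record every decision (permitted or denied),
    mirroring the documentation and inspection expectations above."""
    permitted = is_access_permitted(role, tool)
    audit_log.append({"role": role, "tool": tool, "permitted": permitted})
    return permitted
```

The deny-by-default posture (an unrecognized role gets an empty set) reflects the principle that access to unapproved AI technology should be blocked rather than merely discouraged, and the append-only log supports the kind of periodic review and regulator inspection the checklist describes.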

The future of AI accountability

A time may come when regulators and customers start asking whether a firm’s lack of AI technology makes it more at risk for regulatory failures. This will be a time when AI technology evolves from a “nice to have” to a necessary expense firms must budget for to optimize compliance and remain viable.

Imagine a brokerage firm whose customers could not communicate with it electronically because the firm found the implementation costs of supervisory and archival systems intolerable. The premise of a firm refusing to incorporate electronic communications into its service model due to compliance costs seems a bit absurd to 2026 eyes, but was it so absurd in 1995? Back then, under the same “technology-neutral” regulatory messaging, firms began seeking to facilitate communication and delivery of information electronically rather than through the postal service. Failure to onboard the necessary tools and incorporate supervisory and record-keeping solutions would conceivably have hindered a firm’s ability to permit electronic communications with its customers, leading, in turn, to an outdated service model and, ultimately, customer dissatisfaction.

As technology advances, financial services firms may want to invest in maintaining operational awareness, considering not only how AI could be useful but also how failing to advance may affect their ability to meet customers’ expectations, compete and manage risk, especially where such advancements make regulatory compliance more efficient and expansive.

AI technology will certainly change financial services regulation, but for the foreseeable future those changes seem likely to manifest as targeted reports, reminders about how existing rules may apply to AI activities and guidance delivered through regulatory examinations, rather than as new AI-targeted rules. Firms may want to take a proactive approach to integrating AI into their business models and involve AI specialists in compliance and risk processes.

Hollie Mason and Ryan Murphy


Hollie Mason is managing director of the Chicago office of consultancy Stout. She has extensive experience in the financial services industry, including presenting multifaceted securities cases before arbitration panels, courts and regulatory bodies; determining compliance and causation; engaging in rulemaking, interpretation, and enforcement; conducting regulatory investigations; and managing regulatory risks.
Ryan Murphy is a senior manager in the Washington, D.C., office of consultancy Stout. A certified fraud examiner, he has extensive experience in the financial services industry developing and testing brokerage industry compliance and risk management systems.

© 2026 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2026 Corporate Compliance Insights