Securities regulators aren’t creating new rules to govern financial services firms’ use of AI — yet. But, as Hollie Mason and Ryan Murphy of consultancy Stout explain, history shows old rules can still apply to new technology.
The use of emerging AI technologies, such as generative AI, machine learning and large language models (LLMs), is becoming more commonplace in financial services, creating new compliance and operational considerations for both AI buyers and users in the US and abroad.
While adapting to AI has become something of a necessity, securities industry regulators, such as FINRA and the SEC, are not yet responding with targeted rulemaking but are instead reminding broker-dealers and other industry participants of the applicability and neutrality of current regulations.
Regulators’ current approach to governing AI activities is consistent with prior technological advancements in the industry and predictive of compliance and operational challenges to come.
The evolution of AI regulation
Rulemaking
In 2023, the SEC proposed a new rule addressing AI-induced conflicts of interest. To date, this has been the only attempt by securities industry-specific regulators to implement rules concerning firms’ uses of AI. The proposal has since been withdrawn after receiving mixed reviews from several SEC commissioners and industry commenters.
While the regulation is unlikely to resurface, an interesting takeaway is that industry regulators and participants have since taken the position that current rules and regulations adequately address evolving AI compliance and operational advancements, despite the ambiguity the SEC’s 2023 proposal seemed to project.
State regulators, on the other hand, are taking it upon themselves to enact targeted and comprehensive laws governing AI usage. For example, California, Texas and Colorado have passed comprehensive AI legislation, similar to the European Union’s approach. Numerous other states have proposed or enacted more limited, AI-related legislation focused on consumer privacy, deceptive media, fair use of protected works and general disclosure requirements in instances where consumers interact with AI. While these laws are not specifically tailored to the financial services industry, many may have consumer rights implications in states in which firms do business.
Regulatory actions
A review of past regulatory actions shows a general focus on ensuring that firms have accurately disclosed AI relationships, risks and usage and that firms using AI tools in place of more manual internal processes maintain adequate human intervention and supervision.
For example, in March 2025 in SEC v. Rimar Capital USA, the SEC claimed the respondents raised funds via false promises about the firm’s use of AI for automated trading. This is one of only a handful of enforcement actions the SEC has brought against companies for what is being referred to as “AI washing.” FINRA has also taken some AI-centric disciplinary actions. In 2024, one FINRA action specifically mentioned AI, involving a broker-dealer’s implementation of a flawed machine learning program designed to assist its compliance with AML requirements.
As firms continue to leverage AI technology, however, regulatory actions will likely broaden in scope and increase in frequency. In its most recent examination priorities, the SEC indicated increased scrutiny of firm policies and procedures related to using and monitoring AI. Similarly, in its 2026 annual report, FINRA emphasized a focus on AI testing and monitoring. One may speculate as to what extent this will continue in the years ahead, but regulatory leadership — irrespective of party affiliation — has consistently committed to utilizing and overseeing AI, especially as it pertains to the detection and prevention of fraud.
As SEC Chairman Paul Atkins said at an AI roundtable in March, “In short, while the mechanisms of fraud may change, our obligation does not. The commission’s mandate to protect investors is technology neutral. And misconduct remains misconduct, regardless of the medium.”
In other words, AI is likely to remain a regular focus despite any shifting of priorities or changing of the guard.
History reveals US securities industry’s approach
As with many other technological advancements in the securities industry, US regulators and trade associations maintain that relevant industry rules are technology-neutral and adequately govern AI activities. History shows, however, that this technology-neutral approach can create regulatory risk.
The financial services industry is no stranger to industry-changing advancements in technology being met with a technology-neutral approach from regulators, complete with reminders and guidance about the application of existing rules. Advancements ranging from electronic trading, electronic communications with the public and cloud computing to high-frequency trading, alternative trading platforms and robo-advisers have all prompted a similar regulatory approach.
Consider the case of electronic communications with the public. When firms and customers began communicating via email, FINRA issued a notice reminding firms that recordkeeping rules were technology agnostic and governed changes in communications. Despite occasional regulatory notices issued throughout the latter half of the 1990s, real clarity concerning how these rules applied to electronic communications and technologies would not arrive until regulators began related examinations and enforcement proceedings.
In the early 2000s, in a joint initiative among the SEC, the National Association of Securities Dealers and the New York Stock Exchange, multiple firms were fined for an array of failures in how they supervised and maintained records related to electronic communications. The SEC reported failures to preserve or maintain electronic communication records and to establish procedures ensuring compliance with these requirements. Firms that did preserve records reported a wide range of methodologies, from disaster recovery tapes to hard drives on personal computers, each of which presented additional risk management concerns involving inadvertent destruction, poor organization and inadequate policies to ensure records were kept in accordance with regulation. Together with prior regulatory notices and reminders, the SEC’s enforcement activities brought clarity to how technology-neutral rules should be applied to technological advancements in communication activities.
This regulatory approach is familiar and understandable given that technology often evolves faster than regulation. It is also predictive: new technologies are often not designed solely for, or used exclusively by, the securities industry and may be regulated by other industries or regulatory bodies, creating a measure of undefined space between regulatory jurisdiction and the advancement, availability and usage of technology. We are still somewhat early in the ramp-up phase of AI in the securities industry, particularly with public-facing capabilities, but if history is any indication of future risks, regulators will soon be identifying problematic behaviors taking place within undefined and uncharted spaces.
This “technology-neutral” stance has a long history of producing rules through examination and enforcement activities. With the uptick in AI usage by firms and the mentions of AI in recently released examination priorities, AI-focused regulatory enforcement will likely increase soon. Firms that prefer a more proactive approach to regulatory compliance should place increased focus on risk-based planning, particularly where AI technology is driven by customers’ identity information or other confidential or restricted information.
Risk management and compliance considerations
Whether a firm is seeking to dip its toe into AI or to expand and innovate using generative AI technology, its processes should begin with seeking and obtaining input from key stakeholders in related business units, such as cybersecurity professionals, AML officers and data privacy officers. As true expertise on AI systems may be a limited resource, firms should ensure qualified individuals are in place to opine on implementation and serve in oversight capacities. One thing securities regulators have been clear about when it comes to AI is that human escalation points are necessary for AI-driven processes.
Other risk-based considerations could be applicable:
- Firms should specifically define what they mean by AI. What systems, processes and technology are included in your definition? Involve subject matter experts to ensure policies do not rely on overly broad and potentially incorrect definitions of AI and that policies do not haphazardly include non-AI technology without distinction.
- Consider identifying when any AI tools can be utilized by employees. Whether firms address employee use of AI by business unit in desktop procedures or written supervisory procedures or by developing an enterprise-wide AI policy document, they should provide specific guidance and examples of relevant systems and circumstances in which AI can or cannot be used.
- Firms should prioritize training and supervision so that employees are clear about what tools are authorized and for what purpose. Employees should also be clear about how to escalate or report unauthorized uses and be able to explain to regulators how AI tools facilitate their job responsibilities.
- Firms must consider a control framework that denies access to any AI technology found to be out of compliance with policy or deemed restricted given an employee’s role or function. If a firm deems use of a particular AI provider unacceptable, ensure steps are taken to deny employees access to it at their workstations.
- Firms should continually conduct and document targeted risk assessments and ensure emerging changes remain part of ongoing audit and compliance testing. This could include oversight and ongoing testing concerning employee access and ongoing utilization of AI, with interest in detecting unsupervised or unapproved use. Firms could also document decisions concerning which employees and teams are permitted access to specific AI tools and for what purpose. Like other systems or data access protocols, these decisions should be reviewed periodically. Records related to permitted users should be made available for inspection.
- Firms should document determinations concerning the applicability of AI policies in other countries in which they do business. Be sure to clearly address how the firm evaluates, tests and supervises any third-party AI technology. This includes understanding how new or existing technology works, what data is being targeted, used or stored and who has access to it. As existing vendors begin to incorporate AI solutions, ensure the firm’s vendor risk management program accounts for any such change to products or services.
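The access-control and audit considerations above can be sketched in code. The following is a minimal, hypothetical illustration, not a prescribed implementation: the role names, tool names and policy structure are all assumptions, and a real firm would back this with its identity-management and records systems rather than in-memory objects.

```python
from datetime import datetime, timezone

# Hypothetical policy mapping employee roles to the AI tools approved for them.
AI_TOOL_POLICY = {
    "research_analyst": {"doc_summarizer"},
    "compliance_officer": {"doc_summarizer", "aml_screening_model"},
}

# In practice, decisions would feed the firm's audit and recordkeeping systems
# so that records of permitted users are available for inspection.
access_log = []

def check_ai_access(role: str, tool: str) -> bool:
    """Return True if the role is approved for the tool; log every decision."""
    allowed = tool in AI_TOOL_POLICY.get(role, set())
    access_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "allowed": allowed,
    })
    return allowed
```

For example, `check_ai_access("research_analyst", "aml_screening_model")` would be denied and the denial logged, supporting the kind of periodic review and detection of unapproved use described above.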
The future of AI accountability
A time may come when regulators and customers start asking whether a firm’s lack of AI technology puts it at greater risk of regulatory failures. That will be the point at which AI technology evolves from a “nice to have” into a necessary expense firms must budget for to optimize compliance and remain viable.
Imagine a brokerage firm that customers could not communicate with electronically because the firm found the implementation costs of supervisory and archival systems prohibitive. The premise of a firm refusing to incorporate electronic communications into its service model due to the costs of compliance seems a bit absurd to 2026 eyes, but was it so absurd in 1995? Back then, under the same “technology-neutral” regulatory messaging, firms began seeking to facilitate communication and delivery of information electronically rather than through the postal service. Failure to onboard the necessary tools and incorporate supervisory and recordkeeping solutions would conceivably have hindered a firm’s ability to permit electronic communications with its customers, leading in turn to an outdated service model and, ultimately, dissatisfaction among customers.
As technology advances, financial services firms may want to invest in maintaining operational awareness, considering not only how AI could be useful but also how failing to advance may affect their ability to meet customers’ expectations, compete and avoid risk, particularly when such advancements make regulatory compliance more efficient and expansive.
AI technology will certainly change financial services regulation, but for the foreseeable future, these changes seem likely to manifest as the result of securities industry regulators’ targeted reports, reminders about how existing rules may apply to AI activities and guidance via regulatory examinations, rather than through new AI-targeted rules. Firms may want to take a proactive approach to integrating AI into their business models and involve AI specialists in compliance and risk processes.


Hollie Mason
Ryan Murphy 





