When AI handles the drafting, financial advisers produce more — and the supervision frameworks most firms have in place were built for a fraction of that volume. MirrorWeb’s Jamie Hoyle looks at where the math stops working under FINRA Rule 3110 and what firms should be examining in their written supervisory procedures as a result.
Financial advisers are using AI tools to draft client communications, create presentations and summarize research. Portfolio managers are generating more detailed analysis in less time. This wave of AI adoption in financial services creates real value — faster response times, more thorough documentation, better client service.
The operational reality is more complicated. Financial advisers who once sent carefully crafted client emails at a manageable pace now produce far more because AI handles the drafting work. Marketing output has multiplied accordingly, and compliance teams didn’t grow to match. The promised time savings from AI haven’t freed up capacity for more thorough review; they’ve just raised output expectations across the organization.
AI-generated content overwhelms FINRA and SEC compliance
When employees can draft content faster, they produce more of it. Meanwhile, the buffer time that used to exist between drafting and review has compressed or disappeared entirely.
This creates specific challenges across both surveillance and supervision functions under FINRA and SEC requirements. FINRA Rule 3110 requires firms to establish procedures for reviewing correspondence and internal communications through ongoing surveillance, while also mandating supervision of marketing materials and public communications before distribution. Sampling rates that provided adequate coverage at previous volumes may no longer be sufficient as output multiplies. Similarly, compliance teams reviewing marketing materials face dramatically higher submission volumes without additional capacity.
The accuracy problem compounds this difficulty across both surveillance and supervision. When an adviser drafts an email manually, they (in theory) think through every claim and figure. When AI generates content and the adviser edits it, the cognitive process is different. Subtle errors slip through more easily: performance data that sounds authoritative but is fabricated, fund characteristics that were accurate six months ago but no longer are, incomplete regulatory disclosures.
The multiplication effect makes this more concerning: If an AI tool pulls an incorrect statistic into one communication, that same error can propagate across dozens of outputs. Worse, that flawed data may then feed into future AI generations, creating a cascade of related errors. A single wrong number about fund performance, replicated across 40 client emails and then referenced in subsequent marketing materials, creates exponentially more regulatory exposure than one manually drafted error.
The FINRA 3110 gap that AI volume opens up
FINRA Rule 3110 was drafted in a world where human output had natural limits. The rule’s supervision requirements — reviewing correspondence, monitoring internal communications and approving marketing content — assume a volume that compliance teams could reasonably manage with structured sampling and periodic review.
AI breaks that assumption. The rule’s obligations don’t change with output volume, but the capacity to meet them does. A compliance function that sampled 10% of communications and considered that adequate at 500 emails a month faces a different problem entirely when that same team is looking at 2,000.
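To put that capacity gap in concrete terms, here is a minimal sketch, using the hypothetical 500-versus-2,000 figures above, of how a fixed monthly review capacity translates into a shrinking effective sample rate as output grows:

```python
def effective_sample_rate(reviews_per_month: int, messages_per_month: int) -> float:
    """Fraction of messages a team can actually review at a fixed capacity."""
    return min(reviews_per_month / messages_per_month, 1.0)

# A team sampling 10% of 500 emails was doing 50 reviews a month.
capacity = 50

print(effective_sample_rate(capacity, 500))    # 0.1 at pre-AI volume
print(effective_sample_rate(capacity, 2000))   # 0.025 once output quadruples
```

The sampling procedure on paper still says 10%, but unless headcount scales with volume, the coverage examiners actually see has quietly dropped to 2.5%.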
FINRA’s 2024 guidance on AI made the stakes explicit: Existing rules apply regardless of whether firms use AI technology, and firms cannot point to AI adoption as a mitigating factor when examiners find supervision gaps. The obligation to demonstrate reasonable oversight remains, and it has to be met at whatever volume your advisers are now producing.
The specific risk under 3110 is that firms operating on pre-AI supervision frameworks are systematically undersampling. Examiners looking at a firm’s written supervisory procedures will be asking whether the procedures reflect operational reality, and for many firms, the honest answer is that they don’t. Rule 3110 also makes clear that the requirement isn’t just that supervision happens but that it’s documented.
Why explainable AI matters
The solution isn’t banning AI tools or trying to return to slower processes. That approach ignores market reality. Competitors are using these tools, employees expect them, and the productivity gains are significant. What firms need are surveillance approaches that acknowledge current output levels and can prioritize what genuinely warrants human attention, rather than applying uniform sampling that made sense at a fraction of the volume.
Explainability becomes essential in this environment. When a communication is flagged for review or assessed as low risk and not flagged, compliance teams need to be able to explain that decision to examiners. A defensible surveillance process isn’t just one that catches violations; it’s one where the reasoning behind each decision is documented and auditable. As that same FINRA guidance makes clear, the standard for adequate oversight doesn’t lower because technology is involved. The burden of demonstrating a reasonable process remains squarely with the firm.
Black-box systems, whether AI-powered or otherwise, leave firms in a difficult position during examination. If you can’t explain why something was or wasn’t flagged, you can’t demonstrate that your supervision framework was working as intended. That problem is compounded when AI tools are generating the content being monitored; in fact, the need for explainable oversight becomes harder to avoid the more your advisers rely on AI drafting assistance.
The gap won’t close on its own
AI adoption in financial services isn’t slowing down. This is a structural problem, and firms that treat their supervision frameworks as fixed infrastructure, rather than something that needs to evolve alongside how their people actually work, are accumulating regulatory exposure with every communication their advisers send.


Jamie Hoyle is vice president of product for MirrorWeb, a provider of communications archiving and surveillance software.