CCI staff share recent surveys, reports and analysis on risk, compliance, governance, infosec and leadership issues. Share details of your survey with us: editor@corporatecomplianceinsights.com.
Most banks lack a defined AI strategy even as regulatory clarity tops their wish list
Most financial institutions have yet to establish a mature AI strategy, though a majority say regulatory guidance is the single thing that would most advance their efforts, according to a survey of 148 financial institutions by Wolters Kluwer Financial & Corporate Compliance.
The company’s first-quarter survey of banking compliance leaders found that while roughly 32% of institutions surveyed have deployed AI or machine learning technologies into production, just 12% describe their AI strategy as well-defined and resourced. More than half (59%) said regulatory guidance would most help advance their AI strategy, followed by technical training (46%) and industry benchmarks (37%).
Governance gaps compound the challenge. Only 36% of institutions have established internal policies for ethical AI use, while another 34% say such policies are in development. Just 26% expressed full confidence that their AI initiatives align with regulatory requirements, while 48% said they were somewhat confident.
Other key findings:
- Explainability and transparency (28%) and bias and discrimination rank as the most acute regulatory concerns, followed by data privacy (22%) and fair lending (18.2%).
- Only 10% of respondents say they are very prepared to support AI with their existing data infrastructure, while 48% describe themselves as somewhat prepared.
AI adoption near-universal among fraud teams, but headcount and budgets keep climbing
Despite near-universal adoption of AI in fraud prevention and AML workflows, organizations are not scaling back their human teams: 94% plan to add at least one full-time employee in 2026, up from 88% the prior year, according to a global survey of 1,010 fraud, risk and compliance leaders by SEON, a fraud prevention and AML software company.
The survey found that 98% of organizations already integrate AI into daily workflows and 95% express confidence in its effectiveness. Yet 83% expect their fraud and AML budgets to increase in 2026, and the share of leaders who believe fraud losses are not outpacing revenue growth dropped by nearly 40% year over year, a signal of mounting financial pressure even amid widespread AI adoption.
Other key findings:
- Implementation timelines remain slow: Only 10% of organizations go live with new systems in under two weeks, while 38% take one to three months and 24% take four months or more.
- 33% of respondents cite data privacy regulations, such as GDPR and CCPA, as the biggest external force shaping AML strategy, and 78% say decentralized digital identity will become central to fraud and AML operations.
AI tops compliance concerns in financial services
Nearly 7 in 10 financial services compliance professionals globally say AI is the factor most likely to cause compliance issues in 2026, according to a survey of 300 senior regulatory compliance decision-makers across the US, Europe and Asia-Pacific published by eflow Global, a provider of regulatory compliance technology for financial services.
Nearly two-thirds (65%) of respondents also cited increasing regulatory uncertainty as a top concern, followed by geopolitical and economic instability (54%) and digital assets and crypto markets (51%). US respondents ranked regulatory uncertainty first (75%) and AI second (68%), a divergence the report links to personnel changes at the SEC and a broader shift in regulatory approach under the current administration.
Despite recognizing AI as a top risk, firms have been slow to adopt it in trade surveillance. Only 16% of firms globally have fully deployed AI in their trade surveillance operations, while 29% of all firms, including nearly a quarter of US firms (24%), have no formal AI strategy for trade surveillance and no plans to develop one.