Financial institutions face mounting pressure from rising financial crime risk, operational costs and regulatory complexity — and many see agentic AI as the solution. But enthusiasm alone won’t drive success: generative AI pilot projects have failed to deliver ROI at rates approaching 95%, which does not bode well for their agentic AI counterparts. Tata Consultancy Services financial crime subject matter expert Abhishek Bhasin outlines five critical considerations for successful operationalization, from conducting thorough readiness assessments to establishing robust governance frameworks that ensure agentic AI delivers on its transformative promise.
According to a recent Kroll report, more than 70% of surveyed executives expect financial crime risk to increase in 2025, and only one-third of respondents believe their financial crime compliance programs are “very prepared” to address geopolitical issues over the next 12 months. While banks are working on multiple initiatives to enhance their financial crime programs, 57% of surveyed executives believe AI development will benefit their financial crime compliance programs and will therefore be a key focus area in addressing the financial crime compliance landscape.
With increased efficiency pressures, high operational costs, tightened regulatory frameworks and rising high-tech risk typologies, institutions are now looking to agentic AI to address these challenges and deliver large-scale efficiencies with enhanced detection coverage, building efficient and effective fincrime compliance programs.
Agentic AI in fincrime compliance
Unlike traditional machine learning models, agentic AI is not dependent on human prompts or retraining and can therefore play a pivotal role in automating multiple activities across the fincrime compliance value chain through its advanced functions, including:
- Autonomous decision-making: Enabling AI agents to make independent financial crime investigation and detection decisions based on pre-defined frameworks, parameters, logic, typologies and historical financial crime compliance data, with minimal to no human supervision.
- Real-time learning & reinforcement: Processing and analyzing real-time data to conduct complex financial crime investigations and make decisions informed by past experience, prior alert/case dispositions, data behaviors and typologies.
- Multi-agent systems: Building robust multi-AI agents frameworks to achieve the common goal of preventing, investigating and reporting financial crime, leveraging the strengths of different agents to analyze complex scenarios and make autonomous decisions.
- Integration with systems & applications: Integrating with external data service providers and internal systems (KYC, CRM, payments, etc.) to enable comprehensive investigation of alerted transactions and decisions on identifying and reporting suspicious activities and behaviors. Beyond supporting sound independent decisions, these integration points let the system retrain itself on historical records and patterns, improving the overall accuracy of agents' autonomous decision-making.
- Flexibility & communication with investigators: Conducting pre-defined analysis while also offering users real-time, customized, user-driven analysis and navigation through copilot functionality, supporting thorough investigation of generated alerts along with automated narratives, summarization and suspicious activity report (SAR) collation.
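To make the autonomous decision-making and multi-agent ideas above concrete, the following is a minimal illustrative sketch of an alert-triage loop in which specialist agents score an alert and a coordinator either disposes of it autonomously or escalates to a human investigator. All class names, weights and thresholds are hypothetical placeholders, not a reference to any vendor framework or the author's implementation.

```python
# Illustrative sketch only: a minimal multi-agent alert-triage loop.
# All names (KycAgent, PatternAgent, Coordinator) and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float
    counterparty_risk: float  # 0.0 (low) .. 1.0 (high), e.g. from KYC data
    matches_typology: bool    # hit against a pre-defined typology

class KycAgent:
    """Scores counterparty risk using (stubbed) KYC/CRM data."""
    def assess(self, alert: Alert) -> float:
        return alert.counterparty_risk

class PatternAgent:
    """Scores behavioral risk against known typologies."""
    def assess(self, alert: Alert) -> float:
        return 0.9 if alert.matches_typology else 0.2

class Coordinator:
    """Combines agent scores; decides autonomously or escalates to a human."""
    def __init__(self, auto_close: float = 0.3, auto_escalate: float = 0.75):
        self.auto_close = auto_close
        self.auto_escalate = auto_escalate

    def disposition(self, alert: Alert) -> str:
        score = 0.5 * KycAgent().assess(alert) + 0.5 * PatternAgent().assess(alert)
        if score < self.auto_close:
            return "auto-close"
        if score > self.auto_escalate:
            return "escalate-SAR"
        return "human-review"  # ambiguous cases stay with investigators

coordinator = Coordinator()
print(coordinator.disposition(Alert("A-1", 12000.0, 0.95, True)))   # escalate-SAR
print(coordinator.disposition(Alert("A-2", 150.0, 0.10, False)))    # auto-close
```

In a real deployment each agent would call live systems rather than stubbed fields, and the "human-review" band is what keeps investigators in the loop for ambiguous cases.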
Considerations for successful operationalization
A recent Fenergo survey indicated that 93% of financial institutions plan to implement agentic AI within the next two years and 27% of the surveyed executives believe agentic AI can save more than $4 million per year.
However, as the industry enthusiastically looks forward to adopting agentic AI, an MIT study showed a nearly 95% rate of ROI failure of GenAI pilot projects. While that study was focused on generative AI and not necessarily agentic applications, it emphasizes the need for deep analysis and a structured adoption approach.
Institutions, as part of their adoption strategy, should review all important aspects of successful transformation, such as:
Readiness assessment
Readiness assessment is a prerequisite for adopting any new technology within an organization, playing a critical role in aligning the institution’s business, technology, data landscape, problems, challenges and strategic priorities. As part of this assessment, the institution conducts detailed evaluations of:
- Current operating model: The operating model directly reflects the institution’s policies, procedures, technology and people, and thereby helps build a clear understanding of the real problem statements, gaps, priorities and readiness to be addressed through agentic AI adoption.
- Data readiness: Like all other AI technologies, the success of agentic AI is heavily dependent on data input and quality, so the institution’s data maturity and governance standards play a critical role in successful adoption of agentic AI across the fincrime compliance landscape. Readiness assessment helps identify and mitigate gaps in existing data sourcing, governance and transformation, ensuring accurate and consistent data input to AI agents.
- Technology readiness: With the evolving financial crime technology landscape, institutions are upgrading their technology and application stacks, bringing in new technologies and advanced frameworks to enhance and optimize existing fincrime compliance operations. As part of the technology assessment, institutions should evaluate their existing and pipeline implementations to ascertain compatibility, overlaps and scalability. It is imperative for institutions to conduct “buy vs. build” evaluations and make decisions based on the complexity and maturity of existing and target technology architectures.
- People readiness & alignment: While AI deployment aims to reduce human intervention, the end objective of agentic AI adoption is not to replace humans with agents but to empower financial crime compliance teams with additional intelligence and insights, enhancing their ability to identify, report and respond to complex financial crimes. Teams at both the first and second lines of defense should align with the overall objective and plan for agentic AI adoption across the compliance landscape and should collaborate in building and deploying customized AI agents to deliver the desired results.
Use cases
While institutions are planning to embrace agentic AI in their fincrime compliance environment, the core decision of selecting the right use cases plays a vital role in success.
This is a multi-dimensional process that starts with selecting the right functional workstream (KYC, name screening, payment screening & transaction monitoring, etc.) and proceeds to selecting the appropriate subprocesses (detection, investigation, outreach or reporting) within that workstream. As part of their use-case selection process, institutions should consider a combination of these aspects in determining the ideal use cases for agentic AI:
- Criticality, effort, cost and impact
- Complexity of the workstream and subprocesses
- Availability of structured data with clear and concise documentation
- Count of internal and external integrations
- Existing challenges and gaps
- Quantified benefits and strategic alignment
- Regulatory expectations
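One lightweight way to apply the criteria above is a weighted scoring matrix across candidate use cases. The sketch below is purely illustrative: the candidate names, weights and 1–5 scores are hypothetical placeholders an institution would calibrate from its own assessment, not figures from any survey.

```python
# Illustrative sketch: ranking candidate agentic AI use cases against
# weighted selection criteria. All weights and scores are hypothetical.
CRITERIA_WEIGHTS = {
    "impact": 0.25,                      # criticality, cost and benefit
    "data_readiness": 0.20,              # structured data, documentation
    "complexity_inverse": 0.15,          # higher = simpler subprocess
    "integration_burden_inverse": 0.10,  # higher = fewer integrations
    "strategic_alignment": 0.15,
    "regulatory_fit": 0.15,
}

candidates = {
    "TM alert triage": {
        "impact": 5, "data_readiness": 4, "complexity_inverse": 3,
        "integration_burden_inverse": 3, "strategic_alignment": 5, "regulatory_fit": 4,
    },
    "KYC periodic review": {
        "impact": 4, "data_readiness": 3, "complexity_inverse": 4,
        "integration_burden_inverse": 4, "strategic_alignment": 4, "regulatory_fit": 3,
    },
}

def score(use_case: dict) -> float:
    """Weighted sum of 1-5 criterion scores."""
    return sum(CRITERIA_WEIGHTS[c] * use_case[c] for c in CRITERIA_WEIGHTS)

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked[0])  # TM alert triage
```

The value of the exercise is less the final number than forcing explicit, comparable judgments on each dimension before a pilot is funded.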
Convergence with the existing AI framework
While agentic AI technology helps elevate an institution’s fincrime compliance landscape, it is important to align it with existing AI and machine learning frameworks. Agentic AI can replace existing frameworks in some cases, but overall convergence and alignment with them helps drive synergies and improves overall efficacy.
Existing AI/machine learning frameworks act as a bedrock for the success of agentic AI, helping it understand, build upon and reinforce the financial crime landscape across the institution by utilizing:
- Pre-defined rules & processes: Existing AI/ML frameworks comprise various pre-defined rules and processes across financial crime workstreams (KYC, screening and transaction monitoring), which help agentic AI technology understand end-to-end processes and design effective agents that attain optimized results.
- Scenarios, parameters & benchmarks: Customized and out-of-the-box (OOTB) detection scenarios, parameters and benchmarks are key pillars of effective detection. Utilizing these critical details, agentic AI can help improve an institution’s detection frameworks by reducing false positives or introducing auto-closure processes.
- AI & ML frameworks & algorithms: Having adopted AI/ML and GenAI, institutions have developed complex models (including LLMs) and algorithms. Utilizing these existing frameworks and algorithms, agentic AI can take optimization to the next level, helping institutions further improve their financial crime detection, investigation and reporting processes while ensuring data privacy, confidentiality and scalability.
- Data pipelines & ETL components: Data is the main input to all AI/ML frameworks, including agentic AI, so utilizing existing data input frameworks, ETL pipelines and integration touchpoints ensures better efficiency and automation for agentic AI technology across the financial crime value chain.
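As a small illustration of this convergence, an agent can consume an existing detection scenario's parameters and an incumbent model's risk score rather than re-deriving either, auto-closing only when both legacy signals agree. The scenario name, threshold and `legacy_model_score` below are hypothetical stand-ins for whatever rules and models an institution already runs.

```python
# Illustrative sketch: an agent reusing an existing scenario's parameters and
# an incumbent ML model's output instead of re-deriving them. All names and
# thresholds are hypothetical placeholders.
EXISTING_SCENARIO = {
    "name": "rapid-movement-of-funds",  # an already-tuned OOTB scenario
    "threshold_amount": 10000,
    "lookback_days": 7,
}

def agent_precheck(txn: dict, legacy_model_score: float) -> str:
    """Auto-close only when the existing rule and the existing model both
    indicate low risk; everything else enters the full agent workflow."""
    rule_hit = txn["amount"] >= EXISTING_SCENARIO["threshold_amount"]
    if not rule_hit and legacy_model_score < 0.2:
        return "auto-close"   # low risk on both legacy signals
    return "investigate"      # ambiguous or risky: full investigation

print(agent_precheck({"amount": 2500}, legacy_model_score=0.05))  # auto-close
```

Anchoring the agent to already-validated thresholds and model scores is what makes its auto-closures defensible to model risk and audit functions.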
Setting up effective guardrails and controls
Because agentic AI is at a nascent stage, it brings great opportunities for institutions to transform their financial crime landscape; however, to ensure success and fruitful results, institutions should focus on developing clear guardrails, checks and controls in line with business requirements and regulatory compliance. Important factors include:
- Privacy, scalability & resilience
- Data quality & uniformity
- Validation & governance
- Human validation
- Explainability & audit trails
- Regulatory corroboration
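Several of these controls, human validation, explainability and audit trails, can be sketched as a thin wrapper around the agent's output: low-confidence or unexplained decisions are routed to a human, and every outcome is logged. The function name, thresholds and log structure below are hypothetical, shown only to make the pattern concrete.

```python
# Illustrative sketch: a guardrail wrapper enforcing human validation and an
# audit trail around an agent's decision. Names and thresholds are hypothetical.
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def guarded_decision(agent_decision: str, confidence: float,
                     explanation: str, min_confidence: float = 0.8) -> str:
    """Route low-confidence or unexplained decisions to a human reviewer,
    and record every outcome for auditability."""
    needs_human = confidence < min_confidence or not explanation
    final = "human-review" if needs_human else agent_decision
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_decision": agent_decision,
        "confidence": confidence,
        "explanation": explanation,
        "final_disposition": final,
    })
    return final

print(guarded_decision("auto-close", confidence=0.95,
                       explanation="below all thresholds"))   # auto-close
print(guarded_decision("auto-close", confidence=0.55,
                       explanation="weak pattern match"))     # human-review
```

The key property is that no agent decision reaches production without passing the confidence and explainability gates, and every decision, overridden or not, leaves an audit record.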
Phased adoption & change management
After selecting the right FCC use cases and setting up robust guardrails and controls, the institution should conduct relevant pilots and proof-of-value (PoV) assessments to ascertain the efficacy and indicative benefits of the selected use case; if successful, these can be followed by a phased implementation approach. Institutions can design adoption phases based on:
- Geographies, markets and regulations: Institutions can start with jurisdictions that have lighter regulatory requirements and gradually cover those with high regulatory requirements.
- Business lines: Both first and second lines of defense are critical parts of the fincrime compliance landscape, and institutions can adopt a phased implementation approach for each business line, addressing their respective problem statements and objectives.
- Other factors: In addition to the factors listed above, institutions can segregate phases based on applications, funding, dependencies, existing pipelines, etc.
While the adoption of agentic AI is executed in a phased manner, it is critical for the institution to closely monitor and govern each phase to utilize lessons and improvement opportunities across the adoption lifecycle.
As institutions progress toward successful implementation of agentic AI in the financial crime compliance landscape, structured change management and training programs for relevant stakeholders complement each implementation, ensuring proper adoption and benefits in line with the strategic objectives.
Conclusion
Emerging agentic AI technology brings large transformation opportunities to the financial crime industry, elevating institutions’ capabilities in responding to financial crime.
Adoption of AI technologies like generative and agentic AI is inevitable for many institutions and will continue to play a crucial role in making financial crime programs comprehensive and efficient. However, it is imperative that institutions employ cautious adoption strategies that evaluate all aspects of agentic AI technology in alignment with regulatory expectations.