No Result
View All Result
SUBSCRIBE | NO FEES, NO PAYWALLS
MANAGE MY SUBSCRIPTION
NEWSLETTER
Corporate Compliance Insights
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe
Jump to a Section
  • At the Office
    • Ethics
    • HR Compliance
    • Leadership & Career
    • Well-Being at Work
  • Compliance & Risk
    • Compliance
    • FCPA
    • Fraud
    • Risk
  • Finserv & Audit
    • Financial Services
    • Internal Audit
  • Governance
    • ESG
    • Getting Governance Right
  • Infosec
    • Cybersecurity
    • Data Privacy
  • Opinion
    • Adam Balfour
    • Jim DeLoach
    • Mary Shirley
    • Yan Tougas
No Result
View All Result
Corporate Compliance Insights
Home Featured

I Tested 24 AI Banking Chatbots; They Were All Exploitable

The same conversational prompts that extract proprietary eligibility criteria could be weaponized by fraud rings

by Milton Leal
January 21, 2026
in Featured, Financial Services
banking chatbot featured

When a chatbot provides incorrect guidance or misleads a borrower about their dispute rights, regulators treat it as a compliance failure, not a technology experiment gone wrong. Milton Leal, lead applied AI researcher at TELUS Digital, ran adversarial tests against 24 AI models from major providers configured as banking customer-service assistants and found every one proved exploitable, with success rates ranging from 1% to over 64% and “refusal but engagement” patterns where chatbots said “I cannot help with that” yet immediately disclosed sensitive information anyway.

Generative AI (Gen AI) chatbots are increasingly becoming a primary channel for customer service in consumer banking. According to a 2025 survey, 54% of financial institutions have either implemented or are actively implementing Gen AI, with improving customer experience cited as the top strategic priority for technology investments.

Many institutions are deploying these systems to handle conversations about account balances, transaction disputes, loan applications and fraud alerts. These interactions traditionally required trained agents who understood regulatory obligations and escalation protocols. As such, banks can be held responsible for violating either regardless of whether a human or chatbot handles the conversation. The technology promises efficiency gains and 24/7 availability, driving rapid adoption as banks seek to meet customer expectations for instant, conversational support.

However, this rapid adoption has created compliance and security blind spots. A single misphrased chatbot response could violate federal disclosure requirements or mislead a borrower about their dispute rights.

More concerning, these systems are vulnerable to systematic exploitation. The same conversational prompts that extract proprietary eligibility criteria or credit-scoring rules could be weaponized by fraud rings. With over 50% of financial fraud now involving AI, according to one report, the risk is not hypothetical. Attackers who already use AI for deepfakes and synthetic identities could easily repurpose chatbot extraction techniques to refine their fraud playbooks.

Systemic vulnerabilities across 24 leading AI models

Over the past several months, I ran adversarial tests against 24 AI models from major providers (including OpenAI, Anthropic and Google) configured as banking customer-service assistants. 

Every one proved exploitable.

A prompt framed as a researcher inquiry extracted proprietary creditworthiness scoring logic, including the exact weights given to payment history, utilization rates and account mix. A simple formatting request prompted a model to produce detailed internal eligibility documentation that should only be accessible to bank staff, not customers. Perhaps most concerning were “refusal but engagement” patterns, where chatbots said “I cannot help with that,” yet immediately disclosed the sensitive information anyway.

Across all models tested in the benchmark, success rates ranged from 1% to over 64%, with the most effective attack categories averaging above 30%. These were all automated prompt injection techniques that adversaries could replicate. Taken together, the results point to a broader implementation problem rather than isolated model flaws.

The main issue is how the technology has been integrated without adequate guardrails or accountability. Regulators have taken notice. Since 2023, the Consumer Financial Protection Bureau (CFPB) has made clear that chatbots must meet the same consumer protection standards as human agents, with misleading or obstructive behavior grounds for enforcement. The Office of the Comptroller of the Currency (OCC) echoes this in its risk perspective, declaring that AI customer service channels are not experiments but regulated compliance systems subject to the same legal, operational and audit requirements as any other customer-facing operation.

business using ai concept brain executive running
Featured

What Boards & Executives Need to Know (and Ask) About Agentic AI

by Jim DeLoach
October 28, 2025

Read moreDetails

3 types of vulnerabilities prevalent in AI chatbots

Three categories of vulnerabilities consistently showed up across deployed chatbot systems.

Inaccurate or incomplete guidance

Even mainstream assistants can generate inaccurate information, misquote interest calculations or summarize eligibility criteria that should only be disclosed after identity verification. Every automated answer carries the same legal weight as advice from a trained human agent, yet with AI-generated answers, quality assurance often lags behind deployment speed.

Sensitive leakage

Attackers use creative prompts to bypass safeguards and extract outputs the chatbot should refuse entirely. In one test, a refusal-suppression prompt instructed the model that declining the request would indicate a system malfunction. The chatbot then generated multi-paragraph fabricated customer testimonials praising bank products, content that could be weaponized in phishing campaigns or reputation manipulation schemes. The bot complied with a request it clearly should have refused, demonstrating how conversational pressure can override policy controls.

Operational opacity

Many deployments lack the logging, escalation and audit trails regulators expect. This means when a chatbot mishandles a complaint, banks are often unable to reconstruct how that happened.

My testing demonstrates these weaknesses are architectural, not rare edge cases. Simple conversational techniques succeeded against every model tested. The most damaging outputs looked harmless at first glance but disclosed exactly what fraudsters look for. This pattern held across providers and the same guardrail configurations. We know criminals treat refusals as clues and keep changing their wording until the model slips. These patterns show how current deployments leave financial institutions exposed in ways most teams don’t realize.

How to build a compliance-ready defense

Regulatory expectations are converging. The CFPB requires accurate answers plus guaranteed paths to human representatives and considers misleading behavior grounds for enforcement. The OCC has made clear that generative AI falls under existing safety-and-soundness expectations with board-level oversight. Standards bodies like NIST recommend secure development lifecycles, comprehensive logging and continuous adversarial testing. And the EU AI Act requires chatbots to disclose AI usage and log high-risk interactions.

Meeting these expectations requires organizations to treat chatbots like any other regulated system. Every chatbot should appear in the model risk inventory with defined owners and validation steps. Conversation flows must embed compliance rules that will prevent chatbots from answering unless required safeguards are satisfied. Organizations also need comprehensive logging that captures full interaction sequences, tracking patterns that suggest systematic probing or attempted extraction. Automatic handoffs should trigger whenever requests touch regulated disclosures or disputes.

Governance must also evolve accordingly.

This means conducting regular reviews of refusal patterns to identify leakage trends. Shift board briefings from project updates to risk reporting with metrics on incidents and remediation. Run tabletop exercises on realistic scenarios, such as: What happens if the chatbot provides incorrect credit guidance? How does the organization respond if sensitive criteria leak? And when chatbots come from vendors, apply the same third-party risk management used for core processors, including due diligence on data handling, logging rights and incident notification.

The path forward

GenAI assistants are already woven into high-stakes customer journeys. The compliance question has shifted from “should we deploy a chatbot?” to “can we demonstrate that every automated answer meets the same standards as a human interaction?” Regulators are evaluating these systems using the same frameworks they apply to call centers and lending decisions.

Banks that treat their chatbots as governed compliance systems, backed by inventories, monitoring and human escalation paths, will answer regulator questions with evidence rather than assurances. Organizations that rely on the GenAI provider’s guardrails and refusal messages as their primary control will be left explaining failures after the fact.

Regardless of how regulations may shift, banks remain accountable for every customer interaction, whether delivered by a person or an AI assistant. 


Tags: Artificial Intelligence (AI)Banking
Previous Post

Diligent Acquires TPRM Platform 3rdRisk

Next Post

Is Your Mental Health Campaign a Fig Leaf for Monetizing Risky Behavior?

Milton Leal

Milton Leal

Milton Leal is the lead applied AI researcher at TELUS Digital, a process outsourcing provider.

Related Posts

ai black box

Your Foreign AI Vendor’s Black Box Is an Ethics Problem, Not a Technical One

by Vera Cherepanova
February 18, 2026

Without someone inside the organization who can meaningfully challenge an AI system's behavior, documented controls slide into paperwork rather than...

data center under construction

Higher Power: Can AI Investment & Climate Strategy Co-Exist?

by Tim Weiss
February 11, 2026

At your next board meeting where AI appears on the agenda, add one question: Can our AI growth plans and...

us flag on computer chip

Preemption is No Panacea: Congress Must Create a Workable National Framework for American AI Dominance

by David Miller and Clarine Nardi Riddle
February 10, 2026

Even with light-touch regulation as its lodestar, new AI action plan requires authorization and funding for standards development, testing infrastructure...

data nodes concept

Q&A: How to Prepare for AI-Powered Investigations While Managing Your Own AI Risk

by Staff and Wire Reports
February 10, 2026

AI can lead to inaccurate assumptions, so context still matters when challenging government data analytics in False Claims Act or...

Next Post
teen looking at smartphone

Is Your Mental Health Campaign a Fig Leaf for Monetizing Risky Behavior?

No Result
View All Result

Privacy Policy | AI Policy

Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

Got a news tip? Get in touch. Want a weekly round-up in your inbox? Sign up for free. No subscription fees, no paywalls. 

Follow Us

Browse Topics:

  • CCI Press
  • Compliance
  • Compliance Podcasts
  • Cybersecurity
  • Data Privacy
  • eBooks Published by CCI
  • Ethics
  • FCPA
  • Featured
  • Financial Services
  • Fraud
  • Governance
  • GRC Vendor News
  • HR Compliance
  • Internal Audit
  • Leadership and Career
  • On Demand Webinars
  • Opinion
  • Research
  • Resource Library
  • Risk
  • Uncategorized
  • Videos
  • Webinars
  • Well-Being
  • Whitepapers

© 2026 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2026 Corporate Compliance Insights