AI tools are increasingly being deployed to detect fraud and improve compliance. Sophie Luskin, a communications fellow at the law firm Kohn, Kohn & Colapinto, explores what happens when those same algorithms are turned inward.
As corporations and financial institutions increasingly deploy artificial intelligence (AI) compliance and fraud detection algorithms, it is crucial to consider the technology's twofold effect.
It’s a classic double-edged sword: In the right hands, the technology can help detect fraud and bolster compliance, but in the wrong hands, it can snuff out would-be whistleblowers and weaken accountability mechanisms.
Algorithms are already pervasive in our legal and governmental systems: the SEC, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.
Experts foresee two major downsides to the implementation of compliance algorithms: institutions using them to avoid culpability, and institutions using them to track whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it, and the same technology can be turned against employees to monitor for signs of whistleblowing.
Strengths of AI compliance systems
AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.
Vendors promise that AI compliance algorithms operate as follows:
- Real-time detection: AI can analyze vast amounts of data, including financial transactions, communication logs and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
- Pattern recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions or suspicious interactions.
- Efficiency and automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.
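To make the first two bullets concrete, here is a deliberately minimal sketch of the kind of anomaly detection these systems build on. It flags transactions that deviate sharply from typical amounts using a z-score; the threshold and the data are illustrative assumptions, and real compliance platforms use far richer models and many more signals.

```python
# Toy anomaly detector: flag transactions far from the mean.
# The 2.0 z-score threshold is an assumption for illustration,
# not a value any real compliance product prescribes.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard
    deviations from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Ordinary payments with one outsized transfer slipped in.
transactions = [120, 95, 110, 130, 105, 98, 115, 9000]
print(flag_anomalies(transactions))  # flags index 7, the 9000 transfer
```

A production system would score patterns across accounts and time, not single amounts, but the principle — surface statistical outliers for a human analyst to review — is the same.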
Yuktesh Kashyap, associate vice president of data science at Sigmoid, explained in TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management. … AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”
Due diligence and human oversight
Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto, argues that AI compliance algorithms will be an ineffective tool that allows institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by calling it due diligence.
“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. government should not allow these algorithms to be used as a regulatory benchmark.
Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision-making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”
While AI can be used as a powerful tool for identifying fraud, there is still no method for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle. These algorithms should be used in conjunction with human judgment to determine compliance or lack thereof. Due process is needed so that individuals can understand the reasoning behind algorithmic determinations.
The double-edged sword
Darrell West, senior fellow at the Brookings Institution's Center for Technology Innovation, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.
Most office work today, whether remote or in person, is conducted fully online. Employees are required to use company computers and networks to do their jobs, and the data each employee generates passes through those devices and networks. As a result, their privacy rights are questionable at best.
Because of this, whistleblowing will become much harder: Organizations can turn the very technology they implemented to catch fraud toward catching whistleblowers instead. They can monitor employees via capabilities built into everyday workplace tech: cameras, emails, keystroke loggers, online activity logs, download records and more. West urges people to operate under the assumption that employers are monitoring their online activity.
These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.
West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”
If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords indicating that somebody is engaging in whistleblowing.
With AI, companies can monitor specific employees they find problematic and all the information they produce, including keywords that might indicate fraud. Creators of these algorithms promise that their products will soon detect emotion and sentiment as well. AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer them to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.
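The keyword monitoring described above can be crude. As a hypothetical illustration (the watchword list and scoring here are invented, not drawn from any real product), a scanner need only tokenize messages and intersect them with a watchlist:

```python
# Hypothetical sketch of crude keyword monitoring, to illustrate
# how little sophistication the surveillance described above needs.
# The watchword list is invented for this example.
WATCHWORDS = {"sec", "whistleblower", "report", "violation", "attorney"}

def flag_message(text, watchwords=WATCHWORDS):
    """Return any watchwords found in a message, in sorted order."""
    tokens = {t.strip(".,!?;:\"'()") for t in text.lower().split()}
    return sorted(tokens & watchwords)

print(flag_message("I plan to report this violation to the SEC."))
# Flags: report, sec, violation
```

Real monitoring suites layer on sentiment models, recipient analysis and metadata, but even this trivial filter shows why experts say employees should assume nothing typed on a company network is private.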
“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way, and it makes it difficult to transmit confidential information. A whistleblower must, at some point, download information. You will be doing that on a company network, and that is easily detected these days.”
What becomes of the whistleblower then depends on whether the compliance officers serve the company or the public interest; either way, they will hold an extraordinary amount of information about both the company and the whistleblower.
West says organizations should be more transparent about their use of monitoring tools, informing employees about what kind of information they are using, how they are monitoring employees and what kind of software they use.
The importance of whistleblower programs
Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. They ensure that fraud is not only detected but also reported and addressed, protecting taxpayer money and promoting ethical business practices.
If AI algorithms are used to track down whistleblowers, their implementation will hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect in which potential whistleblowers do not act for fear of detection.
Because these AI-driven compliance systems are already being employed in our institutions, experts believe they must have independent oversight for transparency's sake. The software must also be designed to adhere to due process standards.
This article was adapted from a post on Kohn, Kohn & Colapinto’s blog and is used here with permission.