Corporate Compliance Insights

Machine Learning Governance in Financial Services: A New Perspective on Core Principles

These Principles and Features Can Help Financial Institutions Govern Various Uses of Machine Learning

by Sanjukta Dhar
June 23, 2021
in Financial Services, Risk

Financial institutions can leverage machine learning to make an array of functions faster, more accurate and more efficient. But deployment is just the first step. In the end, machine learning governance makes or breaks the efficacy of any system.

In today’s banking and financial services world, we are rebuilding many finance, risk, actuarial, forecasting and macroeconomic models using Python, R and other open-source programming languages. Increasingly, we incorporate machine learning (ML) algorithms into these functions so that a traditional deterministic model becomes a self-learning, self-tuning program, aided by supervised, unsupervised or reinforcement learning methods.

Such emerging technologies mostly lack explicit pre-programmed routines. They feed on a diverse spectrum of datasets, and they evolve continuously by learning from past examples and experience. Naturally, these ML models cannot simply be left alone to derive insights on their own for extended periods. They demand constant oversight to satisfy model control, audit and regulatory compliance requirements.

As more and more rule-based or deterministic models (statistical, financial or quantitative) adopt ML capabilities, we must build controls so that the model’s objectives, construction, data needs, desired performance level and trustworthiness can be measured and appropriately managed in alignment with the company’s risk appetite. Moreover, in an autonomous model management framework, we expect to see early warning indicators or alerts whenever a threshold is breached in any of these areas.
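The early-warning behavior described above can be sketched as a simple threshold monitor. The metric names and limits below are illustrative assumptions, not regulatory standards:

```python
# Minimal early-warning sketch: compare live model metrics against
# governance thresholds and raise alerts on any breach.
# Metric names and limits are illustrative assumptions.

THRESHOLDS = {
    "accuracy": ("min", 0.90),                    # alert if below 0.90
    "population_stability_index": ("max", 0.25),  # alert if drift above 0.25
}

def early_warnings(metrics):
    """Return alert messages for every breached threshold."""
    alerts = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this cycle
        breached = value < limit if direction == "min" else value > limit
        if breached:
            alerts.append(f"{name}={value} breached {direction} limit {limit}")
    return alerts
```

In practice, each alert would feed a model risk dashboard or ticketing workflow rather than a returned list, but the breach logic is the same.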

However, none of these governance initiatives should diminish the pace of AI uptake, innovation and research excellence in model governance. This article covers some key principles underpinning the target machine learning governance framework and some essential components.

Core Principles of Machine Learning Governance

Principle 1: Be Transparent

Transparency has a threefold connotation. First, in an end-user interaction, it is often critical to disclose that the counterparty is a machine learning model rather than a conventional program or a human agent. Second, it involves understanding how a typical ML model is developed, trained, deployed and operated, whether from a developer’s standpoint or from an end user’s. Third, developers should raise general awareness of standard ML algorithms and how they are leveraged for typical financial use cases, such as fair lending, credit decisioning, anomaly detection, customer onboarding, stock price forecasting, fraud management and underwriting.

Principle 2: Be Predictable

This generally refers to understanding the various factors (e.g., data, logic, algorithms) behind a given autonomous decision and their correlation with the outcome. In doing so, one must be careful not to compromise data privacy boundaries. The understanding should be clear enough that end users can challenge the outcome in the case of a valid dispute.
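One lightweight way to make a decision’s driving factors visible, sketched here for a hypothetical linear scoring model (the feature names and weights are invented for illustration):

```python
# Sketch: surfacing the factors behind a decision for a simple linear
# scoring model. Feature names and weights are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score_with_explanation(applicant):
    """Return the score plus each feature's signed contribution to it."""
    contributions = {name: w * applicant[name] for name, w in WEIGHTS.items()}
    score = BIAS + sum(contributions.values())
    return score, contributions
```

For nonlinear models the same idea carries over via attribution techniques such as permutation importance or SHAP values, but the output is the same shape: which factor pushed the outcome, and by how much.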

Principle 3: Reduce Bias

A key objective of an ML model framework should be to reduce societal bias as much as possible. No system should impede the fundamental rights or financial inclusion of the end user. An algorithm that is opaque and makes questionable or discriminatory decisions on the basis of sensitive parameters such as race, ethnicity, religion, national origin or age can quickly draw deeper scrutiny and penalties. Take, for example, the Equal Credit Opportunity Act (ECOA), which mandates that creditors notify applicants of the principal reasons for a credit denial. An ML-based fair lending model must therefore be able to generate an explanation for any decline, and the reason should not be a societal bias embedded in the model through a poor choice of training data.
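Building on a per-feature contribution view, adverse-action commentary might be generated along these lines; the reason wording and feature names are illustrative, not official ECOA reason codes:

```python
# Hypothetical sketch of ECOA-style adverse-action commentary: rank the
# most negative per-feature contributions and map them to reason text.
# Reason wording and feature names are illustrative, not official codes.

REASON_TEXT = {
    "debt_ratio": "Debt-to-income ratio too high",
    "account_age": "Length of credit history too short",
}

def decline_reasons(contributions, top_n=2):
    """Return reason text for the top_n most negative contributions."""
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASON_TEXT.get(f, f) for c, f in negatives[:top_n]]
```

Crucially, if a sensitive attribute (or a proxy for one) ever surfaces as a principal reason, that is a red flag about the training data, not something to paper over in the commentary.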

Principle 4: Be Fair and Ethical

If ML models promote human-centric values such as fairness, equality and justice, they win more public trust and, with it, the potential for faster adoption.

The adoption of ML models should not have negative human rights implications through the denial of social rights. The same goes for labor market disruption as manual jobs are replaced by automation. This is where an ethical risk assessment becomes critical.

The prohibition of unfair, deceptive or abusive acts or practices (UDAAP) under the Dodd-Frank Act serves as a significant guardrail against potentially unethical judgments made by ML models.

Principle 5: Be Accountable

Accountability assigns responsibility to the specific stakeholders who develop, deploy and maintain ML models, whether the end-user organization or a third-party firm. These stakeholders should be legally liable for the consequences of decisions taken by the models. Accountability also means that the identified stakeholders should be fully aware of the decision logic and able to explain any outcome the models produce. This minimizes the possibility of algorithmic harm caused by the system through the breach of a social norm, legal guideline or human expectation.

Model Governance Reimagined: Some Components

Banks and fintech firms are busy redesigning their existing model management architecture to introduce new components that help realize some or all of these principles. Many of these components remediate the typical challenges that ML models introduce, such as opacity, complexity and algorithmic bias.

Let us look at a few examples.

A Model Registry

A model registry is a centralized repository and platform for collaboratively building, managing, training, deploying and comprehensively annotating ML model artifacts. A successful model registry increases transparency and results in fewer handoffs between the development team and release engineers. Because the model registry acts as a version control and life cycle management system for models, enabling one-click deployment or integration, it is a critical tool for model auditors, model validators and release/integration engineers.

Open-source platforms such as MLflow offer comprehensive model registry capabilities that can be used across multiple first- and second-line model risk teams.

A collaborative model registry enhances transparency and predictability by making the model development life cycle more collaborative and more understandable through clear lineage documentation.
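The register/version/stage workflow such platforms provide can be illustrated with a toy in-memory registry; the stage names and record fields below are assumptions for illustration, not any platform’s actual API:

```python
# Toy in-memory model registry illustrating versioning, annotations and
# stage transitions. Not a real platform API; names are illustrative.

class ModelRegistry:
    def __init__(self):
        self._models = {}  # model name -> list of version records

    def register(self, name, artifact_uri, annotations=None):
        """Add a new version of a model; versions auto-increment from 1."""
        versions = self._models.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "artifact_uri": artifact_uri,
            "annotations": annotations or {},
            "stage": "None",
        })
        return versions[-1]["version"]

    def transition(self, name, version, stage):
        """Move a version through its life cycle, e.g. to 'Production'."""
        self._models[name][version - 1]["stage"] = stage

    def latest(self, name, stage=None):
        """Return the newest version record, optionally filtered by stage."""
        versions = self._models.get(name, [])
        if stage is not None:
            versions = [v for v in versions if v["stage"] == stage]
        return versions[-1] if versions else None
```

The audit value comes from the record itself: every version carries its artifact location, annotations and current stage, giving validators a clear lineage trail.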

Model Data Control

When an ML model goes awry, the training dataset is to blame in most cases. It is critical to ensure that the training data is neither preferentially sampled nor reflective of an existing societal bias. Both problems can be detected early and fixed by adjusting the data preprocessing techniques. Consider a credit application dataset collected from a very narrow social segment, in which only working males of a specific age group appear as having been granted loans. This may wrongly train the algorithm to treat nonworking female applicants as not creditworthy.
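A first-pass sampling-balance check along these lines can flag narrow data collection before training; the column name and 10% floor are illustrative assumptions:

```python
# Quick sampling-balance check: compute each sensitive group's share of
# the training set and flag groups below a floor. The column name and
# the 10% floor are illustrative assumptions.

def group_shares(rows, group_key):
    """Return each group's fraction of the dataset."""
    counts = {}
    for row in rows:
        g = row[group_key]
        counts[g] = counts.get(g, 0) + 1
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def underrepresented(shares, floor=0.10):
    """List groups whose share falls below the floor."""
    return [g for g, share in shares.items() if share < floor]
```

Flagged groups are candidates for resampling, reweighting or additional data collection during preprocessing.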

Even after the ML model is deployed, a supervised model can be overwhelmed by a wide variety of unseen data with unknown patterns. Premeditated data poisoning attacks also fall into this category. The result is simple: model integrity is lost, and the model produces erratic decisions.

Additionally, data privacy must not be compromised. Sensitive parameters should be used in building the model only where they are critical for an accurate forecast, and only with appropriate privacy safeguards.

Data remediation is not simple; extensive curation of training datasets and rigorous validation processes are recommended. Periodic monitoring of operational model outcomes helps to assess model decay and detect adversarial attacks.
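One common way to monitor outcome drift periodically is the Population Stability Index (PSI); the sketch below assumes fixed score bins shared by the baseline and live populations:

```python
import math

# Population Stability Index (PSI), a common drift metric for periodic
# outcome monitoring. Inputs are the fraction of scores falling in each
# bin for the baseline and live populations; bins are assumed fixed.

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Sum of (a - e) * ln(a / e) across bins; larger means more drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total
```

Rule-of-thumb cutoffs often treat PSI above roughly 0.25 as significant drift worth investigating, though the threshold is a convention, not a standard, and should be set per model.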

An Independent Model Audit

Involving a human in independent model oversight, with insight into both the logic and the data, is a significant control and a chief asset of any ML governance practice. It helps ensure that complete model autonomy does not break the system or produce unpredictable decisions with negative consequences. Bringing a human into the equation also helps ensure that human-centered values, fundamental rights and democratic values are not endangered by ML models. Consider an AI-powered customer onboarding system: The ML model responsible for performing digital due diligence on and onboarding new customers may reject an application due to discriminatory profiling logic. A human layer of model outcome validation can help fix such wrongful decisions.
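The human-in-the-loop gate described above might be sketched as a simple routing rule; the 0.8 confidence threshold is an illustrative assumption:

```python
# Sketch of a human-in-the-loop gate: adverse or low-confidence automated
# decisions are routed to an independent human reviewer before they take
# effect. The 0.8 confidence threshold is an illustrative assumption.

def route_decision(decision, confidence, threshold=0.8):
    """Send declines and uncertain calls to a human; auto-approve the rest."""
    if decision == "decline" or confidence < threshold:
        return "human_review"
    return "auto_approve"
```

Routing every adverse outcome, not just low-confidence ones, reflects the asymmetry of harm: a wrongly auto-approved application is usually cheaper to unwind than a wrongful rejection.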

A European Approach to Excellence in AI

The European Commission has recently proposed a legal framework for achieving trustworthy artificial intelligence, the essence of which is a risk-based approach to classifying and remediating AI-based systems. Regulatory authorities are encouraging the use of regulatory sandboxes for controlled testing and validation of ML models before they are operationalized. This helps identify bigger systemic risks (e.g., an ML model that misclassifies genuinely fraudulent transactions as nonsuspicious) at the pre-marketing phase.

An Ethical Risk Assessment Framework

Human prejudices and historical societal biases can be perpetuated in ML models because the models are trained on real historical data by human engineers. It is also possible that, due to imbalanced or underrepresented datasets, an ML model overlooks a certain segment during its training process.

All of these risks can affect human rights, including equality, liberty and privacy. An online campaign management system driven by ML models may target a specific racial group for a financial product because of its training data. Such an outcome discriminates on prohibited bases and is therefore unethical by most standards.

A thorough ethical risk assessment is needed for the various ML models affecting processes such as new customer onboarding, online loan approval, cross-selling or upselling a financial product and credit scoring. The assessment should not only detect algorithmic bias but also take note of any proxies or parameters used by the model that could be considered sensitive. Unethical behavior by ML models can be a massive source of misconduct risk and subsequent litigation.
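One concrete bias test that often appears in such assessments is the four-fifths (80%) rule; the group names and selection rates below are illustrative:

```python
# Four-fifths (80%) rule check often used in fairness assessments: flag
# any group whose selection rate falls below 80% of the highest group's
# rate. Group names and rates are illustrative assumptions.

def disparate_impact_flags(selection_rates, ratio=0.8):
    """Map each group to True if its rate breaches the four-fifths rule."""
    top = max(selection_rates.values())
    return {g: (r / top) < ratio for g, r in selection_rates.items()}
```

A flag here is a trigger for deeper review of the model and its training data, not proof of wrongdoing on its own.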

Conclusion

As innovation in ML models advances, we can expect to see increasingly sophisticated tools and techniques in this space. The machine learning governance framework must evolve in step to manage the societal, legal and reputational implications of this new wave of intelligent models.


Tags: Artificial Intelligence (AI), FinTech, Machine Learning

Sanjukta Dhar

Sanjukta Dhar leads the market and treasury risk management portfolio within the BFSI CRO Strategic Initiative of Tata Consultancy Services (TCS). Dhar has played the roles of business analyst, solution architect, SME and implementation lead across multiple financial risk management system implementations for major banks and financial services firms.

© 2022 Corporate Compliance Insights
