Corporate Compliance Insights

Beyond Transparency for AI: Justification is Essential for Risk Management

by Gurjeet Singh
December 12, 2017
in Compliance, Featured

AI’s Vulnerabilities Threaten to Limit its Progress

Like Achilles or Samson, AI pairs immense power and potential with a critical vulnerability. In financial services, that vulnerability is opacity: in general, the better the machine learning system, the blacker the box, and the harder it is to decipher what is going on inside the model.

Financial services is not only a regulated industry; it is an industry built on the notion of risk, from understanding it to managing it. If you cannot understand the machine, it is difficult, if not impossible, to manage the risk it is meant to address. And if you cannot understand the machine’s workings, you cannot explain them to the regulator.

These two facts conspire to limit AI’s progress. The industry’s response to the trust challenge has been full disclosure: offering complete transparency, or “explainable AI.” While transparency is a fundamental requirement, it has little bearing on the problem of trust.

What the financial services industry needs is justification, which is something entirely different from transparency. That level of detail makes conversations with regulators and internal model review boards far more productive, because every action can be explained. The implications go beyond the regulator, too: the ability to justify what a model is doing allows firms to improve existing models using the same techniques. The technical, operational and organizational challenges associated with artificial intelligence pale in comparison to this trust challenge.

To explain, let’s define transparency.

Transparency means identifying which algorithm was used and which parameters were learned from the data. While interesting, this provides no intuition as to what is going on inside the model. It allows one to “check the math,” but there is little value in confirming that your computer can do matrix multiplication or basic linear algebra. It is akin to checking whether an Oracle database can join tables correctly.

This is not to suggest there is no utility in transparency; far from it. Done well, it reveals what was done with a level of precision that lets us replicate valuable work. Transparency might also include the rationale for why calculations were designed a particular way. This, however, is essentially quality assurance; it does not provide any real evidence for the machine’s actions.

Let me give you an example. Imagine you train a three-layer neural network for a prediction task. A transparent system would provide the training parameters (e.g., momentum, regularization) as well as the final learned parameters (the two weight matrices between the three layers). These can be inspected for every possible input, and you can essentially hand-verify the outputs. But this isn’t actually useful: the verification amounts to ensuring that the library implements matrix multiplication correctly, and the exercise provides no intuition about why the model behaves the way it does.
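To make this concrete, here is a minimal sketch of what transparent inspection buys you. The weights here are invented random matrices, not a real trained model: the parameters are fully visible and the output can be hand-verified, yet the check only confirms the linear algebra.

```python
import numpy as np

# Hypothetical "transparent" model: a three-layer network reduced to its
# learned parameters (the two weight matrices from the example above).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer

def predict(x):
    """Forward pass: ReLU hidden layer, linear output."""
    return np.maximum(x @ W1, 0.0) @ W2

x = rng.normal(size=(4,))
y = predict(x)

# "Checking the math": repeat the same matrix multiplications by hand.
hidden = np.maximum(x @ W1, 0.0)
assert np.allclose(hidden @ W2, y)
# The verification succeeds, but it only confirms that matrix
# multiplication works; it says nothing about *why* x maps to y.
```

Every number in the model is visible, and the arithmetic checks out, yet no intuition about the model's behavior has been gained.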

Beyond Transparency: Justification

The concept of justification is far more robust than simple transparency, and it is what is required to move AI into production. Like transparency, justification identifies the algorithm that was used and the parameters that were learned, but it adds the missing ingredient: intuition. It is the ability to see what the machine is thinking: “when x, because y.”

Justification tells us, for every atomic operation, the reason (or reasons) behind it. For every classification, prediction, regression, event, anomaly or hotspot, we can identify matching examples in the data as proof. These are presented in a form humans understand and expressed in terms of the variables that are the ingredients of the model.
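As a rough sketch of “matching examples in the data as proof,” one could retrieve, for every prediction, the most similar historical cases and present their outcomes as evidence. The nearest-neighbor mechanism and the synthetic data below are illustrative assumptions, not the article’s actual method:

```python
import numpy as np

# Illustrative setup: historical cases with known outcomes.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 3))              # past cases (3 variables)
y_train = (X_train.sum(axis=1) > 0).astype(int)  # their known outcomes

def justify(x, k=3):
    """Return the k most similar historical cases, with their outcomes
    and distances, as human-readable evidence for a prediction on x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]
    return [(int(i), int(y_train[i]), float(dists[i])) for i in idx]

x_new = rng.normal(size=(3,))
evidence = justify(x_new)
# Each item reads as "when x (similar to case i), because y (its outcome)".
```

The output is phrased in the model's own variables and points at concrete data, which is exactly the kind of artifact a model review board can interrogate.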

Getting to the atomic level is the key to cracking the AI black box. So how might we achieve that in practice?

Machine learning is, at bottom, the practice of optimization: every algorithm maximizes or minimizes some objective. An important feature of optimization is the distinction between global and local optima. Finding global optima is difficult because the mathematical conditions we can check only tell us whether we are near an optimum; they cannot distinguish a local optimum from the global one. The challenge is that when you reach a peak, it is hard to know whether it is the highest one.

If this sounds obscure, consider the well-worn yet highly effective example of climbing a hill in the fog. Your visibility is constrained to a few feet. How do you know when you are at the top? Is it when you start to descend? What if you have crested a false summit? You wouldn’t know it; you would claim victory as you began to descend.
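The false-summit problem is easy to demonstrate in code. Below is a toy greedy hill climber (the two-peaked function, starting points and step size are all invented for illustration) that stops at whichever peak it reaches first and has no way of knowing a higher one exists:

```python
import numpy as np

def f(x):
    # Two "hills": a false summit near x = -1, a higher peak near x = 2.
    return np.exp(-(x + 1) ** 2) + 2.0 * np.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy ascent: move whichever direction looks higher nearby."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # no visible improvement: the climber declares victory
    return x

local = hill_climb(-1.5)  # starts in the basin of the false summit
best = hill_climb(1.0)    # starts within reach of the global peak
# Both climbers stop where every nearby step goes down, but only one of
# them is actually on top: f(best) > f(local).
```

The climber at the false summit sees exactly the same local evidence as the one on the true peak, which is precisely the fog problem.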

But what if you had a GPS or a map and a way to locate yourself in the fog?

This is one of the areas where technologies like Topological Data Analysis (TDA), a type of AI that can illuminate the black box, are particularly effective. Unlike other AI approaches, TDA produces “maps” of data, which can even be visualized.

By using technologies that offer justification, one can, for every atomic operation or “action,” find its location in the network. As a result, we know where we are, where we came from and (to the extent the prediction is correct) where we are going next.

Say you have a mortgage model that predicts the rate a customer is eligible for. Often, such models have been in production for years. Any model has two sources of error: systematic and random. Random errors arise where the underlying behavior is not predictable, and they are generally impossible to tackle. Systematic errors, on the other hand, occur when the model makes the same mistake over and over again. In terms of the network, such errors occur because the offending data points appear in the same region: if the model is not calibrated for young millennials, for example, it may consistently predict their rates to be lower or higher than they should be.

Having a map of the data overlaid with both the model’s predictions and the ground truth helps determine, for each prediction, whether it falls in one of the model’s systematic blind spots; if it does, the prediction can be improved significantly.
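A minimal sketch of this diagnostic, with an invented one-feature “mortgage” model whose miscalibration for young borrowers is planted deliberately: comparing predictions against ground truth region by region exposes the systematic bias, while the rest of the map looks clean.

```python
import numpy as np

# Synthetic ground truth: rate rises linearly with age (illustrative only).
rng = np.random.default_rng(2)
age = rng.uniform(20, 70, size=500)
true_rate = 3.0 + 0.02 * age

# Miscalibrated model: it was never fit below age 30, so it treats every
# younger borrower as if they were 30 (a planted systematic blind spot).
pred_rate = 3.0 + 0.02 * np.clip(age, 30, 70)

# Overlay predictions and ground truth, then inspect errors by region.
errors = pred_rate - true_rate
region = age < 30                  # one "region of the map"
bias_young = errors[region].mean() # consistently positive: quoted too high
bias_rest = errors[~region].mean() # ~zero: model is fine elsewhere
```

The per-region comparison does what no amount of weight inspection could: it says where the model is wrong and in which direction, which is exactly the information needed to recalibrate it.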

Again, justification delivers this capability; transparency does not.

Ultimately, justification is not simply a “feature” of AI; it is core to the technology’s success. Justification will pave the way for AI with regulators and internal model review boards. Justification will win over skeptics in the boardroom. Justification will enhance risk management for new and existing models. Justification will deliver on the promise of AI and eliminate its biggest vulnerability when it comes to adoption.

Welcome to the age of justification.


Tags: Artificial Intelligence (AI)

Gurjeet Singh

Gurjeet Singh is Ayasdi's executive chairman and co-founder. He leads a technology movement that emphasizes the importance of extracting insight from data, not just storing and organizing it. Singh developed key mathematical and machine-learning algorithms for Topological Data Analysis (TDA) and their applications during his tenure as a graduate student in Stanford’s mathematics department, where he was advised by Ayasdi co-founder Professor Gunnar Carlsson. He is the author of numerous patents and has been published in a variety of top mathematics and computer-science journals. Before starting Ayasdi, he worked at Google and Texas Instruments. Singh was named by Silicon Valley Business Journal as one of their “40 Under 40” in 2015. He holds a B.Tech. degree from Delhi University, and a Ph.D. in Computational Mathematics from Stanford University. He lives in Palo Alto with his wife and two children and develops multi-legged robots in his spare time.

Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

© 2025 Corporate Compliance Insights
