Corporate Compliance Insights

Beyond Transparency for AI: Justification is Essential for Risk Management

by Gurjeet Singh
December 12, 2017
in Compliance, Featured

AI’s Vulnerabilities Threaten to Limit its Progress

Like Achilles or Samson, AI, for all its power and potential, has a vulnerability. In the case of financial services, that weakness is opacity. In general, the better the machine learning system, the blacker the box, and the harder it is to decipher what is going on inside the model.

Financial services is not only a regulated industry; it is an industry built on the notion of risk, from understanding it to managing it. If you cannot understand the machine, it is difficult, if not impossible, to manage the risk it is meant to address. And if you cannot understand the machine's workings, you cannot explain them to the regulator.

These two factors conspire to limit AI's progress. The response to the trust challenge in financial services has been to throw open the doors: to offer full transparency, or "explainable AI." While this is a fundamental requirement, it has little bearing on the problem of trust.

What the financial services industry needs is justification, which is something entirely different from transparency. This level of detail makes conversations with regulators and internal model review boards far more productive, because every action can be explained. But the implications go beyond the regulator: the ability to justify what the model is doing also allows firms to improve existing models using the same techniques. Indeed, the technical challenges, operational challenges and organizational changes associated with artificial intelligence pale in comparison to the trust challenge.

To explain the difference, let's first define transparency.

Transparency is identifying what algorithm was used and what parameters were learned from the data. While interesting, this does not provide any intuition as to what is going on inside the model. It allows one to "check the math," but there is little value in knowing that your computer can perform matrix multiplication or basic linear algebra. This is akin to checking whether the Oracle database can join tables correctly.

This is not to suggest there is no utility in transparency; far from it. Done well, it reveals what has been done with a level of precision that lets us replicate valuable work. Transparency might also include information about why the calculations were designed a particular way. This, however, is essentially QA, and it does not provide any real evidence for the machine's actions.

Let me give you an example. Imagine you train a three-layer neural network for a prediction task. A transparent system would provide the training parameters (e.g., momentum, regularization) as well as the final parameters (the two weight matrices between the three layers). With these in hand, you can inspect the output for every possible input and essentially hand-verify the results. But this isn't actually useful: the verification amounts to ensuring that the library implements matrix multiplication correctly, and the exercise provides no intuition about why the model behaves the way it does.
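To make this concrete, here is a minimal sketch in Python of everything a "transparent" system hands you. The network, the weights and the input are all invented for illustration; the point is what hand-verifying them does and does not tell you.

```python
import numpy as np

# A "transparent" three-layer network: the system discloses the
# training hyperparameters and the two learned weight matrices.
training_params = {"momentum": 0.9, "regularization": 1e-4}  # disclosed
W1 = np.array([[0.2, -0.5], [0.8, 0.1], [-0.3, 0.7]])  # layer 1 -> 2 (disclosed)
W2 = np.array([[1.1], [-0.4]])                          # layer 2 -> 3 (disclosed)

def predict(x):
    """Forward pass: two matrix multiplications and a nonlinearity."""
    hidden = np.tanh(x @ W1)  # hidden-layer activations
    return hidden @ W2        # final prediction

x = np.array([0.5, -1.0, 2.0])  # any input you like
print(predict(x))  # you could verify this by hand with a calculator...
# ...but doing so only confirms the library multiplies matrices
# correctly. It says nothing about WHY the model predicts this value.
```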

Beyond Transparency: Justification

The concept of justification is far more robust than simple transparency and is what is required to move AI into production. Like transparency, justification identifies the algorithm that was used and the parameters that were applied, but it also provides the additional ingredient of intuition. It’s the ability to see what the machine is thinking: “when x, because y.”

Justification supplies, for every atomic operation, the reason (or reasons) behind it. For every classification, prediction, regression, event, anomaly or hotspot, we can identify matching examples in the data as proof. This evidence is presented in a form humans understand, expressed in the variables that are the ingredients of the model.
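One way to approximate this in code is shown below. This is a sketch, not the author's specific method: alongside each prediction, it retrieves the most similar examples from the training data and presents them, in human-readable variables, as the evidence. The feature names, customers and rates are all hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical training data: rows are customers, columns are the
# human-readable variables ("ingredients") the model was built from.
feature_names = ["credit_score", "income_k", "loan_to_value"]  # assumed
X_train = np.array([[720, 95, 0.80],
                    [680, 60, 0.95],
                    [750, 120, 0.70],
                    [690, 70, 0.90]])
y_train = np.array([3.1, 4.2, 2.9, 4.0])  # observed rates (made up)

nn = NearestNeighbors(n_neighbors=2).fit(X_train)

def justify(x, prediction):
    """Print matching examples from the data as proof for a prediction."""
    _, idx = nn.kneighbors([x])
    for i in idx[0]:
        evidence = dict(zip(feature_names, X_train[i]))
        print(f"because similar case {evidence} had rate {y_train[i]}")
    print(f"=> predicted rate: {prediction}")

justify([700, 75, 0.88], prediction=3.9)
```

The output reads as "when x, because y": a prediction accompanied by the concrete cases in the data that back it up.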

Getting to the atomic level is the key to cracking the AI black box. So how might we achieve that in practice?

Machine learning is the practice of optimization: every algorithm maximizes or minimizes some objective. An important feature of optimization is the distinction between global and local optima. Finding a global optimum is difficult because the mathematical conditions we can check only tell us whether we are near an optimum; they cannot tell us whether that optimum is local or global. The challenge, in other words, is that it is hard to know when you have found the true maximum.

If this sounds obscure, consider the well-worn yet highly effective example of climbing a hill in the fog. Your visibility is constrained to a few feet. How do you know when you are at the top? Is it when you start to descend? What if you crested a false summit? You wouldn't know it, but you would claim victory as you began to descend.
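A tiny numerical version of the foggy hill makes the point. The terrain function is invented for illustration: a climber who can only see the slope under its feet happily stops on a false summit.

```python
import numpy as np

# Invented "terrain": a false summit near x=-1, the true peak near x=2.
def height(x):
    return np.exp(-(x + 1) ** 2) + 2.0 * np.exp(-(x - 2) ** 2)

def slope(x, eps=1e-6):
    """Local slope: all a climber in the fog can measure."""
    return (height(x + eps) - height(x - eps)) / (2 * eps)

# Hill climbing in the fog: follow the local slope until it flattens.
x = -2.0  # the starting point determines which peak you find
for _ in range(10_000):
    x += 0.01 * slope(x)

print(f"stopped at x={x:.2f}, height={height(x):.2f}")
# Starting at x=-2.0, the climber stops near x=-1 (the false summit,
# height ~1.0) and never discovers the true peak near x=2 (height ~2.0).
```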

But what if you had a GPS or a map and a way to locate yourself in the fog?

This is one of the areas where a technology like Topological Data Analysis (TDA), a type of AI that can illuminate the black box, is particularly effective. Unlike other AI solutions, TDA produces "maps" of data, which can even be visualized.

By using technologies that offer justification, one can, for every atomic operation or "action," find its location in the network. As a result, we can know where we are, where we came from and where (to the extent the prediction is correct) we are going next.
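As a rough illustration of the "map" idea, here is a minimal Mapper-style sketch in Python. It is not Ayasdi's actual algorithm, and the dataset, lens and clustering settings are assumptions: cover a lens function with overlapping slices, cluster within each slice, and connect clusters that share points. Every data point then has an address in the resulting network.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))  # hypothetical dataset
lens = X[:, 0]                 # a simple 1-D lens (projection of the data)

# Cover the lens with overlapping intervals; cluster inside each one.
nodes = {}    # node id -> set of member row indices
edges = set()
lo, hi, n_bins, overlap = lens.min(), lens.max(), 8, 0.25
width = (hi - lo) / n_bins
for b in range(n_bins):
    start = lo + b * width - overlap * width
    end = start + (1 + 2 * overlap) * width
    members = np.where((lens >= start) & (lens <= end))[0]
    if len(members) == 0:
        continue
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(X[members])
    for lab in set(labels) - {-1}:  # -1 marks DBSCAN noise points
        nodes[(b, lab)] = set(members[labels == lab])

# Two nodes are connected if they share data points (the overlap).
node_ids = list(nodes)
for i, a in enumerate(node_ids):
    for b in node_ids[i + 1:]:
        if nodes[a] & nodes[b]:
            edges.add((a, b))

print(f"{len(nodes)} nodes, {len(edges)} edges")
# Each row of X now lives in specific node(s): its location on the map.
```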

Say you have a mortgage model that predicts the rate for which a customer is eligible; often, these models have been in production for years. For any model, there are two sources of error: systematic and random. Random errors arise where the underlying behavior is simply not predictable, and they are generally impossible to tackle. Systematic errors, on the other hand, occur when the model makes the same mistake over and over again. In terms of the network, a systematic error shows up when the mispredicted data points all appear in the same region (e.g., if the model is not calibrated for young millennials, it might consistently predict rates lower or higher than they should be).

Overlaying a map of the data with both the model's predictions and the ground truth reveals, for each prediction, whether it lies in one of the model's systematic blind spots; if it does, the prediction can be improved significantly.
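Continuing the sketch above (the node assignments, rates and the synthetic error are all invented for illustration), the overlay reduces to computing the average residual per node. Nodes where the model is consistently off are its systematic blind spots, and a per-node correction removes that component of the error.

```python
import numpy as np

# Hypothetical: node_of[i] is customer i's location on the map (from a
# Mapper-style construction); y_true and y_pred come from the model.
rng = np.random.default_rng(1)
node_of = rng.integers(0, 6, size=500)           # 6 regions of the map
y_true = rng.normal(4.0, 0.5, size=500)
y_pred = y_true + rng.normal(0, 0.2, size=500)   # random error everywhere
y_pred[node_of == 3] -= 0.8  # ...plus a systematic error in region 3
                             # (say, a segment the model was never calibrated for)

# Mean residual per node: near zero where errors are random,
# consistently large in the model's blind spots.
for node in range(6):
    resid = (y_true - y_pred)[node_of == node]
    flag = "  <- systematic blind spot" if abs(resid.mean()) > 0.5 else ""
    print(f"node {node}: mean residual {resid.mean():+.2f}{flag}")

# Shifting each prediction by its node's mean residual corrects the
# systematic component while leaving the random error untouched.
```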

Again, justification delivers this capability; transparency does not.

Ultimately, justification is not simply a "feature" of AI; rather, it is core to the success of the technology. Justification will pave the way for AI with regulators and internal model review boards. Justification will win over skeptics in the boardroom. Justification will enhance risk management for new and existing models. Justification will deliver on the promise of AI and eliminate its biggest vulnerability when it comes to adoption.

Welcome to the age of justification.


Tags: Artificial Intelligence/A.I., regulatory

Gurjeet Singh

Gurjeet Singh is Ayasdi’s executive chairman and co-founder. He leads a technology movement that emphasizes the importance of extracting insight from data, not just storing and organizing it. Singh developed key mathematical and machine-learning algorithms for Topological Data Analysis (TDA) and their applications during his tenure as a graduate student in Stanford’s mathematics department, where he was advised by Ayasdi co-founder Professor Gunnar Carlsson. He is the author of numerous patents and has been published in a variety of top mathematics and computer-science journals. Before starting Ayasdi, he worked at Google and Texas Instruments. Singh was named by Silicon Valley Business Journal as one of their “40 Under 40” in 2015. He holds a B.Tech. degree from Delhi University, and a Ph.D. in Computational Mathematics from Stanford University. He lives in Palo Alto with his wife and two children and develops multi-legged robots in his spare time.
