Corporate Compliance Insights

Your Foreign AI Vendor’s Black Box Is an Ethics Problem, Not a Technical One

Without someone inside the organization who can meaningfully challenge the behavior of an AI tool, documented controls slide into paperwork rather than true oversight

by Vera Cherepanova
February 18, 2026
in Ethics

If something goes wrong with a vendor’s AI system, who can explain what happened — the vendor, the in-house engineers, the board? Ask an Ethicist columnist Vera Cherepanova and guest ethicist Brian Haman argue that this is an ethical question about risk and responsibility, not a technical one. Their answer: be explicit about your risk appetite, naming precisely what risks you are accepting, for what benefits and under what conditions — because that is what separates genuine governance from the kind that looks good on paper. 

“Our team depends on a critical AI system hosted by a foreign vendor with non-transparent algorithms and limited auditability. Operational efficiency is high, but our governance policies require clear oversight. Should we continue using it with documented controls, or pause adoption until transparency and accountability are assured?” — JM

Your dilemma is something I hear more and more from boards and senior leaders: When a critical system is a vendor’s black box, but formal accountability stays with you, where should you draw the line, and how much opacity is too much?

At first glance, this looks like a technical question. In reality, though, it’s an ethical question about risk and responsibility. If something goes wrong — and something can and will always go wrong — who can explain what happened and who can be held to account? The vendor? The in-house engineers? The board?

For this dilemma, I am joined by Brian Haman, a guest ethicist who works at the intersection of cyber, AI and moral philosophy, a joyful mix and a perfect fit for your question.

***

Brian Haman

Organizations increasingly face a strategic and ethical dilemma in the age of AI. Should they continue using black-box AI systems that drive efficiency but limit transparency, or pause adoption until greater accountability can be assured? Far from strictly a technical decision, it cuts to the core of ethical governance in a world shaped by digital sovereignty and geopolitical rivalry.

The tension between transparency and dependency exposes the conflict between two obligations, namely the philosophical ideal of traceable, auditable algorithms and the pragmatic need for uninterrupted operations. Transparency underpins accountability. Without it, organizations cannot meaningfully assess bias or compliance risk. In practice, however, operational pressures often dominate, particularly when AI systems are embedded in essential workflows where downtime carries significant cost.

This calculus becomes even more complex when foreign vendors are involved. Many AI products developed outside domestic jurisdictions, particularly in China, limit auditability or algorithmic disclosure due to trade secrecy or national security restrictions. Recent examples, such as DeepSeek’s inference engines or voice-cloning platforms like Qwen3-TTS, illustrate the uncertainties that come with geopolitical entanglement. Meanwhile, transatlantic dynamics, e.g., the US-EU tension over data protection and digital sovereignty, further blur the ethical boundaries between efficiency and control.

From a governance standpoint, pausing adoption entirely may be unrealistic. Instead, ethical compliance should focus on layered mitigation, which would include implementing rigorous third-party risk assessments; requiring contractual transparency clauses; documenting decision accountability; and defining clear procedures for escalation if vendor trust erodes. At the same time, boards and compliance leaders must advocate for international standards that demand transparency as a condition of ethical AI procurement.

Within this evolving landscape, the question, then, is not simply whether to continue or to pause but rather how to maintain operational continuity without surrendering ethical oversight or strategic autonomy. And the answer may very well redefine the nature of responsible dependency itself.

***

All signs are that AI will keep getting more capable and more entangled with foreign infrastructure. As Brian noted, for many organizations, a “black box” system can be so mission-critical and commercially attractive that the decision may be to keep using it despite the concerns.

If so, in addition to Brian’s thoughts, I’d suggest two things. First, be explicit about your risk appetite: We are accepting X types of risk for Y benefits, under Z conditions. Immanuel Kant would call that naming your maxim. Second, have someone inside your organization who can meaningfully challenge the system’s behavior. Without that, all your efforts will eventually slide into the territory of paperwork rather than oversight.


Readers respond

The previous question came from a chief risk and compliance officer at a fast-growing online platform wrestling with “fig-leaf” ethics: a business model built on engagement features that may harm teens’ mental health, paired with a glossy youth-wellbeing campaign. The dilemma was whether donating profits to good causes can ever offset a core product design that may itself be part of the harm.

In my response, I noted: “Think of it this way: If you discovered one of your suppliers was using forced labor, would you keep the contract and just give a portion of the savings to an anti-slavery charity? Most people would instinctively say no, because the core business practice is the ethical issue. Your situation is structurally similar. Some harms just can’t be offset with charity. There’s also a long-term business argument. The scandals they teach in business ethics classes, including Enron, Wells Fargo’s fake accounts and VW emissions cheating, all involve leaders convincing themselves that they could separate performance from integrity. When the reckoning came, it wasn’t only about the violation of moral norms. Shareholder value, careers and trust were all destroyed. In each case, it would have been cheaper (financially and reputationally) to adjust the business model earlier than to pay for the fallout later.” Read the full column here.

I liked how you dissected this issue. When I started to think about it, another thought came to mind: Perhaps justification depends on how close the person judging is to the situation, both to the donation and to the harm. If the business is a casino and a person has an addict in their close circle, they are unlikely to accept the justification and may even perceive the charity as a mockery. And vice versa: If a person is in trouble, say their child is sick, and the business runs a powerful charitable program for children, then the parent will see good sense in the charity, and the negative side may not be perceived so strongly. Thanks again for the great issues you raise! — ET

Have a response? Share your feedback on what I got right (or wrong). Send me your comments or questions.

Tags: Artificial Intelligence (AI), Board of Directors, Board Risk Oversight, Culture of Ethics

Vera Cherepanova

Vera Cherepanova is an award-winning ethics and compliance expert who writes and speaks about business ethics, workplace culture, behavioral compliance, risk and governance. She is the author of "Corporate Compliance Program," the first-ever book on compliance in the Russian language, and a co-author of "The Transnationalization of Anti-Corruption Law," as well as hundreds of articles on all aspects of ethics, compliance and governance. Her insights have been featured in the Financial Times, Wall Street Journal, Law360 and Chartered Management Institute publications. Vera serves as an ethics advisor for market-leading corporations and international nonprofits. 


© 2026 Corporate Compliance Insights
