If something goes wrong with a vendor’s AI system, who can explain what happened — the vendor, the in-house engineers, the board? Ask an Ethicist columnist Vera Cherepanova and guest ethicist Brian Haman argue that this is an ethical question about risk and responsibility, not a technical one. Their answer: be explicit about your risk appetite, naming precisely what risks you are accepting, for what benefits and under what conditions — because that is what separates genuine governance from the kind that looks good on paper.
“Our team depends on a critical AI system hosted by a foreign vendor with non-transparent algorithms and limited auditability. Operational efficiency is high, but governance policies require clear oversight. Should we continue use with documented controls, or pause adoption until transparency and accountability are assured?” — JM
Your dilemma is something I hear more and more from boards and senior leaders: When a critical system is a vendor’s black box, but formal accountability stays with you, where should you draw the line, and how much opacity is too much?
At first glance, this looks like a technical question. In reality, though, it’s an ethical question about risk and responsibility. If something goes wrong — and something can and will always go wrong — who can explain what happened and who can be held to account? The vendor? The in-house engineers? The board?
For this dilemma, I am joined by Brian Haman, a guest ethicist who works at the intersection of cyber, AI and moral philosophy, a joyful mix and a perfect fit for your question.
***

Organizations increasingly face a strategic and ethical dilemma in the age of AI: Should they continue using black-box AI systems that drive efficiency but limit transparency, or pause adoption until greater accountability can be assured? Far from strictly a technical decision, it cuts to the core of ethical governance in a world shaped by digital sovereignty and geopolitical rivalry.
The tension between transparency and dependency exposes the conflict between two obligations, namely the philosophical ideal of traceable, auditable algorithms and the pragmatic need for uninterrupted operations. Transparency underpins accountability. Without it, organizations cannot meaningfully assess bias or compliance risk. In practice, however, operational pressures often dominate, particularly when AI systems are embedded in essential workflows where downtime carries significant cost.
This calculus becomes even more complex when foreign vendors are involved. Many AI products developed outside domestic jurisdictions, particularly in China, limit auditability or algorithmic disclosure due to trade secrecy or national security restrictions. Recent examples, such as DeepSeek’s inference engines or voice-cloning platforms like Qwen3-TTS, illustrate the uncertainties that come with geopolitical entanglement. Meanwhile, transatlantic dynamics, e.g., the US-EU tension over data protection and digital sovereignty, further blur the ethical boundaries between efficiency and control.
From a governance standpoint, pausing adoption entirely may be unrealistic. Instead, ethical compliance should focus on layered mitigation, which would include implementing rigorous third-party risk assessments; requiring contractual transparency clauses; documenting decision accountability; and defining clear procedures for escalation if vendor trust erodes. At the same time, boards and compliance leaders must advocate for international standards that demand transparency as a condition of ethical AI procurement.
Within this evolving landscape, the question, then, is not simply whether to continue or to pause but rather how to maintain operational continuity without surrendering ethical oversight or strategic autonomy. And the answer may very well redefine the nature of responsible dependency itself.
***
All signs are that AI will keep getting more capable and more entangled with foreign infrastructure. As Brian noted, for many organizations, a “black box” system can be so mission-critical and commercially attractive that the decision may be to keep using it despite the concerns.
If so, in addition to Brian’s thoughts, I’d suggest two things. First, be explicit about your risk appetite: We are accepting X types of risk for Y benefits, under Z conditions. Immanuel Kant would call that naming your maxim. Second, have someone inside your organization who can meaningfully challenge the system’s behavior. Without that, all your efforts will eventually slide into the territory of paperwork rather than oversight.
Is Your Mental Health Campaign a Fig Leaf for Monetizing Risky Behavior?
When a company's public messaging and internal incentives diverge, remediation is often designed more for reputation than impact
Readers respond
The previous question came from a chief risk and compliance officer at a fast-growing online platform wrestling with “fig-leaf” ethics: a business model built on engagement features that may harm teens’ mental health, paired with a glossy youth-wellbeing campaign. The dilemma was whether donating profits to good causes can ever offset a core product design that may itself be part of the harm.
In my response, I noted: “Think of it this way: If you discovered one of your suppliers was using forced labor, would you keep the contract and just give a portion of the savings to an anti-slavery charity? Most people would instinctively say no, because the core business practice is the ethical issue. Your situation is structurally similar. Some harms just can’t be offset with charity. There’s also a long-term business argument. The scandals they teach in business ethics classes, including Enron, Wells Fargo’s fake accounts and VW’s emissions cheating, all involved leaders convincing themselves that they could separate performance from integrity. When the reckoning came, it wasn’t only about the violation of moral norms. Shareholder value, careers and trust were all destroyed. In each case, it would have been cheaper (financially and reputationally) to adjust the business model earlier than to pay for the fallout later.” Read the full column here.
I liked how you dissected this issue. When I started to think about it, another thought came to mind: Perhaps the justification depends on how close the person judging is to the situation, both to the donation and to the negative side. If the business is a casino and a person has an addict in their close circle, they are unlikely to accept the justification and more likely to reject any charity, perceiving it as a mockery. And vice versa: if a person is in trouble, say their child is sick, and the business runs a powerful charitable program for children, that parent will see good sense in the charity, and the negative side may not be perceived as strongly. Thanks again for the great issues you raise! — ET


Vera Cherepanova






