AI, automation and algorithms are proliferating across the global economy, including in regulated industries like pharma, where they now handle tasks such as clinical trial site selection. But as regulators catch up, corporate leaders need clarity on the areas where there can be no substitute for human accountability, says AI governance and board adviser Theodora Monye.
Across regulated industries, investment in AI is accelerating. The ambitions of AI projects tend to be consistent: faster decisions, reduced operational cost, better outcomes at scale. What is less consistent is where human judgment ends and algorithmic authority begins. That boundary is not a technical question but a governance one. And most organizations have yet to answer it.
The assumption that sufficiently sophisticated algorithms can eventually manage an organization, or significant parts of it, is increasingly embedded in how boards and leadership teams think about AI strategy. It is also wrong, not because algorithms lack capability, but because capability is not the same as accountability. In regulated industries, accountability is not optional.
EU AI Act
Algorithmic decision-making is no longer theoretical in regulated environments such as the pharmaceutical industry and adjacent sectors. AI systems are already making or informing consequential decisions: clinical trial site selection, vendor qualification, regulatory documentation, risk classification and compliance monitoring.
In each of these contexts, the question regulators, auditors and boards are beginning to ask is not whether an algorithm was involved. It is who was responsible for the decision the algorithm supported and whether that accountability was documented before the decision was made.
The EU AI Act addresses this directly. Article 14 requires that high-risk AI systems be designed to allow effective human oversight during the period in which they are in use. Article 26 places obligations on deployers to use such systems in accordance with instructions for use and to assign human oversight to natural persons who have the necessary competence, training and authority. The accountability obligation falls on the organization deploying the system, not on the system itself. Nonetheless, the precise scope of these obligations remains subject to legal interpretation. Article 26 also preserves deployer discretion in organizing oversight measures.
What algorithms can & can’t do (yet)
Algorithms are powerful tools for managing complexity at scale. In pharmaceutical and contract research organization environments, the practical advantages are real: processing volumes of data that no human team could analyze in equivalent time, identifying patterns across variables invisible to manual review and reducing inconsistency in routine decision processes.
In clinical operations, algorithmic tools have changed how study teams approach site selection, patient recruitment forecasting and real-world evidence generation in ways that matter operationally. In compliance functions, they surface anomalies and risks that manual review would miss.
Three governance responsibilities remain with the organization regardless of what the algorithm provides.
- Ethical judgment under uncertainty. Algorithms optimize for defined objectives within known parameters. They cannot weigh competing values, interpret ambiguous obligations or exercise the discretion that regulated industries require when situations fall outside established categories. A model can flag a transaction as anomalous. Determining whether that anomaly represents fraud, an error or a legitimate but unusual business activity requires human judgment, and judgment carries accountability (a minimal sketch of that handoff follows this list).
- Strategic direction. An algorithm can identify the most efficient path to a defined goal. It cannot determine whether that goal remains appropriate as circumstances change, whether the organization’s risk appetite has shifted or whether a regulatory landscape has moved in ways that require the strategy itself to be reconsidered. Those are questions for leadership.
- Accountability. When a governance failure occurs, regulators do not ask which model made the decision. They ask who was responsible for ensuring the model was appropriate, properly overseen and operating within sanctioned boundaries. Remember that under Article 26 of the EU AI Act, accountability sits with the deploying organization. It cannot be automated. It must be held by identifiable people with the authority and the mandate to exercise it.
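To make the anomaly example concrete, here is one way the handoff from model signal to accountable human decision might be structured. This is a minimal sketch in Python, not any specific library's API; the names (AnomalyFlag, ReviewDecision, Disposition, review) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Disposition(Enum):
    """The judgment only a human can make about a flagged anomaly."""
    FRAUD = "fraud"
    ERROR = "error"
    LEGITIMATE_UNUSUAL = "legitimate_but_unusual"

@dataclass
class AnomalyFlag:
    """Produced by the model: a signal, not a decision."""
    transaction_id: str
    model_id: str
    score: float

@dataclass
class ReviewDecision:
    """Recorded by a named human reviewer: the decision, with accountability attached."""
    flag: AnomalyFlag
    reviewer: str                 # an identifiable person, not a system account
    disposition: Disposition
    rationale: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def review(flag: AnomalyFlag, reviewer: str,
           disposition: Disposition, rationale: str) -> ReviewDecision:
    # The model's output never leaves this function as a decision;
    # a human disposition and a written rationale are mandatory.
    if not rationale.strip():
        raise ValueError("A documented rationale is required for every disposition.")
    return ReviewDecision(flag, reviewer, disposition, rationale)
```

The design point is in the types: the model produces an AnomalyFlag, but only a ReviewDecision, carrying a named reviewer and rationale, counts as a decision the organization can stand behind.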
Autonomy levels matter
Determining how much autonomy an AI system should be permitted to exercise, and what oversight is required at each level, is one of the most consequential practical decisions in AI governance, and one that most organizations have not made explicitly. AI systems do not simply operate autonomously or not. They exist on a spectrum: from systems that provide information for human decision-making, through systems that recommend actions, to systems that execute decisions within defined parameters, to fully agentic systems capable of taking consequential actions without real-time human involvement.
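One way to make that spectrum actionable is to encode it explicitly and tie each level to a minimum oversight regime. The sketch below assumes a four-level taxonomy drawn from the paragraph above; the level names and the oversight mapping are illustrative assumptions, not terms defined by the EU AI Act.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Ordered taxonomy: higher values mean less real-time human involvement."""
    INFORMS = 1      # provides information for a human decision
    RECOMMENDS = 2   # proposes actions a human must approve
    EXECUTES = 3     # acts within defined parameters; humans audit
    AGENTIC = 4      # takes consequential actions without real-time involvement

# Illustrative mapping: oversight obligations scale with autonomy.
MINIMUM_OVERSIGHT = {
    AutonomyLevel.INFORMS:    "documented human decision; periodic review of outputs",
    AutonomyLevel.RECOMMENDS: "named approver per recommendation; override log",
    AutonomyLevel.EXECUTES:   "defined operating boundaries; exception escalation; regular audit",
    AutonomyLevel.AGENTIC:    "pre-deployment sign-off; kill switch; continuous monitoring; post-hoc review",
}

def required_oversight(level: AutonomyLevel) -> str:
    """Look up the minimum oversight regime for a system's classified level."""
    return MINIMUM_OVERSIGHT[level]
```

A classification like this forces the question before deployment rather than after an incident: a system cannot be onboarded until someone has assigned it a level and accepted the oversight obligations that come with it.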
Article 14 of the EU AI Act reflects this reality, requiring that oversight measures be commensurate with the risks, level of autonomy and use context of the high-risk AI system. The accountability structures, oversight mechanisms and documentation obligations appropriate for an information-providing system are not sufficient for an agentic one. Treating them as equivalent is a governance error with direct regulatory consequences.
Classifying autonomy levels before deploying AI systems, and designing oversight proportionate to each system’s level of autonomy, is not a compliance exercise. It is the foundation on which everything else rests.
Lessons for leaders
Establish accountability before deployment. Who is responsible for each AI system's decisions, and how that accountability is documented, should be resolved before the system goes live; a minimal sketch of such a record follows these lessons. Resolving it retrospectively is harder and carries less credibility under regulatory scrutiny.
Classify autonomy levels explicitly. A system that surfaces information for human review requires different oversight from one that executes decisions autonomously. Making those distinctions explicit and building oversight proportionate to each level is the practical work of AI governance.
Build governance into operating models, not compliance functions. Governance that sits only within the compliance team is fragile. When governance obligations are integrated into how decisions are made, documented and reviewed across the organization, they have a better chance of holding.
Treat regulatory frameworks as a floor. The EU AI Act, ISO/IEC 42001:2023 and the NIST AI Risk Management Framework set minimum expectations. Organizations that treat those minimums as the target will find themselves reactive as frameworks evolve. Those that build beyond the minimum are better placed to absorb change without disruption.
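As one possible shape for the pre-deployment record described in the first lesson, the sketch below captures who is accountable for a system before it goes live. The field names are illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AccountabilityRecord:
    """Completed before a system goes live, not reconstructed after an incident."""
    system_name: str
    autonomy_level: str          # e.g., "RECOMMENDS" from the taxonomy above
    accountable_owner: str       # a named individual with authority to intervene
    oversight_regime: str        # the oversight measures assigned to this system
    approved_by: str             # who signed off on deployment
    approved_on: date

def validate(record: AccountabilityRecord) -> None:
    # Deployment is blocked unless every accountability field is populated.
    for field_name, value in vars(record).items():
        if value in ("", None):
            raise ValueError(f"Accountability record incomplete: {field_name} is unset.")
```

A gate like this turns "resolve accountability before go-live" from an aspiration into an enforceable step in the deployment process.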


Theodora Monye is an AI governance and board adviser. She was formerly a public governor of the Frimley Health NHS Foundation Trust.