Corporate Compliance Insights

The Time to Set Rules Around AI Use Is Before — Not After — You Deploy It Everywhere

Algorithms can flag suspicious activity, but they can’t yet tell you whether fraud is likely

by Theodora Monye
May 14, 2026
in Governance
[Image: A robot locked in a cage]

AI, automation and algorithms are proliferating across many sectors of the global economy, including regulated industries like pharma, where they already handle tasks such as clinical trial site selection. But as regulators catch up, corporate leaders need clarity on the areas where there can be no substitute for human accountability, says AI governance and board adviser Theodora Monye.

Across regulated industries, investment in AI is accelerating. The ambitions of AI projects tend to be consistent: faster decisions, reduced operational cost, better outcomes at scale. What is less consistent is where human judgment ends and algorithmic authority begins. That boundary is not a technical question but a governance one. And most organizations are still working on answering it.

The assumption that sufficiently sophisticated algorithms can eventually manage an organization, or significant parts of it, is increasingly embedded in how boards and leadership teams think about AI strategy. It is also wrong, not because algorithms lack capability, but because capability is not the same as accountability. In regulated industries, accountability is not optional.

EU AI Act

Algorithmic decision-making is no longer theoretical in regulated environments such as the pharmaceutical and adjacent sectors. AI systems are already making or informing consequential decisions: clinical trial site selection, vendor qualification, regulatory documentation, risk classification and compliance monitoring.

In each of these contexts, the question regulators, auditors and boards are beginning to ask is not whether an algorithm was involved. It is who was responsible for the decision the algorithm supported and whether that accountability was documented before the decision was made.

The EU AI Act addresses this directly. Article 14 requires that high-risk AI systems be designed to allow effective human oversight during the period in which they are in use. Article 26 places obligations on deployers to use such systems in accordance with instructions for use and to assign human oversight to natural persons who have the necessary competence, training and authority. The accountability obligation falls on the organization deploying the system, not on the system itself. Nonetheless, the precise scope of these obligations remains subject to legal interpretation. Article 26 also preserves deployer discretion in organizing oversight measures. 

What algorithms can & can’t do (yet)

Algorithms are powerful tools for managing complexity at scale. In pharmaceutical and contract research organization environments, the practical advantages are real: processing volumes of data that no human team could analyze in equivalent time, identifying patterns across variables invisible to manual review and reducing inconsistency in routine decision processes.

In clinical operations, algorithmic tools have changed how study teams approach site selection, patient recruitment forecasting and real-world evidence generation in ways that matter operationally. In compliance functions, they surface anomalies and risks that manual review would miss.

Three governance responsibilities remain with the organization regardless of what the algorithm provides.

  • Ethical judgment under uncertainty. Algorithms optimize for defined objectives within known parameters. They cannot weigh competing values, interpret ambiguous obligations or exercise the discretion that regulated industries require when situations fall outside established categories. A model can flag a transaction as anomalous. Determining whether that anomaly represents fraud, an error or a legitimate but unusual business activity requires human judgment, and judgment carries accountability.
  • Strategic direction. An algorithm can identify the most efficient path to a defined goal. It cannot determine whether that goal remains appropriate as circumstances change, whether the organization’s risk appetite has shifted or whether a regulatory landscape has moved in ways that require the strategy itself to be reconsidered. Those are questions for leadership.
  • Accountability. When a governance failure occurs, regulators do not ask which model made the decision. They ask who was responsible for ensuring the model was appropriate, properly overseen and operating within sanctioned boundaries. Remember that under Article 26 of the EU AI Act, accountability sits with the deploying organization. It cannot be automated. It must be held by identifiable people with the authority and the mandate to exercise it.
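The division of labor described above — the model flags, a named person decides — can be made concrete in a workflow. The sketch below is illustrative only: the class names, fields and threshold are assumptions, not a description of any real fraud system. The point it encodes is that the algorithm may prioritize the review queue, but the final determination is always a documented human decision attributed to an identifiable reviewer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnomalyFlag:
    """A transaction the model scored as anomalous (all fields illustrative)."""
    transaction_id: str
    anomaly_score: float          # model output in [0.0, 1.0]
    model_version: str

@dataclass
class ReviewDecision:
    """The human determination the flag requires before any action is taken."""
    flag: AnomalyFlag
    reviewer: str                 # identifiable, accountable person
    outcome: str                  # "fraud", "error" or "legitimate"
    rationale: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def triage(flag: AnomalyFlag, threshold: float = 0.8) -> str:
    """The model may order the queue, but it never closes a case itself."""
    return "urgent-human-review" if flag.anomaly_score >= threshold else "routine-human-review"

flag = AnomalyFlag("TX-1042", 0.91, "anomaly-model-v3")
queue = triage(flag)              # "urgent-human-review"
decision = ReviewDecision(flag, reviewer="j.doe@example.com",
                          outcome="legitimate",
                          rationale="Known seasonal bulk order")
```

Note the design choice: `ReviewDecision` cannot be constructed without a reviewer and a rationale, so the accountability trail exists by construction rather than by policy reminder.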

Autonomy levels matter

Determining how much autonomy an AI system should be permitted to exercise, and what oversight is required at each level, is one of the most consequential practical decisions in AI governance, and one that most organizations have not made explicitly. AI systems do not simply operate autonomously or not. They exist on a spectrum: from systems that provide information for human decision-making, through systems that recommend actions, to systems that execute decisions within defined parameters, to fully agentic systems capable of taking consequential actions without real-time human involvement.

Article 14 of the EU AI Act reflects this reality, requiring that oversight measures be commensurate with the risks, level of autonomy and use context of the high-risk AI system. The accountability structures, oversight mechanisms and documentation obligations appropriate for an information-providing system are not sufficient for an agentic one. Treating them as equivalent is a governance error with direct regulatory consequences.

Classifying autonomy levels before deploying AI systems, and designing oversight proportionate to each system’s level of autonomy, is not a box-ticking compliance exercise. It is the foundation on which everything else rests.
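The four-point spectrum described above (inform, recommend, execute, agentic) lends itself to an explicit register. This is a minimal sketch, assuming a hypothetical internal oversight catalog: the level names follow the article's spectrum, but the specific oversight measures attached to each level are illustrative assumptions that a real organization's legal and compliance functions, not engineering, would define.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Four-level autonomy spectrum, from information provision to agentic action."""
    INFORM = 1      # provides information for human decision-making
    RECOMMEND = 2   # recommends actions a human accepts or rejects
    EXECUTE = 3     # executes decisions within defined parameters
    AGENTIC = 4     # takes consequential actions without real-time human involvement

# Hypothetical mapping of each level to minimum oversight measures.
OVERSIGHT = {
    AutonomyLevel.INFORM:    ["named owner", "periodic output review"],
    AutonomyLevel.RECOMMEND: ["named owner",
                              "human accepts or rejects each recommendation"],
    AutonomyLevel.EXECUTE:   ["named owner", "pre-defined decision boundaries",
                              "sampled post-hoc audit", "kill switch"],
    AutonomyLevel.AGENTIC:   ["named owner", "pre-defined decision boundaries",
                              "continuous monitoring", "kill switch",
                              "documented escalation path"],
}

def required_oversight(level: AutonomyLevel) -> list[str]:
    """Look up the minimum oversight measures for a system's autonomy level."""
    return OVERSIGHT[level]
```

Making the mapping a lookup rather than a per-project judgment call is what turns "oversight commensurate with autonomy" from an aspiration into something auditable.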

Lessons for leaders

Establish accountability before deployment. Who is responsible for each AI system’s decisions and how that accountability is documented should be resolved before the system goes live. Resolving it retrospectively is harder and carries less credibility under regulatory scrutiny.
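One way to make pre-deployment accountability concrete is a registry entry that cannot be created without a named owner. The record below is a sketch under stated assumptions — every field name and value is hypothetical, and the Article 26 reference is to the spirit of assigning oversight to competent natural persons, not a claim that this structure satisfies the Act.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystemRecord:
    """Illustrative pre-deployment accountability entry; fields are assumptions."""
    system_name: str
    business_owner: str           # accountable person (in the spirit of EU AI Act Art. 26)
    autonomy_level: str           # e.g. "inform", "recommend", "execute", "agentic"
    oversight_measures: tuple     # immutable: documented before go-live
    approved_date: str

record = AISystemRecord(
    system_name="site-selection-ranker",
    business_owner="Head of Clinical Operations",
    autonomy_level="recommend",
    oversight_measures=("human accepts or rejects each ranking",
                        "quarterly model performance review"),
    approved_date="2026-04-01",
)
```

Because the dataclass is frozen and has no defaults, a system simply cannot be registered without an owner, an autonomy classification and documented oversight — the record exists before the system goes live, not after the first incident.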

Classify autonomy levels explicitly. A system that surfaces information for human review requires different oversight from one that executes decisions autonomously. Making those distinctions explicit and building oversight proportionate to each level is the practical work of AI governance.

Build governance into operating models, not compliance functions. Governance that sits only within the compliance team is fragile. When governance obligations are integrated into how decisions are made, documented and reviewed across the organization, they have a better chance of holding.

Treat regulatory frameworks as a floor. The EU AI Act, ISO/IEC 42001:2023 and the NIST AI Risk Management Framework set minimum expectations. Organizations that treat those minimums as the target will find themselves reactive as frameworks evolve. Those that build beyond the minimum are better placed to absorb change without disruption.


Theodora Monye

Theodora Monye is an AI governance and board adviser. She formerly was a public governor of the Frimley Health NHS Foundation Trust.


© 2026 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2026 Corporate Compliance Insights