Corporate Compliance Insights

Is the Three Lines Model Still Valid in the Agentic Era?

Humans in the loop — actually empowered to act

by Antonella Serine
March 30, 2026
in Compliance

Corporate compliance leaders don’t need to throw the baby out with the bathwater when it comes to AI governance, says Antonella Serine of KLA Digital. In fact, our familiar three lines model is still valid, as long as the three lines show up in the right places.

Many organizations have already mapped AI governance onto the three lines of defense. On paper, the architecture looks tidy. In practice, the model starts to strain when AI moves from helping employees think to taking steps inside business processes, where the real question is no longer whether a policy exists but whether someone can still intervene, challenge the system and explain what happened.

Compliance leaders are used to translating new risks into familiar structures. Cybersecurity, privacy and third-party risk all found a place inside the three lines of defense. AI should be no different. But something odd happens once AI stops behaving like a research assistant and starts behaving like part of the workflow — the shift the industry calls “agentic AI.”

If a tool only drafts a memo, summarizes a regulation or suggests research, the risk is mostly about bad content. Existing review processes can often absorb that. When AI triages alerts, routes a case, prepopulates a disclosure, recommends an adverse action or triggers a customer communication, the control problem changes. The organization now has to explain who owned the decision, who could intervene and what evidence exists if the answer was wrong.

That is where many three-lines diagrams begin to wobble.

The model itself is not broken. But it has to move from committee slides and policy binders into the workflow itself. The better question for compliance leaders is not “Do we have AI governance?” It is “At what point in this process can a human stop, challenge or override the system?” — and, crucially, what does that really mean?

The first line must own decision boundaries

In many companies, AI governance sits far from operations. There is a policy, a review forum and maybe an inventory of use cases. Meanwhile, the actual system is already embedded in frontline work.

The first line, the business team that owns the process, cannot outsource accountability to a vendor, a data science team or a policy document. Its job is to define decision boundaries before deployment and keep refining them as the use case changes.

That means getting concrete:

  • Is the system drafting, recommending or acting?
  • Which inputs are off-limits?
  • Which outputs can move forward automatically and which require human signoff?
  • What level of uncertainty, exception volume or policy conflict should pause the workflow?
  • If a vendor changes the model or pushes a major update, who decides whether the process still fits the original risk appetite?

These are operational control questions. Often, organizations describe AI use cases at the level of aspiration: “assist investigators,” “improve onboarding,” “increase efficiency in monitoring.” That language is harmless in a steering committee memo and useless in an incident review. A defensible first-line design names the exact task, the exact boundary and the exact point where human judgment remains mandatory.

If nobody in the business can say in plain English, “Here is where the system stops and a person must decide,” then the first line has not actually designed a control. It has written an aspiration.
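A decision boundary of this kind can be made concrete enough to test. The sketch below is a hypothetical illustration, not the author's design: the enum names, threshold values and field names are all assumptions standing in for whatever the first line documents as its risk appetite.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    AUTO_PROCEED = "auto_proceed"      # output may move forward automatically
    HUMAN_SIGNOFF = "human_signoff"    # a named person must approve first
    PAUSE_WORKFLOW = "pause_workflow"  # stop the process and escalate

@dataclass
class AiOutput:
    action: str            # "draft", "recommend" or "act"
    confidence: float      # model-reported confidence, 0.0 to 1.0
    policy_conflicts: int  # count of flagged policy conflicts
    exception_volume: int  # exceptions seen in the current batch

# Illustrative thresholds -- in practice these come from the first
# line's documented risk appetite, not from defaults in code.
CONFIDENCE_FLOOR = 0.85
MAX_EXCEPTIONS = 10

def decision_boundary(out: AiOutput) -> Disposition:
    """Name the exact point where human judgment remains mandatory."""
    if out.policy_conflicts > 0 or out.exception_volume > MAX_EXCEPTIONS:
        return Disposition.PAUSE_WORKFLOW
    if out.action == "act" or out.confidence < CONFIDENCE_FLOOR:
        return Disposition.HUMAN_SIGNOFF
    return Disposition.AUTO_PROCEED
```

The point is not the particular thresholds but that the boundary is written down, executable and reviewable, so "here is where the system stops and a person must decide" is a checkable statement rather than an aspiration.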


The second line must set the conditions for intervention

Once the first line defines the process, the second line (compliance, risk, legal, privacy, information security and related control functions) has to decide what meaningful oversight looks like in practice. This is where many AI programs lose specificity. Policies speak of fairness, accountability, transparency and governance. But when asked what those values mean in live operations, the answers get foggy:

  • How much drift triggers review?
  • Which types of decisions require sampling?
  • What evidence has to be preserved?
  • When should an issue be escalated to legal, to senior management or to the board?
  • What constitutes a material model change versus ordinary maintenance?

Second-line oversight becomes real when it turns principles into thresholds, triggers and evidence expectations.
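One way to see what "thresholds, triggers and evidence expectations" might look like is to write them as a reviewable standard rather than a policy sentence. This sketch is hypothetical; every number and category name is an assumption chosen for illustration, not a regulatory figure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OversightStandard:
    """Second-line principles expressed as thresholds and triggers.
    All values here are illustrative, not regulatory requirements."""
    max_drift_score: float = 0.10      # drift above this triggers review
    sample_rate_adverse: float = 1.00  # sample 100% of adverse-action cases
    sample_rate_routine: float = 0.05  # sample 5% of routine cases
    evidence_fields: tuple = ("input", "output", "reviewer",
                              "override", "timestamp")
    escalate_types: tuple = ("adverse_action", "regulatory_disclosure")

def review_required(standard: OversightStandard,
                    drift_score: float, decision_type: str) -> bool:
    """Turn 'accountability' into a testable yes/no at runtime."""
    return (drift_score > standard.max_drift_score
            or decision_type in standard.escalate_types)
```

Once the standard lives in this form, "how much drift triggers review?" has a single, auditable answer instead of a foggy one.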

Under the current EU AI Act timeline, most provisions apply from Aug. 2, 2026, though some obligations already apply and some product-related high-risk rules take effect later. For high-risk systems, the law's logic is concrete: Human oversight matters, logs matter and, for certain uses, deployers must complete a fundamental rights impact assessment before deployment. Even for organizations whose current use cases may not fall neatly into the act's high-risk categories, that logic is useful. Oversight has to be specific enough that someone can reconstruct what happened and respond when risk materializes.

In other words, second-line functions should stop asking only whether a policy exists and start asking whether the process produces evidence. Can the organization reconstruct who reviewed what, when a human overrode the system, how exceptions were handled and whether similar cases received consistent treatment? If the answer is no, the organization has a mechanics problem, not a language problem.
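A process "produces evidence" when every decision leaves a record that can answer those reconstruction questions without an engineer in the room. The sketch below is an assumed, minimal shape for such a record (the field names and helper functions are illustrative, not a prescribed schema):

```python
import json
from datetime import datetime, timezone

def record_decision(case_id: str, model_version: str, reviewer: str,
                    system_output: str, final_action: str) -> dict:
    """Append-only evidence record; the override flag is derived
    from the data, never asserted by hand."""
    entry = {
        "case_id": case_id,
        "model_version": model_version,
        "reviewer": reviewer,
        "system_output": system_output,
        "final_action": final_action,
        "human_override": system_output != final_action,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Round-trip through JSON to show the record is plain,
    # portable data -- reconstructable outside the vendor's tooling.
    return json.loads(json.dumps(entry))

def reconstruct_overrides(log: list) -> list:
    """Answer 'when did a human override the system?' from the log alone."""
    return [e["case_id"] for e in log if e["human_override"]]
```

If the log cannot answer a question like `reconstruct_overrides` answers, the organization has the mechanics problem the article describes, whatever its policy says.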

Internal audit should test controls in motion, not just on paper

Internal audit has a straightforward job in the AI era: find out whether the controls people talk about are the controls that actually operated.

That means testing more than approval records or committee minutes. Auditors should be able to sample real cases and trace the chain from input to action. They should ask whether the organization can identify the business owner, the reviewer, the escalation path and the basis for any override. They should test whether logs are retained long enough to support reconstruction and whether those logs are understandable to someone beyond the engineering team.

For deployers of high-risk AI systems, the EU AI Act requires automatically generated logs under the deployer’s control to be kept for at least six months. That is a legal expression of a broader governance truth: If the record disappears before the question arrives, the oversight was never fully in place.

Audit also needs to examine change management. A control set tested in January may no longer describe the system in April if the vendor updates a model, swaps a feature or changes how confidence scores are presented. The faster AI capabilities evolve, the less useful a one-time design review becomes.
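Change management of this kind can be checked mechanically: capture a fingerprint of the system as tested, then compare it against the live system before relying on the old control conclusions. This is a hypothetical sketch; the fingerprint fields are assumptions about what "material change" covers, following the examples in the paragraph above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelFingerprint:
    """What audit actually tested: version, inputs, score presentation."""
    version: str
    features: frozenset
    score_scale: str  # e.g. "0-1 probability" vs. "1-5 band"

def material_changes(tested: ModelFingerprint,
                     live: ModelFingerprint) -> list:
    """List the reasons a past control test no longer describes
    the live system. Empty list means the test still applies."""
    reasons = []
    if tested.version != live.version:
        reasons.append(f"model version changed: "
                       f"{tested.version} -> {live.version}")
    if tested.features != live.features:
        reasons.append("feature set changed")
    if tested.score_scale != live.score_scale:
        reasons.append("confidence-score presentation changed")
    return reasons
```

Run before each reliance decision, a check like this turns "the vendor may have changed something" into a dated, specific finding.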

This is where internal audit can add real value. Not by pretending to reverse-engineer every model but by asking the stubborn, practical questions that often get lost in the excitement:

  • Did the human reviewer have enough context to challenge the output?
  • Did the escalation path work under time pressure?
  • Were policy exceptions visible and approved?
  • Could the organization explain the outcome to a regulator, a customer or a board member without needing three engineers in the room?

When audit tests controls in motion rather than admiring them in slides, the three lines start working the way they should.

Conclusion

Compliance leaders do not need an exotic new governance doctrine for AI. The three lines of defense still work. What needs to change is where those lines show up.

First-line ownership has to live inside the workflow, where decisions are made and exceptions occur. Second-line standards have to define intervention points, evidence requirements and escalation triggers in language the business can actually use. Third-line testing has to examine whether those controls operated under normal conditions and under stress.

Plenty of organizations can show a policy that says “human in the loop.” But can they show what that human saw, what authority that person had and what happened when the human disagreed with the system?

Because when AI begins to act rather than simply advise, the real test of governance is whether the organization can demonstrate, case by case, that responsibility remained legible, intervention remained possible and evidence remained intact. That is what keeps the three lines of defense from turning into three lines of paperwork.

Tags: Artificial Intelligence (AI)

Antonella Serine


Antonella Serine is co-founder of KLA Digital. She previously served in roles at BNP Paribas and Accenture Argentina.


Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 


© 2026 Corporate Compliance Insights
