Corporate Compliance Insights

Responsible AI Governance Starts With Ownership

AI governance must be a collaboration among IT, HR, legal, compliance and leadership

by Diana Kelley
April 30, 2026

When AI influences decisions about people at agentic speed, having a human-centered governance framework in place is critical. Diana Kelley, CISO at Noma Security, details how to establish this framework and why it won’t be the AI vendors who take the blame for failures.

As agentic systems move into production, AI will increasingly be used to make workplace decisions. Systems help screen job candidates, optimize employee schedules, flag productivity patterns and inform workforce planning. But there is a governance question many organizations still struggle to answer clearly: Who is accountable when AI influences decisions about people?

It’s not the model or the algorithm or the AI provider. It’s the organization deploying the system.

Much of the conversation about AI governance focuses on frameworks, policies and regulatory checklists. Those are valuable. Organizations should align with guidance like the NIST framework, which emphasizes accountability, transparency and oversight. But after decades in cybersecurity and technology leadership, I’ve learned governance success rarely hinges on whether the right framework exists on paper. It comes down to something that’s simpler yet harder: ownership.

Once AI touches hiring, scheduling, productivity measurement or compensation, it’s no longer just a technology system. It becomes part of how the organization governs its workforce. And accountability for how it is used ultimately sits with the employer.

Where responsibility resides

When I was at Microsoft, some customers wanted Azure, the company’s cloud computing platform, locked down by default, security maximized, everything closed until explicitly opened. The instinct was understandable. But a platform locked that tight wouldn’t have been adopted. We developed a shared responsibility framework, and to explain it, I often returned to one analogy: We are the bank. When your money is in our vault, the security of the vault is our responsibility. But if you overdraft your account or hand your login credentials to a stranger, that’s where your responsibility begins.

That same logic applies directly to AI vendors today. Vendors have real obligations to build systems that are secure, reliable and designed to reduce bias within their perimeter. But the moment an organization decides how that system is used, what data feeds it and how its outputs influence decisions about employees, the accountability shifts. You can’t outsource accountability by pointing to a contract or a compliance certification.

To ensure that those decisions are aligned with the business, cross-functional oversight is essential before any workplace AI system is deployed. AI governance can’t sit solely within IT or data science. HR, legal, compliance, security and business leadership all bring perspectives that technical teams alone will miss. 

Practical application

In practice, the question isn’t just who is involved. It’s who has decision authority at the moment risk appears. A practical starting point is to inventory AI use cases already in production or pilot. Effective AI governance must account for the speed, scale and unpredictability of these systems. Identify the use case, assign a decision owner, define intervention triggers tied to model outputs and workforce impact, and establish which function has authority at each trigger point before deployment begins.
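The inventory-and-ownership step above can be sketched as a simple registry. This is a minimal illustration, not a prescribed tool: the class names, fields and use-case names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class InterventionTrigger:
    # A condition on model outputs or workforce impact that forces review
    condition: str   # e.g. "influences hiring or termination decisions"
    authority: str   # function with decision authority at this trigger point

@dataclass
class AIUseCase:
    name: str
    decision_owner: str  # must be assigned before deployment
    triggers: list[InterventionTrigger] = field(default_factory=list)

    def ready_for_deployment(self) -> bool:
        # If no one owns the decision, the system shouldn't go live
        return bool(self.decision_owner) and bool(self.triggers)

registry = [
    AIUseCase("resume-screening", "VP, Talent Acquisition",
              [InterventionTrigger("influences hiring decisions", "HR + Legal")]),
    AIUseCase("shift-optimizer", decision_owner="", triggers=[]),  # no owner yet
]

for uc in registry:
    print(f"{uc.name}: {'cleared' if uc.ready_for_deployment() else 'hold'}")
```

Even a lightweight inventory like this makes the gap visible: the system with no named owner and no defined triggers is held back before it reaches production.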

I’ve seen organizations struggle when they try to define ownership after the fact. Once a system is already in production, the focus shifts to keeping it running rather than questioning whether it should have been deployed at all.

From there, establish a lightweight working group anchored in compliance, risk or legal, with stakeholders engaged based on the specific use case. Rather than a standing committee reviewing every system, ownership is scoped at the use-case level, with clear decision-makers identified before deployment begins. If no one owns the decision, the system shouldn’t go live.

The right specificity

A common failure mode to avoid is defining these trigger points too generically, which leaves teams debating ownership in the middle of an incident instead of acting on a decision that was already agreed upon.

For example, trigger points might include any system that influences hiring or termination decisions, uses sensitive employee data or directly impacts compensation, scheduling or performance evaluation. If a system affects hiring or candidate selection, HR and legal take the lead, with compliance ensuring regulatory alignment and security validating data handling. If a system processes sensitive employee data, security and privacy or data protection functions are likely to lead, with compliance reinforcing policy obligations. If a system is optimizing schedules, productivity or compensation, business leaders own the decision, but only within guardrails defined jointly with legal and compliance. Map who and what a system impacts and align ownership and oversight accordingly.
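That mapping of impacts to lead and supporting functions can be expressed as a routing table. A sketch, assuming three illustrative impact categories; the category keys and function labels are for illustration only.

```python
# Hypothetical routing table: what a system touches -> who leads its review.
OWNERSHIP_MAP = {
    "hiring": {"lead": {"HR", "Legal"}, "support": {"Compliance", "Security"}},
    "sensitive_employee_data": {"lead": {"Security", "Privacy"}, "support": {"Compliance"}},
    "scheduling_or_compensation": {"lead": {"Business"}, "support": {"Legal", "Compliance"}},
}

def review_owners(impacts: set[str]) -> dict[str, set[str]]:
    # Union the lead and support functions across everything the system impacts
    owners = {"lead": set(), "support": set()}
    for impact in impacts:
        if impact not in OWNERSHIP_MAP:
            # Unmapped impact: ownership must be assigned before deployment
            raise ValueError(f"no owner mapped for impact '{impact}'")
        owners["lead"] |= OWNERSHIP_MAP[impact]["lead"]
        owners["support"] |= OWNERSHIP_MAP[impact]["support"]
    return owners

owners = review_owners({"hiring", "sensitive_employee_data"})
print(sorted(owners["lead"]))  # ['HR', 'Legal', 'Privacy', 'Security']
```

A system that touches more than one category simply accumulates reviewers, and an impact no one has mapped refuses to route at all, which is the desired failure mode.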

Impact assessments matter, too, and they need to go beyond technical accuracy to the real-world human outcome. I encountered a scheduling optimization system during an early enterprise AI deployment. On paper, the model was highly efficient, maximizing coverage while minimizing labor costs. But when a cross-functional team examined the outputs, they found it was disproportionately concentrating less desirable shifts among certain demographic groups. The system had learned from historical inequities embedded in the data. In a compliance context, this creates potential labor law and discrimination risks.

In another case, a productivity monitoring system flagged high performers as risks due to anomalous work patterns, triggering unnecessary interventions.

What made the difference in both of these cases wasn’t the model itself. It was the presence of stakeholders who understood workforce impact and were empowered to challenge the output before deployment. If the people reviewing the system can’t explain how its outputs could affect worker rights or employee retention, you don’t yet have the right governance in place.

Applying the brakes

Ownership also means defining when a function has the authority to say no. Compliance and legal should have clear authority to halt or escalate decisions that affect protected classes or worker rights or that cannot be adequately explained and audited. Security should be able to halt or escalate deployment decisions when data lineage, access controls or model integrity are unclear. Business leaders can move quickly when impact is low and reversible, but decisions that materially affect people’s livelihoods should require cross-functional approval, not unilateral action.
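That allocation of veto and approval authority can be made concrete as a deployment gate. A sketch under the simplifying assumption that approvals are tracked as a set of function names and a veto is just a flag in that set; none of these labels come from a real tool.

```python
def deployment_decision(impact: str, reversible: bool, approvals: set[str]) -> str:
    # Compliance, legal or security can halt outright by registering a veto
    if "VETO" in approvals:
        return "halted"
    # Business leaders may act unilaterally only when impact is low and reversible
    if impact == "low" and reversible:
        return "approved: business discretion"
    # Decisions that materially affect livelihoods need cross-functional sign-off
    required = {"Legal", "Compliance", "Business"}
    if required <= approvals:
        return "approved: cross-functional"
    return f"escalate: missing {sorted(required - approvals)}"

print(deployment_decision("high", False, {"Business"}))
# -> escalate: missing ['Compliance', 'Legal']
```

The point of encoding it is not automation for its own sake; it is that the veto and the escalation path are decided before an incident, not debated during one.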

That imperative becomes even more urgent as organizations deploy agentic AI systems that make decisions at machine speed. We’ve all seen how quickly rules-based AI in an applicant tracking system can automatically reject hundreds or thousands of candidates. Now imagine an autonomous agentic version of that system also writing and sending job offers and kicking off background checks at the same pace, all with little to no human oversight.

The “human in the loop” principle was designed for a world where AI operated slowly enough for a person to review outputs before anything consequential happened. Agentic systems, where AI autonomously takes actions and chains decisions together, break that assumption. Humans can’t review outputs fast enough to provide meaningful oversight without slowing down the system.

The organizations thinking most seriously about this aren’t abandoning oversight, but they are thinking about it differently. Rather than relying on a person to review each output, they use the guardrails and decision authorities defined by the cross-functional team to build layered, automated governance directly into the system. In an agentic system, this could look like one agent proposing a change, another modeling downstream impact and a third evaluating policy and access controls. Oversight still exists, but it operates at the speed of the system itself.
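The propose / model-impact / policy-gate chain could look roughly like the following. The agent functions, the keyword-based impact check and the policy flag are all illustrative assumptions, not a real agent framework.

```python
def propose(change: str) -> dict:
    # Agent 1: proposes a change
    return {"change": change}

def model_impact(proposal: dict) -> dict:
    # Agent 2: models downstream workforce impact (crudely, via keywords here)
    proposal["high_impact"] = any(
        term in proposal["change"]
        for term in ("compensation", "termination", "hiring")
    )
    return proposal

def policy_gate(proposal: dict, policy: dict) -> str:
    # Agent 3: enforces the guardrails the cross-functional team defined
    if proposal["high_impact"] and not policy.get("autonomous_high_impact_ok", False):
        return "escalate-to-human"
    return "execute"

policy = {"autonomous_high_impact_ok": False}
result = policy_gate(model_impact(propose("adjust compensation bands")), policy)
print(result)  # -> escalate-to-human
```

The gate runs at machine speed, so oversight keeps pace with the system, while anything touching livelihoods still lands in front of a person.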

As AI becomes more autonomous, this is what responsible governance looks like: not a checklist applied once at deployment but oversight designed into how decisions are made and continuously monitored for drift, bias and unintended behavior.

Responsible workplace AI governance requires clarity about who owns decisions, who has the authority to intervene and when that intervention must happen. If an AI system discriminates against employees or mishandles their data, it’s your organization, not the vendor, that will be held accountable.

If you’re starting this journey, focus on two things first. Define who has the authority to stop an AI deployment when risks to workers emerge. Then ensure that every system affecting employees has a clearly assigned decision owner and is reviewed by the appropriate stakeholders before it goes live, not after. 

Governance doesn’t fail because organizations lack frameworks. It fails because the right people weren’t involved in the planning and pre-deployment phases to ask the right questions, and no one was unambiguously empowered to act when it mattered.

The speed of AI may change how decisions are made, but it doesn’t dilute accountability. If anything, it concentrates it. And the responsibility for how those decisions affect people will always remain human.

Tags: Artificial Intelligence (AI)

Diana Kelley

Diana Kelley is the chief information security officer at Noma Security. She also serves on the boards of WiCyS, The Executive Women’s Forum (EWF) and InfoSec World. Diana previously held roles at Protect AI, Microsoft, IBM Security, Symantec, Burton Group (now Gartner) and KPMG.

© 2026 Corporate Compliance Insights
