When AI influences decisions about people at agentic speed, having a human-centered governance framework in place is critical. Diana Kelley, CISO at Noma Security, details how to establish this framework and why it won’t be the AI vendors who take the blame for failures.
As agentic systems move into production, AI will increasingly be used to make workplace decisions. These systems help screen job candidates, optimize employee schedules, flag productivity patterns and inform workforce planning. But there is a governance question many organizations still struggle to answer clearly: Who is accountable when AI influences decisions about people?
It’s not the model or the algorithm or the AI provider. It’s the organization deploying the system.
Much of the conversation about AI governance focuses on frameworks, policies and regulatory checklists. Those are valuable. Organizations should align with guidance like the NIST AI Risk Management Framework, which emphasizes accountability, transparency and oversight. But after decades in cybersecurity and technology leadership, I’ve learned governance success rarely hinges on whether the right framework exists on paper. It comes down to something simpler yet harder: ownership.
Once AI touches hiring, scheduling, productivity measurement or compensation, it’s no longer just a technology system. It becomes part of how the organization governs its workforce. And accountability for how it is used ultimately sits with the employer.
Where responsibility resides
When I was at Microsoft, some customers wanted Azure, the company’s cloud computing platform, locked down by default, security maximized, everything closed until explicitly opened. The instinct was understandable. But a platform locked that tight wouldn’t have been adopted. We developed a shared responsibility framework, and to explain it, I often returned to one analogy: We are the bank. When your money is in our vault, the security of the vault is our responsibility. But if you overdraft your account or hand your login credentials to a stranger, that’s where your responsibility begins.
That same logic applies directly to AI vendors today. Vendors have real obligations to build systems that are secure, reliable and designed to reduce bias within their perimeter. But the moment an organization decides how that system is used, what data feeds it and how its outputs influence decisions about employees, the accountability shifts. You can’t outsource accountability by pointing to a contract or a compliance certification.
To ensure that those decisions are aligned with the business, cross-functional oversight is essential before any workplace AI system is deployed. AI governance can’t sit solely within IT or data science. HR, legal, compliance, security and business leadership all bring perspectives that technical teams alone will miss.
Practical application
In practice, the question isn’t just who is involved. It’s who has decision authority at the moment risk appears. A practical starting point is to inventory the AI use cases already in production or pilot. Effective AI governance must account for the speed, scale and unpredictability of these systems. Identify each use case, assign a decision owner, define intervention triggers tied to model outputs and workforce impact, and establish which function has authority at each trigger point before deployment begins.
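To make that starting point concrete, here is a minimal sketch, in Python, of what one inventory entry might look like. The class names, the functions listed and the trigger wording are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class InterventionTrigger:
    """A condition tied to model outputs or workforce impact that forces a human decision."""
    description: str   # e.g., selection-rate gap across groups exceeds an agreed threshold
    authority: str     # function empowered to act when the trigger fires, e.g., "HR + Legal"
    action: str        # agreed response: "pause", "escalate" or "roll back"

@dataclass
class AIUseCase:
    """One entry in the pre-deployment inventory of workplace AI systems."""
    name: str
    decision_owner: str                                    # single accountable owner, named before go-live
    stakeholders: list[str] = field(default_factory=list)  # functions reviewing this use case
    triggers: list[InterventionTrigger] = field(default_factory=list)

# Hypothetical entry; every value below is illustrative.
screening = AIUseCase(
    name="Resume screening assistant",
    decision_owner="VP, Talent Acquisition",
    stakeholders=["HR", "Legal", "Compliance", "Security"],
    triggers=[
        InterventionTrigger(
            description="Selection-rate disparity across protected groups above agreed threshold",
            authority="HR + Legal",
            action="pause and escalate",
        ),
        InterventionTrigger(
            description="Model retrained on new data sources without review",
            authority="Security + Compliance",
            action="block deployment",
        ),
    ],
)

print(screening.decision_owner, len(screening.triggers))
```

The specific fields matter less than the discipline: no entry is complete, and no system goes live, until the owner and the triggers are filled in.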
I’ve seen organizations struggle when they try to define ownership after the fact. Once a system is already in production, the focus shifts to keeping it running rather than questioning whether it should have been deployed at all.
From there, establish a lightweight working group anchored in compliance, risk or legal, with stakeholders engaged based on the specific use case. Rather than a standing committee reviewing every system, ownership is scoped at the use-case level, with clear decision-makers identified before deployment begins. If no one owns the decision, the system shouldn’t go live.
The right specificity
A common failure mode to avoid is defining these trigger points too generically, which leaves teams debating ownership in the middle of an incident instead of acting on a decision that was already agreed upon.
For example, trigger points might include any system that influences hiring or termination decisions, uses sensitive employee data or directly impacts compensation, scheduling or performance evaluation. If a system affects hiring or candidate selection, HR and legal take the lead, with compliance ensuring regulatory alignment and security validating data handling. If a system processes sensitive employee data, security and privacy or data protection functions are likely to lead, with compliance reinforcing policy obligations. If a system is optimizing schedules, productivity or compensation, business leaders own the decision, but only within guardrails defined jointly with legal and compliance. Map who and what a system impacts and align ownership and oversight accordingly.
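One way to keep that mapping from living only in people’s heads is to encode it. Below is a minimal sketch, assuming the three impact categories described above; the category keys and function names are purely illustrative and would need to match your own organization.

```python
# Illustrative ownership map: which functions lead and which support for each impact category.
OWNERSHIP_MAP = {
    "hiring_or_termination": {"lead": ["HR", "Legal"], "support": ["Compliance", "Security"]},
    "sensitive_employee_data": {"lead": ["Security", "Privacy"], "support": ["Compliance"]},
    "scheduling_productivity_compensation": {"lead": ["Business leadership"], "support": ["Legal", "Compliance"]},
}

def owners_for(impacts: list[str]) -> set[str]:
    """Return every function that must be at the table for a system with these impacts."""
    involved: set[str] = set()
    for impact in impacts:
        entry = OWNERSHIP_MAP.get(impact)
        if entry is None:
            # Unmapped impact: no one owns the decision, so the system shouldn't go live.
            raise ValueError(f"Unmapped impact '{impact}': assign an owner before deployment")
        involved.update(entry["lead"])
        involved.update(entry["support"])
    return involved

# A system that touches both hiring and sensitive employee data pulls in every function listed.
print(sorted(owners_for(["hiring_or_termination", "sensitive_employee_data"])))
```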
Impact assessments matter, too, and they need to go beyond technical accuracy to the real-world human outcome. I encountered a scheduling optimization system during an early enterprise AI deployment. On paper, the model was highly efficient, maximizing coverage while minimizing labor costs. But when a cross-functional team examined the outputs, they found it was disproportionately concentrating less desirable shifts among certain demographic groups. The system had learned from historical inequities embedded in the data. In a compliance context, that creates potential labor law and discrimination risk.
In another case, a productivity monitoring system flagged high performers as risks due to anomalous work patterns, triggering unnecessary interventions.
What made the difference in both of these cases wasn’t the model itself. It was the presence of stakeholders who understood workforce impact and were empowered to challenge the output before deployment. If the people reviewing the system can’t explain how its outputs could affect worker rights or employee retention, you don’t yet have the right governance in place.
Applying the brakes
Ownership also means defining when a function has the authority to say no. Compliance and legal should have clear authority to halt or escalate decisions that affect protected classes or worker rights or that cannot be adequately explained and audited. Security should be able to halt or escalate deployment decisions when data lineage, access controls or model integrity are unclear. Business leaders can move quickly when impact is low and reversible, but decisions that materially affect people’s livelihoods should require cross-functional approval, not unilateral action.
That imperative becomes even more urgent as organizations deploy agentic AI systems that make decisions at machine speed. We’ve all seen how quickly rules-based AI in an applicant tracking system can automatically reject hundreds or thousands of candidates. Now imagine an autonomous agentic version of that system also writing and sending job offers and kicking off background checks at the same pace, all with little to no human oversight.
The “human in the loop” principle was designed for a world where AI operated slowly enough for a person to review outputs before anything consequential happened. Agentic systems, where AI autonomously takes actions and chains decisions together, break that assumption. Humans can’t review outputs fast enough to provide meaningful oversight without slowing down the system.
The organizations thinking most seriously about this aren’t abandoning oversight, but they are thinking about it differently. Rather than relying on a person to review each output, they build on the guardrails and decision authorities defined by the cross-functional team to embed layered, automated governance directly into the system. In an agentic system, this could look like one agent proposing a change, another modeling downstream impact and a third evaluating policy and access controls. Oversight still exists, but it operates at the speed of the system itself.
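As a rough illustration of that layering, the sketch below chains a proposing agent, an impact-modeling agent and a policy gate. Every function name, field and check is a hypothetical stand-in, not a reference to any real agent framework.

```python
def propose_change(context: dict) -> dict:
    """Agent 1: proposes an action, e.g., a shift reassignment."""
    return {"action": "reassign_shift", "employee_id": context["employee_id"], "reversible": True}

def model_downstream_impact(proposal: dict) -> dict:
    """Agent 2: models who is affected and how reversible the change is."""
    return {"affects_protected_class": False, "affects_compensation": False,
            "reversible": proposal["reversible"]}

def evaluate_policy(proposal: dict, impact: dict) -> bool:
    """Agent 3: applies the guardrails defined by the cross-functional team."""
    if impact["affects_protected_class"] or impact["affects_compensation"]:
        return False                 # requires cross-functional approval, not autonomous action
    return impact["reversible"]      # low-impact, reversible changes may proceed

def run_with_oversight(context: dict) -> dict | None:
    proposal = propose_change(context)
    impact = model_downstream_impact(proposal)
    if not evaluate_policy(proposal, impact):
        # Escalate to the pre-assigned decision owner instead of acting autonomously.
        return None
    return proposal

print(run_with_oversight({"employee_id": "E-1042"}))
```

The point isn’t the specific checks; it’s that the automated layer enforces guardrails agreed before deployment and escalates to a named human owner when they trip.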
As AI becomes more autonomous, this is what responsible governance looks like: not a checklist applied once at deployment but oversight designed into how decisions are made and continuously monitored for drift, bias and unintended behavior.
Responsible workplace AI governance requires clarity about who owns decisions, who has the authority to intervene and when that intervention must happen. If an AI system discriminates against employees or mishandles their data, it’s your organization, not the vendor, that will be held accountable.
If you’re starting this journey, focus on two things first. Define who has the authority to stop an AI deployment when risks to workers emerge. Then ensure that every system affecting employees has a clearly assigned decision owner and is reviewed by the appropriate stakeholders before it goes live, not after.
Governance doesn’t fail because organizations lack frameworks. It fails because the right people weren’t involved in the planning and pre-deployment phases to ask the right questions, and because no one was unambiguously empowered to act when it mattered.
The speed of AI may change how decisions are made, but it doesn’t dilute accountability. If anything, it concentrates it. And the responsibility for how those decisions affect people will always remain human.


Diana Kelley is the chief information security officer at Noma Security. She also serves on the boards of WiCyS, The Executive Women’s Forum (EWF) and InfoSec World. Diana previously held roles at Protect AI, Microsoft, IBM Security, Symantec, Burton Group (now Gartner) and KPMG. 







