A 1979 IBM training manual declared, “A computer can never be held accountable, therefore a computer must never make a management decision.” Saumitra Das, vice president of engineering at Qualys, examines how organizations implementing systems where 10 or more autonomous models collaborate face new liability challenges, why regulators focus on duty of care over intent when AI systems fail and how we’re entering a gold-rush era where the need for standardization is urgent.
A 1979 IBM training manual declared the following: “A computer can never be held accountable, therefore a computer must never make a management decision.”
However, over the past several years, generative AI models have been integrated into nearly every facet of our lives, automating and streamlining previously manual processes in both personal and professional settings. As the application of these models becomes more strategic and complex, particularly in enterprise use cases, so do the implications.
Enter agentic AI, the Model Context Protocol (MCP) and machine learning (ML) workflows. Suddenly, organizations are implementing systems in which 10 or more different autonomous models collaborate to complete tasks. For security and compliance, this is a much larger challenge.
As these chains of models work to complete the tasks at hand, they look for the path of least resistance. By nature, autonomous agents are trained to find the easiest, most efficient way to complete the assigned job. This means they can often find ways around guardrails that were designed with human behavior in mind and do not anticipate machine shortcuts. Comprehensive regulations, robust access control and significant human oversight are therefore imperative.
As these agents are granted access to tools that facilitate operations such as supply chain management, customer service and security operations, they introduce a new realm of liability and accountability challenges for organizations.
This paradigm shift will impact the tech industry for years to come.
Responsibility and liability: Who is accountable when AI makes decisions?
As demonstrated by incidents with SolarWinds and Yahoo over the past decade, where the SEC has held business leaders personally responsible for cybersecurity failures, the question of liability for legal issues stemming from the application of agentic AI systems is more pertinent than ever.
The bottom line: Even if an organization did not intend to cause harm, they are likely to be held to a duty-of-care standard.
For regulators, the concept of intent or control when it comes to AI-driven compliance failures is still evolving. However, intent is not as important as duty of care. In instances of security or compliance failures, organizations will need to be prepared to show that they followed best practices to a reasonable extent in deploying such systems, including audit trails and risk assessments. They must maintain thorough documentation detailing the specific processes followed and provide clear evidence that a balanced evaluation of risks and benefits was conducted prior to deployment.
Best practices for ensuring visibility
One of the major concerns associated with agentic AI is the lack of visibility into organizational assets, inventory of models in production, agentic workflows and data exposure.
To verify that the decisions made by autonomous agents are auditable and traceable for regulatory purposes, each agentic workflow should be logged as often as possible. Specifically, this includes logging context such as the following (see the sketch after this list):
- Agent identity
- The user on whose behalf it was performing the action
- The chain of thought for each action
- What tools were used for the actions
- What data was sent out of the system for external search
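
As a minimal sketch of what such a per-action record might look like, consider the following Python example. The field names, schema and sample values here are illustrative assumptions, not a standard:

```python
import json
import uuid
from datetime import datetime, timezone

def build_audit_record(agent_id, acting_user, chain_of_thought,
                       tools_used, external_data_sent):
    """Assemble one auditable record for a single agent action.

    All field names are illustrative; adapt them to your
    organization's logging schema.
    """
    return {
        "record_id": str(uuid.uuid4()),            # unique ID for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_identity": agent_id,                # which agent acted
        "on_behalf_of_user": acting_user,          # whose request triggered it
        "chain_of_thought": chain_of_thought,      # the agent's planning steps
        "tools_used": tools_used,                  # tools invoked for the action
        "external_data_sent": external_data_sent,  # data leaving the system
    }

# Example: serialize the record as one JSON line for an append-only log.
record = build_audit_record(
    agent_id="invoice-triage-agent-v2",
    acting_user="jsmith@example.com",
    chain_of_thought=["classify invoice", "look up vendor", "flag mismatch"],
    tools_used=["vendor_db.lookup", "email.send"],
    external_data_sent=["vendor name sent to external search API"],
)
print(json.dumps(record))
```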
The logs should be immutable to resist tampering. Additionally, any decision must be traceable back to the model that made it and its associated data, as well as any data provided to the model as context through retrieval-augmented generation (RAG). Unified platforms offering real-time visibility can enhance this process, ensuring comprehensive, tamper-proof records.
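
One way to make such logs tamper-evident is to chain each entry to the hash of the one before it, so altering any record invalidates everything after it. The sketch below assumes a simple in-memory list for illustration; a production deployment would typically back this with a write-once store or a managed audit service:

```python
import hashlib
import json

def append_chained(log_entries, record):
    """Append a record whose hash covers the previous entry's hash,
    so any later modification breaks the chain."""
    prev_hash = log_entries[-1]["entry_hash"] if log_entries else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log_entries.append({"record": record, "prev_hash": prev_hash,
                        "entry_hash": entry_hash})

def verify_chain(log_entries):
    """Recompute every hash in order; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log_entries:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

# Usage: records could also carry model version and RAG document IDs.
log = []
append_chained(log, {"agent_identity": "triage-agent", "action": "flag"})
append_chained(log, {"agent_identity": "triage-agent", "action": "notify"})
assert verify_chain(log)
```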
Best practices for ensuring compliance include:

- A robust data governance strategy covering data origin, purpose and bias mitigation
- A centralized governance framework that facilitates cross-functional oversight
- Tiered risk frameworks that match the level of oversight to use case sensitivity

For instance, a lighter touch may be employed for basic chatbot functionality, while high-stakes areas like financial decision-making warrant more stringent reviews.
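A tiered risk framework can start as a simple policy table consulted before any agentic workflow is deployed. The tiers, use cases and review cadences below are hypothetical examples of how sensitivity might map to oversight:

```python
# Hypothetical oversight tiers, from lightest to strictest.
OVERSIGHT_TIERS = {
    "low": {"human_approval": False, "review_cadence_days": 90},
    "medium": {"human_approval": False, "review_cadence_days": 30},
    "high": {"human_approval": True, "review_cadence_days": 7},
}

# Map use-case sensitivity to a tier; these assignments are illustrative.
USE_CASE_TIERS = {
    "faq_chatbot": "low",
    "customer_support_triage": "medium",
    "financial_decisioning": "high",
}

def oversight_for(use_case: str) -> dict:
    """Look up the oversight policy; default to the strictest
    tier for unknown use cases."""
    tier = USE_CASE_TIERS.get(use_case, "high")
    return OVERSIGHT_TIERS[tier]

print(oversight_for("financial_decisioning"))
# {'human_approval': True, 'review_cadence_days': 7}
```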
Defining guardrails and responsibilities
While legal frameworks are emerging to govern the use of agentic AI in high-risk sectors like finance or healthcare, these are all still evolving.
To ensure that data processed by agentic AI systems remains compliant with evolving requirements, including the EU AI Act, Federal Trade Commission (FTC) enforcement, state privacy laws such as the California Consumer Privacy Act (CCPA) and frameworks from the National Institute of Standards and Technology (NIST), organizations should do the following:
- Focus on data governance, including marking the origin and purpose of data, and ensure that any data used for decision-making or model training is free from bias.
- Establish a clear log of AI actions, including an agent's planning steps (chain of thought) and which tools it ultimately used under which role.
- Define where a human-in-the-loop approval is needed for an agentic workflow to continue (see the gating sketch after this list).
- Proactively improve red team assessments of AI models and agentic workflows, as well as general cybersecurity assessments of all first-party and third-party systems delivering the agentic AI (IaaS, PaaS and SaaS).
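
For the human-in-the-loop point above, a minimal gate can pause a workflow whenever a proposed action crosses a risk threshold. The scoring function, threshold and approval callback below are all assumptions for illustration, not a prescribed design:

```python
RISK_THRESHOLD = 0.7  # hypothetical cutoff above which a human must approve

def risk_score(action: dict) -> float:
    """Placeholder scorer; a real deployment would weigh data sensitivity,
    financial impact, reversibility, etc."""
    return 0.9 if action.get("moves_money") else 0.2

def execute_with_gate(action: dict, request_human_approval) -> str:
    """Run low-risk actions immediately; block high-risk ones on approval."""
    if risk_score(action) >= RISK_THRESHOLD:
        if not request_human_approval(action):
            return "rejected by human reviewer"
    return f"executed: {action['name']}"

# Usage: wire in a real approval channel (ticket, chat prompt, dashboard).
result = execute_with_gate(
    {"name": "issue_refund", "moves_money": True},
    request_human_approval=lambda action: True,  # stand-in approval callback
)
print(result)
```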
How should organizations approach agentic AI in 2026 and beyond?
We are currently in a gold-rush era of AI, and for organizations to navigate this new frontier while maintaining compliance and safety, standardization is urgently needed.
Over the next several years, we will likely see an increase in organizations adding “chief AI officers” and establishing a centralized, cross-functional risk management team that provides guidance on implementing AI models, how they should be used and what frameworks are in place to ensure compliance.
The pursuit of innovation and efficiency cannot completely overtake the responsibility for secure and compliant operations. Business leaders need to carefully balance innovation and accountability, and AI may be the catalyst that forces companies to finally take data governance seriously.


Saumitra Das, PhD, is vice president of engineering at Qualys, a cybersecurity compliance provider. Das was CTO and co-founder at Blue Hexagon and previously held roles at Qualcomm, Intel and Microsoft.