Organizations have long treated new technology as something visible and reviewable. That model is breaking down because of platforms’ built-in AI agents, business advisor Bill Lewis explains. For these agents, compliance teams must build visibility, assign ownership, document permissions and ensure default settings do not quietly become policy.
Most compliance teams are still preparing for AI as if it arrives through a formal proposal. That is no longer the main risk.
A new class of software is appearing inside the enterprise systems companies already use and trust — Microsoft 365, Google Workspace, Salesforce and others. These tools do more than answer questions. They can read information, make recommendations, trigger workflows, move data between systems and, in some cases, act with a degree of autonomy that creates real compliance exposure before anyone knows it.
Many organizations are not prepared for the way agentic AI is entering business: quietly, through routine software changes, default settings, partner ecosystems and embedded capabilities that do not always trigger the same scrutiny as a new standalone deployment.
For compliance and risk teams, this matters. If an AI capability can access sensitive data, influence decisions, initiate actions or operate inside a regulated workflow, it must be governed.
Traditionally, companies treated new technology as something visible and reviewable. A business team would request it, IT would assess it, security would review it, legal would check the contract, and leadership would decide whether the risk was acceptable. That model is breaking down. Agentic capabilities can appear inside tools already approved by organizations. In some cases, they may be available before internal approval or compliance review.
This trend has created a risk blindspot and a governance problem. If an organization does not have a clear inventory of where AI agents exist, what they are allowed to do, which systems they touch and who controls them, then it cannot honestly say it is managing the risk. It is simply assuming the risk is under control because the software came from a trusted vendor.
Why the risk is different now
The threat is not that AI agents are mysterious or futuristic. The threat is that they are becoming ordinary.
Microsoft has said it now has visibility into more than 500,000 AI agents inside its own company and that those agents are generating tens of thousands of employee responses each day. Google products allow agent sharing within organizations unless administrators change defaults. Salesforce has continued expanding agentic offerings into regulated sectors, including healthcare.
These are not edge cases; they are signals of how enterprise software is changing. The compliance challenge is that these tools do not need to be malicious to create risk. A well-intentioned agent that can read confidential information, summarize sensitive records, trigger a workflow or transfer data between systems can still create serious problems if no one has defined boundaries, oversight, auditability or accountability.
In other words, the risk is not just what the agent is. It is what the organization has allowed it to become.
New questions and governance
Systems that deploy AI agents demand oversight from compliance, risk, legal, executives and the board. An AI agent that mishandles sensitive information or behaves unexpectedly is not just a technology incident. It is an enterprise governance failure.
Organization leaders must ask three sets of questions. Where do agents exist? Which systems contain them, which teams are using them and which are sanctioned? What can these agents do? Are they limited to drafting text, or can they access regulated data, recommend actions, trigger workflows, move information or act autonomously? And who controls them? Who approves them, who sets the rules, who reviews the logs and who is accountable if something goes wrong?
If those answers are unclear, an organization is exposed.
Existing governance frameworks simply were not designed for software that spreads through normal enterprise tasks without a distinct launch moment while having the ability to make and act on decisions. That means compliance leaders need to move from a project-based mindset to an inventory-based mindset. This starts by asking the questions above.
An inventory-based approach toward AI agents is especially important in regulated environments, where the combination of sensitive data, workflow automation and delegated authority can create exposure under privacy, security and sector-specific obligations.
A company does not need to wait for a catastrophic incident to discover that an agent has been over-permissioned.
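One way to catch over-permissioning before an incident is a routine comparison of what an agent has been granted against what its documented purpose requires. This is a minimal sketch; the function and the permission names are hypothetical, not drawn from any vendor's actual permission model:

```python
def over_permissions(granted: set[str], required: set[str]) -> set[str]:
    """Permissions the agent holds that its documented purpose does not need."""
    return granted - required

# Illustrative permission strings for an agent whose stated purpose
# is drafting email replies:
granted = {"read_mail", "send_mail", "read_files", "export_data"}
required = {"read_mail", "send_mail"}

excess = over_permissions(granted, required)
if excess:
    print(f"Over-permissioned: {sorted(excess)}")  # prints ['export_data', 'read_files']
```

Run periodically against the agent inventory, a check like this turns "we assume the vendor defaults are safe" into a documented, reviewable control.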
The practical takeaway
The right response is not panic; it is clarity. Compliance teams should assume that AI agents are already entering the enterprise through trusted software and should treat them as a live governance category. That means building visibility, assigning ownership, documenting permissions and making sure default settings do not quietly become policy.