To survive in the era of AI — and keep up with the flood of legislation surrounding the evolving technology — organizations need to transition from a document-first mindset to a data-first one, says Steph Holmes of EQS Group. That means embedding the rules and principles of your traditional compliance policies into a living compliance system, one that adapts and serves your employees in real time.
Traditional compliance policies served as the standard resource for an organization’s corporate compliance and governance for decades. They shielded the organization from liability and provided a centralized, uniform and static set of rules for its employees. This document-first philosophy assumed that a written set of principles could effectively mitigate risk and align a company around a culture of compliance.
Times have changed. The rapid advance and adoption of AI — and the ensuing fragmentation of AI, privacy and data laws — have created a fast-moving regulatory landscape where a static set of compliance policies is not only outdated but a significant legal liability.
Nearly three-quarters of companies in a recent survey said it was somewhat difficult to keep handbooks in compliance with federal law, while over half reported that they struggled to stay in compliance with state or local laws. At the same time, 85% of executives feel compliance requirements have become more complex in the past three years.
The world that necessitated a document-first compliance philosophy no longer exists, and compliance leaders can no longer anticipate and prepare for new regulations gradually. To navigate this constantly changing new world, compliance leaders need to become “system architects” and shift to a data-first philosophy, where compliance is a living system embedded across every part of an organization.
A fragmented, dynamic regulatory world
The 2026 legal landscape is defined by competing regulatory philosophies regarding AI. In the US, state legislatures have passed or introduced hundreds of AI, data and privacy laws, establishing rigorous duty of care standards. For example, Colorado’s AI Act (SB 24-205) mandates a duty of care for high-risk AI systems to prevent algorithmic discrimination, treating AI as a technology requiring active, ongoing oversight.
President Donald Trump then signed an executive order to establish a “minimally burdensome” approach to AI governance by the federal government. The order intends to significantly limit the ability of individual states to regulate AI. Internationally, the EU AI Act enforces a rigorous framework for regulating AI based on risk classification, with a higher potential for harm equaling stricter regulatory obligations. And organizations must continue to comply with strict data privacy laws under the existing GDPR.
The modern digital workforce is increasingly borderless thanks to remote and hybrid models. This creates a jurisdictional paradox for compliance leaders: the impossibility of maintaining a uniform handbook of rules when workers across different states and countries operate under fundamentally incompatible legal foundations. An employee's activity, such as accessing data or managing an AI deployment, may be legal in one jurisdiction, classified as "high-risk" in the EU and prohibited in a third.
The human element
How do workers actually process rules? We know that expecting an employee to manually refer to a dense set of policies while performing demanding tasks is a recipe for compounding compliance errors. There’s only so much cognitive load an employee can manage before they resort to making decisions based on localized knowledge rather than official policies. For document-first organizations, this creates an illusion of compliance, where a company feels legally insulated from risk even as their employees struggle to find missing information or apply outdated rules to rapidly evolving, high-velocity work.
This friction has a knock-on effect of stunting innovation. Employees who are overwhelmed by complex, outdated or inflexible compliance guidance will either bypass rules or stop progressing altogether, fearful of breaking a law. This actively discourages creativity and experimentation — important ingredients for growth. Ultimately, if compliance professionals are confused, you can bet that your employees are, too.
The data-first philosophy
How do organizations protect themselves from risks while empowering their employees to do great work? First, a CCO needs to embrace a data-first philosophy, where how rules are disseminated across an organization is just as important as what the rules say. Their objective should be building a living compliance system, which means moving compliance rules out of static policies and embedding them directly into the tools their employees are already using, including customer relationship management (CRM), content management system (CMS) and AI products. This shifts the burden of finding rules from the human to the system.
The crux is contextual awareness: rules need to understand where a user is (location) and who they are (role). Contextual awareness moves compliance from a passive resource to active guidance at the moment of decision. For example, a living compliance system can identify who is asking a question and where they are located to provide the correct jurisdictional answer in real time, such as surfacing approved language, flagging a risk or pointing the employee to the right resource, all within the tools they already use. It's a risk-based engagement model that meets employees where they are, rather than expecting them to go looking for answers. This reduces cognitive load while helping employees adhere to relevant laws; they feel empowered knowing the right information is at their fingertips the moment they need it.
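As a rough illustration of this kind of contextual routing, a context-aware rule lookup might match a user's jurisdiction and role against a rule store and return only the guidance that applies. The rule store, jurisdiction codes and role names below are hypothetical sketches, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    jurisdiction: str       # e.g. "US-CO", "EU", or "*" for all locations
    roles: frozenset        # roles the rule applies to; "*" means everyone
    guidance: str           # the advice surfaced to the employee

# Illustrative rules only; real content would come from the compliance team.
RULES = [
    Rule("US-CO", frozenset({"ml_engineer", "data_analyst"}),
         "High-risk AI systems require an impact assessment under SB 24-205."),
    Rule("EU", frozenset({"ml_engineer"}),
         "Check the EU AI Act risk classification before deployment."),
    Rule("*", frozenset({"*"}),
         "Consult the compliance team before handling personal data."),
]

def guidance_for(jurisdiction: str, role: str) -> list[str]:
    """Return guidance matching this user's location and role,
    including any catch-all ('*') rules."""
    return [
        r.guidance for r in RULES
        if r.jurisdiction in (jurisdiction, "*")
        and (role in r.roles or "*" in r.roles)
    ]

# An ML engineer in Colorado sees the SB 24-205 rule plus the catch-all;
# the same role in the EU sees the AI Act rule instead.
colorado_guidance = guidance_for("US-CO", "ml_engineer")
```

The design point is that the system, not the employee, performs the jurisdictional matching, so the same question yields the locally correct answer wherever it is asked.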
Ethical and strategic dilemmas
Transitioning to a more automated, context-aware system comes with its own issues, starting with employee privacy. While location-aware technology helps companies adhere to state or international laws, it raises legitimate privacy and employment law considerations. Best practice includes documenting the legal basis for location processing, conducting a privacy impact assessment, minimizing location retention and notifying employees about how and why location data is used. These safeguards help ensure the convenience of location-aware policies doesn’t come at the expense of employee privacy or data protection compliance.
AI systems that touch compliance must be responsibly implemented and overseen as well. The human-in-the-loop strategy remains essential. Organizations must bake human oversight into critical touchpoints, from setting the objectives and guardrails within which AI agents operate to checking AI outputs for accuracy and escalating the most sensitive decisions or problems for human judgment alone. And when using AI models to summarize and explain content, it’s important the information is grounded in truth. Techniques like retrieval-augmented generation (RAG) and model context protocol (MCP) make this possible by connecting AI tools directly to verified, organization-approved sources rather than allowing the AI to pull answers from the open web. This keeps the compliance team in control of the information AI can access and share.
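A minimal sketch of the RAG idea may help here. It assumes a toy in-memory document store and naive keyword-overlap scoring standing in for real embeddings and an LLM call; the point is that the answer is assembled only from organization-approved passages, never the open web:

```python
# Organization-approved sources (illustrative content, not real policy text).
APPROVED_DOCS = {
    "gdpr_basics": (
        "Personal data of EU residents must be processed lawfully, "
        "fairly and transparently under the GDPR."
    ),
    "ai_act_risk": (
        "High-risk AI systems under the EU AI Act require conformity "
        "assessments and ongoing human oversight."
    ),
}

def retrieve(question: str, docs: dict, k: int = 1) -> list:
    """Rank approved documents by keyword overlap with the question.
    A production system would use embeddings; overlap keeps this runnable."""
    q_terms = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_answer(question: str) -> str:
    """Return the best approved passage with its source attached.
    In a real system this context would be passed to an LLM as the only
    material it may answer from."""
    doc_id, passage = retrieve(question, APPROVED_DOCS)[0]
    return f"[source: {doc_id}] {passage}"
```

Because every answer carries its source, a reviewer can verify where the guidance came from, which is what keeps the compliance team in control of what the AI can say.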


Steph Holmes is director of compliance & ethics strategy at EQS Group. With more than a decade of industry experience, she helps organizations achieve strategic business goals through cultivating ethics, risk and corporate compliance. Passionate about empowering organizations to foster trust, transparency and accountability, she draws on her background in psychology and her credentials as a Leadership Professional in Ethics & Compliance (LPEC) and Certified Compliance & Ethics Professional (CCEP). In her role at EQS, she provides insights and guidance to clients to enhance their ethical culture and performance. 