The ever-shifting landscape around AI, from an unsettled regulatory picture to a brisk pace of innovation, is keeping many corporations on the back foot. But a governance-first mindset can help ensure responsible use of this potentially transformative tech. Tara Cho of Womble Bond Dickinson offers her keys for crafting an AI governance playbook.
As with most new technology, the rapid rise of AI has outpaced the legislative process. Debates over AI principles have also created political divides that further stall lawmakers and undermine corporate leaders’ ability to predict regulatory constraints. That uncertainty leaves companies searching for answers on how to keep pace with the technology while protecting themselves from potential liability.
The US lacks a centralized federal law governing AI use, and the Trump Administration has clearly signaled its preference for deregulation and a full emphasis on innovation. However, hundreds of AI bills are currently pending in state legislatures across the country, and several have already been enacted, creating a dizzying array of issues for compliance teams and in-house legal departments to track.
This uncertainty and rapidly evolving legal landscape point to the need for companies to implement an AI governance plan. With agentic AI rapidly proliferating, businesses must be proactive about protecting themselves, their customers and their employees.
Governance-first model
A recent whitepaper by Womble Bond Dickinson UK showed that among surveyed CEOs, 41% cited regulatory requirements as the most significant challenge in operationalizing AI. Even companies that aren’t using AI yet are planning how to do so responsibly: A survey by the International Association of Privacy Professionals (IAPP) found that 30% of companies not using AI reported actively working on AI governance. This trend points to a governance-first approach, in which organizations establish strong governance frameworks before adopting AI. The model is akin to data privacy strategies that start from global principles rather than the laws of a single jurisdiction, better positioning organizations to adapt as the law changes.
Many global companies can lean on their existing AI compliance and ethics programs. These guidelines may need to be tailored to state- or nation-specific laws, but they give companies a solid foundation on which to build, and in many regards they mirror overall data governance, including concepts such as notice, transparency, security and knowing where and how AI is deployed across an organization.
From there, companies can craft an AI governance plan that serves their needs in every applicable jurisdiction. Companies should first map where AI is being used within the organization, then develop a comprehensive plan around its usage, in conjunction with data governance, to determine what data or IP may be affected.
Agentic AI is here — and it isn’t going away
Agentic AI, the rapidly evolving technology designed to make autonomous decisions with minimal human oversight, has ignited intense debate about its workplace applications.
Beyond the common chatbots, agentic AI is being used to provide personalized, sophisticated customer service, often to the point that users may believe they are interacting with a human. For example, agentic AI is being used to troubleshoot tech issues and resolve customer complaints. On the retail side, such software can continuously scan consumer buying trends and update prices in real time to maximize revenue. Agentic AI has many other business applications, ranging from handling invoices to managing supply chains.
Some larger, more risk-averse companies have been slow to adopt agentic AI, and many individual users are reluctant to give up that much autonomy to AI.
A key issue is determining legal responsibility when agentic AI makes harmful decisions or produces incorrect outcomes. For example, what would happen if an agentic AI overseeing building access allowed an unauthorized person into a workplace, and that intruder then stole from the company? Who would be responsible, the business or the AI developer?
In such a case, legal liability might hinge on whether the business owner exercised due diligence in selecting a reliable AI program. In addition, the use of AI agents in customer service and similar functions intersects with longstanding laws on wiretap standards for call recording, pre-recorded messages and auto-dialing. AI governance that includes ongoing risk assessment of use cases, much like privacy impact assessments for novel data uses, can help reduce the potential for regulatory action, litigation and loss of IP. Formalizing the review process will also help reduce “shadow AI” (employees’ use of unsanctioned AI tools outside official channels), an increasing risk for organizations of all sizes.
Privacy is another key challenge of agentic AI. Its ability to gather information from diverse sources opens the door to disclosing personal data or building profiles, potentially breaching data protection laws or creating security vulnerabilities. This creates complex privacy challenges for compliance and legal departments, as well as IT teams.
Finally, agentic AI can overwhelm human oversight through its sheer scope. AI agents can work around the clock, at a much faster pace than human team members, so organizations need strong oversight procedures, including human audits and controls, to ensure AI is functioning as intended.
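Because that oversight challenge is operational as much as legal, a concrete illustration may help. The sketch below is a minimal, hypothetical example: the impact tiers, function names and logging approach are assumptions for illustration, not any vendor’s API. It shows one way an organization might gate high-impact agent actions behind human approval while keeping an audit trail:

```python
# A minimal, hypothetical sketch of a human-in-the-loop gate for agentic AI.
# Impact tiers, names and thresholds are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Impact(Enum):
    LOW = 1     # e.g., drafting an internal summary
    MEDIUM = 2  # e.g., answering a routine customer inquiry
    HIGH = 3    # e.g., changing a price or granting building access


@dataclass
class AgentAction:
    agent_id: str
    description: str
    impact: Impact
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


audit_log: list[dict] = []  # in practice, an append-only store that humans review


def execute_with_oversight(action: AgentAction, human_approver=None) -> bool:
    """Run low/medium-impact actions automatically; require sign-off for high."""
    approved = True
    if action.impact is Impact.HIGH:
        # Block until a designated human reviews the proposed action.
        approved = bool(human_approver and human_approver(action))
    audit_log.append({
        "agent": action.agent_id,
        "action": action.description,
        "impact": action.impact.name,
        "approved": approved,
        "at": action.timestamp,
    })
    return approved


# Example: an access-control agent must get human approval before acting.
ok = execute_with_oversight(
    AgentAction("door-agent-7", "grant lobby access to visitor badge 4411", Impact.HIGH),
    human_approver=lambda a: input(f"Approve '{a.description}'? [y/N] ") == "y",
)
```

The design choice worth noting is that every action, approved or not, lands in the audit log; that record is what makes the after-the-fact human audits described above possible.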
Keys to writing an AI governance playbook
The rise of agentic AI, along with the continued use of other generative AI and automated decision-making technologies, underscores the urgent need for robust AI compliance programs. As the regulatory landscape evolves and the ethical challenges of AI grow more complex, proactive governance has become a strategic necessity. Organizations must prioritize creating standardized AI governance policies, fostering cross-departmental collaboration and ensuring transparency and accountability in AI deployment.
By addressing challenges like bias, security vulnerabilities and third-party integration complexities, companies can harness AI’s transformative potential while mitigating its risks. A strong governance framework not only minimizes exposure to pitfalls but also empowers businesses to innovate responsibly and maintain stakeholder trust in an era of rapid technological advancement.
- Establish a strong foundation. Define AI governance principles and form a cross-functional AI governance committee (e.g., privacy, legal, IT, data security, marketing, etc.). This group should set clear objectives and build consensus throughout the organization, maintaining both legal oversight and stewardship of the organization’s core values.
- Create robust frameworks and policies. Establish specific, clear guidelines for developing, deploying and using AI systems in the organization. Implement a systematic approach to identifying, assessing and mitigating risks through risk assessments and reviews of use cases, including defining prohibited practices. High-risk use cases should be subject to more robust mitigating controls and escalated approvals (a minimal sketch of such tiered approvals follows this list).
- Implement and operationalize governance. Promote transparency, define roles and conduct continuous monitoring. Employee training is critical; team members at all levels should be educated on proper workplace AI usage. Integrate AI governance into the organization’s GRC model and acceptable-use concepts. Not all governance should be founded on restrictions, however; it should also reflect business strategy. Understanding data assets and AI objectives across the enterprise positions compliance teams to maximize innovation and future-proof deployment even in an uncertain regulatory regime. The more teams communicate, and the more visibility business and sales imperatives receive, the better governance can clear a path to faster deployment and strategic outcomes.
- Promote a culture of responsible AI. Such a culture starts at the top, with buy-in from leadership on the importance of responsible AI use. Because AI regulations are changing rapidly, companies will need to regularly review and update their AI governance frameworks and maintain a dynamic inventory of licensed and internally developed AI to ensure ongoing oversight and IP protection, both to secure their own IP and to prevent potential infringement or violations of acceptable-use terms for licensed products.
- Reduce potential for systemic bias. Organizations must validate training data, conduct audits and implement bias correction techniques to promote fairness and document the principles applied. Ensure governance efforts are well-documented and can be produced in response to any potential disputes or compliance scrutiny.
- Know the data. AI can expose regulated data, proprietary or confidential information and other IP. Risk assessments and deployment reviews should therefore carefully define what data can be ingested, who is responsible for the outputs and what other safeguards apply to various data sets. Using testing environments or sandboxes for development, or testing with anonymized or dummy data, can also reduce risk.
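To make the tiered-approval idea referenced above concrete, here is a minimal sketch of a dynamic AI use-case inventory. It is illustrative only: the use cases, data classes, tiers and approver roles are all hypothetical assumptions, not a prescribed framework, and any real inventory would need to reflect an organization’s own risk taxonomy.

```python
# A hypothetical sketch of a dynamic AI use-case inventory with risk-tiered
# approvals. Every entry, tier and approver role below is an assumption.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    owner: str                # accountable business owner
    data_classes: list[str]   # what data the system may ingest
    licensed: bool            # licensed third-party tool vs. built in-house
    risk_tier: str            # "low", "medium" or "high"


# Escalation path: higher tiers require more reviewers before deployment.
REQUIRED_APPROVALS = {
    "low": ["team_lead"],
    "medium": ["team_lead", "privacy"],
    "high": ["team_lead", "privacy", "legal", "governance_committee"],
}


def approvals_needed(use_case: AIUseCase) -> list[str]:
    """Return the reviewer roles that must sign off before deployment."""
    approvers = list(REQUIRED_APPROVALS[use_case.risk_tier])
    # Any use case touching personal data gets a privacy review regardless of tier.
    if "personal_data" in use_case.data_classes and "privacy" not in approvers:
        approvers.append("privacy")
    return approvers


inventory = [
    AIUseCase("support_chatbot", "customer_ops", ["personal_data"], True, "medium"),
    AIUseCase("dynamic_pricing_agent", "retail", ["sales_history"], False, "high"),
]

for uc in inventory:
    print(f"{uc.name}: requires sign-off from {', '.join(approvals_needed(uc))}")
```

Recording the owner, data classes and license status alongside the risk tier keeps a single inventory useful for both the escalated-approval reviews and the IP-protection goals described above.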


Tara Cho, CIPP/US, CIPP/E, chairs Womble Bond Dickinson’s privacy and cybersecurity team and is one of the co-leads for the AI and machine learning team and the firm’s global digital solutions. Her practice is dedicated to counseling clients on privacy, data security, AI and other digital compliance and strategy issues across industries like technology, retail, logistics, e-commerce, energy and critical infrastructure, healthcare, healthtech and life sciences. 