From the EU’s AI Act to the DOJ’s enforcement warnings, pressure is mounting on legal and compliance leaders to govern AI use. Gartner analyst Lauren Kornutick urges compliance professionals to consider code of conduct and policy updates to ensure risk is adequately managed.
As artificial intelligence (AI) is rapidly adopted by organizations, expectations have intensified for legal and compliance leaders to provide clear guidance on its responsible use. Given increasing regulatory scrutiny, such as the European Union’s AI Act, the Colorado AI Act and New York City’s AI bias law, the inability to manage — or even a lack of awareness of — AI-related risks could threaten an organization’s compliance standing.
Most recently, the DOJ warned that it will take a forceful stance against the misuse of AI and will weigh a company's AI risk management as part of its overall compliance efforts. Given this regulatory and enforcement urgency, legal and compliance leaders are prioritizing updates to their AI risk management programs and communicating AI guidelines to employees across their organizations.
To this end, legal and compliance leaders should review and update their codes of conduct and other organizational policies, as these documents establish guardrails for employees. Most employees now have access to AI, and without guardrails, they may inadvertently leak sensitive data, rely on AI outputs that reflect bias in the underlying model or use the technology to draft misleading or deceptive communications.
Codes of conduct and policy documents also provide critical information to external stakeholders monitoring a firm's governance. Stakeholder demand for transparency and explainability in AI is growing: investors, suppliers, customers and other external stakeholders want to understand the guardrails placed around companies' use of AI, both for applications developed internally and for those deployed from third parties.
Updating and issuing guidance on any new technology, especially one as revolutionary as AI, can be a daunting task. Corporate compliance leaders wanting to incorporate guidance on the use of AI in their organization’s code of conduct should address three key considerations: the current code structure, practical examples of expected conduct and consistency.
Consider current code structure
Integrate AI content into your current code structure and risk assessment. Legal and compliance leaders should use this as an opportunity to highlight a specific corporate value, tying the ethical use of AI to a company-level principle. This can be a strong message for the workforce.
Legal and compliance leaders can also approach guidance in the context of an existing risk. Companies with limited AI use cases may see the risk manifest in a single area; where varied AI use cases raise more complex issues, a dedicated section in the code of conduct can provide context and clarity.
Provide examples
Give employees practical guidance and examples of expected conduct. Explain why AI matters to the business, such as how it enables new solutions or faster service, and why those benefits raise the stakes for responsible and ethical use of AI.
This guidance should also include examples of role-specific responsibilities, such as for staff who design, deploy or test AI as part of their remit, or for company executives, who may benefit from a standalone public-facing AI code that outlines their duties to teams, vendors and business processes. The code of conduct should serve as a summary of expectations, with links to the relevant policies and documents that cover AI-related topics in detail.
Be clear and consistent
Do not overstate your AI risk controls, and avoid inconsistency. The AI section in the code should align with any lower-level guidance already issued, such as a generative AI (GenAI) use policy if the company has one. Compliance leaders should also be careful with statements about their risk controls. To avoid making claims that cannot be backed up, they should work with partners in IT, data privacy and enterprise risk management to confirm that the relevant processes are in place and followed in practice before highlighting them in the code.
Legal and compliance leaders can also take additional steps to provide oversight of AI in their organizations.
- Establish an AI board or similar governing body to balance the organization's AI ambition with its risk tolerance. Legal and compliance officers should partner with other key assurance stakeholders, such as privacy, IT security and risk, to establish a cross-functional team that identifies and mitigates the risks associated with AI solutions. The team should also include representation from IT, data & analytics and AI strategy (the technical teams) to align objectives. The technical teams should facilitate AI deployments that meet the organization's ambitions while addressing the actual and residual risks tied to each solution's specific use case and deployment model.
- Test and monitor AI across all phases of the AI lifecycle. The team should test and monitor AI solutions at every stage: during vendor selection, before launch and throughout their use. Once testing is complete and the technology components that support trust, risk and security in AI applications, models and other AI entities have been identified, set up proofs of concept to evaluate emerging AI products. This step helps augment traditional security controls, and the vetted components should be applied to production applications once they perform as required.
AI continues to permeate all aspects of business operations, making it imperative that legal and compliance leaders diligently integrate comprehensive AI guidelines into organizational policy and risk management processes. This not only ensures regulatory compliance and ethical AI usage but also enhances operational efficiency and risk management, ultimately contributing to the organization's long-term success.