The decision of whether to implement an AI governance policy depends on a range of factors, including how the company uses AI, how much risk is involved and how its use will affect stakeholders. Sarah Hutchins and Robert Botkin of Parker Poe explore how to know when to establish an AI governance policy — and when companies might not need one.
In the complex landscape of technological advancements, the implementation of artificial intelligence (AI) is becoming commonplace across various industries.
As businesses integrate AI into their operations, they might ask themselves: Do I need a policy governing how I use and deploy this technology? The answer is not always the clear yes that some industry experts might lead you to believe. Understanding when an AI governance policy is necessary is crucial for companies navigating this evolving terrain.
Generally, we can categorize businesses into three buckets: creators, open-end users and closed-end users. The AI policy for these three types of businesses will vary considerably.
Creators are businesses that either directly develop AI tools or are integrating an AI tool into an existing product to enhance functionality. A retail brand, for example, that integrates a generative AI chatbot into its mobile app to help consumers find the product they are looking for would fall into the creator bucket.
Open-end users are businesses that allow employees to use AI tools but do not necessarily configure the large language model (LLM) or have any control over what goes into the model.
Last, closed-end users are businesses without consumer-facing AI tools that may still deploy autonomous systems to improve productivity. Clients of a warehouse robotics company that use robots in their warehouses would be closed-end users; these AI systems carry a different set of risks than the tools developed by creators or used by open-end users.
These three types of businesses will need drastically different AI governance policies. Here is how to fit the policy to your business’ needs — and how to know when it makes sense to implement one.
When your company will likely need an AI governance policy
Data privacy and security concerns
When AI systems handle sensitive user data, a robust governance policy becomes imperative. It ensures compliance with data protection regulations and establishes guidelines for secure data handling.
Ethical considerations
AI systems often make decisions that impact individuals and society at large. An AI governance policy can help identify and mitigate ethical concerns by setting standards for responsible AI use, preventing biases and ensuring transparency in decision-making processes. For example, using AI software to evaluate candidates during the interview process could have a negative bias against minorities due to the underlying model’s training data.
Legal compliance
As AI technologies evolve, so do the legal frameworks surrounding them. Implementing an AI governance policy helps companies stay compliant with existing and emerging regulations, helping to reduce the risk of legal repercussions.
State lawmakers and federal regulators have started keeping a close eye on companies’ use of AI as the technology has made certain processes more efficient. States have moved forward with their own governance of data privacy and AI, both in the financial industry and beyond. California, Oregon, Florida and New York are a few examples of states that have passed or are considering comprehensive data privacy laws or regulations focused on AI.
Risk mitigation
Businesses operating in industries with high-risk consequences, such as health care or finance, should adopt AI governance policies to mitigate potential risks. These policies can outline risk management strategies and establish accountability measures.
Stakeholder trust
Demonstrating a commitment to responsible AI through governance policies builds trust among stakeholders. This includes customers, partners, and regulatory bodies who seek assurance that AI systems are used ethically and responsibly.
Customization of AI systems
Companies heavily reliant on AI may need governance policies to guide the customization and adaptation of AI models. Clear guidelines ensure that modifications align with the company’s values and objectives.
When you might not need an AI governance policy
Minimal AI integration
If a company’s use of AI is minimal and doesn’t involve substantial data processing or decision-making, a detailed governance policy may be unnecessary. In such cases, adherence to existing data protection and ethical guidelines may suffice.
Low-risk environments
Businesses operating in low-risk environments where AI applications have minimal impact on individuals or society may find that an extensive governance policy is not immediately required. However, periodic assessments are advisable to adapt to changing circumstances.
Temporary or experimental AI projects
Companies engaging in short-term or experimental AI projects with minimal long-term implications may opt for a more flexible approach. A concise set of guidelines during the project’s duration can be preferable to an exhaustive governance policy. This decision should be revisited at regular intervals depending on the project’s scope and implementation.
Outsourced AI services
If a company relies on third-party AI services with well-established governance policies, it may not need an additional policy. However, due diligence is essential to ensure alignment between the third-party policy and the company’s values. Furthermore, the agreement with the vendor should carefully address terms related to risk to the company. Read our article about the importance of businesses seeking assurances from their AI vendors on collecting, using, and disclosing data used to train the model.
Small-business operations
Smaller businesses with limited resources and AI integration may not immediately require an elaborate governance policy. However, as the business expands its AI usage, it should revisit whether a comprehensive policy is needed.