Companies rushing headlong into deploying AI tools are tempting fate if they don’t embed fundamental risk management principles. Liban Jama and Emily McIntosh of EY suggest that compliance and risk must be at the center of AI strategies, especially as the regulatory picture around the technology keeps changing.
Following on the heels of the European Parliament adopting its AI Act, companies now need to be aware of their requirements as they deploy AI solutions to optimize their operations and benefit their customers. The act marks a significant milestone in the inevitable global shift toward increased regulation of AI.
Meanwhile, as boards, investors and customers show strong interest in AI’s promise to boost their businesses’ bottom line, more executives are placing AI at the top of their agendas. Pressure is mounting on them to stay at the cutting edge of innovation in an increasingly competitive business environment. According to a recent EY study, 43% of CEOs are already investing in AI and another 45% are planning to do so in the next year.
However, the desire to accelerate AI deployment and integration to meet stakeholder expectations must be supported by robust risk management controls for long-term success.
To remain competitive, innovative companies will closely evaluate and stay abreast of the regulatory environment, even as they factor in their own brand policies and operational ethics. Many will be well-served to create a task force responsible for developing and managing AI governance internally, while embedding risk management strategies within the AI lifecycle and integrating risk identification and assessment into AI development and procurement.
AI compliance should be viewed as an innovation enabler rather than an administrative burden. Compliance can add value throughout the AI lifecycle by informing user access, test cases and ongoing monitoring to ensure models are maximizing their value to the business.
Understand the environment
In October 2023, President Joe Biden released an executive order intended to promote the “safe, secure, and trustworthy development and use of artificial intelligence.” In the wake of that order, the DOJ formally announced both that AI governance will be incorporated into its “Evaluation of Corporate Compliance Programs” guidance and that it has appointed a chief AI officer. Although no formal legislation is in place, the bipartisan Senate AI working group published in May a roadmap for AI policy priorities, including “enforcement of existing laws for AI, including ways to address any gaps or unintended harmful bias; prioritizing the development of standards for testing to understand potential AI harms; and developing use case-specific requirements for AI transparency and explainability.”
Taking time now to build an adaptable risk management foundation will better prepare companies to address future regulatory mandates, allowing for innovation in the face of change. Companies should begin to document the AI systems they already use and examine areas of the business where AI may improve efficiency and effectiveness. They should also catalog these systems in a model repository and categorize them based on incremental risk. Documentation that is meaningful and tailored will be invaluable when formulating processes to comply with regulations, as it will show stakeholders and investors that AI technology is being used responsibly.
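For illustration only, the sketch below (in Python) shows one way a single entry in such a model repository might be recorded. The field names, risk categories and example system are assumptions made for the example, not requirements of any regulation or a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Illustrative inventory record for one AI system; the field names are
# assumptions for this sketch, not mandated by any regulation.
@dataclass
class AISystemRecord:
    name: str                 # internal name of the system or model
    business_owner: str       # accountable function or individual
    use_case: str             # what the system is used for
    data_sources: List[str]   # training and inference data inputs
    risk_category: str        # e.g., "unacceptable", "high", "limited", "minimal"
    last_reviewed: date       # date of the most recent governance review
    notes: str = ""           # rationale for the risk categorization

# Example entry in a model repository (hypothetical system).
inventory = [
    AISystemRecord(
        name="resume-screening-model",
        business_owner="HR Operations",
        use_case="Rank inbound job applications",
        data_sources=["historical hiring decisions", "applicant resumes"],
        risk_category="high",
        last_reviewed=date(2024, 6, 1),
        notes="Affects access to employment; treated as high risk.",
    )
]
```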
For example, the EU AI Act provides a ranking system based on four levels of risk: unacceptable, high, limited and minimal risk. A system that could interfere with people’s fundamental rights, such as one that evaluates the reliability of evidence for law enforcement, is likely to be viewed as high-risk. Limited and minimal risks, however, reflect concerns surrounding transparency and the obligation of companies to inform users when they interact with AI technology.
Evaluating whether existing and potential systems fall within the rankings may give companies a better understanding of what requirements they may face in the future. Although compliance with the EU AI Act is not mandatory for companies that do not operate in the EU, it does offer entities guidance when seeking to identify areas of risk and, most importantly, a potential approach to anticipating future regulatory efforts in this space.
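The sketch below illustrates one simple way a team might run a first-pass triage of use cases against those tiers. The tier names come from the act, but the keyword rules and the function itself are hypothetical conveniences for the example and are no substitute for legal analysis.

```python
# The four tiers named in the EU AI Act. The function below only triages the
# three permitted tiers; uses that may be unacceptable must be identified and
# prohibited through separate legal review.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Purely illustrative keyword rules, not drawn from the act's annexes.
HIGH_RISK_KEYWORDS = {"law enforcement", "credit scoring", "hiring", "biometric"}
LIMITED_RISK_KEYWORDS = {"chatbot", "content generation"}

def preliminary_risk_tier(use_case: str) -> str:
    """Return a first-pass risk tier for internal triage purposes only."""
    text = use_case.lower()
    if any(keyword in text for keyword in HIGH_RISK_KEYWORDS):
        return "high"
    if any(keyword in text for keyword in LIMITED_RISK_KEYWORDS):
        return "limited"
    return "minimal"

print(preliminary_risk_tier("Chatbot that answers customer billing questions"))  # limited
```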
Implement the right framework
A solid risk management framework should address all functions within a business, including information technology, where testing and experimentation with AI happens. Compliance teams can work alongside engineers and developers to construct controls and contingency plans that address potential failures or incidents during or after deployment and establish procedures for safely decommissioning and phasing out AI systems. Resources, such as the National Institute of Standards and Technology’s AI Risk Management Framework, are also available for direction when drafting policies that account for all aspects of the business.
A task force dedicated to monitoring and evaluating the effectiveness of the strategy is vital for success. By selecting a team and assigning clear roles and responsibilities to manage AI-related risk, companies can engage in transparent discussions about how such functions align with the organization’s principles, policies and strategic priorities. The result: a complicated process made more streamlined and efficient. Training, such as the International Association of Privacy Professionals (IAPP) AI governance training, can help ensure team members are making informed decisions throughout.
Build risk management into the AI lifecycle
As mentioned, the key to adhering to regulatory requirements while simultaneously fostering growth is embedding risk management into the AI lifecycle. Risk considerations should be woven into the design phase and maintained throughout deployment, allowing teams to innovate and iterate quickly while still meeting requirements.
Combined with the necessary governance, companies can use accountability mechanisms to ensure the data used to train AI systems is high-quality, accurate and consistent. Model interpretability, or the ability to decipher and explain the cause and effect within a system, is one way of monitoring data to identify errors. The ability to explain the input of a system, the function of its process and the produced output provides a means of surveying data for bias and variability. Understanding how an AI system functions enables teams to use the appropriate security audits to evaluate bias, safety concerns and data validity.
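As a simplified illustration of this kind of monitoring, the sketch below compares a model’s positive outcome rate across groups, a common first check for disparate results. The field names and sample data are assumptions for the example, and the appropriate fairness metric and review threshold would depend on the use case.

```python
from collections import defaultdict

# Illustrative monitoring control: compare a model's positive outcome rate
# across groups to flag potential bias for human review. Field names and the
# sample data are assumptions for this sketch.
def positive_rate_by_group(predictions, group_key="group", label_key="approved"):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for record in predictions:
        counts[record[group_key]][1] += 1
        counts[record[group_key]][0] += int(record[label_key])
    return {g: positives / total for g, (positives, total) in counts.items()}

sample = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 1},
]
print(positive_rate_by_group(sample))  # {'A': 1.0, 'B': 0.5} -- a large gap would trigger review
```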
An audit history also can invoke defensibility within your system. Documenting the data fed into and produced by each model provides evidence that can be used later to demonstrate how the system was designed and used responsibly. For instance, American Bar Association Resolution 604 charges developers to ensure proper “human authority, oversight and control” are in place to drive appropriate accountability. When fighting racial bias, as an example, it behooves corporations to perform their monitoring activities, like model output sampling, in a platform that tracks audit history to improve defensibility of human reviewer decisions.
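The sketch below shows a minimal, hypothetical audit record for a sampled model output and the human reviewer’s decision. The schema and the file-based storage are assumptions made for illustration; in practice these records would live in a governed platform rather than a local file.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an append-only audit record capturing a sampled model
# output and the human reviewer's decision. The schema is an illustrative
# assumption, not a standard.
def log_review(path, model_name, model_input, model_output, reviewer, decision):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "input": model_input,
        "output": model_output,
        "reviewer": reviewer,
        "decision": decision,  # e.g., "accepted", "overridden", "escalated"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage for a sampled output from the earlier example system.
log_review(
    "audit_log.jsonl",
    model_name="resume-screening-model",
    model_input={"applicant_id": "12345"},
    model_output={"score": 0.82},
    reviewer="compliance.analyst",
    decision="accepted",
)
```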
Once risk management strategies are integrated within AI development and deployment, companies can better capitalize on the massive AI opportunity that awaits them. Given that the regulatory environment is constantly changing, businesses must closely monitor policy changes and embed risk management practices around AI. If addressed proactively, AI regulation does not have to limit innovation; companies can be empowered to pursue business goals with greater confidence and clarity around their use of AI.