Nielsen’s Kevin Alvero and Randy Pierson explore the fundamental elements that should be included in any approach to doing internal audit of artificial intelligence.
Many internal audit departments are in the process of developing approaches to auditing their company’s artificial intelligence (AI) activities. There is no single, definitive framework yet for auditing AI, although organizations such as the Institute of Internal Auditors and ISACA have issued guidance on the matter. Regardless of the approach an internal audit department chooses, and whatever shape still-developing AI auditing frameworks ultimately take, there are critical elements that internal audit teams can begin planning for, and even executing against, today, knowing that they will be core to any AI auditing framework they adopt in the future.
A single, definitive framework for auditing AI has yet to be written, and internal audit frameworks for AI continue to evolve along with the technology itself. Still, there is guidance available for internal audit functions that are in the process of scoping and defining their own approach to auditing AI within their organizations.
Institute of Internal Auditors
In 2017, the Institute of Internal Auditors (IIA) published its Global Perspectives and Insights, Artificial Intelligence – Considerations for the Profession of Internal Auditing. In it, the authors propose considering the auditing of AI under the three overarching components of governance, strategy and the human factor. The IIA guidance provides example procedures and areas of inquiry that internal audit functions could use as a starting point for auditing the key elements that fall under these three components.
ISACA
ISACA, meanwhile, published Auditing Artificial Intelligence in 2018, which describes how ISACA’s existing COBIT 2019 framework can be applied to auditing AI. While this approach is more granular and technical, ISACA concedes that organizations’ approaches to AI are not likely to be as mature as their approaches to IT in general, so the framework will require customization on the part of the organization and the internal audit function.
As AI auditing frameworks continue to evolve and internal audit functions adapt and customize them for their own use, there are certain core elements that must be covered. Even organizations that are just beginning to define and hone their approach to auditing AI can confidently plan on these things being included in any future effort to provide assurance around AI.
AI Governance
As described in the IIA’s AI auditing framework, AI governance refers to the structures, processes and procedures implemented to direct, manage and monitor the AI activities of the organization.[i] These structures ensure that there is ownership and accountability over AI activities, that there are controls in place to manage the associated risks and that the objectives of these activities are ultimately met. Board/senior management oversight, company policies and procedures, business-unit management and their internal controls, internal and external audit and regulators all play a role in AI governance. The broader governance landscape for AI includes areas such as data, cybersecurity and third parties, as well as activities specific to AI.
AI Strategy
Internal audit of AI must include an effort to determine whether the organization has clearly articulated its AI strategy and whether that strategy clearly expresses the intended result of AI activities. If it does not, it will not be possible to determine whether AI initiatives are ultimately successful. It is also important to get a sense of whether the strategy is realistic. A realistic strategy considers the supporting competencies needed to execute the AI initiatives and should be developed collaboratively between business and technology leaders to ensure that neither has unrealistic expectations of the other.
Finally, the AI strategy must be consistent with the mission and values of the organization. Internal audit should be alert to potential conflicts (or perceived conflicts) between the AI strategy and the organization’s values related to fairness, transparency, privacy, discrimination and corporate citizenship.
Data
Big data is the raw material, so to speak, that AI algorithms use as the basis for making decisions and determining probabilities. Therefore, the AI audit program must include looking at the organization’s data assets. In inspecting the foundation upon which AI systems are built, internal auditors should understand aspects such as:
- Data Inventory – Does the organization have an accurate knowledge of what data (and metadata) it possesses, how/where that data is stored and what systems the data is integrated into?
- Data Quality – Is the data accurate, complete, available and structured appropriately to meet the needs of AI systems? What controls are in place to ensure this is the case?
- Data Security – How is the data kept secure from theft and/or unauthorized access?
- Data Privacy – Is the collection, processing, storage, reporting and destruction of data done in a way that is ethical, legal and respectful of the privacy rights of the data’s sources?
- Data Management/Ownership – Are clear lines of ownership and stewardship of the organization’s data defined, and are they being followed?
- Technology Infrastructure – Is the organization’s overall technology infrastructure able to support the data needs of its AI strategy, now and going forward?
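Several of the items above, particularly data quality, lend themselves to automated spot checks. The sketch below is a minimal, hypothetical illustration of the idea: the dataset, field names and thresholds are invented for the example, and a real audit would test controls around production data pipelines rather than a hand-built extract.

```python
# Hypothetical data-quality spot checks an auditor might script against an
# extract of the data feeding an AI system. All field names, records and
# thresholds here are invented for illustration.

def completeness(records, field):
    """Fraction of records with a non-empty value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def within_range(records, field, lo, hi):
    """Fraction of records whose numeric `field` falls inside [lo, hi]."""
    ok = sum(1 for r in records
             if r.get(field) is not None and lo <= r[field] <= hi)
    return ok / len(records)

# Example extract: customer records used as input to a hypothetical AI system.
extract = [
    {"customer_id": 1, "age": 34,  "region": "NA"},
    {"customer_id": 2, "age": 151, "region": "EU"},    # implausible age
    {"customer_id": 3, "age": 28,  "region": ""},      # missing region
    {"customer_id": 4, "age": 45,  "region": "APAC"},
]

findings = {
    "region_completeness": completeness(extract, "region"),
    "age_validity": within_range(extract, "age", 0, 120),
}
print(findings)  # {'region_completeness': 0.75, 'age_validity': 0.75}
```

Checks like these do not replace an assessment of the organization’s data controls, but they give the audit team concrete evidence about whether those controls are working.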
Algorithms
To what extent AI algorithms should be audited, and by whom, is still a topic of debate. But it is safe to say that internal auditors should look to provide some assurance that AI algorithms are competently designed, that they are performing as expected, that they are sufficiently transparent to users and that they are not exposing the organization to risk through unintended outcomes. Internal auditors need not possess the subject matter expertise of the algorithm’s programmer, but they should understand enough about the AI system development process to know what the algorithm’s objective is, what data it uses as input and what criteria it uses to make decisions and predictions.
Internal auditors should focus on governance and controls around algorithm design and performance and seek to answer questions such as:
- Is the algorithm biased in a way that is inconsistent with the company’s mission, ethics or values?
- Is it producing outcomes that could lead the company to take risks that are inconsistent with its risk appetite?
- Could the algorithm expose the company to legal/reputational risk as it relates to fairness and/or transparency?
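One common way to put numbers behind the first question is a selection-rate comparison across groups, sometimes called an adverse-impact ratio. The sketch below is illustrative only: the groups, decisions and the four-fifths rule-of-thumb threshold are assumptions for the example, not a prescribed audit standard.

```python
# Illustrative outcome-disparity check on a model's decisions.
# The group labels, decision data and 0.8 ("four-fifths") threshold are
# assumptions for this example, not a prescribed audit test.

def selection_rates(outcomes):
    """Map each group to its rate of favorable (True) decisions."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Favorable/unfavorable decisions produced by a hypothetical model,
# bucketed by a protected attribute.
outcomes = {
    "group_a": [True, True, True, False, True],    # 80% favorable
    "group_b": [True, False, False, True, False],  # 40% favorable
}

ratio = adverse_impact_ratio(outcomes)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:  # common rule-of-thumb threshold
    print("flag for review: outcome disparity across groups")
```

A low ratio does not by itself prove the algorithm is improperly biased, but it gives internal audit an objective trigger for deeper inquiry into the model’s design and inputs.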
Cybersecurity
As noted in the IIA’s AI auditing framework, “the potentially disastrous effects of a cybersecurity breach involving AI cannot be overstated.”[ii] Most internal audit functions are already performing some form of assessment around their organization’s cyber resilience. Therefore, when it comes to AI, internal audit’s focus should be to ensure that risk exposures emerging from the organization’s use of AI are being accounted for and incorporated into the larger cybersecurity audit plan. It is critical that internal audit work collaboratively with IT, security, legal and other business areas to gain assurance that the organization is prepared to resist, respond to and recover from cyberattacks and to ensure that senior management and the board have an accurate understanding of the cyber risks facing the organization and its level of readiness.[iii]
Third-Party Risk Management
Whether the organization is leveraging vendors to provide cloud services, build AI applications, analyze data, or even provide an end-to-end AI solution, internal audit has a critical responsibility to ensure that sound third-party risk management practices are in place to safeguard the organization.
As organizations become more dependent on technology and data to operate and to create value, third-party relationships are becoming less one-way and increasingly interdependent. Where periodic audit results and/or vendor-supplied performance metrics may have sufficed in the past, internal audit should now be prepared to work more collaboratively with third parties to gain holistic assurance that operational and security vulnerabilities are being managed appropriately and that proactive measures are being taken to address emerging risks.[iv]
Compliance
As with any organizational activity, internal audit should provide assurance that the company’s use of AI complies with all relevant industry standards and regulations. While there is not yet a universally accepted set of AI standards, standards are being developed at several levels. (ISO, for example, has published three AI-specific standards to date, with more in development.) Organizations and their internal audit groups should keep compliance with these emerging standards on their radar as part of AI risk management.
Return on Investment
Due to the emerging nature of AI technology, the urgency with which firms are trying to leverage AI for competitive advantage and the marketplace hype surrounding AI’s potential, it may be less intuitive for internal audit to draw a line from AI initiatives to benefits and value delivery than it is for more familiar initiatives. Nevertheless, when looking at AI, the topic of return on investment (ROI) cannot be ignored.
Organizations that are starting out on their AI journeys may not yet have clear plans to align AI use cases to the business or recognize return on AI investment.[v] Ultimately, however, AI must be able to demonstrate that it is supporting the organization’s strategy and objectives, that it is meeting success criteria and that it is a better option than other tools, techniques and technologies.
If the organization is becoming increasingly reliant on AI for its operations and/or value creation, internal audit need not wait until it has honed and polished its AI auditing approach, or until a definitive, industry-standard AI auditing framework is established. Internal audit can begin delivering valuable assurance immediately by looking at areas that should be core components of any AI auditing framework. Doing this, while enhancing and adjusting its AI auditing approach over time, may be the best way to ensure that internal audit delivers valuable, AI-related assurance over the long term.
[i] Global Perspectives and Insights: Artificial Intelligence – Considerations for the Profession of Internal Auditing. The Institute of Internal Auditors. 2017.
[ii] Ibid.
[iii] Ibid.
[iv] Spusta, R. “Third-Party Risks Need New Approaches.” May 28, 2019. https://securityintelligence.com/posts/third-party-risks-need-new-approaches/
[v] Auditing Artificial Intelligence. ISACA. 2018.