AI is often credited with making work easier and more efficient, but few organizations have a framework in place for its ethical use within risk management and compliance programs. Deloitte’s Irfan Saif and Maureen Mohlenkamp explore how to rectify that issue.
Think of artificial intelligence (AI) ethics as the organizational constructs that delineate right and wrong, such as corporate values, policies, codes of ethics and guiding principles applied to AI technologies. These constructs set goals and guidelines for AI throughout the product lifecycle – from research and design, through build and train, to change and operate.
As organizations discern how they’ll adopt and leverage AI capabilities in the future, ethical frameworks are often missing, leaving organizations without a trustworthy AI foundation. In fact, a Deloitte study of 565 C-suite and other executives working at organizations using artificial intelligence found that nearly half (48.5 percent) of respondents expect to increase AI use for risk management and compliance efforts in the year ahead, but just 21.1 percent of respondents reported that their organizations have an ethical framework in place to guide AI use within related programs.
The bright side is that companies are more likely than not to involve Chief Risk Officers, Chief Compliance Officers and other top leaders in developing ethical, trustworthy AI practices. In the survey, more than half of respondents (53.5 percent) indicated that AI ethics responsibilities are established across their organizations’ C-suites, with only one-fifth (19 percent) noting that the C-suite in their organizations has no AI ethics responsibilities.
To mitigate unintended and unethical consequences, ask questions early and often about ethical AI, technology and data use. To that end, here are some steps organizations can take to support ethical, trustworthy AI use:
Set the Right Tone
Accountability from the top is a powerful way to ensure any program’s success; therefore, it’s important that corporate boards and C-suites set the tone for ethics and compliance programs, inclusive of AI ethics programs. Data officers, CIOs and CISOs — along with legal, ethics or compliance officers — should determine governance specific to AI, as well as the processes and controls needed for work executed by machines versus humans.
Develop Organizational Standards
Standards are necessary to guide, monitor and assess whether technology and data are used ethically by employees, vendors and customers. For example, data scientists could develop an AI “code of conduct” and set up channels through which issues can be escalated. Ultimately, organizational standards for AI ethics should consider the growth of AI and the role of the machine, as the workforce of the future will not be limited to human beings.
Conduct an AI Ethics Gap Analysis
AI ethics is as much about understanding the risks as it is about establishing a process for avoiding them. Organizations should review existing policies, procedures and standards to identify gaps, then expand those policies or build new ones to fill any voids.
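At its simplest, a gap analysis compares what existing policies cover against what an AI ethics program requires. The following sketch illustrates the idea with set arithmetic; the topic names are illustrative assumptions, not a canonical checklist.

```python
# Toy sketch of an AI ethics gap analysis: compare topics covered by
# existing policies against a target list of AI ethics topics.
# Topic names here are illustrative assumptions only.

required = {"data privacy", "algorithmic bias", "model transparency",
            "human oversight", "vendor AI use"}
covered = {"data privacy", "vendor AI use"}

gaps = sorted(required - covered)
print("Policy gaps to address:", gaps)
# → Policy gaps to address: ['algorithmic bias', 'human oversight', 'model transparency']
```

In practice the “required” list would come from the organization’s own risk assessment, and each gap would map to a policy to expand or create.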
Institute a Plan to Educate the Workforce About AI
An educated and tech-savvy workforce should be better positioned to ethically embrace the opportunities that AI use creates, but for many organizations, a steep AI learning curve awaits. Non-technology business professionals will need to learn how to team with data scientists and technologists to understand the types of ethics risks AI can create and the potential impact this can have on the business, as well as how they can partner to help monitor for and stem those risks. Those who develop algorithms and manage data will also need to be specially trained to identify and mitigate bias within AI applications.
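One concrete skill for those who develop algorithms is measuring whether a model’s decisions fall disproportionately on one group. The sketch below, with invented data and using the commonly cited “four-fifths” rule of thumb as a flag threshold, shows the kind of check such training might cover; the threshold and group names are assumptions for illustration.

```python
# Minimal bias-check sketch: compare approval rates across groups and
# flag when the disparate impact ratio falls below the "four-fifths"
# rule of thumb. All data and the 0.8 threshold are illustrative.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

ratio = disparate_impact_ratio(decisions)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential bias flagged: disparate impact ratio {ratio:.2f}")
# → Potential bias flagged: disparate impact ratio 0.50
```

A real program would go further — examining which features drive the disparity and whether the training data itself is skewed — but even a simple rate comparison gives non-specialists and data scientists a shared starting point.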
Alert Product Teams About What to Monitor
The best-laid plans will still fail if no one is educated about what to look for in monitoring AI solutions for ethical compliance. One example is to design control structures to support ethical tech governance and embed them into AI-enabled solutions.
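One way to embed such a control is to wrap model predictions so that out-of-policy outputs are logged and escalated for human review rather than silently returned. The sketch below is a hypothetical illustration; the confidence threshold, the stand-in model and all names are assumptions, not any particular vendor’s design.

```python
# Hypothetical sketch of an embedded monitoring control: a wrapper that
# escalates low-confidence model decisions to human review instead of
# returning them automatically. Thresholds and names are assumptions.

import logging

logger = logging.getLogger("ai_ethics_monitor")

class MonitoredModel:
    def __init__(self, model, confidence_floor=0.6):
        self.model = model            # callable: features -> (label, confidence)
        self.confidence_floor = confidence_floor
        self.flagged = []             # queue for human review

    def predict(self, features):
        label, confidence = self.model(features)
        if confidence < self.confidence_floor:
            # Control: low-confidence decisions are logged and deferred.
            self.flagged.append((features, label, confidence))
            logger.warning("Low-confidence decision escalated: %.2f", confidence)
            return None  # defer to a human reviewer
        return label

# Usage with an illustrative stand-in model:
def toy_model(features):
    return ("approve", 0.55 if features.get("edge_case") else 0.9)

m = MonitoredModel(toy_model)
m.predict({"edge_case": False})  # returned normally
m.predict({"edge_case": True})   # escalated, not auto-decided
```

The design choice here is that the control lives inside the solution itself, so product teams see escalations as they happen rather than discovering problems in a later audit.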
While AI offers exciting prospects, there is a potential dark side to AI that is hard to ignore. Those involved with the advancement of trustworthy AI use — including ethics and compliance professionals, corporate boards, management teams and IT professionals — face a growing imperative to bring an ethical lens to what they design and build. By instituting a top-level commitment to ethical leadership, a focus on technical and ethical literacy and ethical guardrails to limit missteps along the way, everyone in an organization can work together to produce solutions that represent AI’s full potential.