AI systems based on machine learning can be used to audit financial data efficiently and look for financial fraud, but ethical risks exist if the data is biased. Dr. Steven Mintz explores the principles governing the ethical use of AI.
Ethics are important, whether in our personal or professional lives. Most people believe that ethical behavior encompasses standards such as honesty, fairness, integrity, responsibility and accountability. These norms of behavior, along with transparency, underlie ethical systems of artificial intelligence (AI). Ethical AI is the foundation upon which trust in the system is built. About one-third of executives in a Deloitte survey named ethical risks as one of the top three concerns related to AI.
AI systems pose a diverse set of ethical risks, including risks in how the data is collected and processed. It can be challenging to understand how a system works and whether the conclusions it reaches from the data are valid. Such systems have been referred to as “black boxes”: automated decision-making systems, often based on machine learning from big data, whose inner workings are opaque to the people relying on them.
Ethics and AI
KPMG identified five principles of ethics and AI:
- Transforming the workplace: Massive change in the roles and tasks that define work, along with the rise of powerful analytics and automated decision-making, will cause job displacement and create the need for retraining.
- Establishing oversight and governance: New regulations will establish guidelines for the ethical use of AI and protect the well-being of the public.
- Aligning cybersecurity and ethical AI: Autonomous algorithms give rise to cybersecurity risks and adversarial attacks that can contaminate algorithms by tampering with the data. KPMG reported in its 2019 CEO Outlook that 72 percent of U.S. CEOs agree that strong cybersecurity is critical to engender trust with their key stakeholders, compared with 15 percent in 2018.
- Mitigating bias: Understanding the workings of sophisticated, autonomous algorithms is essential to take steps to eliminate unfair bias over time as they continue to evolve.
- Increasing transparency: Universal standards for fairness and trust should inform overall management policies for the ethical use of AI.
Ethical Risks
AI can improve human decision-making, but it has limits. Bias in algorithms creates an ethical risk that calls into question the reliability of the results the system produces. Bias can be accounted for through explainability (understanding how the system reached its conclusions), reproducibility (testing for consistent results) and auditability.
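To make the reproducibility idea concrete, here is a minimal sketch (in Python with scikit-learn, using synthetic data as a stand-in for real audit records) of a check that retrains the same model twice and verifies it reaches the same conclusions:

```python
# Minimal reproducibility check: retrain the same model twice with fixed
# seeds and confirm the conclusions do not change. The data is synthetic;
# real audit records would take its place.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1_000, 4))                 # four made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "flagged" label

def train_and_predict(X, y):
    model = LogisticRegression(random_state=0)  # fixed seed for determinism
    model.fit(X, y)
    return model.predict(X)

first = train_and_predict(X, y)
second = train_and_predict(X, y)

# A reproducible pipeline must reach the same conclusions on the same data.
assert np.array_equal(first, second), "model results are not reproducible"
print("reproducibility check passed; training accuracy:", (first == y).mean())
```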
Other ethical risks include a lack of transparency, erosion of privacy, poor accountability and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used and how the results affect customers.
Ethics and Accountability
The “Algorithmic Accountability Act of 2019” was introduced in the U.S. House of Representatives on April 10, 2019, and referred to the House Committee on Energy and Commerce. The bill requires an assessment of the risks that automated decision systems pose to the privacy or security of consumers’ personal information, and of the risks that those systems may result in or contribute to inaccurate, unfair, biased or discriminatory decisions impacting consumers.
Governance and accountability issues relate to who creates the ethics standards for AI, who governs the AI system and data, who maintains the internal controls over the data and who is accountable when unethical practices are identified. Internal auditors have an important role to play in this regard. They should assess risk, determine compliance with regulations and report their findings directly to the audit committee of the board of directors.
Auditing AI Data
Auditing is the function of examining data to determine whether it is accurate and reliable and whether the system used to generate it is operating as intended. Data that is biased will produce biased results. For example, a financial institution that grants mortgage loans to white applicants at much higher rates than to minority applicants may be working from biased decisions. If that biased historical data is used for training, a machine-learning AI system will unintentionally reproduce the same pattern over time.
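As a concrete illustration of that kind of check, the following sketch (plain Python; the records and the four-fifths threshold are assumptions for illustration, not figures from an actual audit) compares approval rates across applicant groups before the historical data is used for training:

```python
# Illustrative bias check on historical loan decisions before they become
# training data. The records and the 0.8 (four-fifths) screening heuristic
# are assumptions for this sketch.
from collections import defaultdict

loans = [  # (applicant_group, approved) -- hypothetical historical records
    ("white", True), ("white", True), ("white", True), ("white", False),
    ("minority", True), ("minority", False), ("minority", False), ("minority", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in loans:
    total[group] += 1
    approved[group] += was_approved

rates = {g: approved[g] / total[g] for g in total}
print("approval rates:", rates)

# Disparate-impact ratio: lowest group rate over highest group rate.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # the common "four-fifths" screening heuristic
    print(f"Warning: ratio {ratio:.2f} < 0.80 -- a model trained on this "
          "data would likely reproduce the same disparity.")
```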
AI auditing works well for a leasing firm with hundreds of lease contracts, given the need to verify that each one has been properly recorded, either as an asset with future value or as an expense of the period. AI systems can quickly analyze complex contracts to make that determination, but the accounting standards must be accurately inputted so the system knows what to look for.
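A hypothetical sketch of what inputting those standards might look like appears below; the classification tests mirror the criteria in ASC 842, and the 75 percent and 90 percent bright lines are common rules of thumb rather than figures prescribed here:

```python
# Hypothetical lease-classification helper. The tests mirror ASC 842
# criteria; the 75%/90% bright lines are thresholds many firms carry over
# from prior practice -- both are assumptions for this sketch.

def present_value(payments, annual_rate):
    """Discount a stream of end-of-year payments."""
    return sum(p / (1 + annual_rate) ** t for t, p in enumerate(payments, start=1))

def classify_lease(payments, rate, lease_term, economic_life,
                   fair_value, transfers_ownership=False,
                   bargain_purchase_option=False):
    """Return 'finance' (recorded as an asset) or 'operating' (periodic expense)."""
    pv = present_value(payments, rate)
    if (transfers_ownership
            or bargain_purchase_option
            or lease_term >= 0.75 * economic_life   # "major part" test
            or pv >= 0.90 * fair_value):            # "substantially all" test
        return "finance"
    return "operating"

# Five annual payments of $20,000 on an asset worth $95,000:
print(classify_lease(payments=[20_000] * 5, rate=0.06, lease_term=5,
                     economic_life=6, fair_value=95_000))
```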
Fraud Detection
The biggest value of using AI in auditing is detecting fraud; the idea is to identify and catch anomalies. For example, a reimbursable expense submitted by an employee should be examined by tying it to a restaurant receipt. What if the receipt for $100 is not for food ordered but is instead a gift certificate for a friend or family member? The suspiciously exact amount of the receipt may raise a red flag in an AI-driven, machine-learning system in which all data is examined, unlike a more traditional data-processing system that relies on sampled data.
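A minimal sketch of that kind of full-population screening follows (plain Python; the rules and records are illustrative assumptions, not an actual fraud-detection system, which would learn such patterns from data):

```python
# Illustrative full-population expense screening. A real AI-driven system
# would learn these patterns from data; the hard-coded rules are stand-ins.
from collections import Counter

expenses = [  # (employee, receipt_amount) -- hypothetical records
    ("alice", 48.37), ("bob", 100.00), ("alice", 23.11),
    ("carol", 100.00), ("bob", 100.00), ("dan", 61.90),
]

amount_counts = Counter(amount for _, amount in expenses)

for employee, amount in expenses:   # every record is examined, not a sample
    flags = []
    if amount == round(amount) and amount % 25 == 0:
        flags.append("suspiciously round amount")   # e.g., a $100 gift card
    if amount_counts[amount] > 1:
        flags.append("same amount appears on multiple receipts")
    if flags:
        print(f"{employee}: ${amount:.2f} -> {', '.join(flags)}")
```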
Companies lose an estimated 5 percent of their revenue annually as a result of occupational fraud, according to the 2018 ACFE Report to the Nations. It turns out that the risk of occupational fraud is much higher than many managers and leaders realize. Each case results in a median loss of $130,000; with cases lasting a median of 16 months, fraud is something organizations of all sizes must take care to detect and deter. AI systems can analyze large amounts of data quickly and thoroughly to determine whether assets have been misappropriated.
AI systems can also have predictive value: machine learning can identify high-risk areas and events, supporting an accounting fraud prediction model that more accurately calculates the probability of future material misstatements in financial statements and improves the quality of audits.
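As a sketch of what such a prediction model might look like (Python with scikit-learn; the data is synthetic and the feature names are stand-ins for the financial-statement ratios used in misstatement research):

```python
# Sketch of a misstatement-probability model. The data is synthetic; in
# practice the features would be financial-statement ratios (accruals,
# receivables growth, etc.) and labels would come from known restatements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=7)
n = 2_000
accruals = rng.normal(0.0, 1.0, n)     # stand-in for an accruals ratio
rec_growth = rng.normal(0.0, 1.0, n)   # stand-in for receivables growth
X = np.column_stack([accruals, rec_growth])

# Synthetic ground truth: misstatements are more likely when both are high.
p = 1 / (1 + np.exp(-(accruals + rec_growth - 2.0)))
y = rng.random(n) < p

model = LogisticRegression().fit(X, y)

# Probability that a new filing contains a material misstatement:
new_filing = np.array([[1.8, 2.1]])
print(f"estimated misstatement risk: {model.predict_proba(new_filing)[0, 1]:.1%}")
```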
Using AI to examine all of the financial data and determine whether financial fraud exists provides a big advantage over systems that test only a sample of transactions. It affords a higher level of assurance and reduces the risk that fraud goes undetected.
Corporate Governance
Corporate governance is essential to develop and enforce policies, procedures and standards in AI systems. Chief ethics and compliance officers have an important role to play, including identifying ethical risks, managing those risks and ensuring compliance with standards.
Governance structures and processes should be implemented to manage and monitor the organization’s AI activities. The goal is to promote transparency and accountability while ensuring compliance with regulations and that ethical standards are met.
A research study by Genesys found that more than one-half of those surveyed say their companies do not currently have a written policy on the ethical use of AI, although 21 percent expressed a definite concern that their companies could use AI in an unethical manner. The survey included 1,103 employers and 4,207 employees regarding the current and future effects of AI on their workplaces. The 5,310 participants were drawn from six countries: the U.S., Germany, the U.K., Japan, Australia and New Zealand. Additional results include:
- 28 percent of employers are apprehensive their companies could face future liability for an unforeseen use of AI.
- 23 percent say there is currently a written corporate policy on the ethical use of AI.
- 40 percent of employers without a written AI ethics policy believe their companies should have one.
- 54 percent of employees believe their companies should have one.
Conclusions
The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way toward achieving this result.
Professional judgment is still necessary to decide on the value of the information an AI system produces and its uses in looking for material misstatements and financial fraud. In this regard, the acronym GIGO (“garbage in, garbage out”) applies. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete or incoherent, and machine learning will be compromised with respect to ethical AI.