Morrison & Foerster’s Stephanie Sharron and Andy Serwin discuss the legal and ethical concerns of using AI. The impacts could be far-reaching and potentially include issues such as unlawful bias and discrimination, violations of privacy laws and uncertainty about legal liability and accountability for harms caused.
Artificial intelligence has the potential to reshape our world as much as, or more than, the internet. And it isn’t just Silicon Valley startups that are focused on these issues — technology that enables automated decision-making is being created and exploited by companies of all sizes and sectors. While there is not yet complete consensus on the definitions of artificial intelligence, machine learning and the algorithms that drive them, we propose the following definitions for purposes of this article:
- Artificial intelligence: Technologies having the ability to perform tasks or operations that otherwise require human intelligence.
- Algorithm: The description of a finite and unambiguous sequence of steps or instructions for producing a result (output) from initial data (input).
- Machine learning: A type of artificial intelligence in which humans present input data to a system to train algorithms and automate output on a task or operation without explicit programming of each step of the task or operation. Different types of machine learning have varying levels of autonomy.
For ease of reference, we refer to systems that employ artificial intelligence – including machine learning – collectively as “AI.” This article focuses on the narrower concept of AI, in which a system is focused on a particular task or group of tasks rather than on generalized cognition comparable to that of humans.
Both the type of AI system selected and the particular use case can be important in whether and how a company decides to implement a particular AI system. Because we are still in the early stages of implementing AI, systems with greater autonomy might in some cases require more human oversight today than they will in the future to ensure that the tasks or operations the system performs are appropriate. The proposed use of AI also can vary in the type and degree of its impact on individuals or on society as a whole, and that impact can have both legal and ethical consequences.
The range of use cases for artificial intelligence spans industries: reducing fraud in payment transactions, improving the diagnosis and treatment of patients and the management of health care, powering more effective recruiting tools and platforms, and supporting driver-assisted and driverless vehicles, by way of example. Companies across industry sectors are eager for information that will help them decide whether and how best to leverage AI in their businesses, taking into consideration the ethical and legal issues these proposals raise. This article describes a flexible, policy-based approach.
Overview of Key Legal Concerns
One core legal principle underpinning corporate AI policies is that the use of AI will comply with applicable legal and regulatory requirements. What those legal and regulatory requirements are, though, will depend on a variety of factors, including the type of data that is being processed and the proposed use of the output of the AI solution in question.
The potential scope of legal concerns is as broad as the potential applications of AI. However, certain key legal considerations are of particular concern, such as unlawful bias and discrimination, violations of privacy laws and uncertainty about legal liability and accountability for harms caused. We focus on the first to illustrate how a significant legal issue can arise in the context of AI.
Use of race (or proxies for race) within an AI tool may raise legal issues when the tool is used in a manner that discriminates against individuals in a protected class, such as to determine whether to extend health insurance coverage to those individuals. However, the same AI processing may be perfectly acceptable when used by health care providers to diagnose and treat particular medical conditions that are more prevalent in one race than another.
For companies using AI for recruiting, concerns have been raised about the unintended but possible disparate impact based on gender, race or other protected classes of individuals arising with the use of AI solutions. In lending and insurance, concerns have been raised about the possibility of redlining (or reverse redlining) – the practice by a lender or insurer of denying (or charging more for) a loan or insurance coverage – in connection with the use of AI tools and their associated algorithms.
These kinds of legal concerns should be addressed proactively not only to avoid violating the law and corresponding legal liability, but also to foster good will with customers and protect the company’s brand and reputation. Corporate policies can help companies achieve these goals.
Examining Ethics Issues Relating to AI
In the context of corporate use of AI, ethical standards can operate alongside the law. Companies are not necessarily legally obliged to comply with such ethical standards, but can adhere to them as a matter of choice. Certain uses of AI might be perfectly legal, but inconsistent with a company’s ethos or may cause concern among a company’s key stakeholders — including employees and shareholders. There are many different possible ethical standards that companies could consider in formulating policies. The following list includes examples drawn from a variety of sources:
- Seek to avoid creating or reinforcing unfair bias and unjust impacts.
- Seek to provide safe and secure systems.
- Make explainable the information used, and the logic behind, the decision-making of AI systems.
- The application of AI to personal data should not “unreasonably curtail people’s real or perceived liberty.”
- Apply an appropriate level of human direction and control in the development and use of AI systems. “Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.”
- Maintain clear chains of accountability and avoid abdication of responsibility by individuals, corporations, governments or other entities for decisions automated through AI that otherwise would have required a human decision-maker to act.
- Respect and improve social and civic processes on which the health of society depends.
- Use AI in ways that encourage the sharing of a diversity of ideas and opinions and minimize the loss of cultural pluralism.
- Avoid placing individuals in silos or “filter bubbles” that limit their exposure to ideas or cultures different from their own.
- Avoid uses of AI that are likely to cause harm without carefully balancing the potential costs and benefits.
- Avoid use of AI in a manner or for a purpose that violates human rights, for surveillance in violation of accepted norms or for purposes that are intended to cause direct physical injury to people (e.g., lethal weapons).
The Role of Corporate AI Policies
How can companies pursue innovation through AI while coping with the myriad legal and ethical questions and concerns? One approach is to put into place a corporate AI policy.
Well-structured corporate policies can provide valuable benefits:
- Allowing for corporatewide articulation of ethical and legal principles to guide decisions about acceptable use of AI
- Aligning decision-making with articulated principles
- Improving legal compliance
- Increasing transparency and information sharing across the organization
- Ensuring consistency in approach to decision-making and compliance
Structuring Corporate AI Policies
AI policies should identify the key facts regarding the AI application, as well as the legal and ethical standards that will guide the company’s decisions about the development and use of AI within its organization. Companies should consider what principles they want to emphasize in their policies. Building on this, the policy should establish a flexible, adaptable process that ensures decision-making reflects these core principles. The process below is one example that allows legal and ethical principles to be applied flexibly to the uses of AI that can arise within an organization.
- Require submission of a disclosure with regard to the proposed use of AI that includes, at a minimum, the following key pieces of information:
- What is the proposed AI product, service or component?
- For safety-critical uses, full technical transparency may be required.
- For other use cases, the information and logic used by the AI system to arrive at its decisions should be explained.
- What is the purpose of the AI?
- What is the company’s role in connection with the AI (e.g., developer, user or commercialization partner)?
- What data is processed in connection with the AI, and what are the sources of the data?
- Does any data processed in connection with the AI identify the race, color, gender, religion or creed, age, disability (physical or mental), veteran status, genetic information, national origin or ancestry, citizenship, marital status, pregnancy or childbirth (and related medical conditions) or sexual orientation of individuals or serve as a proxy identifier for any of these classes of individuals?
- Is any of the data processed in connection with the AI personal data? Can any of the data processed in connection with the AI be identified with an individual, either alone or when combined with other information in the company’s possession? Personal data includes device identifiers, IP addresses, precise geolocation information and biometric information.
- Does any of the data processed in connection with the AI relate to children, the health (including medical condition or insurance) of individuals or their finances or telecommunications?
- Identify the key legal (and if desired, ethical) issues that apply to the proposed use of AI.
- Perform an impact assessment of the proposed use of AI with respect to the legal (and, if applicable, ethical) issues identified in the previous step.
- Apply decision-making and make recommendations with respect to the proposed use of AI based on the legal and ethical issues identified and the impact assessment. Decisions might fall into three categories: automatic approval, automatic rejection and escalation for further vetting and decision-making by a designated individual or committee.
- Consider how much internal and external transparency to provide with regard to the company’s AI policy and decisions and formulate a publication policy with respect to AI decisions resulting from implementation of the AI policy.
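To make the escalation step concrete, the routing of a disclosure into the three outcome categories described above can be sketched as a simple rules-based triage routine. The sketch below is a hypothetical illustration only: the field names, rules and thresholds are our own assumptions for this example, not a prescribed implementation.

```python
# Hypothetical triage of an AI-use disclosure into the three outcome
# categories described above: automatic approval, automatic rejection,
# or escalation to a designated individual or committee.
APPROVE, REJECT, ESCALATE = "approve", "reject", "escalate"

def triage(disclosure):
    """disclosure: dict of illustrative boolean answers from the intake
    questionnaire (field names are assumptions for this sketch)."""
    # Uses intended to cause direct physical injury are rejected outright.
    if disclosure.get("intended_to_cause_physical_harm"):
        return REJECT
    # Protected-class data, personal data or safety-critical uses are
    # escalated for further vetting rather than decided automatically.
    if (disclosure.get("uses_protected_class_data")
            or disclosure.get("processes_personal_data")
            or disclosure.get("safety_critical")):
        return ESCALATE
    # Everything else is approved automatically.
    return APPROVE

print(triage({"processes_personal_data": True}))  # prints "escalate"
print(triage({}))                                 # prints "approve"
```

In practice the rule set would be derived from the legal and ethical principles the company has chosen to emphasize, and the escalation branch would route to the designated reviewer or committee.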
Companies will be forced to confront challenging issues regarding the implementation of AI within their organizations. By putting thoughtful AI policies into place, companies can begin implementing these new and powerful tools in ways that take into consideration the many legal and ethical issues these technologies raise.
 Drawn from the remarks of Marvin Minsky (1968) quoted by Blay Whitby (1996), Reflections on Artificial Intelligence, p. 20, and Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain fit for the future (November 2017), p. 37: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/664563/industrial-strategy-white-paper-web-ready-version.pdf (accessed 20 March 2018). Note that artificial intelligence systems likely will be able to perform tasks or operations that exceed the capacity of humans, and that this will have consequences we cannot yet fully anticipate or comprehend.
 Algorithms are used in artificial intelligence solutions to automate decision-making.
 There are a number of types of machine learning having varying levels of decision-making autonomy. Two of the most common are supervised and unsupervised machine learning.
Supervised machine learning: These systems learn from pairs of input and output data that are organized and labeled by humans (CAPTCHA challenges, for example, are one way humans label image data). The output of supervised systems, therefore, is subject to human-imposed limits and controls.
Unsupervised machine learning: These systems automate tasks or operations, producing output based on input data that has not already been organized and labeled by humans. The system itself identifies trends and patterns in the data without human-imposed limits or intervention.
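The distinction can be illustrated with a minimal, self-contained sketch on toy one-dimensional data (plain Python, not a production machine-learning implementation): the supervised learner fits a rule to human-labeled pairs, while the unsupervised learner groups unlabeled points on its own.

```python
# Supervised: learn from (input, output) pairs labeled by humans.
# Toy nearest-centroid classifier over 1-D values.
def train_supervised(pairs):
    # pairs: list of (value, label) tuples provided and labeled by humans
    sums, counts = {}, {}
    for value, label in pairs:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    # Output is confined to the human-defined labels.
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Unsupervised: no labels -- the system finds structure itself.
# Toy 2-means clustering over 1-D values.
def cluster_unsupervised(values, iterations=10):
    centers = [min(values), max(values)]
    for _ in range(iterations):
        groups = [[], []]
        for v in values:
            nearest = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
model = train_supervised(labeled)
print(predict(model, 1.5))                          # prints "low"
print(cluster_unsupervised([1.0, 2.0, 8.0, 9.0]))   # prints [1.5, 8.5]
```

Note that the supervised model can only ever emit the labels humans supplied, while the unsupervised routine discovers the two groupings without being told they exist.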
 Companies may also want to consider developing externally facing codes of conduct as well so that their customers, suppliers, regulators and the public at large understand the company’s ethos when it comes to pursuing AI solutions.
 Questions, for example, have arisen about who the legal decision-maker is for a machine that incorporates AI. Is it the designer of the AI system, the manufacturer of the end product that incorporates the AI system or some combination thereof? Or is it the end user of the finished product that chose to use an AI-based product? Some have even posited that the machine itself might be given legal personhood and be held accountable (that one seems a stretch to us, at least for weak or narrower AI).
 “AI at Google: Our Principles” (June 2018).
 See, for example, the Report of Session 2017–19, “AI in the UK: ready, willing and able?” of the Select Committee on Artificial Intelligence of the House of Lords, at page 39.
 Future of Life Institute, “Asilomar AI Principles” (2017).
 IBM’s “Principles for the Cognitive Era” state that the purpose of AI and cognitive systems developed and applied at IBM is “to augment human intelligence.” IBM Think Blog (January 17, 2017), “Transparency and Trust in the Cognitive Era,” https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/.
 Future of Life Institute, “Asilomar AI Principles” (2017). Also, seek to understand and shape the potential moral implications of the use, misuse and actions of AI systems. Id.
 Note that as the perceived accuracy and reliability of AI recommendations increases, it becomes riskier for individuals or companies to challenge the system’s recommendations.
 Future of Life Institute, “Asilomar AI Principles” (2017). The Asilomar AI Principles also include: “Respect the ideals of human dignity, rights, freedoms and cultural diversity.”
 According to the Report of Session 2017–19, “AI in the UK: ready, willing and able?” of the Select Committee on Artificial Intelligence of the House of Lords, at page 39, a number of large technology companies, including Google, IBM and Microsoft, have communicated their commitment to developing interpretable machine-learning systems.
 We propose that impact assessments similar to those used in the field of data privacy compliance can serve a useful role here.