
AI-Driven Decision-Making May Expose Organizations to Significant Liability Risk

Are U.S. Providers and Users of Artificial Intelligence Ready for American Product Liability Law?

by Chris Temple
September 11, 2019
in Risk

What happens when decisions made by an artificial intelligence platform lead to injury or damage? Is the “system” responsible? Fox Rothschild’s Chris Temple discusses legal liability when AI decisions made without human input go wrong.

More and more process systems deployed in public infrastructure, industrial, commercial and residential applications are not only automated, but increasingly rely on autonomous, non-human agents to manage and direct them. Until recently, these systems have been a combination of mechanical hardware and software code with some level of human worker input and control.

Now, however, more sophisticated and powerful artificial intelligence platforms use code that allows industrial internet of things (IIoT)-networked “smart” equipment and robots to communicate with one another, identify issues and devise solutions through human-like reasoning processes. In other words, machines are identifying the problems, devising solutions and, in many instances, autonomously executing the steps necessary to implement the chosen solution. The systems also simultaneously keep track of the data upon which those decisions were made.
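To make that loop concrete, the sketch below shows, in rough Python, what a single pass of such an autonomous control cycle might look like: the system reads its sensors, diagnoses an issue, chooses an action and persists a record of the data it relied on before acting. This is an illustrative sketch only; the function and field names (control_cycle, DecisionRecord, read_sensors and so on) are hypothetical and are not drawn from any particular vendor's platform.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid


@dataclass
class DecisionRecord:
    """Snapshot of one autonomous decision and the data it relied on."""
    sensor_readings: dict          # inputs the controller saw at decision time
    diagnosis: str                 # the issue the system identified
    action: str                    # the corrective step it chose to execute
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def control_cycle(read_sensors, diagnose, choose_action, execute, log_path="decisions.jsonl"):
    """One pass of a hypothetical autonomous control loop: sense, decide, act, record."""
    readings = read_sensors()
    issue = diagnose(readings)
    action = choose_action(issue, readings)

    record = DecisionRecord(sensor_readings=readings, diagnosis=issue, action=action)
    # Persist the decision and its inputs before acting, so the basis for the
    # decision survives even if the action itself causes a failure.
    with open(log_path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

    execute(action)
    return record

The design point worth noting is that the record is written before the action is executed, so evidence of what the machine "knew" exists even if the chosen action is the very thing that causes the damage.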

Who – or What – is Responsible When Things Go Wrong?

Machine learning, smart process equipment, robots and related software present unusual challenges in applying product liability laws as they presently exist. For example, if a machine makes a “decision” that causes injury or damage, who or what is legally responsible for the non-human decision-making, particularly if the hardware and software performed precisely as intended and without a demonstrable defect or malfunction of any kind?

In the “old” days, the focus of attention would have been on the human operator or control room technician. His or her conduct would be investigated and evaluated. In modern circumstances, however, the smart process system – a combination of equipment and software – is the only apparent culprit. Since a machine cannot be liable (at least not yet), we can expect the law to assign liability to some person or legal entity, including, for example, the hardware manufacturer, the software providers, the sellers of the process, the installers of the process equipment and software, and the owners of the process or facility where the process is located.

The law has a number of candidates from which payment of damages may be coerced, but how the law will impose liability in likely scenarios requires consideration now. Counsel and internal compliance professionals within organizations need to consider the potential liability risks of the technology used in today’s operations.

The Evolution of American Tort Law to Create Liability Where None Previously Existed

Over the past 50 years, American tort law has consistently evolved to compensate for injuries or damages caused by newly emerging products or services. It is not wholly unfair to conclude that American tort law will continue to evolve so that, where there is an injured party, the law will bend and twist to make someone pay compensation for damages. Thus, the laws in effect at the time a product was manufactured may not be the same laws applied years later, after someone is injured. Many companies have designed, marketed and sold a product under one set of product liability laws, only to find that a new law was later created to impose liability based on a standard that did not exist when the product left the plant floor.

In a recent U.S. Supreme Court decision, Air and Liquid Systems Corp. v. DeVries, 586 U.S. ___ (2019), for example, the Court held that a manufacturer supplying critical equipment to the U.S. Navy during the wartime exigencies of World War II in 1943 should have issued a written warning based on a legal duty that would not be recognized by the law until 2019 – more than 75 years later. At the time the product was made and supplied, the manufacturer could not have anticipated any such legal obligation in the distant future, but it was nonetheless held liable for its 1943 product.

When marketing a product, it is necessary to consider current legal obligations, but it is also prudent for those vested with stewardship and compliance responsibilities to consider what the legal obligations of buyers or sellers may be in the future, especially for a product that is relatively new to commerce.

What is new in today’s marketplace is that tasks and functions are being undertaken without human input or intervention. In a situation where decisions are made and solutions are executed by artificial intelligence without human involvement, how does the law assess potential liability when things go wrong? And perhaps more importantly, how will the law assess liability in the future?

If the smart process system was not defective and performed as intended, but an injury-causing decision occurred, can we recreate the circumstances and the machine’s reasoning process and judgment as a technical matter, or in a manner suitable for evidentiary requirements in a subsequent court proceeding? In other words, can the assembled system of interconnected equipment “testify” based on stored data? If the data becomes unavailable, questions will arise as to whether evidence has been destroyed. If the data is available, how does one recreate what the machine knew or should have known about the circumstances and conditions, and how does one measure whether the machine’s response was reasonable or appropriate as a matter of law? Stated another way, will the law develop a standard under which a smart machine is considered to be dumb and, if so, who will be responsible for a dumb decision?
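The article leaves these evidentiary questions open, but one technical measure that speaks to them is an append-only, tamper-evident decision log. The sketch below is a minimal, hypothetical illustration in Python: each entry records the inputs, the decision and the stated rationale, and is hash-chained to the previous entry so that later alteration or deletion of a record is detectable. The class and field names are assumptions made for illustration, not a reference to any existing product or to any standard of admissibility.

import hashlib
import json
from datetime import datetime, timezone


class DecisionAuditLog:
    """Append-only log in which each entry is chained to the previous one,
    so any later alteration or deletion breaks the hash chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, inputs: dict, decision: str, rationale: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,            # what the system "knew" at the time
            "decision": decision,        # what it chose to do
            "rationale": rationale,      # why, in whatever form the model exposes
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry was altered or removed."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

Whether such a log would satisfy evidentiary requirements in a given court is, of course, a legal question rather than an engineering one; the sketch only shows that the raw material for reconstruction can be captured.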

Can Management Anticipate Future Liability?

Unfortunately, there are no crystal balls that allow organizations to predict accurately the course of future liability, but general counsel and compliance officers can expect lawyers to develop alternative theories of liability peculiar to smart process systems and equipment. For example, does a company have independent liability for making, installing or relying upon a smart machine to make potential life-or-death decisions without human intervention, especially where the machine is expected to “learn” and evolve over time and experience, unaided by human influence? Marketing materials for companies engaged in the development, design and sale of smart process systems very often tout the increased safety and reliability of a fully interconnected IIoT-based installation. Will such claims give rise to tort liability in the event that injury and damage occur, especially where there was no record that the old system operated by human workers ever had a problem?

Some Practical Considerations While We Wait for the Courts

The courts have not yet answered these and many other related questions, and there is nothing to suggest that they will soon offer much guidance. These matters take time, as the Supreme Court’s decision more than 75 years after the fact demonstrates. While waiting for the law to catch up to rapidly emerging technologies, however, companies engaged in the development, design, sale, acquisition, installation and use of smart process systems should consider the following:

  • Do purchase order, contract or licensing documents that purport to allocate liability for damages caused by the future operation of the smart process system (or any of its components, including software) take into account potential future changes in the law that may reassign or reallocate liability in ways different from today’s tort and contract law?
  • Can marketing claims about the safety and reliability of smart process systems, offered as replacements for existing systems that relied at least in part on human workers, provide potential evidence for lawyers in the event of an injury-causing decision?
  • Does the process system have the ability to collect and maintain data in a legally and technically sufficient form that would permit the subsequent recreation of the events and the decision-making process, such that the system itself can collectively act as a “witness” to the circumstances causing injury or damage?

Much attention has been given to cybersecurity concerns over IIoT hacking risks, and tech ethics experts continue to explore questions about artificial moral agents, but sellers, buyers and users of smart process systems in the U.S. need to consider how American tort law will apply liability principles to this emerging technology and who (or what) will be the legally responsible party when artificial intelligence decisions made without human input go wrong.


Tags: Artificial Intelligence (AI), Automation, Cyber Risk, Emerging Technologies, Machine Learning
Chris Temple

Chris M. Temple is a Partner in the litigation practice at Fox Rothschild. He is an experienced trial attorney whose practice centers on complex commercial litigation, and he is well-versed in performance liability; integrated equipment, technology and service supplier/distributor arrangements; toxic exposure claims; and product liability and performance disputes.
