Article 22 of GDPR states that AI cannot be used as the sole decision-maker in choices that have legal or similarly significant effects on users; human intervention is a must. Mphasis’ Eric Winston explores how companies can accelerate overall GDPR compliance while doing right by their customers.
As the European Union completes one year of the General Data Protection Regulation (GDPR), U.S. lawmakers are following suit. Nationwide, there is a growing momentum to enforce strict privacy laws to protect consumers from rampant data breaches that violate their privacy.
Contrary to what may have been expected, big tech has generally come out in support of the Congressional national privacy law proposal, underscoring the widely held belief that something must be done to protect consumers' personal data rights. However, while tech companies are pushing for a single federal privacy law, several U.S. states, including California, are rushing to reform the lax regulatory landscape by enacting their own, more stringent, data privacy laws.
Meanwhile, as American companies await the implementation of nationwide legislation, such organizations still face the challenge of complying with GDPR. According to recent research, only 27 percent of EU-based companies, excluding the U.K., are compliant with the GDPR. Further, just 12 percent of companies in the U.S. have achieved GDPR compliance.
GDPR and the AI Conundrum
Even as companies rush to ensure GDPR compliance, the vast scope of the law has raised another concern: the complex interaction between artificial intelligence (AI) and GDPR. In particular, Article 22 of the GDPR has come to the fore, as it contains provisions on automated profiling and decision-making that govern how personal data may be used. It impacts any industry where AI is used to build an automated profile of a user, and where decisions that could have a legal or similarly significant effect on users are made by automated means.
The concern with such decision-making is that existing AI systems make automated decisions without human intervention, an approach that could potentially victimize individuals. Recognizing this, the GDPR mandates human intervention before any AI-driven decision that could significantly impact individuals. However, because the drafting of this provision is widely perceived as ambiguous, organizations must choose wisely how they use AI in automated decision-making.
Since data is the key ingredient for AI, it is imperative to understand Article 22 in the context of restrictions on automated decision-making and profiling. Article 22 is a conditional right subject to certain exceptions. It prescribes that AI, including profiling, cannot be used as the sole decision-maker in choices that have legal or similarly significant effects on users, as this restriction is necessary to "safeguard" the data subject's rights, freedoms and legitimate interests. For instance, an AI model cannot be the sole basis for deciding whether a borrower qualifies for a loan. The data subject can also contest an automated decision and obtain human intervention, subject to the exceptions.
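In practice, this requirement is often implemented as a human-in-the-loop pipeline: the model produces only a recommendation, and a human reviewer issues the final, binding decision. The sketch below is a minimal illustration of that pattern, not a prescribed implementation; the scoring function, threshold, and `Application` fields are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Application:
    applicant_id: str
    credit_score: int  # hypothetical input feature

def model_recommendation(app: Application) -> str:
    # Stand-in for an AI credit model; the 0.7 threshold is illustrative.
    score = min(app.credit_score / 850, 1.0)
    return "approve" if score >= 0.7 else "deny"

def decide(app: Application,
           human_review: Callable[[Application, str], str]) -> str:
    # Under Article 22, the model output is advisory only:
    # the final decision must pass through a human reviewer,
    # who may confirm or override the recommendation.
    recommendation = model_recommendation(app)
    return human_review(app, recommendation)

def reviewer(app: Application, recommendation: str) -> str:
    # In this sketch the human simply confirms the model;
    # a real reviewer could override it and record a rationale.
    return recommendation

decision = decide(Application("a1", credit_score=700), reviewer)
```

The key design point is that `decide` cannot return without invoking `human_review`, making the human step structurally unavoidable rather than an optional audit.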
Yet if one were to play devil’s advocate, automated decision-making can sometimes be justified – for instance, when an AI tool rejects a job application if the applicant has not furnished sufficient information. The crucial determiner here: At what stage of the automated decision-making process was the application rejected, and why?
But Article 22 does not require an explanation of the rationale behind an AI-led decision, which leaves a confounding gap in the interplay between AI and GDPR.
If organizations correctly interpret Article 22, they will do right by the customer and accelerate overall GDPR compliance — while upholding the firm’s reputation.
GDPR: The U.S. Perspective
It has long been established that GDPR applies to all companies that collect EU consumers’ personal data or behavioral information — regardless of the geographical location of the business. Therefore, even U.S. firms that only have a web presence, but not a brick-and-mortar operation, in the EU will need to comply with the GDPR. This is because such businesses market products to the EU and collect EU citizens’ personal data.
The provision that gives consumers ownership of their data poses a roadblock for U.S. companies that use and sell data. Since organizations may be compelled to destroy data under “the right to be forgotten,” businesses have struggled with restrictions imposed by the GDPR in the year since it became effective.
Further, GDPR has flexed its significant enforcement powers toward U.S. firms as demonstrated when Google was recently fined €50 million for a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”
Going forward, given the backlog of GDPR data breach notifications and the number of companies that are not yet compliant with the tenets of the law, fines are expected to reach hundreds of millions of euros in 2019.
In terms of how the GDPR would impact the development of AI applications in the U.S., experts believe it could increase labor costs, as human intervention is a necessary component of automated decision-making. Its stringent provisions could also limit the scope of AI’s long-term applications.
The Journey Forward
Even so, in the years to come, GDPR’s impact is expected to resonate across the world in many ways. For starters, it has inspired the U.S. to come up with its own cohesive national data privacy law that will make it easier for consumers to protect their private information.
If the U.S. is going to introduce a GDPR-like law, it remains to be seen which provisions it will borrow from the EU regulation. A further challenge is how the U.S. would approach AI processes that require transparency to remove bias.
The larger question organizations and policymakers should be asking is: What does it mean for the consumer to be "king"? The answer will determine the trajectory of the country's ambitious national data privacy law.