Corporations are racing to adopt AI, hoping to realize an array of benefits, from better operations to budgetary windfalls. But as law student Shayna Grife explores, those benefits go hand-in-hand with extensive risk, and compliance teams must be ready.
Prior to the emergence and rapid expansion of artificial intelligence, the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) set the standard for companies seeking to safeguard personal information. With the rise of AI, companies whose products, services and customers may be affected need more guidance on how to account for the privacy and security risks that come with the technology. Of course, it is challenging to regulate what we don’t fully understand, and right now we cannot accurately predict the consequences society will endure from the rapid growth of AI, including those associated with personal data breaches.
Last year, the Biden Administration issued an executive order underscoring the urgent need for companies to protect personal information from AI-related risks and to promote the adoption of privacy-enhancing technologies (PETs), both to mitigate the dangers of new technologies and to make it easier to adapt to future advancements. The order urges the Federal Trade Commission and other independent agencies to issue guidance documents regarding AI, with the intention of clarifying the responsibilities and obligations of companies that use it.
Of course, the U.S. is not alone in its regulatory efforts, as the EU’s AI Act puts forth a regulatory framework to make the use of AI more transparent and traceable. The act classifies AI-enabled uses according to the risk they pose to society and sets out the measures and precautions companies should take to mitigate or avoid those risks. For example, AI used in healthcare for diagnostic or predictive purposes is classified as high-risk and must be registered in the EU database and continually assessed.
Even with new legal and regulatory frameworks emerging, companies must still navigate the potentially treacherous waters of rapidly evolving artificial intelligence. And as compliance experts work to understand what liability and risks AI presents, companies must proactively reevaluate their compliance protocols, rethink their approach to risk assessments and explore updated privacy measures.
Double-edged sword
AI can protect against financial crimes like fraud and money laundering because it can review structured and unstructured data more effectively than manual processes. But the flip side is that even as AI improves companies’ ability to protect themselves and their customers, it also makes their jobs harder, because fraudsters now have powerful, freely available new tools to defraud and scam. AI can, for example, help a criminal create deepfakes of bank staff or consumers to launch elaborate schemes intended to defraud.
Rather than limiting scams to the funds available in an account, AI-powered fraud techniques like deepfakes can be used to open new accounts, take out loans and engage in transactions as if the fraudster were the consumer. Banks and financial institutions have several forms of due diligence in place to identify the beneficial owner of an account, but those controls may fall short against artificial intelligence that can pull data from any source it has access to and build extensive synthetic-identity scams.
Embracing regulation?
AI has emerged as a highly valuable tool, offering the potential to significantly enhance efficiency and effectiveness in practically every industry, from business to government. Embracing AI is not just a step toward innovation; it’s an essential stride in the pursuit of progress and success in the corporate world. Technological innovations like AI can change the way we conduct business in every aspect imaginable, but they also bring debates over the boundaries of privacy law and civil liberties, among other considerations.
Where would the world be without the development of Facebook? But where would we be if Facebook and other social media platforms had remained completely unregulated? Of course, many would argue that Facebook needs to be significantly more regulated than it is now, yet many people who work more intimately in the field of technology believe the current regulations are impractical and unreasonable.
Risk assessments & the NIST framework
The best course of action to bridge the gap between what we want to implement and what we actually can implement is to become familiar with the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST) and incorporate that guidance into enhanced education programs and procedural privacy protections. Compliance teams should act most immediately on updating their risk assessments, tailoring their training and education and adopting PETs.
AI and risk assessments are the newest Catch-22. AI-driven risk assessments can effectively predict future risks based on historical data; given AI’s socio-technical nature, it can identify trends and vulnerabilities for a company better and more efficiently than an individual can. However, the risk assessment must also account for the inherent risks of using the AI technology itself.
AI operates effectively when it has access to a broad spectrum of data, but this exposes the company to additional risks, potentially increasing the likelihood of a data breach or cyberattack. NIST’s framework for addressing the risks that accompany AI outlines four core functions a company must apply across the stages of the AI lifecycle: govern, map, measure and manage.
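To make those four functions concrete, below is a minimal, purely illustrative sketch of how a compliance team might organize an AI risk register around them. The function names come from the NIST framework; every class, field and example entry (including the “fraud-model-v2” system) is hypothetical, not a NIST artifact or any vendor’s implementation.

```python
# Hypothetical sketch: an AI risk register keyed to the four NIST AI RMF
# core functions. The framework itself is policy guidance, not code; this
# data structure is only one way a team might track its assessments.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "govern"    # policies, accountability, culture of risk management
    MAP = "map"          # context, intended use, affected people and data
    MEASURE = "measure"  # testing, metrics, ongoing monitoring
    MANAGE = "manage"    # prioritizing, responding to and documenting risks

@dataclass
class RiskEntry:
    ai_system: str
    function: RmfFunction
    description: str
    owner: str
    mitigations: list[str] = field(default_factory=list)

# Example entries for a hypothetical fraud-detection model.
register = [
    RiskEntry("fraud-model-v2", RmfFunction.MAP,
              "Model ingests unstructured customer data; scope of personal data unclear.",
              "Privacy Office",
              ["Data inventory", "Privacy impact assessment before next release"]),
    RiskEntry("fraud-model-v2", RmfFunction.MEASURE,
              "False-positive rate is not monitored after deployment.",
              "Model Risk Team",
              ["Quarterly performance review", "Drift alerts"]),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.ai_system}: {entry.description}")
```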
The NIST framework is the most current resource available for balancing the preservation of democratic values with the market-driven technological advances of AI. Think of it as the Caremark of risk assessments: a company that adopts the standards and guidance set out by NIST will have a strong defense against Caremark-style claims of failed oversight. While no company can shield itself entirely from risk, one that meets the NIST standard can likely avoid a great deal of liability, which is the goal it should be striving for.
Ensuring data privacy
Transitioning a compliance program to privacy by design and the use of PETs is a daunting and expensive task, but it will pay off in the long run with lower liability and fewer fines and lawsuits. AI is not the end of innovation but the beginning, and with each new advancement come myriad privacy concerns. Implementing privacy measures through technology and integrating those controls into procedures will achieve a higher level of compliance that is more adaptable to change.
PETs like differential privacy, synthetic data and homomorphic encryption can be used to anonymize or otherwise protect data at scale, which allows companies to benefit from the products of AI without assuming all the liability that can accompany a data breach. In the ever-evolving landscape of AI and privacy, striking the right balance between innovation and safeguarding personal information is imperative to avoiding litigation and liability while staying current with changes in technology.
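As a minimal sketch of one of those PETs, the example below applies the classic Laplace mechanism from differential privacy to a simple count query. The scenario (a hypothetical AI fraud model flagging customers), the `epsilon` value and the function name `dp_count` are all illustrative assumptions, not any specific product or mandated approach; the point is only that an aggregate statistic can be released with calibrated noise so that no single customer’s record can be inferred from it.

```python
# Illustrative differential privacy example: the Laplace mechanism on a count.
import numpy as np

def dp_count(flagged: np.ndarray, epsilon: float = 1.0) -> float:
    """Return a differentially private count of True values.

    A count query has sensitivity 1 (adding or removing one customer changes
    the result by at most 1), so Laplace noise with scale 1/epsilon satisfies
    epsilon-differential privacy for this query.
    """
    true_count = int(flagged.sum())
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: 10,000 simulated customers, ~2% flagged by a hypothetical model.
rng = np.random.default_rng(seed=42)
flags = rng.random(10_000) < 0.02
print(f"Exact count:   {int(flags.sum())}")
print(f"Private count: {dp_count(flags, epsilon=0.5):.1f}")
```

Smaller values of epsilon add more noise and give stronger privacy at the cost of accuracy; choosing that trade-off is itself a governance decision of the kind the NIST functions are meant to capture.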
As AI becomes more integrated into our usual course of business, the problems it brings will become more apparent, and guidance on how to manage it will become clearer. Who knows? By then, AI may be commonplace and a new technology may be making us question everything all over again.