Because it touches fundamental rights, safety and consumer protection, and because of its global reach, the EU AI Act has the potential for a broader impact on AI governance than the GDPR, argues BDO's Karen Schuler.
The EU AI Act, which went into effect earlier this month, will likely spur more legislation. GDPR, for example, has become the global standard for privacy, affecting organizations beyond Europe as they look for guidance around data practices. The California Consumer Privacy Act (CCPA) is recognized as one of the most impactful data privacy laws in the U.S., with aspects similar to GDPR, such as consumers’ rights to be aware of the personal data being collected about them, to access that data and to request its deletion. Data protection laws in Brazil, Japan and Canada (among others) have been created and even amended to mirror GDPR’s guidance.
In a similar fashion, the EU AI Act will likely serve as the starting point for future legislation beyond Europe. As AI continues to gain momentum and find more cross-industry applications, leaders around the world will be prompted to implement guardrails around its usage and development. The U.S. has agreed to cooperate with the EU on AI initiatives, and the federal government has already published a blueprint for an AI Bill of Rights, a sector-agnostic framework outlining five principles and associated practices to guide the design, use and deployment of AI systems. This marks the first step toward a clear and consistent federal framework regulating AI, which has the potential to drive a common understanding and alignment across the country, encouraging confidence in the future of AI usage and development.
Rigorous standards promise major impact
While the GDPR protects individuals’ data, the EU AI Act regulates AI systems. The GDPR applies to personal data processing — regardless of whether it involves AI — and the EU AI Act applies to AI systems that pose a risk to fundamental rights, safety or consumer protection. Instead of creating new rights for individuals, the EU AI Act focuses on the responsibilities of AI providers and users.
The GDPR establishes broad principles like lawfulness, fairness and security. By contrast, the EU AI Act is much more granular, establishing specific technical requirements in domains such as data quality, human oversight, accuracy, transparency and accountability. The EU AI Act also introduces a risk-based approach, with different requirements and prohibitions based on the potential harm of the AI system; organizations must conduct conformity assessments for high-risk systems. This risk-based classification sets a global standard for the development and use of AI, laying the groundwork for an impact similar to, if not greater than, that of the GDPR.
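To make the tiered approach concrete, the sketch below shows how a compliance team might triage systems against the Act's four published risk levels. This is a minimal Python illustration: the `required_actions` helper is a hypothetical name, and the action lists are simplified summaries of the Act's structure, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four-level, risk-based structure."""
    UNACCEPTABLE = "prohibited outright (e.g., social scoring)"
    HIGH = "heavily regulated (e.g., hiring, credit, medical devices)"
    LIMITED = "transparency obligations (e.g., chatbots)"
    MINIMAL = "largely unregulated (e.g., spam filters)"

def required_actions(tier: RiskTier) -> list[str]:
    # Hypothetical triage helper: map each tier to simplified,
    # illustrative obligations drawn from the Act's structure.
    actions = {
        RiskTier.UNACCEPTABLE: ["discontinue or redesign the system"],
        RiskTier.HIGH: [
            "conduct a conformity assessment before deployment",
            "document data governance and human oversight",
            "set up post-market monitoring and incident reporting",
        ],
        RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
        RiskTier.MINIMAL: ["apply voluntary codes of conduct"],
    }
    return actions[tier]

print(required_actions(RiskTier.HIGH))
```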
The EU AI Act will propel compliance and innovation
Regulators intend the EU AI Act to inspire innovation. By promoting the safe development and application of AI technology, they seek to encourage confidence, optimism and investment in AI research. The EU AI Act is also expected to spur progress in environmental protection, diversity and public engagement with AI. However, the new regulations may constrain innovation in areas where certain types of AI applications are already restricted, such as biometric identification, social scoring and subliminal manipulation.
For example, in November 2023, Spain published guidelines on processing biometric data, which officials there view as a high-risk activity. The rules require companies to conduct privacy impact assessments (PIAs), implement data protection by design and thoroughly assess whether the use of a biometric system justifies the risk it poses. As a result, certain companies in Spain have taken steps to eliminate the use of biometric systems and revert to radio frequency identification (RFID) systems instead.
Certain sectors, like education and healthcare, may find themselves at a disadvantage due to the sensitive nature of the data they process. Across the board, organizations will likely face increased compliance costs and administrative overhead, driven by the enhanced accountability the EU AI Act requires.
Navigating a transforming AI landscape
The EU AI Act's comprehensive requirements mean companies must invest time and resources to achieve and maintain compliance. There are several steps a company can take to begin its compliance journey:
- Test AI systems to ensure they are trained only on data that is relevant, representative, complete and free of bias and error (a minimal sketch of such checks follows this list).
- Establish transparent and verifiable data processing methods with meaningful explanations and justifications for the AI system’s decisions or outcomes.
- Promptly report errors, biases or inaccuracies to regulators and data subjects.
- Implement security measures like access and confidentiality controls to limit unauthorized access, use and disclosure.
- Mandate employee education and training on responsible AI use and how to protect consumer privacy.
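As noted in the first step above, parts of data quality testing can be automated. The following is a minimal sketch, assuming the training data lives in a pandas DataFrame with a protected attribute and a binary outcome label; `audit_training_data` is a hypothetical helper, and the signals it computes are a starting point for human review, not a complete bias audit.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, protected_attr: str, label: str) -> dict:
    """Run basic completeness and representativeness checks on a training set."""
    report = {}

    # Completeness: share of missing values in each column.
    report["missing_rates"] = df.isna().mean().to_dict()

    # Representativeness: how evenly the protected attribute is covered.
    report["group_shares"] = (
        df[protected_attr].value_counts(normalize=True).to_dict()
    )

    # Crude disparity signal: positive-label rate per protected group.
    # Large gaps warrant closer review, not an automatic verdict.
    report["positive_rate_by_group"] = (
        df.groupby(protected_attr)[label].mean().to_dict()
    )
    return report

# Hypothetical usage with a toy applicant-screening dataset.
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})
print(audit_training_data(df, protected_attr="age_band", label="approved"))
```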
In addition to these steps, companies need to account for the EU AI Act's lesser-known requirements. For example, companies must collect and analyze data on their AI systems' performance, safety and post-market impact. They must also report serious incidents or malfunctions to authorities in the country where the issue occurred. Without effective post-market monitoring, companies may overlook issues in their AI systems, potentially leading to ongoing noncompliance with the EU AI Act, which could result in hefty fines and legal penalties.
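Even a simple structured incident log can make the post-market reporting obligation operational. The sketch below is illustrative only: `IncidentRecord` and `record_incident` are hypothetical names, and a production workflow would route serious incidents into the notification process for the relevant national authority rather than a log file.

```python
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_monitor")

@dataclass
class IncidentRecord:
    """One post-market observation about a deployed AI system."""
    system_id: str
    country: str        # where the issue occurred, to route the report
    description: str
    serious: bool
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_incident(incident: IncidentRecord) -> None:
    # Log every incident; flag serious ones for regulator reporting.
    log.info("Incident on %s in %s: %s",
             incident.system_id, incident.country, incident.description)
    if incident.serious:
        log.warning("Serious incident: escalate to the authority in %s",
                    incident.country)

record_incident(IncidentRecord(
    system_id="credit-scoring-v2",
    country="DE",
    description="Accuracy drop for applicants in one region",
    serious=True,
))
```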
Companies can easily slip into noncompliance. In fact, 74% of European data protection professionals said in a survey that authorities would find relevant violations of GDPR within the average company. The continuous oversight and quality control mandated by the EU AI Act, which includes regular check-ups on AI systems and practices to ensure ongoing compliance, will carry significant costs and resource demands that may not be proportionate to the risks posed by the company's use of AI.