AI may be an ultra-modern technology, but its many challenges echo familiar patterns of failure. Legal and compliance professional Raja Sengupta maps the common threads between history’s most notorious compliance failures and today’s AI challenges, offering a blueprint for avoiding tomorrow’s potential catastrophes.
Imagine a future where AI systems make life-altering decisions without accountability or where poorly governed AI tools harm society’s most vulnerable populations. These are not distant possibilities but very real risks that illustrate the need for robust AI governance. As AI continues to reshape industries, its rapid evolution outpaces regulatory frameworks, leaving gaps that could lead to catastrophic failures. History offers invaluable lessons from compliance failures in sectors like finance, healthcare and aerospace that can help us navigate these emerging risks.
Businesses must learn from past compliance failures to build trust and ensure accountability. By prioritizing transparency, adaptive regulation and ethical practices, we can safeguard AI’s role in society and promote responsible innovation.
Patterns of failure
Historical compliance failures generally fall into three broad categories:
Institutional failures
These arise when leadership fails to foster a culture of compliance. The collapse of Lehman Brothers in 2008 illustrates such a failure of governance: leadership ignored critical warnings and prioritized short-term profits over long-term stability, helping trigger the global financial crisis. Similarly, the Theranos scandal exposed the perils of unchecked leadership, with executives allowing the company to overpromise on its technology and endanger public health.
In the AI space, these institutional failures could manifest as leadership neglecting to prioritize ethical AI practices, potentially leading to harmful biases or misleading outcomes, as seen in the IBM Watson Health controversy, where an AI system failed to meet expectations and misled healthcare providers.
Procedural failures
These result from weak or poorly executed processes that can lead to disastrous outcomes. The Chernobyl disaster of 1986 exemplifies procedural failure: human error and inadequate safety protocols led to what remains the worst and most costly nuclear accident in history.
AI-related procedural failures can occur when models are deployed without thorough testing or when ethical guidelines are not integrated into the development process. The 2018 Uber self-driving car accident, in which an autonomous test vehicle struck and killed a pedestrian, underscores the dire consequences of insufficient testing and oversight in AI systems.
Performance failures
These occur when systems or individuals fail to execute tasks effectively. The 2024 CrowdStrike outage, in which a faulty software update disrupted IT infrastructure worldwide, highlights the dangers of inadequate quality control.
In the AI context, performance failures often result from issues like poor data quality or insufficient training. One notable case is Amazon's experimental recruitment tool, which exhibited gender bias and was ultimately scrapped, demonstrating how AI systems that are not properly tested can perpetuate inequality and undermine fairness.
Fundamentals of AI governance
Based on these compliance lessons, three key areas of AI governance emerge: transparency and accountability, ethical data practices and adaptive regulation.
Transparency and accountability
AI models, often referred to as “black boxes,” present risks similar to those exposed in the Theranos scandal, where unverified claims misled stakeholders about the technology’s efficacy. The Volkswagen emissions scandal likewise demonstrated how a lack of transparency can lead to devastating consequences.
To build trust in AI, transparency is essential. Clear guidelines on data usage, third-party audits and proactive disclosure (precisely what was missing in the Cambridge Analytica scandal) can help prevent misleading claims. Companies like Google, which have released AI models for research and publicly committed to ethical standards, demonstrate what transparent AI governance can look like.
Ethical data practices
Data misuse remains one of the most pressing concerns in AI development. The Cambridge Analytica case highlighted how unauthorized data collection can erode public trust, while the complaints over HireVue’s use of facial analysis on job candidates without meaningful consent emphasized the need for adherence to privacy and anti-discrimination laws.
To mitigate risks, AI systems must prioritize ethical data practices by ensuring transparent data handling, clear user consent policies and rigorous auditing of training datasets. Microsoft’s facial recognition software, which performed markedly worse on darker-skinned and female faces, serves as a cautionary tale about unrepresentative training data and the need to test AI systems for bias.
Adaptive regulation
The Boeing 737 MAX crisis illustrates how outdated or inadequate regulatory frameworks can have deadly consequences. The FAA’s failure to adequately scrutinize the aircraft’s design flaws contributed to two fatal crashes.
AI’s rapid evolution requires adaptive regulations that balance innovation with safety. The EU’s AI Act is a significant step forward, but its implementation must evolve with AI advancements. Regulatory frameworks such as the OECD’s AI Principles must be flexible and agile enough to keep pace with technological developments and address risks such as algorithmic bias.
Unique AI challenges
AI is evolving rapidly, and its complexity makes it a uniquely challenging technology, one promising levels of disruption not seen since perhaps the advent of the internet itself.
- Ambiguous safety standards: Unlike traditional industries, AI lacks universally accepted safety benchmarks, and defining “acceptable risk” is particularly challenging given the technology’s unpredictability. Incidents like Tesla’s Autopilot crashes highlight the dangers of deploying AI without such standards. Policymakers must collaborate with industry experts to define safety benchmarks that evolve with the technology.
- Interpretability issues: AI models are often too complex for even experts to fully understand, hindering regulatory oversight. The opacity of systems like DeepMind’s AlphaGo illustrates the problem. Investment in explainable AI (XAI) technologies is essential to improving transparency, and open-source tools like IBM’s AI Fairness 360 toolkit, which surfaces bias in model decisions, can empower regulators and enhance public trust in AI systems (see the brief illustration after this list).
- Blurring of liability: As autonomous AI systems become more prevalent, the question of accountability becomes increasingly difficult to resolve. The Uber self-driving car accident exemplifies this challenge — who is responsible when an autonomous vehicle causes harm? Regulatory frameworks, such as the EU’s AI Liability Directive, must clarify responsibility in AI-related incidents and ensure appropriate accountability mechanisms are in place.
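To make the auditing point concrete, the sketch below shows how a reviewer or regulator might use IBM’s open-source AI Fairness 360 package (aif360, mentioned above) to quantify bias in a model’s decisions. The toy hiring data, column names and group encodings are hypothetical and chosen purely for illustration; a real audit would apply the same metrics to the system’s actual decision logs.

```python
# Minimal sketch: measuring group fairness with IBM's open-source AI Fairness 360
# (aif360) package. The toy hiring data below is entirely hypothetical and only
# illustrates the kind of check a third-party auditor might run.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: `hired` is the model's decision (1 = advance),
# `sex` is the protected attribute (1 = privileged group, 0 = unprivileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Two standard group-fairness measures: a disparate impact ratio well below 1.0,
# or a large statistical parity difference, signals that the unprivileged group
# receives favorable outcomes far less often than the privileged group.
print("Disparate impact ratio:        ", metric.disparate_impact())
print("Statistical parity difference: ", metric.statistical_parity_difference())
```

A disparate impact ratio near 1.0 indicates that both groups receive favorable outcomes at similar rates; US regulators often treat a ratio below roughly 0.8 (the “four-fifths rule”) as a signal warranting closer scrutiny.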
The way forward: Building a trust framework for AI
History has consistently shown that prioritizing speed over safety leads to disastrous outcomes. AI’s potential to transform industries must be met with responsibility. By learning from past compliance failures and addressing AI’s unique challenges, we can build a governance framework that fosters trust, innovation and accountability.
The path forward lies in transparency, adaptive regulation and a shared commitment to ethical practices. As we navigate this transformative era, the lessons of the past should guide us toward a future where AI serves humanity responsibly. Several practical steps can move us in that direction:
- Multi-stakeholder collaboration: Governments, tech companies and civil society must collaborate to design AI governance structures that ensure safety and accountability. The OceanGate submersible incident underscores the importance of third-party evaluations in high-risk industries. In AI, third-party audits can verify compliance and help build trust in AI systems.
- Education and awareness: Bridging the knowledge gap between policymakers and AI developers is crucial. Policymakers must be trained on AI ethics and compliance to craft informed legislation, and broader AI literacy programs for non-experts will help ensure that regulation remains relevant and effective.
- Incentivizing compliance: Aligning compliance with business incentives can drive the adoption of ethical AI practices. Companies prioritizing responsible AI will mitigate risks and gain competitive advantages. Incentives like tax benefits or public recognition can further promote the adoption of robust compliance frameworks.