Practitioners in the pharmaceutical and medical technology sector are no strangers to regulation, and to be sure, rules governing AI in medicine are coming. RegASK’s Caroline Shleifer examines efforts by states and the FDA to put up guardrails around the technology.
In the dynamic world of pharmaceuticals and medical technology, innovation is constant and regulations are ever-evolving. This interplay has been turbo-charged in recent years by the arrival of artificial intelligence (AI), a force with the potential to streamline processes, enhance productivity and improve patient outcomes.
However, the advent of AI has been accompanied by fears that if left unchecked, it could undermine core democratic values of transparency, accountability, privacy and equality. As a result, governments across the world are developing regulations to harness AI’s power safely — moves that have wide-ranging consequences for pharmaceutical and medical technology (medtech) companies.
In light of these developments, it is imperative that companies effectively track and comply with these regulations. Even as calls for regulatory coordination on the global stage grow louder, companies operating in multiple jurisdictions have no choice but to stay abreast of several evolving landscapes at once.
The delicate balance of regulation
From medical imaging and diagnosis to surgery and clinical-trial design, the promise of AI is immense. For companies seeking to optimize its use, it is important to understand the dual mandate of regulators in democratic societies: to foster innovation and improvement while ensuring data privacy, accountability and patient safety. Emerging regulation will therefore aim to be robust, precise and clear, even as rapid advances in AI use drive frequent regulatory updates.
The guiding principles of these regulations are beginning to crystallize. Legislative initiatives governing AI are prioritizing interdisciplinary collaboration, the protection of individuals from unintended effects, safeguards against abusive data practices, transparency, nondiscrimination and accountability for AI developers and deployers. Prospective laws aim to align AI systems with democratic values and to avert the risk that automated systems undermine civil rights.
EU regulations and their influence
The European Union has been at the forefront of AI regulation, alert to both the risks and benefits of its widespread adoption. The EU’s proposed Artificial Intelligence Act aims to establish a comprehensive regulatory scheme, encompassing areas like data governance, algorithmic transparency and risk analysis. The act proposes impact-assessment and compliance-evaluation mechanisms, outlines an AI governance program and suggests areas for global coordination and voluntary commitments. It also offers a framework for identifying high-risk systems and prohibits certain practices outright, such as so-called “social scoring” and, with narrow exceptions, facial recognition.
This framework is underpinned by a set of democratic values that will inform the regulations of other countries as well, particularly the United States. We can see this in the Biden administration’s Blueprint for an AI Bill of Rights and in the bills enacted by multiple U.S. states.
State regulations on AI in the U.S.
Over the past five years, 17 states have enacted 29 bills regulating artificial intelligence. While California, Colorado and Virginia have so far offered the most comprehensive guidelines, almost all states emphasize data privacy and accountability. Other recurring themes reflect the principles outlined above: multistakeholder collaboration in AI development and use, safety guardrails, transparency of use and nondiscrimination and equitable treatment of citizens.
FDA strategy on AI in medical products
The U.S. Food and Drug Administration’s recently released regulatory strategy for AI in medical products encompasses any AI application that supports drug development and compliance, from drug discovery to clinical-trial design to post-market surveillance. It will have a particular impact on drug design and on the processes used to collect post-market data.
The FDA’s framework sets out four priorities: broad collaboration among stakeholders; clear and predictable regulation that supports and anticipates innovation; the development of standards across the medical-product life cycle; and AI evaluation and monitoring through demonstration projects. The FDA will publish ongoing guidance on these topics, provide post-market feedback and share best practices for manufacturers developing strategies for AI in their products.
Implications for companies: strategies and risks
The broad implications of these developments are clear. As companies race to harness the power of AI, understanding and adapting to the regulatory landscape is paramount. More than 60 countries are now developing frameworks for AI regulation, and as these regimes mature, we are likely to see category-specific guidance, such as for machine-learning-enabled devices. Given these dynamics, now is the time for the industry to engage proactively with regulators to provide feedback and shape the development of laws.
The risks of falling behind are considerable. Without rigorous analysis of AI’s impact on safety and performance, for example, a company could leave an opening for rivals to bring safer, more competitive products to market. Once that superiority has been demonstrated, competitors might push for new mandatory safety standards; if regulators adopt them, the original company would be left scrambling to catch up and incurring significant costs to close its product’s performance gaps.
As companies develop innovative uses of AI in pharma and medical technology, and navigate the shifting terrain of regulation, proactive strategies for regulatory intelligence will be essential. These strategies should encompass monitoring and analysis of regulations, both regional and global, as well as of internal processes, systems and products. In pursuing them, companies should embrace a holistic approach that integrates human expertise and ethics with AI capabilities. By doing so, companies will be well-positioned not only to ensure regulatory compliance and risk mitigation but also to harness the full, transformative power of AI.
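To make the monitoring piece concrete, the sketch below polls the U.S. Federal Register’s public API for recent FDA documents that mention artificial intelligence. It is a minimal illustration only: the search term, agency filter and 30-day lookback window are assumptions, and a real regulatory-intelligence pipeline would track many more sources and jurisdictions.

```python
import requests
from datetime import date, timedelta

# Minimal sketch: query the Federal Register's public API (no key required)
# for recent FDA documents mentioning "artificial intelligence".
# The search term, agency slug and 30-day window are illustrative assumptions.
API_URL = "https://www.federalregister.gov/api/v1/documents.json"

params = {
    "conditions[term]": '"artificial intelligence"',
    "conditions[agencies][]": "food-and-drug-administration",
    "conditions[publication_date][gte]": (date.today() - timedelta(days=30)).isoformat(),
    "order": "newest",
    "per_page": 20,
}

response = requests.get(API_URL, params=params, timeout=30)
response.raise_for_status()

for doc in response.json().get("results", []):
    # Each result carries the document's title, type (rule, notice, etc.),
    # publication date and a link to the full text.
    print(f"{doc['publication_date']}  [{doc['type']}]  {doc['title']}")
    print(f"    {doc['html_url']}")
```

In practice, this pattern would be extended to EU, state-level and other regulatory feeds, with matches routed to human analysts for assessment rather than simply printed.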