Artificial intelligence continues to revolutionize virtually every facet of modern life, and healthcare is no exception. But companies in this heavily regulated industry face a variety of data privacy hoops to jump through, both in the US and Europe, that complicate their use of this advanced technology, say Marijn Storm, Katherine Wang and Joshua R. Fattal of Morrison Foerster.
AI is transforming the life sciences industry by accelerating drug discovery and personalizing patient care. But as companies increasingly rely on AI to process genetic, biometric and other types of health data, they must also comply with a constantly evolving data privacy, security and AI regulatory landscape.
Several key data privacy, security and AI principles impact the use of AI in life sciences.
Notice & choice
Under US and foreign data privacy laws, companies are required to inform consumers about how they collect, use and process consumers’ personal data, such as health data. Through its enforcement actions, the Federal Trade Commission (FTC) has made clear that companies must provide consumers with notice and obtain their consent before acquiring their sensitive information, which includes health data.
Under US state consumer privacy laws as well as the EU’s General Data Protection Regulation (GDPR), companies must obtain express consent before collecting a consumer’s health data. Certain US state laws, such as Washington’s My Health My Data Act, also require businesses to post an additional, dedicated consumer health data privacy notice and to obtain consumer consent for uses and disclosures of health data that aren’t necessary to provide the product or service requested by the consumer. Note, however, that health data subject to HIPAA is exempt from these US state laws and instead subject to federal privacy and security requirements.
The use of AI in the healthcare context may also trigger additional notice and choice requirements. For example, under the GDPR and forthcoming regulations in California, companies in the healthcare space that use automated decision-making to determine whether a consumer should be offered healthcare services must offer consumers the right to opt out of such automated decision-making. To the extent that US state privacy laws apply, the use of AI in the healthcare space may also trigger the right to opt out of profiling.
Transparency
Companies that use AI chatbots to communicate with consumers about their healthcare, such as chatbots that assist users in diagnosing their symptoms, are also subject to transparency requirements. Specifically, under laws like Utah’s AI Policy Act, companies must disclose that the consumer is chatting with AI and not a human. Similar requirements under the EU AI Act and Colorado AI Act will take effect in the coming year.
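For illustration only, the sketch below shows one way a healthcare chatbot might surface an AI-use disclosure at the start of a session. The disclosure wording, function names and session handling are assumptions, not language drawn from any statute.

```python
# Illustrative sketch of an AI-use disclosure for a healthcare chatbot.
# The disclosure wording and function names are assumptions, not statutory text.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "This assistant does not provide medical advice; contact a clinician "
    "or emergency services for urgent concerns."
)

def open_session(send_message) -> None:
    """Show the AI disclosure before any substantive exchange begins."""
    send_message(AI_DISCLOSURE)

def handle_turn(user_message: str, generate_reply, send_message) -> None:
    """Answer the user's question after the disclosure has been presented."""
    send_message(generate_reply(user_message))
```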
Explainability & bias mitigation
Consumers affected by the decision of an AI system must be provided with an explanation of how the decision was reached. The FTC advises companies to take necessary steps to prevent harm before and after deploying AI models, including taking preventive measures to detect and deter AI-related impersonation and fraud. The Colorado AI Act requires deployers of high-risk AI systems that make a decision adverse to the consumer, such as systems that determine whether to grant an individual preventive care, to provide the consumer with a statement disclosing the reasons for the decision, the degree to which AI contributed to the decision, the types of data processed by the AI in making the decision and an opportunity to appeal the decision. If technically feasible, the appeal must also allow for human review.
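As a rough illustration of how these disclosure elements might be captured in practice, the sketch below models an adverse-decision notice as a simple data structure. The field names and wording are assumptions, not terms drawn from the Colorado AI Act.

```python
# Illustrative record for an adverse-decision disclosure of the kind the
# Colorado AI Act describes. Field names are assumptions, not statutory terms.
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    consumer_id: str
    decision: str                          # e.g., "preventive care coverage denied"
    principal_reasons: list[str]           # plain-language reasons for the decision
    ai_contribution: str                   # degree to which the AI system contributed
    data_categories_processed: list[str]   # types of personal data the system used
    appeal_instructions: str               # how the consumer can contest the decision
    human_review_available: bool = True    # offer human review where technically feasible

def render_notice(n: AdverseDecisionNotice) -> str:
    """Assemble a plain-language statement covering each required element."""
    lines = [
        f"Decision: {n.decision}",
        "Reasons: " + "; ".join(n.principal_reasons),
        f"Role of AI in this decision: {n.ai_contribution}",
        "Data considered: " + ", ".join(n.data_categories_processed),
        f"How to appeal: {n.appeal_instructions}",
    ]
    if n.human_review_available:
        lines.append("You may request that a human review this decision.")
    return "\n".join(lines)
```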
Similarly, companies must take steps to reduce bias in their AI-based decision making. Large language models, for example, can perpetuate biases embedded in the medical system, such as race-based clinical equations used to estimate organ function.
While completely eradicating bias may be impossible, both the Colorado AI Act and the EU AI Act require developers of high-risk AI systems to make a public statement explaining how they manage known or reasonably foreseeable risks of algorithmic discrimination associated with the development of such systems. Under the EU AI Act, AI systems subject to the EU Medical Devices Regulation, such as those that diagnose illness, are considered high-risk even if a human doctor makes the final decision. Such systems must comply with the AI Act’s high-risk requirements, including developing a risk management system and implementing transparency and human oversight measures.
Individual rights
Under the Colorado AI Act, deployers of a high-risk AI system that makes a consequential decision concerning a consumer must provide the consumer with the right to opt out of the processing of their personal data for these purposes. In addition, under state consumer privacy laws in the United States and the GDPR in Europe, consumers must be offered the ability to access, correct and delete their personal data, including their health data. However, when a company uses health data to train an AI model or as input to an AI model, the data cannot simply be “deleted” from the trained model in response to a consumer’s request. Companies that use AI models to process such information must find alternative ways to comply with consumers’ deletion requests, such as suppressing the underlying data or anonymizing it, which is typically considered a form of deletion under applicable privacy laws.
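For illustration, here is a minimal sketch of a suppression-based approach to deletion requests, assuming a simple keyed training store. The storage layer, identifiers and functions shown are placeholders rather than any specific vendor’s API.

```python
# Minimal sketch of suppression-based handling of deletion requests when data
# cannot be removed from an already-trained model. Storage and identifiers are
# illustrative assumptions.

SUPPRESSED_CONSUMERS: set[str] = set()

def handle_deletion_request(consumer_id: str, training_store: dict[str, list[dict]]) -> None:
    """Remove the consumer's records from the training corpus and flag the
    consumer so their data is excluded from future training runs."""
    training_store.pop(consumer_id, None)   # purge stored training records
    SUPPRESSED_CONSUMERS.add(consumer_id)   # suppress at ingestion/retraining time

def build_training_set(training_store: dict[str, list[dict]]) -> list[dict]:
    """Exclude suppressed consumers when assembling data for the next retraining run."""
    return [
        record
        for consumer_id, records in training_store.items()
        if consumer_id not in SUPPRESSED_CONSUMERS
        for record in records
    ]
```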
Data security
In both the United States and Europe, the unauthorized disclosure of health data triggers requirements to notify affected individuals and regulators. To protect against such incidents, companies must comply with applicable security requirements. For example, in the United States, companies regulated by the HIPAA Security Rule, such as healthcare providers that engage in HIPAA-covered transactions, must implement administrative, physical and technical safeguards to protect electronic protected health information. Under the GDPR, companies must implement appropriate technical and organizational measures to protect personal data.
The use of AI to process health data also introduces novel data security risks, such as adversarial attacks that tamper with a model or extract sensitive data from it. To address these concerns, companies should consider aligning with AI-specific risk mitigation standards, such as the NIST AI Risk Management Framework.
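As one narrow, purely illustrative example of a technical control against adversarial examples (one of several attack types such frameworks contemplate), the sketch below shows a basic FGSM-style adversarial training step in PyTorch. The model, optimizer, data and epsilon value are placeholders.

```python
# Illustrative sketch of adversarial training (FGSM) as one control against
# adversarial examples; model, data and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Create an adversarially perturbed copy of the input batch."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    """Train on both clean and adversarially perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)  # gradients here are only for crafting x_adv
    optimizer.zero_grad()                       # discard them before the real update
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```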
Enforcement
US and EU regulators actively pursue violations of data privacy and security requirements in connection with sensitive data, and we expect to see increased enforcement around AI in the coming years. The FTC, for example, has initiated enforcement actions against companies that share consumer health data with third parties, and the US Department of Health and Human Services has investigated complaints alleging the impermissible use and disclosure of, and the failure to safeguard, HIPAA-protected health information. European data protection authorities have imposed fines on healthcare facilities for collecting health data without notice and for failing to implement appropriate technical and organizational measures to protect personal data from unauthorized access.
Takeaways
To effectively leverage AI in the life sciences sphere while remaining compliant with relevant AI requirements and data privacy and security principles, companies should take the following steps:
- Develop a risk management program that spans the lifecycle of an AI system and that reduces biases in the system’s decision making, maintains human oversight and accountability and ensures robust data security controls.
- Minimize the amount and sensitivity of health data ingested into AI systems.
- Provide consumers with the option to contest an AI-produced adverse decision and inform consumers about steps taken to reduce bias in AI.
- Determine how to comply with consumers’ personal data requests, including requests to delete personal data from AI models.
- Take appropriate steps to secure personal data in AI models against unauthorized disclosure, such as adversarial training to reduce the risk of adversarial attacks and strict access controls to prevent model theft (a brief access-control sketch follows this list).
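Purely as an illustration of the access-control point above, the following sketch gates a model endpoint behind role checks and audit logging. The role names, logger configuration and predict callable are assumptions, not part of any regulatory guidance.

```python
# Minimal sketch of gating model access behind role checks and audit logging;
# the roles and predict callable are illustrative assumptions.
import logging

logger = logging.getLogger("model_access")

AUTHORIZED_ROLES = {"care_team", "ml_service"}  # roles permitted to query the model

def predict_with_access_control(user_id: str, role: str, features: dict, predict) -> dict:
    """Refuse requests from unauthorized roles and log every access attempt."""
    if role not in AUTHORIZED_ROLES:
        logger.warning("Denied model access: user=%s role=%s", user_id, role)
        raise PermissionError("Role is not authorized to query this model.")
    logger.info("Model queried: user=%s role=%s", user_id, role)
    return predict(features)
```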