The pull of artificial intelligence is strong, but there are serious ethical questions that must be addressed in certain fields. Nexdigm’s Director of Intelligent Automation and Accelerated Analytics Service, Amit Kumar, considers the implications of using AI solutions in health care.
“Ethics is knowing the difference between what you have a right to do and what is right to do”
Is all this outcry about data regulations worth it? How much do we really care? When was the last time you thought about the terms and conditions and data permissions of a smartphone app, or took the time to read a website’s privacy and cookie policies? We inherently trust a multitude of websites and apps, relying on governance bodies to handle areas of concern such as information security and data privacy.
As the fates of data and health care become increasingly intertwined, it is becoming ever more challenging to exercise due diligence and maintain the privacy and security of health care data. In such a scenario, how do we ensure that health care providers can harness the power of AI while also adhering to the ethical and legal obligations of technology and data use?
The following are some of the key concerns often faced by health care organizations in realizing benefits from data.
Data Privacy and Security
Discussions around the security and risk implications of data are not new. While each organization claims to have ethical guiding principles around the fair use of data, the lack of a binding legal framework has made users more susceptible to data theft, hacking and unauthorized use. Monetization of data through advertising and third-party sharing is the chief business model of so-called digital businesses today, which is why people are skeptical about sharing personal data with organizations. These concerns carry even more weight in health care, since such data in the wrong hands could prove far costlier. If health insurance companies gained granular access to personal medical histories, they could use intelligent predictions to screen out potentially costly individuals, putting the medical claims of the neediest sections of society at risk.
There has been an ongoing debate around whether, in these times of COVID-19, a contact tracing app should be made mandatory for all. Such an app could alert the user (and possibly the authorities) if a likely carrier is in the user’s proximity. Without a doubt, the app would need access to a user’s location at all times. A few countries, such as South Korea and Taiwan, have proved the effectiveness of such a digital, data-savvy approach in containing the pandemic. However, little has been done so far to address the concerns raised by data privacy advocates. Arguments from such groups – ranging from the possibility of data hacking to unrestricted use by governments for surveillance – pose a dilemma to policymakers and suggest a trade-off between public health and privacy.
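To make the privacy cost concrete, here is a minimal sketch, purely illustrative, of the location-based matching such an app implies: a user’s location pings are compared against those of confirmed patients. The Ping structure, the 10-metre radius and the 15-minute window are assumptions for illustration, not any real protocol.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

# Illustrative sketch only: all thresholds and structures are assumptions.
@dataclass
class Ping:
    lat: float
    lon: float
    time: datetime

def distance_m(a: Ping, b: Ping) -> float:
    """Great-circle distance between two pings in metres (haversine formula)."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(h))

def likely_contact(user: list[Ping], patient: list[Ping],
                   radius_m: float = 10.0,
                   window: timedelta = timedelta(minutes=15)) -> bool:
    """Flag a likely exposure if the two location trails overlap in space and time."""
    return any(distance_m(u, p) <= radius_m and abs(u.time - p.time) <= window
               for u in user for p in patient)
```

Even this toy version shows why advocates worry: the matching only works if someone holds complete movement histories for both parties.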
Data Rights
The underlying question leading to this discussion is: who owns a user’s data? Who governs the fair-usage rights of such data? It has often been argued that users should be given enhanced, active controls to govern their data. However, many patients (or, more broadly, users) are often deliberately kept unaware of such controls. Moreover, such controls are almost always multi-layered and too complex for a layperson to understand. Hence, it becomes important to have discussions about which data can be used, how and under what circumstances. Could a governance body or a global law help with these concerns, thus allowing the free flow of valuable data when and where it is most needed?
The GDPR (General Data Protection Regulation) in Europe has been a step in that direction, and it is worth noting that the GDPR allows the use of health data without consent where it is necessary for scientific research or in the interest of public health. Greater awareness, transparency and conviction around how data is anonymized, to whom it is entrusted and whether it is utilized for the greater good would encourage patients to share more data. The GDPR has set a good benchmark for data privacy and security standards in Europe (still a work in progress), and other countries are expected to follow suit.
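For readers wondering what “anonymized” means in practice, here is a hedged, illustrative sketch of two basic de-identification steps: pseudonymizing direct identifiers and coarsening quasi-identifiers. The field names, salt handling and granularity choices are assumptions for illustration only, not techniques mandated by the GDPR.

```python
import hashlib

SALT = b"rotate-and-store-separately"  # illustrative; in practice, managed as a secret

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:12]

def coarsen(record: dict) -> dict:
    """Generalise quasi-identifiers that could re-identify a patient."""
    decade = (record["age"] // 10) * 10
    return {
        "id": pseudonymise(record["patient_id"]),
        "age_band": f"{decade}-{decade + 9}",       # 57 becomes "50-59"
        "region": record["postcode"][:3],            # keep only the broad area
        "diagnosis": record["diagnosis"],
    }

print(coarsen({"patient_id": "P-10042", "age": 57,
               "postcode": "SW1A 1AA", "diagnosis": "melanoma"}))
```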
Fairness and Inclusiveness
While it is typically easy to obtain large, diverse and balanced data sets in other industries, health care businesses face stringent regulations and organizational barriers when collecting clinical data. AI systems trained on such sparse and biased data are bound to fail. For example, a skin cancer detection algorithm trained on a sample of Caucasian males fails miserably when tried on female or non-white patient groups. Such biases are not inherent to AI; they are introduced and reinforced through unintentional human choices and skewed data, further marginalizing minority or overlooked groups.
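As an illustration of the kind of audit that catches the failure described above, the following hypothetical sketch checks how each demographic group is represented in a data set and how a model performs on it. The record structure and group labels are assumptions, not any particular clinical system.

```python
from collections import Counter

def representation(records: list[dict]) -> dict[str, float]:
    """Share of training examples per demographic group."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """Per-group accuracy; a large gap between groups signals bias."""
    hits, totals = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["label"])
    return {g: hits[g] / totals[g] for g in totals}
```

A model that scores 95 percent overall but 70 percent on an under-represented group is exactly the failure mode the skin cancer example describes, and only a per-group breakdown reveals it.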
Many people might ask: who is going to benefit from this open-ended data sharing? Health care organizations may well become more efficient and cost-effective, yes. But how does it impact society at large? There are apprehensions around AI leading to the loss of jobs or the concentration of power and resources among a chosen few.
While AI might be perceived as an equal and a replica of humans performing specific knowledge-based and skill-based tasks, so far in most areas it has proved to be a valuable ally and assistant: by taking on the boring, manual and tedious work, it frees humans to focus on tasks requiring creativity and intellect.
Trust and Accountability
We all love transparency, don’t we? But in the pursuit of highly sophisticated and accurate algorithms, we have lagged on explainability, turning AI into a black box. If all health care stakeholders are to buy into the benign story of AI, they need to understand the underlying factors responsible for the decisions or recommendations of the very AI system they are relying upon. This applies not only to patients but to doctors as well, many of whom are not well versed in interacting with AI systems or interpreting their outputs.
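One widely used way to peer into the black box is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below is illustrative, using scikit-learn’s built-in breast cancer data set and a generic classifier rather than any specific clinical system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are the ones the "black box"
# is actually relying on -- something a clinician can sanity-check.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```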
At the heart of the widespread adoption of intelligent health care lies the question: how good is good enough? Should one trust tumor removal surgery to a robot that has a success rate of (only) 90 percent? Further, in the case of a disagreement between human and machine, who has the final say? And how do we quantify our levels of confidence in both?
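One way to make “how good is good enough?” concrete is to attach a confidence interval to a quoted success rate: the same 90 percent means very different things at different sample sizes. The numbers below are illustrative; the Wilson score interval is a standard formula for a binomial proportion.

```python
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a success rate (Wilson score interval)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

print(wilson_interval(18, 20))     # 90% over 20 surgeries: roughly (0.70, 0.97)
print(wilson_interval(900, 1000))  # 90% over 1,000 surgeries: roughly (0.88, 0.92)
```

A robot quoted at 90 percent over 20 surgeries could plausibly be anywhere between 70 and 97 percent; over 1,000 surgeries, the same figure pins it between roughly 88 and 92.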
While some procedures or medical cases might be straightforward and simple enough for AI to handle, others might need human intervention. Many such questions remain unanswered, particularly around who shares the responsibility and accountability for data and intelligent systems if things take a turn for the worse.
The Future of Ethical AI
AI, in its intrinsic nature, is not so different from a scalpel: like any other tool intended for a noble cause such as a surgical procedure, its malicious uses cannot be ruled out. To sum up, there is a need to develop new frameworks for evaluating and ensuring the transparency, safety and reliability of AI – frameworks that span the underlying data and technology, their impact and their limitations. Constant monitoring, validation and review are necessary to keep up with evolving concerns.