Pursuing the Ethics of Artificial Intelligence in Healthcare
Artificial intelligence (AI) is already making a difference in healthcare by helping medical professionals interpret tests, clarify diagnoses and identify the most effective treatment approaches for a range of diseases.
As Cedars-Sinai explores new uses of AI, it is balancing the rapid development of this emerging technology with responsible and ethical implementation.
“AI systems have the power to transform healthcare,” said Mike Thompson, vice president of Enterprise Data Intelligence at Cedars-Sinai. “If implemented properly and responsibly, AI can be deployed to enhance patient experience, improve population health, reduce costs and improve the work life of healthcare providers.”
Thompson sat down with the Cedars-Sinai Newsroom to examine the uses of AI to improve healthcare and to detail how the academic medical center is pursuing this fast-evolving technology in an ethical manner.
Why do ethics matter when it comes to the use of AI in healthcare?
The integration of AI into medical technology and healthcare systems is only going to increase in the coming years. As the technology continues to develop, the push for safety, soundness and fairness must occur at every level. This effort will require checks and balances from innovators, healthcare institutions and regulatory entities.
As technology advances, the medical community will need to develop standards for these innovative tools and revisit the regulatory systems on which physicians and patients rely. The goal is to ensure that healthcare AI is responsible, evidence-based, bias-free, and designed and deployed to promote equity.
If AI systems are not examined for ethics and soundness, they may be biased, exacerbating existing disparities across socioeconomic class, color, ethnicity, religion, gender, disability and sexual orientation.
Bias disproportionately affects disadvantaged individuals, who are more likely to be subjected to algorithmic outputs that are less accurate or that underestimate their need for care. Thus, solutions for identifying and eliminating bias are critical for developing generalizable and fair AI technology.
Are AI ethics in healthcare different from AI ethics in other fields, like consumer goods?
While many general principles of AI ethics apply across industries, the healthcare sector has its own set of unique ethical considerations. This is due to the high stakes involved in patient care, the sensitive nature of health data, and the critical impact on individuals and public health.
It is critical that AI in healthcare benefit all sectors of the population, as AI could worsen existing inequalities if not carefully designed and implemented. It’s also critical that we ensure AI systems in healthcare are both accurate and reliable. Ethical concerns arise when AI is used for diagnosis or treatment without robust validation, as errors can lead to incorrect medical decisions.
What is an example of “AI ethics in action” at Cedars-Sinai?
As an example, consider an AI system used to help assess a patient’s risk for a particular diagnosis. One question to ask is whether the algorithm performs equally well for all patients, regardless of race or gender.
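In practice, a simple version of that question can be checked directly. The sketch below, in Python, computes a risk model’s discrimination (AUC) separately for each patient subgroup; the column names ("race," "sex," "outcome," "risk_score") and the use of pandas and scikit-learn are illustrative assumptions, not a description of Cedars-Sinai’s actual pipeline.

```python
# Minimal sketch of a subgroup performance check: compare the model's AUC
# across patient groups. Column names and libraries are assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Compute the model's AUC separately for each patient subgroup."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["outcome"], g["risk_score"])
    )

# Hypothetical usage: flag subgroups whose AUC trails the overall AUC by > 0.05.
# df = pd.read_csv("scored_patients.csv")  # hypothetical file of scored patients
# overall = roc_auc_score(df["outcome"], df["risk_score"])
# for col in ("race", "sex"):
#     gaps = overall - auc_by_group(df, col)
#     print(gaps[gaps > 0.05])
```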
In the same vein, an algorithm trained on hospital data from the European Union may not perform as well in the U.S., as the patient population is different, as are treatment strategies and medications.
To combat these challenges, bias mitigation strategies may require us to implement mathematical approaches that help an AI model learn and produce balanced predictions.
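One hedged illustration of such a mathematical approach is reweighting: training samples from under-represented patient groups are given larger weights so the model does not simply optimize for the majority. The model choice and column names below are assumptions made for the sketch, not the specific method used at Cedars-Sinai.

```python
# Illustrative reweighting sketch: weight each training sample by the inverse
# frequency of its group so under-represented groups contribute comparably
# to the loss during training. All names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(groups: pd.Series) -> np.ndarray:
    """Weight each sample by the inverse of its group's frequency."""
    freq = groups.value_counts(normalize=True)
    return groups.map(lambda g: 1.0 / freq[g]).to_numpy()

# Hypothetical usage with a feature matrix X, labels y, and a sensitive attribute:
# X, y, group = df[feature_cols], df["outcome"], df["race"]
# model = LogisticRegression(max_iter=1000)
# model.fit(X, y, sample_weight=inverse_frequency_weights(group))
```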
At Cedars-Sinai, we also believe that critical AI algorithms should augment the expert, not replace that individual. Keeping the “human in the loop” to review a recommendation is another important strategy we use to mitigate bias.
Does Cedars-Sinai approach ethical AI differently than other academic medical centers? If so, how? And why?
To support our AI strategy, we created a framework for the ethical development and use of AI. The framework and policies are designed to ensure that the evolution of AI in medicine benefits patients, physicians and the healthcare community. It advocates for appropriate professional oversight for safe, effective and equitable use.
The framework starts by identifying who might be impacted and how, and then takes steps to mitigate any potential adverse impact.
How does the ethical use of AI evolve over time as Cedars-Sinai progresses in its use of these new technologies?
The most powerful—and useful—AI systems are adaptive. These systems should be able to learn and evolve over time outside of human observation and independent of human control. This, however, presents a unique challenge in AI ethics, as it requires ongoing monitoring, review and auditability to ensure systems remain fair and sound.
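As a rough illustration of what that ongoing monitoring can look like, the sketch below recomputes a performance metric on each new batch of patient outcomes and flags drift against a deployment-time baseline for human review. The threshold, metric and alerting hook are assumptions for the example, not Cedars-Sinai’s actual tooling.

```python
# Minimal drift-monitoring sketch: compare the latest batch's AUC to a
# baseline measured at deployment and flag it for human review if it drops.
# Baseline, tolerance, and the alerting hook are hypothetical.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.85      # hypothetical AUC measured when the model was deployed
DRIFT_TOLERANCE = 0.03   # hypothetical acceptable drop before triggering review

def needs_review(y_true, y_score) -> bool:
    """Return True if performance on the latest batch has drifted enough to warrant audit."""
    current_auc = roc_auc_score(y_true, y_score)
    return (BASELINE_AUC - current_auc) > DRIFT_TOLERANCE

# Hypothetical usage against last month's predictions and observed outcomes:
# if needs_review(last_month["outcome"], last_month["risk_score"]):
#     notify_review_board()  # hypothetical alerting hook
```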
Recent booms in AI technologies have been decades in the making. The most relevant and recent advances have accelerated the growth of AI algorithms and concepts—an evolution that will continue.
Now more than ever, we must ensure that AI algorithms are trustworthy. In healthcare, this entails systematically accumulating evidence and monitoring systems and data with ethics and equity in mind.