Artificial intelligence (AI) has the potential to turn healthcare data into meaningful insights. The technology can help reduce the cognitive load for clinicians, giving them more time and energy to focus on patient care. Amid these significant benefits is an important question: Is AI designed to improve upon existing methods in terms of efficiency and ethics?
During a session at the Becker's Hospital Review 9th Annual CEO & CFO Roundtable, Cerner associates Dick Flanigan, senior vice president, and Rebecca Winokur, MD, senior physician executive, discussed the ethical use of data and AI in healthcare. Here are three key takeaways from their session.
Use of AI has grown in healthcare, but not all uses address problems holistically
Simple AI systems that run on rules-based algorithms ("if this, then this") consider only clinical variables before making treatment recommendations. This may be inadequate for some patients: a recommended treatment plan is impractical if, for example, the patient cannot afford the prescribed medicine. By contrast, a more sophisticated AI learning system would consider a wider array of inputs, such as socioeconomic factors in addition to clinical data.
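The contrast above can be sketched in code. This is a minimal, hypothetical illustration: the field names, thresholds, and recommendations are illustrative assumptions, not any real clinical decision-support logic.

```python
# Hypothetical sketch contrasting a rules-based recommendation
# (clinical variables only) with a holistic one that also weighs
# a socioeconomic input. All names and thresholds are assumptions.

def rules_based_recommendation(patient: dict) -> str:
    """'If this, then this' logic driven by a single clinical variable."""
    if patient["a1c"] >= 6.5:
        return "prescribe_medication_x"
    return "lifestyle_counseling"

def holistic_recommendation(patient: dict) -> str:
    """Considers a nonclinical input alongside the clinical ones."""
    clinical = rules_based_recommendation(patient)
    # A medicine the patient cannot afford makes the plan impractical;
    # fall back to a lower-cost alternative instead.
    if clinical == "prescribe_medication_x" and not patient.get(
        "can_afford_medication", True
    ):
        return "prescribe_generic_alternative"
    return clinical

patient = {"a1c": 7.1, "can_afford_medication": False}
print(rules_based_recommendation(patient))  # prescribe_medication_x
print(holistic_recommendation(patient))     # prescribe_generic_alternative
```

The point of the sketch is that the two functions disagree precisely for the patients the rules-based system fails: both see the same clinical value, but only the second asks whether the plan is workable in the patient's circumstances.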
"When we started doing our internal assessment, we realized the decision support systems we've had in use for 35 years might lead to unintended consequences or adverse outcomes." – Dick Flanigan
For optimal outcomes, data must be complemented with ethical considerations
Clinical indicators are not enough to drive healthcare decisions, not only because they exclude important nonclinical factors but also because prescriptive care based solely on data could embed unconscious bias. This may stem from incomplete data sets on which algorithms were trained or biased assumptions about patients.
"We can't move toward whole-person care and advance on health equity without consideration of the ethical piece." – Rebecca Winokur, MD
AI technologies are also used in workforce-related contexts, such as determining which staff get first choice of shifts, and in billing collections. But these uses can introduce bias: shift algorithms may favor tenured staff, while collections algorithms may target the most vulnerable patients with the least ability to pay.
Healthcare organizations can respond to these challenges by taking a systemic approach
Specific recommendations are to:
- Understand the baseline, which is that AI is already prevalent today — in electronic health records, medical devices, staffing solutions, collections and more.
- Establish a governing principle on how to incorporate an ethics-based approach into all aspects of work, along with an accountability framework.
- Create a culture based on diversity, equity and inclusion, where the right people are at the table and all contributions are valued.
- Conduct an inventory to understand how AI is being developed within IT, clinical and administrative functions, as well as from external vendors, and then "un-silo" those teams to share learnings.
- Develop an operational strategy for monitoring the use of AI solutions, mitigating implicit bias, evaluating outcomes and refining models.
- Champion a multidisciplinary approach based on collaboration and partnership.
- Ensure transparency so that negative AI-related outcomes are surfaced and addressed.
- Support the development of industrywide regulatory guidelines on ethical use of AI.
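One of the recommendations above calls for monitoring AI solutions and mitigating implicit bias. A common first step in such an audit is to compare how often different patient groups receive a favorable recommendation. The sketch below is a hypothetical illustration of that check; the group labels, data, and the 0.8 threshold (the widely used "four-fifths" rule of thumb) are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical bias-monitoring sketch: compare favorable-recommendation
# rates across patient groups and flag large disparities for review.

from collections import defaultdict

def selection_rates(records):
    """Rate at which each group received the favorable recommendation.

    records: iterable of (group_label, received_favorable) pairs.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        favorable[group] += int(recommended)
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group rate divided by the highest; values well below
    ~0.8 are a common trigger for closer review."""
    return min(rates.values()) / max(rates.values())

# Toy data: group_a favored 8 of 10 times, group_b only 5 of 10.
records = (
    [("group_a", True)] * 8 + [("group_a", False)] * 2
    + [("group_b", True)] * 5 + [("group_b", False)] * 5
)
rates = selection_rates(records)
print(rates)                   # {'group_a': 0.8, 'group_b': 0.5}
print(disparity_ratio(rates))  # 0.625, below 0.8, so flag for review
```

A check like this does not explain *why* a disparity exists, only that one is present; the operational strategy in the list above would then route the flagged model to the accountable team for evaluation and refinement.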
"We are right at a point where a lot of attention is being paid to big tech, big data, AI and algorithms ... there's likely to be some legislative response, which will lead to a regulatory response." – Dick Flanigan
As the industry continues to explore how to integrate AI into healthcare delivery and operations, we must ensure that the technology is screened through an ethics lens. With a holistic approach that considers clinical and nonclinical data, AI can help produce better outcomes, improve clinician satisfaction and advance the overall healthcare experience.
Using data and intelligent technologies, Cerner is developing new solutions to help ease the clinician burden and transform healthcare outcomes.