
AI myths, part 1: Will regulation limit the impact in health care?

by Dr. Tanuj Gupta

Published on 3/2/2020

Estimated read time: 6 minutes

Key takeaways:

  • Artificial intelligence (AI) introduces important concerns around data ownership, safety and security; with so much at stake, meaningful regulation should be expected.
  • The pharmaceutical, clinical treatment and medical device industries provide a precedent for how to protect data rights, privacy and security and drive innovation in an AI-empowered health care system.
  • We should expect the continued growth of AI applications for health care as more uses and benefits of the technology surface.

I’ve given more than 100 presentations on artificial intelligence (AI) and machine learning (ML) this past year. There’s no doubt these technologies are hot topics in health care that usher in great hope for the advancement of our industry. While they have the potential to transform patient care, quality and outcomes, there are also concerns about the negative impact these technologies could have on human interaction, as well as the burden they could place on clinicians and health systems.

In this blog series, I address some of the most common myths I’ve heard during my conversations with health care leaders across the globe. My goal is to give you the facts so you can make informed decisions about how your organization maximizes AI and ML. 

In this blog, I tackle questions around regulations and their potential to limit AI’s impact in health care.

Myth: AI and ML will be so heavily regulated that they won’t be useful in health care.

AI introduces some important concerns around data ownership, safety and security that warrant a thorough discussion:

  • Data rights - Who ultimately “owns” the data used in AI/ML? And, how do we handle patient consent?
  • Patient safety - How can we be sure that an algorithm used to predict a diagnosis or prescribe treatment is safe? When we’re building and testing an algorithm, do we need institutional review board approval?
  • Data security and privacy - How do we protect personal health information and prevent data breaches in a world where potentially thousands of ML algorithms are routinely accessing data for the purpose of making predictions?

Without a standard for how to handle these issues, there’s the potential to cause harm, either to the health care system or to the individual patient. For these reasons, important regulations should be expected. But the second part of this myth, “…they won’t be useful in health care,” is untrue. 

Let’s start with data rights. Have you ever used an at-home DNA testing kit and sent away a sample for the results? If so, you likely gave broad consent for your data to be used for research purposes, as defined by the U.S. Department of Health and Human Services (HHS) in a 2017 guidance document.

While that guidance establishes rules for giving consent, it also creates the process for withdrawing consent. Handling consent in an AI-empowered health care system may be a challenge, but there’s precedent for thinking through this issue to both protect rights and drive innovation.

In regard to patient safety concerns, the Food and Drug Administration (FDA) has published two documents to address the issue: Draft Guidance on Clinical Decision Support Software and Draft Guidance on Software as a Medical Device. The first guidance sets a framework for determining whether an ML algorithm is a medical device. Once you’ve determined your ML algorithm is in fact a device, the second guidance provides “good machine learning practices.”

We’ve seen parallels to this in the drug and device industries. Similar FDA regulations on diagnostics and therapeutics have kept us safe from harm without getting in the way of innovation. We should expect the same outcome for AI and ML in health care.

Finally, let’s look at data security and privacy. There’s a natural “tug-of-war” when it comes to these topics: the industry wants to protect data privacy while unlocking more value in health care. For example, HHS has long relied on the Health Insurance Portability and Accountability Act, commonly referred to as HIPAA, which was signed into law in 1996. While HIPAA is designed to safeguard protected health information, growing innovation in health care, particularly regarding privacy, led to HHS’ recently issued proposed rule to prevent information blocking and encourage health care innovation.

Additionally, in early 2019, President Donald Trump issued an executive order titled, “Maintaining American Leadership in Artificial Intelligence,” followed by the release of proposed guidance on federal agency regulation of AI by the Office of Management and Budget in early 2020. This is part of the language:

 “The policy of the United States Government [is] to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI. The deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals, and America’s maintenance of its status as a global leader in AI development is vital to preserving our economic and national security. The importance of developing and deploying AI requires a regulatory approach that fosters innovation, growth, and engenders trust, while protecting core American values, through both regulatory and nonregulatory actions and reducing unnecessary barriers to the development and deployment of AI.”

Countries around the globe are grappling with this same tug-of-war between mitigating risk and driving innovation. A report from Deloitte Insights illustrates this contrast best for seven large nations, first by comparing their concerns for AI risk and then by looking at their AI investments for the future:

[Charts: Deloitte Insights comparison of AI risk concerns and AI investments across seven nations]

With so much at stake related to data rights, patient safety and data security and privacy, it’s safe to conclude that AI and ML in health care will be regulated. But that doesn’t mean these tools won’t be useful.

In fact, we should expect the continued growth of AI applications for health care as more uses and benefits of the technology surface. However, this raises more questions: Will AI/ML replace human decision-making in health care? And, will AI/ML perpetuate bias or disparities in health care?

As we continue this series on the myths of AI, we’ll take a deeper look at those two concerns and more.

Visit the Cerner booth (2941) at HIMSS20 March 9-13 in Orlando, Florida, to learn more about our AI solutions.
