
Explainable AI In Health Care: Gaining Context Behind A Diagnosis

Most of the available health care diagnostics that use artificial intelligence (AI) function as black boxes, meaning the results come with no explanation of why the machine thinks a patient has a certain disease or disorder. While AI technologies are extraordinarily powerful, adoption of these algorithms in health care has been slow because doctors and regulators cannot verify their results. However, a newer class of algorithms known as “explainable AI” (XAI) produces results that humans can readily understand. As a result, all signs point to XAI being rapidly adopted across health care, making it far more likely that providers will actually use the associated diagnostics.

For many fields outside of health care, the black box aspect of AI is fine, and perhaps even desirable, because it allows companies to keep their precious algorithms as trade secrets. For instance, a type of AI called deep learning identifies speech patterns so a person’s voice assistant of choice can start a favorite movie. Deep learning algorithms find connections and patterns without their operators ever understanding which parts of the data are most important to the decision. The results validate the algorithms, and for many applications of AI there is little risk in trusting that they will continue to give good answers.
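
To make the black box problem concrete, here is a minimal sketch of such a model, assuming a scikit-learn setup; the built-in breast cancer dataset and the small neural network are stand-ins chosen for illustration, not any real diagnostic product.

```python
# A minimal sketch of a "black box" model: a small neural network that returns
# a prediction and a confidence score, but no account of which inputs drove
# the decision. The dataset and model choices here are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Scale the inputs, then fit a small multilayer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

# The output is just a label and a probability; the learned weights offer no
# human-readable reason why this particular case was flagged.
case = X_test[:1]
print("Predicted class:", data.target_names[model.predict(case)[0]])
print("Confidence:", model.predict_proba(case)[0].max())
```

The model may well be accurate, but nothing in its output says which measurements pushed it toward that answer.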

But for fields such as health care, where mistakes can have catastrophic effects, the black box aspect of AI makes it difficult for doctors and regulators to trust it, perhaps with good reason. Doctors are trained primarily to identify the outliers, the strange cases that don’t require standard treatments. If an AI algorithm isn’t trained properly with the appropriate data, and we can’t understand how it makes its choices, we can’t be sure it will catch those outliers or otherwise diagnose patients correctly.

For these same reasons, the black box aspect of AI is also problematic for the FDA, which currently validates AI algorithms in part by examining what type of data is fed into them to produce their decisions. Furthermore, many AI-related innovations pass through the FDA with relatively little scrutiny because a doctor stands between the algorithm’s answer and the final diagnosis or action plan for the patient.

For example, in its latest draft guidance released on Sept. 28, the FDA continues to require doctors to be able to independently verify the basis for the software’s recommendations in order to avoid triggering higher scrutiny as a medical “device.” Thus, software is lightly regulated where doctors can validate the algorithms’ answers. Consider the case of a medical image, where doctors can double-check suspicious masses highlighted by the algorithm. With algorithms such as deep learning, however, the challenge for physicians is that they have no context for why a diagnosis was chosen.

In contrast, the XAI algorithms being developed for health care applications can provide justifications for their results in a format that humans can understand. Many of the XAI algorithms developed to date are relatively simple, like decision trees, and can only be used in limited circumstances. But as they continue to improve, they will likely become the dominant algorithms in health care. Health care technology companies would be wise to allocate resources for their development.
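
As an illustration of how even a simple explainable model can justify its answers, here is a minimal sketch using a shallow decision tree, again assuming a scikit-learn setup and its built-in breast cancer dataset as stand-in clinical data; the features and thresholds are illustrative, not those of any actual diagnostic.

```python
# A minimal sketch of an explainable diagnostic model: a shallow decision tree
# whose rules can be printed and read by a clinician. The built-in breast
# cancer dataset again serves as stand-in clinical data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow tree trades some accuracy for rules a human can actually follow.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# The entire model can be dumped as plain if/then rules over named features.
print(export_text(clf, feature_names=list(data.feature_names)))

# For a single patient, the prediction comes with the exact sequence of
# feature thresholds (tree nodes) that produced it.
patient = X_test[:1]
print("Predicted class:", data.target_names[clf.predict(patient)[0]])
print("Nodes visited for this prediction:", list(clf.decision_path(patient).indices))
```

Unlike the black box sketch above, every prediction here maps to a chain of explicit feature thresholds that a clinician can inspect and challenge.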
