AI’s Impact In Medicine Around Development, Deployment, And Use

As AI increasingly makes its way into our daily lives, there’s no doubt that its effects on healthcare and medicine will reach everyone, whether or not they choose to use AI for their own activities. So, how can we implement AI responsibly, in a way that delivers real benefits while minimizing the potential downsides?

At the 2024 SXSW Conference and Festivals in March, Dr. Jesse Ehrenfeld, President of the American Medical Association (AMA), spoke on the topic of “AI, Health Care, and the Strange Future of Medicine.” In a follow-up interview on the AI Today podcast, Dr. Ehrenfeld expanded on his talk and shared additional insights for this article.

Q: How are you seeing AI impact medicine, and why did AMA recently release a set of AI principles?

Dr. Jesse Ehrenfeld: I am a practicing doctor, an anesthesiologist, and in fact I saw a bunch of patients earlier this week. I work in Milwaukee, Wisconsin, at the Medical College of Wisconsin, and I've been in practice for about 20 years. I'm the current president of the AMA, which is a household name and the largest, most influential group representing physicians across the nation. Founded in 1847, the AMA is the purveyor of the Code of Medical Ethics and many other resources that help physicians practice health care in America today. I'm board certified in both anesthesiology and clinical informatics, and I'm the first board-certified informatician to be President of the AMA. It's a relatively new specialty designation. I also spent ten years in the Navy. Foundationally, everything I do comes down to understanding how we can support the delivery of high-quality medical care to our patients, informed by my work and active practice.

It won't surprise you, but doctors have been burdened with a lot of technology that has just sucked, not worked, and been a burden rather than an asset. We just don't want that anymore, especially with AI. So the AMA released a set of principles for AI development, deployment, and use in November 2023, in response to concerns we're hearing from both physicians and the public.

The public has a lot of questions about these AI systems. What do they mean? How can they trust them? Security, all of those things. Our principles guide all of our work and our engagement with the federal government, Congress, the administration, and industry around how we make sure these technologies are set up to work as they are developed, deployed, and ultimately used in the care delivery system.

We have been working on AI policy since 2018, but in the latest iteration we call for a comprehensive government approach to AI. We've got to make sure that we mitigate risk to patients and maximize utility. These principles came from a lot of work to get subject matter experts together: physicians, informaticists, national specialty groups. There's a lot in those principles.

Q: Can you provide an overview of those AI principles?

Dr. Jesse Ehrenfeld: Above all else, we want to make sure that healthcare AI is designed, developed, and deployed in a way that is ethical, equitable, responsible, and transparent. Our vision and perspective is that compliance with a national governance policy is necessary to develop AI in an ethical and responsible manner. Voluntary agreements and voluntary compliance are not sufficient. We need regulation, and we should have a risk-based approach: the level of scrutiny and validation oversight ought to be proportional to the potential for harm or the consequences that an AI system might introduce. So if you're using it to support diagnosis versus a scheduling operation, maybe it requires a different level of oversight.

We have done a lot of survey work of physicians across the country to understand what's happening in practice today as usage of these technologies increases. The results are exciting, but I think they also should serve as a bit of a warning to developers and regulators. Physicians in general are very enthusiastic about the potential of AI in healthcare. 65% of US physicians from a nationally representative sample see some advantage to using AI in their practice: helping with documentation, translating documents, assisting with diagnoses, and getting rid of administrative burdens through automation such as prior authorization.

But they also have concerns. 41% of physicians say that they are equally excited about AI and frightened by it. And there are additional concerns about patient privacy and the impact on the patient-physician relationship. At the end of the day, we want safe and reliable products in the marketplace. That's how we're going to earn the trust of physicians and consumers, and obviously all of our work to support the development of high-quality, clinically validated AI comes back to these principles.

Q: What are some of these data and health privacy concerns you’re focusing on?

Dr. Jesse Ehrenfeld: What I see is more questions than answers from patients and consumers on data and AI. For example, with a health care app: what does it do? Where does the data go? Can I use that information or share it? Unfortunately, the federal government has not really made sure that there's transparency around where your data is going. The worst example of this is a company or developer labeling an app as “HIPAA compliant.” In the mind of the average person, “HIPAA compliant” implies that their data is safe, private, and secure. Well, most applications are not covered entities under HIPAA, and HIPAA only applies to covered entities. So saying you're “HIPAA compliant” when you're not covered by HIPAA is completely misleading, and we just shouldn't allow that kind of thing to happen.

There is also a lot of concern about where health data is going, and that extends obviously into AI use with patients. 94% of patients tell us that they want strong laws to govern the use of their healthcare data. Patients are hesitant to use digital tools if they don't understand the privacy considerations around them. There's a lot that we have to do in the regulatory space. But there's also a lot that AI developers can do, even if they're not statutorily required, to strengthen trust in AI data use.

Pick your favorite large technology company. Do you trust them with your healthcare data? What if there is a data breach? Would you upload a sensitive photo of a body part to their server to allow it to give you some information about potential conditions that you might be worried about? What do you do when there's a problem? Who do you call? So I think there ought to be opportunities to create more transparency around where data collection goes. How do you opt out of having your data pooled and shared and so on and so forth?

Unfortunately, HIPAA does not solve all of this. In fact, a lot of these applications are not covered by HIPAA. There needs to be more done to make sure that we can ensure the safety and privacy of healthcare data.

Q: Where and how do you see AI having the most positive impact on healthcare and medicine?

Dr. Jesse Ehrenfeld: We have to use technologies like AI, and we're going to have to embrace them, if we are going to solve the workforce crisis that exists in healthcare today. This is a global problem; it's not limited to the USA. 83 million Americans do not have access to primary care, and we don't have enough physicians in America today. We could never open enough medical schools and residency programs to meet those demands if we continue to work and deliver care in the same way.

When we talk about AI from an AMA lens, we actually like to use the term augmented intelligence, not artificial intelligence, because it comes back to this foundational principle that the tools ought to be just that: tools to boost the capacity and capabilities of our healthcare teams, physicians, nurses, everybody involved, so they can be more effective and more efficient in delivering care. What we need, however, are platforms. Right now, we've got a lot of one-off point solutions that don't integrate together, but I think we're starting to see companies move quickly toward platforms. Obviously, we're eager for that to happen in the physician space.

We are trying a lot of different pathways to make sure that we have a voice at the table throughout the design and development process. We've got our Physician Innovation Network, a free online platform that brings physician entrepreneurs together to help fuel change and innovation and bring better products to the marketplace. Companies are looking for clinical input, and clinicians are looking to get connected with entrepreneurs. We also have a technology incubator in Silicon Valley called Health2047, which has spun off about a dozen companies powered by insights that we have as physicians in the AMA.

At the end of the day, it comes down to having a regulatory framework that ensures only clinically validated products are brought to the marketplace. And we've got to make sure that tools truly live up to their promise and that they are an asset, not a burden.

I don't think that AI will replace physicians, but I do think that physicians who use AI will replace those who don't. AI products have tremendous potential and promise to alleviate the administrative burdens that physicians and practices experience. Ultimately, I expect there's going to be a lot of success in ways that we directly use AI in patient care. There's a lot of excitement there, but we need to make sure that we have tools and technologies that address challenges around racial bias, errors that can cause harm, security and privacy issues, and threats to health information. Physicians need to understand how to manage these risks and how to manage liability before we rely on more and more tools.

(Disclosure: I am a co-host of the AI Today podcast.)
