If AI Harms A Patient, Who Gets Sued?

More than two-thirds of U.S. physicians have changed their minds about generative AI, now viewing it as beneficial to healthcare. Yet, despite its growing power and presence in medicine—with 40% of doctors planning to use AI next year—apprehensions remain strong.

For the past 18 months, I’ve explored the potential uses and challenges of generative AI in medicine, work that led to my recent book, ChatGPT, MD. During this period, I’ve observed a shift in clinicians’ concerns. Initially, the focus was on AI’s reliability and patient safety. More recently, however, the predominant question I hear is: Who will be liable when something goes wrong?

From Safety To Suits: New AI Fears Emerge

Technology experts are increasingly optimistic that next generations of AI technology will prove reliable and safe for patients, especially under expert human oversight.

As evidence of GenAI’s quickly evolving capabilities, remember that Google’s first medical AI model, Med-PaLM, achieved a “passing score” (>60%) on the U.S. medical licensing exam in late 2022. But within five months, its successor, Med-PaLM 2, scored at an “expert” level (85%).

Since then, numerous studies have shown that generative AI increasingly outperforms medical professionals in various clinical tasks. These include diagnosis, treatment decisions, analytic ability and even empathy.

Despite medical AI’s precision and progress, medicine is an imperfect science. Errors can and will occur, whether the expertise comes from human clinicians or a bot.

Fault Lines: Navigating AI’s Legal Terrain

When it comes to the issue of liability, two things are clear. First, in the United States, practically anyone can be sued for almost anything. Second, the liability implications of using generative AI in healthcare exist in largely uncharted territory, as these cases have yet to be directly litigated in court.

Legal experts anticipate that as AI tools become more integrated into healthcare, determining liability will come down to whether errors were the result of AI decisions, human oversight or a combination of both. For instance, if doctors use a generative AI tool in their offices for diagnosing or treating a patient and something goes wrong, the physician would likely be held liable, especially if it’s deemed that their medical expertise should have overridden the AI’s recommendations.

But the scenarios get more complex when generative AI is used without direct physician oversight. As an example, what happens when patients rely on generative AI’s medical advice without consulting a doctor? Or let’s say a clinician encourages a patient to use an AI tool for help monitoring and interpreting wearable device data. What happens if the AI’s analysis and advice, although similar to what a physician would offer, leads to a serious health issue? And then there are questions of relative improvements. If, for example, AI reduces the incidence of misdiagnoses and preventable medical errors by 95% but still fails 5% of the time, how will the grieving families and the courts view their situation: as a statistical inevitability or as gross malpractice?

In a working paper, legal scholars from the universities of Michigan, Penn State and Harvard explore these challenges, noting: “Demonstrating the cause of an injury is already often hard in the medical context, where outcomes are frequently probabilistic rather than deterministic. Adding in AI models that are often nonintuitive and sometimes inscrutable will likely make causation even more challenging to demonstrate.”

AI On Trial: A Legal Prognosis From Stanford Law

To get a better handle on the risks that clinicians face when using AI, I spoke with Michelle Mello, professor of law and health policy at Stanford University and lead author of “Understanding Liability Risk from Using Health Care Artificial Intelligence Tools.”

Her paper, published earlier this year in the New England Journal of Medicine, offers clear insights into the murky waters of AI liability, addressing how the courts might handle AI-related malpractice cases.

Her analysis, based on hundreds of software-related tort cases, suggests that current legal precedents around software liability could inform future legal decisions about AI in healthcare. However, direct case law on any type of AI model “is very sparse,” Mello cautioned. For medical professionals worried about the risks of implementing AI, she offers reassurances mixed with warnings.

“At the end of the day, it has almost always been the case that the physician is on the hook when things go wrong in patient care,” she noted, while adding, “As long as physicians are using this to inform a decision with other information and not acting like a robot, deciding purely based on the output, I suspect they’ll have a fairly strong defense against most of the claims that might relate to their use of GPTs.”

She emphasizes that while AI tools can improve patient care by enhancing diagnostics and treatment options, providers must be vigilant about the liability these tools could introduce. To minimize risk, she recommends four steps.

  1. Understand the limits of AI tools: AI should not be seen as a replacement for human judgment. Instead, it should be used as a supportive tool to enhance clinical decisions.
  2. Negotiate terms of use: Mello urges healthcare professionals to negotiate terms of service with AI developers like Nvidia, OpenAI, Google and others. This includes pushing back on today’s “incredibly broad” and “irresponsible” disclaimers that deny any liability for medical harm.
  3. Apply risk assessment tools: Mello’s team developed a framework that helps providers assess the liability risks associated with AI. It considers factors like the likelihood of errors, the potential severity of harm caused and whether human oversight can effectively mitigate these risks.
  4. Stay informed and prepared: “Over time, as AI use penetrates more deeply into clinical practice, customs will start to change,” Mello noted. Clinicians need to stay informed as the legal landscape shifts.

The High Cost of Hesitation: AI And Patient Safety

Concerns about the use of generative AI in healthcare are valid, with both patients and doctors justifiably worried about its reliability and the legal repercussions of mistakes. However, failing to embrace generative AI technology creates major risk for both parties, as well.

Each year, misdiagnoses lead to 371,000 American deaths while another 424,000 patients suffer permanent disabilities. Meanwhile, more than 250,000 deaths occur due to avoidable medical errors in the United States. Half a million people die annually from poorly managed chronic diseases, leading to preventable heart attacks, strokes, cancers, kidney failures and amputations.

These statistics not only highlight existing liability risks but also present opportunities for clinicians to reduce malpractice claims, improve patient health and lower the cost of American medicine.

Today’s AI models like GPT-4, Gemini and Claude are not yet ready for unsupervised use by patients. However, we can expect that clinicians will soon rely on and recommend generative AI throughout clinical practice. The medical community will need to navigate this transition thoughtfully, understanding that adopting AI creates risk, but not adopting AI poses far greater danger.

Given that the legal system will most likely take years to establish clear precedents for AI liability, I encourage clinicians to embrace the opportunities the technology offers and, in doing so, prevent unnecessary harm and loss of life.
