
Unclear Liability Poses a Growing Risk in AI-Driven Healthcare

October 20, 2025
Photo 218896018 | Healthcare © Lacheev | Dreamstime.com

Mark Hait, Contributing Editor

As artificial intelligence continues its rapid integration into clinical workflows, a looming liability vacuum threatens to destabilize trust, governance, and accountability in patient care. While AI tools promise greater efficiency, diagnostic power, and operational agility, legal experts are raising alarms about the absence of clearly defined fault lines when outcomes go wrong.

A new report from the JAMA Summit on Artificial Intelligence, co-authored by legal scholars and health system leaders, warns that the pace of AI adoption in healthcare far outstrips the sector’s legal and regulatory readiness. The result is dangerous ambiguity about who is responsible for medical errors involving AI systems, whether in direct clinical decision-making or operational management.

AI-Induced Ambiguity in Legal Accountability

One of the most pressing concerns identified by the report is the fragmentation of liability. When a clinical error involves an AI recommendation (say, a misinterpreted scan or an incorrect triage alert), it becomes difficult to determine whether the fault lies with the software developer, the deploying institution, the clinician using the tool, or some combination thereof.

According to legal experts such as Professor Glenn Cohen of Harvard Law School, plaintiffs may find it nearly impossible to prove fault. AI systems often operate as “black boxes,” obscuring how outputs are generated. This opacity, combined with contractual indemnification clauses between vendors and providers, may create a liability maze where no single party bears clear accountability.

The challenge is compounded by the difficulty of pointing to an accessible alternative. To bring a successful claim, a plaintiff must not only demonstrate harm but also prove that a safer design or clinical protocol was available and feasible. For AI systems trained on proprietary data and built on evolving architectures, this burden can become insurmountable.

Regulatory Gaps and Post-Deployment Drift

The liability dilemma is not isolated from broader regulatory gaps. As Professor Derek Angus of the University of Pittsburgh notes, many AI tools are deployed without FDA oversight, particularly those falling outside of diagnostic imaging or traditional device classifications. This creates an uneven landscape where tools used in high-stakes clinical scenarios lack outcome validation.

Even when tools are evaluated pre-deployment, their real-world behavior can drift. The same algorithm may perform differently across institutions, patient populations, and clinician workflows. This variation undermines the assumption that a pre-market evaluation reflects true clinical impact.

Complicating matters further is the paradox highlighted at the Summit: the AI tools most rigorously evaluated are often the least adopted, while widely adopted tools frequently lack robust evaluation. Without standard post-market surveillance mechanisms or centralized reporting requirements, this gap between adoption and accountability is likely to widen.

Implications for Health System Leadership

For healthcare executives, particularly CIOs, CMIOs, general counsel, and risk officers, this legal uncertainty introduces strategic and operational risk. Institutions implementing AI tools must grapple with questions such as:

  • Who is liable when a negative outcome involves AI?
  • What due diligence processes are in place to vet AI tools before deployment?
  • Are contracts with vendors sufficiently transparent and protective?
  • How are clinicians trained and supported in using these tools appropriately?

Failure to address these questions could expose organizations to litigation, regulatory scrutiny, or reputational damage. More fundamentally, it could erode clinician and patient trust in the technologies being promoted as solutions.

Toward a Liability-Resilient AI Ecosystem

The JAMA report calls for a more deliberate approach to liability and governance. Among its key recommendations:

  • Develop shared legal frameworks to allocate responsibility among developers, deployers, and clinicians.
  • Fund independent evaluations of AI tools that reflect real-world complexity.
  • Mandate transparency in algorithm design, validation metrics, and intended use cases.
  • Encourage development of “explainable AI” that allows users to understand and question recommendations.

Perhaps most critically, the report underscores the need for digital infrastructure investments that enable continuous performance monitoring and rapid detection of harmful drift. Without this, liability disputes may arise only after harm has occurred, which is too late for preventive intervention.
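To make the idea of continuous performance monitoring concrete, the sketch below shows one simple, hypothetical approach: comparing a model's discrimination on a recent window of labeled outcomes against the baseline measured at deployment and raising an alert when the drop exceeds a tolerance. The metric, threshold, and window here are illustrative assumptions, not specifications from the JAMA report or any particular health system.

```python
# Minimal sketch of post-deployment performance monitoring for a clinical AI tool.
# All names, thresholds, and window sizes are illustrative assumptions.

from dataclasses import dataclass
from sklearn.metrics import roc_auc_score


@dataclass
class DriftAlert:
    window_auc: float
    baseline_auc: float
    degraded: bool


def check_performance_drift(y_true, y_scores, baseline_auc, tolerance=0.05):
    """Compare AUC on a recent window of labeled outcomes against the AUC
    measured at deployment; flag degradation if the drop exceeds `tolerance`."""
    window_auc = roc_auc_score(y_true, y_scores)
    degraded = (baseline_auc - window_auc) > tolerance
    return DriftAlert(window_auc=window_auc, baseline_auc=baseline_auc, degraded=degraded)


if __name__ == "__main__":
    # Toy example: observed outcomes and model scores from the most recent review window.
    recent_outcomes = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
    recent_scores = [0.2, 0.4, 0.35, 0.8, 0.5, 0.6, 0.3, 0.55, 0.7, 0.45]

    alert = check_performance_drift(recent_outcomes, recent_scores, baseline_auc=0.85)
    if alert.degraded:
        print(f"Drift alert: window AUC {alert.window_auc:.2f} vs baseline {alert.baseline_auc:.2f}")
    else:
        print(f"Performance within tolerance (window AUC {alert.window_auc:.2f})")
```

In practice, a review of this kind would be scheduled per institution and per patient subgroup, since the report notes that the same algorithm can behave differently across sites and populations.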

In the short term, health systems can begin mitigating risk by creating multidisciplinary AI governance boards, updating procurement policies to require outcome data, and engaging legal teams early in vendor negotiations. These steps will not eliminate liability ambiguity, but they can reduce exposure and strengthen institutional preparedness.

As the legal system catches up to the AI era, one thing is clear: accountability cannot be retrofitted. It must be designed into healthcare AI from the start, with clarity, transparency, and shared responsibility at the core.