JAMA: AI Tools in Health Care Are Spreading Faster Than We Can Govern Them

Artificial intelligence is already shaping how care is delivered, how health systems operate, and how patients access services. But the rapid pace of AI adoption is exposing a foundational gap: health care lacks the infrastructure, incentives, and oversight mechanisms to evaluate whether these tools are actually improving health outcomes.
The recent JAMA Summit on AI paints a sobering picture. While AI offers potential across clinical decision-making, business operations, and consumer-facing tools, few of these systems undergo rigorous evaluation. In many cases, the tools most deeply embedded in care delivery are those least subject to regulatory review. This asymmetry between promise and proof risks not only wasted investment but also patient harm.
Not All AI Tools Are Created Equal, But All Can Influence Health
The summit’s analysis highlights four broad categories of AI in health care: clinical tools (like sepsis alerts or diagnostic aids), direct-to-consumer applications (like symptom checkers), business operations tools (like scheduling software), and hybrid tools that straddle administrative and clinical functions (like ambient scribe systems). While their interfaces differ, all can shape access, quality, and outcomes.
Yet, across these domains, regulatory and evaluative standards vary dramatically. AI diagnostic tools for imaging often require FDA clearance, while apps marketed for wellness or tools used in revenue cycle management typically do not. This regulatory patchwork is particularly concerning for hybrid systems now being deployed at scale without clear oversight.
Why Evaluation Fails to Keep Pace
Unlike traditional medical interventions, AI tools are mutable, context-dependent, and often difficult to define with precision. Their performance is shaped by human-computer interfaces, clinical workflows, training, and institutional priorities. Even sophisticated tools can underperform if implemented poorly.
These challenges make randomized controlled trials (RCTs) both impractical and insufficient. Yet alternative evaluation designs, such as adaptive platform trials or embedded real-world studies, require infrastructure that most health systems lack. The result: tools are rolled out faster than their effects can be measured.
In business operations, the gap is even wider. Many tools are promoted based on internal use cases or marketing claims, with no public data on patient impact. As administrative AI proliferates, its downstream effects on access and equity remain largely speculative.
A Call for Total Lifecycle Oversight
To address these challenges, the JAMA Summit recommends four core strategies:
- Engage stakeholders across the entire product lifecycle. Developers, clinicians, regulators, and patients must be involved from tool design through deployment and monitoring.
- Build fit-for-purpose evaluation and monitoring tools. Existing safety standards are insufficient. New frameworks must capture real-world health outcomes, not just technical performance.
- Invest in representative data infrastructure. A federated learning network of health systems could support ongoing assessments and generalizable insights (a minimal sketch of this idea follows the list).
- Realign incentives. Without financial and policy levers, health systems lack motivation to evaluate AI tools robustly or share findings.
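
To make the third recommendation concrete, the sketch below illustrates the federated-averaging idea behind such a network: each health system trains a model on its own records and shares only the resulting parameters with a coordinator, which combines them weighted by sample size. The synthetic data, simple logistic model, and function names are illustrative assumptions, not anything specified by the Summit.

```python
import numpy as np

# Hypothetical local step: each health system fits a simple
# logistic-regression weight vector on its own patient data and
# shares only the weights, never the underlying records.
def local_update(weights, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(X @ weights)))  # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)             # gradient of log loss
        weights = weights - lr * grad
    return weights

def federated_average(global_weights, site_datasets):
    """One round of federated averaging: every site trains locally,
    then the coordinator averages the returned weights in proportion
    to each site's sample size."""
    updates, sizes = [], []
    for X, y in site_datasets:
        updates.append(local_update(global_weights.copy(), X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy demonstration: three "health systems" with synthetic data.
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3, 0.0])
sites = []
for n in (200, 350, 150):
    X = rng.normal(size=(n, 4))
    y = ((X @ true_w + rng.normal(scale=0.5, size=n)) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(4)
for _ in range(10):
    w = federated_average(w, sites)
print("Global weights after 10 federated rounds:", w)
```

The design point is that patient-level data never leaves a site; only aggregate parameters do, which is what makes cross-system assessment feasible under privacy constraints.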
These recommendations reflect a broader recognition: the traditional linear model of development, approval, and post-market monitoring does not work for AI. The technology evolves too rapidly, and its effectiveness is too tightly bound to context.
Why Health Leaders Should Care
For CIOs, CMIOs, and operational executives, the current landscape presents both opportunity and risk. While AI tools promise efficiency gains and clinician relief, unmanaged adoption could introduce bias, inefficiency, or safety issues. Moreover, without meaningful outcome data, leaders cannot distinguish between strategic investments and unvetted hype.
The workforce implications are also mounting. Clinical roles are being redefined. New skills are required. And without transparent governance, trust in AI tools may erode, especially among clinicians asked to rely on opaque algorithms.
Toward an Accountable AI Ecosystem
AI is not just another IT deployment. Its effects span care delivery, organizational strategy, and patient trust. To navigate this complexity, health systems must treat AI as a living intervention, requiring continual validation, monitoring, and recalibration.
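
As one hedged illustration of what continual validation might look like operationally, the sketch below tracks a deployed model's rolling accuracy against its validation-time baseline and flags when performance drifts below a tolerance, signaling a need for review and recalibration. The class name, window size, thresholds, and simulated feed are all assumptions, not an established standard.

```python
import random
from collections import deque

class ModelPerformanceMonitor:
    """Illustrative rolling monitor for a deployed clinical AI tool.

    Tracks agreement between predictions and eventual outcomes over a
    sliding window and flags drift once performance falls a chosen
    margin below the accuracy measured at validation time. Window
    size and tolerance are placeholder assumptions, not standards.
    """
    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.hits = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, outcome):
        self.hits.append(1 if prediction == outcome else 0)

    def rolling_accuracy(self):
        return sum(self.hits) / len(self.hits) if self.hits else None

    def needs_recalibration(self):
        # Only alert once the window is full enough to be meaningful.
        if len(self.hits) < self.hits.maxlen:
            return False
        return self.rolling_accuracy() < self.baseline - self.tolerance

# Simulated feed: a model validated at 0.82 accuracy quietly
# degrades to 0.70 partway through deployment.
random.seed(0)
monitor = ModelPerformanceMonitor(baseline_accuracy=0.82)
for i in range(3000):
    true_accuracy = 0.82 if i < 1500 else 0.70
    correct = random.random() < true_accuracy
    monitor.record(prediction=1, outcome=1 if correct else 0)
    if monitor.needs_recalibration():
        print(f"Case {i}: rolling accuracy {monitor.rolling_accuracy():.2f} "
              "has drifted below threshold; trigger review and recalibration.")
        break
```

In practice the monitored quantity would be an outcome-linked metric rather than raw accuracy, which is exactly the real-world health outcomes, not just technical performance, that the Summit calls for.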
In the absence of regulatory mandates, systems can lead by example. Partnering with developers to co-design evaluations, participating in federated research networks, and integrating ethical oversight into deployment processes are all within reach.
But voluntary action is not enough. Policymakers must create the conditions for sustainable, equitable, and transparent AI adoption. As the Summit concluded, the promise of AI will only be realized if the ecosystem is built to learn, adapt, and ensure that technology serves health, not just efficiency.