
AI Ambitions Meet Systemic Barriers in HHS Push for Clinical Adoption

December 22, 2025
Photo 230368273 © Monticelllo | Dreamstime.com

Mark Hait, Contributing Editor

The U.S. Department of Health and Human Services (HHS) has issued a sweeping Request for Information (RFI) to identify how artificial intelligence (AI) can accelerate clinical transformation across the American healthcare system. Framed as a call to harness AI for lower costs and improved care, the initiative invites input on everything from evolving reimbursement frameworks to rethinking regulatory pathways. But beneath the enthusiasm lies a complex set of operational, legal, and ethical dynamics that will determine whether these ambitions translate into adoption at scale.

Regulatory levers or friction points?

HHS has positioned this RFI as an extension of its broader AI strategy, aimed at activating the agency’s full policy arsenal, including regulation, reimbursement, and R&D investment, to propel the use of AI in clinical settings. The challenge is that many of the systems AI must pass through are already bottlenecks, not accelerants. Despite efforts from the Food and Drug Administration (FDA) to modernize its approach to software as a medical device (SaMD), many AI tools continue to face lengthy and ambiguous approval timelines. This is especially true for adaptive algorithms that change over time, triggering questions about post-market surveillance, safety, and version control.

Moreover, aligning regulatory clarity with clinical utility remains elusive. A 2024 report from the GAO found that most AI-driven tools cleared for clinical use showed limited real-world evidence of improved patient outcomes—raising concerns that the regulatory bar may not be high enough in some cases and too procedurally rigid in others.

Reimbursement isn’t ready

The RFI’s inclusion of reimbursement reform signals a growing recognition that current payment models are structurally misaligned with digital innovation. AI-enabled tools designed to optimize workflows, detect conditions earlier, or reduce unnecessary care often struggle to gain traction under fee-for-service models that reward volume over value.

The Center for Medicare and Medicaid Innovation (CMMI) continues to pilot alternative payment models, but none has yet offered a definitive blueprint for integrating AI as a reimbursable component of care delivery. According to a recent Health Affairs analysis, existing models either overlook digital tools entirely or create administrative burdens that discourage adoption. Until reimbursement mechanisms reflect the deflationary promise of AI, the tools most capable of reducing costs may remain commercially nonviable.

Data liquidity without data risk

AI’s effectiveness depends not just on clinical validity but on data liquidity. The Office of the National Coordinator for Health IT (ONC) continues to champion interoperability efforts such as the Trusted Exchange Framework and Common Agreement (TEFCA), yet provider trust in cross-platform data exchange remains tenuous. Privacy protections under HIPAA, while foundational, were not designed for the complexity or velocity of modern data ecosystems.

Efforts to enforce tighter controls, such as recent ONC proposals, could unintentionally constrain the very access AI models require to learn and improve. Conversely, too much latitude risks eroding patient trust and triggering litigation. A 2023 JAMA Network Open study found that over 60% of patients expressed concern about their data being used for algorithmic training without explicit consent. The tension between ethical stewardship and technical performance is unlikely to resolve easily.

R&D must shift from novel to operational

HHS’s emphasis on research and development levers is welcome, but strategic realignment is necessary. While federal grants have historically funded early-stage innovation, less support has gone toward implementation science, integration design, or post-deployment auditing, each critical to ensuring clinical-grade utility.

The NIH’s Bridge2AI initiative represents one of the few federally backed programs aimed explicitly at building ethically sourced, interoperable datasets for AI training. Expanding this model to include funding for implementation at the health system level, particularly in rural and under-resourced regions, could improve both equity and evidence. Without this shift, AI risks becoming yet another technology whose benefits accrue unevenly.

Avoiding overreach through design

While the RFI includes thoughtful framing around caregiver support, outcome improvement, and cost reduction, it stops short of addressing the unintended consequences AI can introduce. Automation bias, algorithmic opacity, and over-reliance on unproven tools can all degrade care if implementation outpaces evidence. A recent KFF brief cautioned that AI hype cycles are often followed by disillusionment unless guardrails are explicit and enforceable.

Future guidance must focus not just on enabling innovation, but on operational readiness. This means designing tools that complement, rather than replace, clinical judgment; embedding auditability into development lifecycles; and ensuring vendors are accountable for long-term performance, not just initial deployment.

Stakeholder input or stakeholder inertia?

Public RFIs can be powerful mechanisms to solicit insight, but only when the feedback loop results in policy refinement. For AI in healthcare, the question is not whether stakeholders will respond, but whether their input will meaningfully influence program design, reimbursement architecture, or regulatory guidance.

Health systems, vendors, and advocacy organizations should treat this as a rare opportunity to confront persistent misalignments between innovation and adoption. For HHS, the real work will begin once the RFI closes: filtering signal from noise, resolving contradictions, and building consensus that can translate into durable change.

The momentum behind AI in healthcare is real. But transformation will require more than aspiration. It will require a full-system reckoning with the operational, ethical, and economic dimensions of digitized clinical care, and a policy posture prepared to navigate the friction ahead.