HHS AI Strategy Sets a Federal Precedent, But Stops Short of Sector-Wide Reform

December 8, 2025
Photo 142816673 © kkssr | Dreamstime.com

Victoria Morain, Contributing Editor

The U.S. Department of Health and Human Services (HHS) has released its first unified artificial intelligence (AI) strategy, positioning AI as a transformative tool for internal operations, public health research, and care modernization. Framed as a “OneHHS” approach, the strategy calls for coordination across the department’s operating divisions, including the CDC, CMS, FDA, and NIH.

But for all its ambition, the new strategy remains internally focused, aimed more at optimizing the federal workforce than at overhauling AI’s role in national healthcare delivery. The result is a plan that raises expectations but also reveals the slow pace of federal adaptation in the face of rapidly evolving AI technologies and commercial adoption.

Five Pillars, One Department, Limited Scope

The HHS AI Strategy is built around five thematic pillars: governance, infrastructure, workforce, research reproducibility, and care modernization. These pillars reflect priorities common to public sector modernization efforts: build systems that are secure, efficient, and equitable. According to Deputy Secretary Jim O’Neill, the plan is designed to improve outcomes while reinforcing public trust and transparency.

However, the scope of the strategy is almost entirely inward-facing. It focuses on tools for internal operations, research enhancement, and process efficiency. There is no defined mandate for regulating AI use by commercial payers, health systems, or digital health vendors. Nor does the plan offer concrete standards for clinical-grade AI tools already deployed across diagnostics, revenue cycle management, or patient engagement.

This structural limitation means the federal government’s largest health agency is taking a leadership stance without addressing the AI transformation already underway across the sector.

Coordination Without Command

The “OneHHS” concept is a significant structural shift. By unifying divisions such as CMS, FDA, and NIH under one AI governance umbrella, the strategy aims to break down silos and accelerate shared innovation. This approach acknowledges a long-standing truth: health data and AI systems don’t respect agency boundaries.

Yet inter-agency coordination does not equal national leadership. Without clear federal standards or enforcement authority, downstream stakeholders such as health systems, vendors, and state agencies remain free to define their own AI strategies, risk tolerances, and implementation timelines. This fragmentation has already led to uneven adoption, widening trust gaps, and regulatory confusion across the digital health ecosystem.

The new HHS strategy does little to close those gaps. While it rightly emphasizes internal reproducibility and workforce readiness, it leaves external oversight and sector-wide accountability untouched.

Public Trust Demands More Than Internal Optimization

HHS has made transparency and trust central themes of the AI Strategy. But trust in government-led AI initiatives cannot be sustained without public visibility into how these systems are tested, validated, and monitored. As federal agencies incorporate AI into claims processing, eligibility screening, and public health surveillance, the implications for patient data privacy and equity become harder to ignore.

According to a 2024 analysis by Health Affairs, poorly monitored AI tools in clinical settings have already led to care disparities, algorithmic bias, and patient harm. Meanwhile, commercial vendors continue to deploy black-box algorithms with limited external validation and few enforceable standards for performance transparency. HHS’s strategy does not directly address these concerns.

If the federal government seeks to rebuild public trust, internal improvements alone will not be enough. What is required is a national AI governance framework, one that holds public and private actors to the same safety, transparency, and equity benchmarks.

Missed Opportunity for Regulatory Modernization

The timing of this strategy matters. AI-driven tools have moved beyond experimental use and are now embedded in core functions like clinical decision support, prior authorization, claims adjudication, and diagnostic imaging. Several large payers and health systems are actively using AI to stratify risk, determine treatment pathways, and manage utilization—often without full transparency to patients or providers.

The FDA has made early moves to adapt its oversight of Software as a Medical Device (SaMD), including AI-based tools. But regulatory frameworks remain fragmented, and the new HHS strategy does not advance this work. It makes no reference to the FDA’s risk-based classification models, to the Office for Civil Rights’ (OCR) concerns about algorithmic privacy, or to the potential for CMS to enforce documentation standards tied to AI-supported care decisions.

In this context, HHS’s AI Strategy feels like a foundational document written for a different moment—one where AI is theoretical rather than operational. It lays important groundwork for internal reform but stops short of addressing the live tensions emerging in real-world AI deployment across the healthcare continuum.

The Path Forward: From Strategy to Enforcement

Despite its narrow focus, the HHS AI Strategy sets a precedent. It affirms that AI is not peripheral to public health or administrative efficiency. It is central. It recognizes the need for coordinated infrastructure, skilled workforce development, and transparent risk governance. And it gives agency leaders a shared framework to begin aligning investment and innovation.

The next step is to extend that framework beyond HHS walls.

The Office of the National Coordinator for Health Information Technology (ONC), OCR, and FDA must now work in tandem to define guardrails for AI across public and private settings. These guardrails should include baseline documentation requirements, independent validation protocols, and public reporting of AI tool performance, particularly for systems used in patient-facing care.

Additionally, CMS and other payers must consider how AI-based determinations will be audited, challenged, or reviewed in real time. A broader regulatory vision should also address algorithmic explainability, bias mitigation, and continuous learning models that evolve without clear revalidation triggers.

Without these measures, HHS risks leading a thoughtful but incomplete conversation, one that improves government workflows while the broader healthcare ecosystem continues to experiment without clear accountability.