Speed Versus Safety in AI Regulation Isn’t a Binary Choice

The Department of Health and Human Services (HHS) is accelerating its push into artificial intelligence, with top officials openly prioritizing speed over caution. Speaking at the Milken Institute Future of Health Summit, Deputy Secretary Jim O’Neill declared that “faster is better” when it comes to AI deployment in healthcare. His colleague Amy Gleason, strategic advisor at the Centers for Medicare & Medicaid Services (CMS), echoed this urgency, predicting that AI assistants will become routine patient tools within four years.
Taken together, these remarks signal a dramatic shift in federal posture: from cautious observer to active accelerator. The question isn’t whether the federal government will greenlight AI in healthcare. It’s how quickly it can get out of the way. But this “go-fast” stance is colliding with a complex reality: the AI era will demand more regulation, not less, and speed without structure is a risk multiplier, not a shortcut to innovation.
A Federal Bet on Infrastructure, Not Intervention
Gleason outlined the administration’s two-pronged focus: build foundational infrastructure like national provider directories and identity systems, then let private innovators take the lead in applications like ambient AI, disease management, and patient-facing chatbots. The Health Technology Ecosystem Initiative and its forthcoming CMS App Store are central to this strategy. More than 60 companies have signed on to deliver vetted tools by early 2026.
This division of labor (government sets the rails, industry drives the train) mirrors successful models in other sectors. But healthcare is not just another data market. The risks of unintended bias, data misuse, clinical overreach, and patient harm are not hypothetical. They're already playing out in early AI implementations across the country.
A 2024 Health Affairs analysis warned that generative AI tools in healthcare are being adopted faster than institutional safeguards can keep pace. Some systems report “hallucinated” summaries in EHRs; others face challenges in validating AI recommendations for clinical appropriateness. These are byproducts of moving too fast without aligning technical innovation with clinical governance.
Workforce Realignment, or Workforce Risk?
One of the more revealing moments came when Gleason described CMS’ internal talent shortages. As of January, the agency had 12 engineers to oversee thousands of contractors. “There’s no way that anybody can have any oversight,” she admitted. CMS is now actively recruiting from Silicon Valley and startups, encouraging tech talent to join for temporary “tours of duty.”
This talent infusion is necessary, but it also underlines the structural fragility of the current federal health tech apparatus. Large language models are already being deployed to thousands of federal employees, yet oversight, interpretability, and auditability remain inconsistent. A system that deploys AI tools before staffing its oversight mechanisms risks not only policy failure but also the erosion of public trust.
The False Trade-Off: Innovation or Regulation
O’Neill’s assertion that “faster is better” may resonate in a Silicon Valley boardroom, but it sits uneasily in a regulatory agency tasked with patient safety. The suggestion that speed and safety are mutually exclusive betrays a fundamental misunderstanding of healthcare’s complexity. Regulatory delay is often diligence in disguise.
The Food and Drug Administration, for example, has taken a more deliberate approach, publishing draft guidance on predetermined change control plans for AI/ML-based software. These frameworks allow for iterative development while ensuring traceability and validation. Similarly, the Office of the National Coordinator for Health Information Technology (ONC) has emphasized algorithmic transparency and data provenance as core to any trustworthy AI system.
Rather than bypassing these processes, HHS should be aligning its infrastructure investments with these guardrails. If AI tools are to become everyday companions for patients, as Gleason envisions, they must be vetted not only for utility, but for bias, accessibility, and unintended consequences. Otherwise, the government risks enabling a two-tier system: AI for those who can interpret it, and errors for those who can’t.
A Role Model in Waiting, or a Cautionary Tale?
The ambitions laid out by CMS and HHS are substantial: unify fractured provider directories, embed identity security into patient portals, and unleash private sector tools that simplify the patient experience. But if those ambitions aren't grounded in enforceable standards, they'll fall short of their promise; worse, they'll compound existing disparities in access, trust, and outcomes.
The future of AI in healthcare won't be defined by how fast it arrives, but by how well it works, and for whom. For HHS to truly lead, it must discard the false dichotomy between speed and safety and embrace a third path: strategic acceleration. That means regulating with agility, recruiting with purpose, and building infrastructure that prioritizes both access and accountability.
Speed matters, but only when it’s going in the right direction.