
As artificial intelligence (AI) continues to transform clinical decision-making, administrative workflows, and payer operations, one unsettling truth remains: there is still no national regulatory framework for its use in healthcare. With federal oversight slow to materialize, states are beginning to write their own rules, creating a fragmented compliance environment that puts pressure on health systems and digital health vendors alike.
From algorithmic coverage denials to AI-powered diagnostics, the stakes are growing. And as the 2025 legislative cycle gains momentum, state lawmakers, patient advocacy groups, and healthcare technologists are increasingly clashing over how to balance innovation, equity, and accountability in the age of medical AI.
States Take the Lead
In May 2024, Colorado became the first state to enact a comprehensive AI law targeting “consequential decisions,” including healthcare applications. The law requires both developers and deployers of AI systems to assess and mitigate algorithmic bias, provide clear audit trails, and notify consumers when they are subject to automated decision-making.
“The federal government is particularly ineffective and slow these days,” said Colorado State Representative Brianna Titone, a lead sponsor of the bill, in an April interview with Axios. “The states really need to step up.”
Other states are following suit. Utah’s Office of AI is now reviewing mental health chatbots used in Medicaid programs. California is advancing legislation to curb insurers’ use of AI for prior authorization, while Oklahoma is building AI standards into its state Medicaid system contracts.
Professional associations are stepping in as well. This spring, state medical and osteopathic boards adopted recommendations on AI ethics, documentation, and liability—guidance expected to shape licensing and disciplinary actions in the future.
Compliance Fragmentation: A Tech Headache
Healthcare organizations operating across state lines now face an emerging compliance maze. With no uniform federal standard, vendors may need to support as many as 50 different state interpretations of ethical AI deployment.
“You can’t just copy and paste a law into someone else’s statute book and expect it to work exactly the same,” Titone cautioned. “That’s especially true for healthcare.”
For enterprise IT teams, this means updating algorithms to meet varying definitions of transparency and fairness. For example, a patient-facing AI tool deemed compliant in Texas may need major documentation upgrades to meet new requirements in Colorado or California.
“We’re having to build jurisdiction-aware governance frameworks,” said Diana Velasquez, Chief Compliance Officer at Tempus, a precision medicine AI firm. “It’s more overhead, but without it, we risk being shut out of entire markets.”
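What a jurisdiction-aware governance check looks like in code will differ by organization, and the sketch below is only a minimal illustration rather than Tempus's actual framework. It assumes hypothetical per-state requirements (consumer notification, bias-audit documentation, a human review path) and reports which ones a given deployment has not yet satisfied.

```python
# Minimal sketch of a jurisdiction-aware compliance check.
# The per-state requirements below are illustrative placeholders, not legal guidance.
from dataclasses import dataclass

@dataclass
class StateAIRules:
    requires_consumer_notice: bool = False    # notify patients about automated decisions
    requires_bias_audit: bool = False         # documented bias assessment on file
    requires_human_review_path: bool = False  # route for appealing an AI-driven decision

# Hypothetical rule set; real obligations come from each state's statute and counsel.
STATE_RULES = {
    "CO": StateAIRules(requires_consumer_notice=True, requires_bias_audit=True,
                       requires_human_review_path=True),
    "CA": StateAIRules(requires_consumer_notice=True, requires_human_review_path=True),
    "TX": StateAIRules(),
}

@dataclass
class Deployment:
    state: str
    consumer_notice_shown: bool = False
    bias_audit_on_file: bool = False
    human_review_available: bool = False

def compliance_gaps(deployment: Deployment) -> list[str]:
    """Return the state requirements this deployment does not yet satisfy."""
    rules = STATE_RULES.get(deployment.state, StateAIRules())
    gaps = []
    if rules.requires_consumer_notice and not deployment.consumer_notice_shown:
        gaps.append("consumer notice")
    if rules.requires_bias_audit and not deployment.bias_audit_on_file:
        gaps.append("bias audit documentation")
    if rules.requires_human_review_path and not deployment.human_review_available:
        gaps.append("human review path")
    return gaps

# The same tool, fully compliant in one state, can have open gaps in another.
print(compliance_gaps(Deployment(state="TX")))   # []
print(compliance_gaps(Deployment(state="CO")))   # all three requirements still open
```

The design point is simple: jurisdiction becomes an explicit input to the product, checked automatically, rather than something reconciled after the fact.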
Federal Activity Remains Patchy
Despite growing pressure, federal regulators have moved slowly. The Biden-era Executive Order on Safe, Secure, and Trustworthy AI—rescinded by the Trump administration in early 2025—had established a Chief AI Officer role within federal agencies and called for robust risk-based assessments.
The Trump administration has since replaced that policy with a new AI Action Plan, but critics say it lacks a healthcare-specific framework. Notably, the plan’s drafting team does not include representatives from HHS, CMS, or the FDA.
In March 2025, HIMSS reiterated its call for national AI policy during the HIMSS Global Conference, warning that the lack of standardization will only grow more dangerous as clinical use cases accelerate.
“We have to make our voices heard,” said Jonathan French, HIMSS senior director of public policy. “Healthcare has unique risks and opportunities that require tailored oversight—not one-size-fits-all regulation.”
Risk, Liability, and Patient Impact
Without guardrails, the real-world risks are multiplying. In 2024, an insurer’s AI tool used to expedite prior authorization decisions was shown to deny claims for certain cancer therapies at triple the usual rate. In another case, a hospital’s sepsis detection algorithm missed key signs in non-white patients due to unbalanced training data.
Such incidents are pushing legal and ethical debates to the forefront. Plaintiffs in several states are suing health plans and hospitals for AI-related harm, claiming violations of civil rights, denial of care, and negligence.
“It’s not enough to say the algorithm made a mistake,” said ethicist Dr. Leah Warner of the Hastings Center. “Healthcare organizations are still responsible for what their tools do—especially when patients suffer harm.”
The HIMSS Policy Framework
To support thoughtful AI deployment, HIMSS has released a set of AI Policy Principles, including:
- Risk-based oversight tailored to clinical use
- Bias testing and mitigation in model development (a simplified example follows this list)
- Deployment monitoring with real-time feedback loops
- Data harmonization across systems and vendors
- Strong privacy, cybersecurity, and disclosure requirements
- Equity-first design principles that avoid denial of care based on protected characteristics
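To make one of these principles concrete, the sketch below shows a simplified version of the bias testing described in the second bullet: it compares a model's false-negative rate across patient groups and flags any group the model misses substantially more often than the best-served group. The group labels, toy data, and disparity threshold are illustrative assumptions, not HIMSS-specified values.

```python
# Minimal sketch of a bias test: compare false-negative rates across patient groups.
# Group labels, data, and the disparity threshold are illustrative assumptions only.
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, actual_positive, predicted_positive) tuples."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, actual, predicted in records:
        if actual:
            positives[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives if positives[g]}

def flag_disparities(rates, max_gap=0.05):
    """Flag groups whose false-negative rate exceeds the best group's by more than max_gap."""
    best = min(rates.values())
    return [g for g, r in rates.items() if r - best > max_gap]

# Toy data: (group, truly positive case, flagged by the model)
records = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]
rates = false_negative_rates(records)
print(rates)                    # group_b is missed twice as often as group_a
print(flag_disparities(rates))  # ['group_b']
```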
The organization also supports workforce development initiatives to ensure providers, IT teams, and administrators understand how AI systems function—and how to intervene when they don’t.
With no national AI law expected in 2025, organizations must prepare for a patchwork reality. That means building AI audit logs, documentation pipelines, and model explainability protocols into product and workflow design now—not later.
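What an AI audit log should capture will vary by organization and use case. The sketch below is one hedged illustration of recording an automated decision with enough context (model version, jurisdiction, de-identified inputs, and any human override) to reconstruct it later; the field names are assumptions, not a regulatory schema.

```python
# Minimal sketch of an AI decision audit-log entry; field names are illustrative
# assumptions, not a regulatory or industry standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionLogEntry:
    model_name: str        # which model produced the decision
    model_version: str     # exact version, so the decision can be reproduced
    jurisdiction: str      # state whose rules applied at decision time
    input_summary: dict    # de-identified features used, for later review
    decision: str          # what the system recommended or decided
    confidence: float      # model confidence score
    human_override: bool   # whether a clinician or reviewer changed the outcome
    timestamp: str = ""

    def to_json(self) -> str:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

# Example: record a prior-authorization recommendation for later audit.
entry = AIDecisionLogEntry(
    model_name="prior_auth_triage",
    model_version="2.3.1",
    jurisdiction="CO",
    input_summary={"procedure_code": "XYZ", "diagnosis_group": "oncology"},
    decision="route_to_human_review",
    confidence=0.62,
    human_override=False,
)
print(entry.to_json())
```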
For some, it’s a compliance burden. For others, it’s a competitive differentiator.