
Certification Is Coming for Healthcare AI. Will Systems Be Ready?

July 30, 2025

Mark Hait, Contributing Editor

The rapid expansion of artificial intelligence in healthcare has outpaced the guardrails needed to ensure its responsible use. With clinical applications of AI now spanning from diagnostics to administrative optimization, the long-promised benefits of data-driven care are beginning to materialize. But so are the risks: operational, ethical, regulatory, and reputational. A new partnership between The Joint Commission and the Coalition for Health AI (CHAI) aims to confront that imbalance head-on.

Announced in June 2025, the partnership will co-develop industry-wide guidance, tools, and a certification program to help more than 80 percent of U.S. healthcare organizations implement AI responsibly. Unlike vendor-led governance frameworks or institution-specific risk playbooks, this effort is structured to unify the field under a scalable, evidence-based standard. At stake is not only the trustworthiness of AI but also the operational feasibility of embedding it at scale.

From Fragmented Pilots to Systemic Accountability

As of 2024, nearly half of U.S. health systems had already initiated generative AI pilots, according to Fierce Healthcare. Use cases range from ambient documentation to early detection of clinical deterioration. But these deployments often emerge from isolated innovation teams, with minimal input from compliance, risk, or clinical governance leaders.

That fragmented approach poses growing risks. A recent Health Affairs study found that fewer than 30 percent of hospital-based AI deployments had completed a formal risk assessment or equity audit. Even fewer had patient-facing disclosure protocols or escalation frameworks for model failure. Without shared accountability structures, organizations face widening exposure to litigation, bias-related harm, and clinician distrust.

The Joint Commission's entry into this space represents a pivotal shift. Known for its deep regulatory integration and legacy of safety standardization, the organization brings more than brand recognition. It brings teeth. By building AI guidance into its accreditation framework and operational checklists, the Commission can make responsible AI deployment not merely advisable but expected.

CHAI’s Consensus Model Meets Regulatory Gravity

While The Joint Commission lends regulatory clout, CHAI brings breadth and agility. Formed by clinicians and researchers, CHAI has spent the past several years developing open-access frameworks on algorithmic fairness, transparency, and monitoring. Its network of over 3,000 member organizations spans health systems, academic medical centers, startups, and patient advocacy groups, forming an ecosystem well-positioned to stress-test both technical guidance and cultural adoption.

CHAI’s collaborative posture aligns with calls from the National Academy of Medicine for co-designed AI governance frameworks that address clinical, operational, and population health dimensions simultaneously. Unlike proprietary toolkits that cater to specific vendors or enterprise platforms, CHAI’s outputs are deliberately vendor-agnostic and implementation-neutral. This makes them more adaptable across geographies, care settings, and resource environments.

That neutrality will be critical as The Joint Commission and CHAI begin certifying healthcare organizations. Rather than certifying individual AI tools (an approach already underway at the FDA), the partnership will assess whether institutions have the policies, oversight structures, and response mechanisms needed to deploy AI safely in practice.

Certification as a Catalyst, Not a Checkmark

Industry watchers may be tempted to view this new AI certification as just another compliance hurdle. But that reading misses the larger strategic pivot. In a field saturated with innovation pilots and digital transformation rhetoric, what healthcare needs now is a path to operational maturity. Certification can offer more than reputational legitimacy. It can serve as a forcing function for system-wide alignment.

Embedding certification within The Joint Commission’s platform introduces two structural advantages. First, it leverages existing institutional rhythms of audit, improvement, and leadership accountability. Second, it creates a common reference point for payers, regulators, and patients to assess whether AI is being used safely and equitably.

As CMS continues evaluating how AI tools intersect with Medicare Advantage fraud prevention and risk adjustment models, a Joint Commission-backed certification may even evolve into a precondition for reimbursement in certain contexts. That kind of policy tether could transform the adoption curve of responsible AI from optional to obligatory.

The Missing Stakeholder: EHR and Vendor Integration

One notable absence in the initial announcement is the role of electronic health record (EHR) vendors. While health systems may be the implementers, many AI tools are embedded directly into clinical workflows through EHR platforms. Without transparent cooperation from EHR vendors, certification efforts risk being undermined by black-box integrations and opaque update cycles.

Recent GAO investigations into AI in federal health programs have underscored the importance of traceability and vendor transparency. If EHR systems cannot reliably document model provenance, update logs, and audit trails, organizational certification may struggle to fulfill its intended goals.

The Joint Commission and CHAI would be wise to formalize vendor expectations early, perhaps even offering parallel certification pathways for platform providers. Doing so would help ensure that risk management doesn’t stop at the hospital firewall but extends through the entire data and infrastructure chain.

Toward a System Where Trust Is Engineered, Not Assumed

As AI tools reshape how clinical decisions are made, trust can no longer be a passive byproduct of innovation. It must be built explicitly, audited continuously, and governed transparently. The Joint Commission–CHAI partnership is a strong step toward that reality.

But adoption alone will not ensure safety. Without institutional rigor, even well-intentioned AI tools can deepen disparities, disrupt workflows, and erode patient confidence. Certification offers a path to avoid that future, not by blocking innovation but by channeling it through frameworks that respect complexity and prioritize accountability.

The first wave of guidance is due this fall. What follows will test not only how ready healthcare is for AI, but how ready AI is for healthcare.