AI Certification Marks the Beginning of Real Governance in Clinical Technology

June 12, 2025

Mark Hait, Contributing Editor

The Joint Commission and the Coalition for Health AI have initiated what may become the most influential governance infrastructure for artificial intelligence in healthcare to date. Their new partnership, which will produce operational playbooks and a national certification program, signals a shift away from speculative use of AI tools toward institutional accountability.

The timing aligns with a broader industry movement to formalize oversight. The American Medical Association has issued a policy demanding that clinical AI tools be explainable to physicians and verified by independent entities. Meanwhile, the National Academy of Medicine has published an AI Code of Conduct aimed at codifying principles for safety, transparency, and patient trust. But unlike these frameworks, which remain advisory, the Joint Commission’s entry could tie compliance to accreditation status. That changes everything.

Certification will force a reckoning with AI risk

Certification programs are often treated as soft governance, but in practice they function as gatekeeping mechanisms. For AI in healthcare, where models are embedded in workflows that shape diagnostics, treatment pathways, and resource prioritization, the absence of standards is no longer sustainable.

The partnership will produce its first set of best practices in Fall 2025, with certification to follow. In theory, certification will cover safety, explainability, oversight mechanisms, and integration practices. In practice, it will expose how few institutions have comprehensive AI governance today. Most health systems cannot currently identify which models they are using, what data they were trained on, or who is responsible for monitoring their performance.

The certification framework will draw on the Joint Commission’s platform for evidence-based standards and CHAI’s consensus process, which includes participation from more than 2,800 health entities across sectors. CHAI’s prior work includes guidance on risk stratification, model validation, and implementation guardrails, as outlined in its 2024 Blueprint for Trustworthy AI in Health.

Explainability is no longer a developer’s promise

The most consequential development is the AMA’s demand for explainability to be assessed by independent third parties. This is a direct response to the tendency of AI developers to self-certify models as interpretable without disclosing underlying logic. The AMA policy echoes similar language in the OECD’s AI principles and the recently enacted European Union AI Act, both of which treat explainability as a precondition for risk-tier classification.

For US health systems, explainability will move from a product feature to an operational requirement. Certification will force clinical leaders, compliance officers, and digital governance boards to adjudicate whether the tools in use meet minimum transparency thresholds. This also raises the bar for vendors. AI suppliers will need to produce defensible audit trails, accessible documentation, and clinical rationales that align with defined performance criteria.
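To make "defensible audit trail" concrete, here is a minimal sketch of what a per-inference audit record might capture. The field names and the log_inference helper are hypothetical illustrations, not drawn from any Joint Commission, CHAI, or AMA specification; it assumes an append-only JSON-lines log file.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical append-only log file

def log_inference(model_id: str, model_version: str, inputs: dict,
                  output, rationale: str, reviewer: str | None = None) -> dict:
    """Append one structured audit record per model inference (illustrative only)."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the inputs rather than storing raw patient data in the audit trail.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "clinical_rationale": rationale,   # plain-language explanation shown to clinicians
        "human_reviewer": reviewer,        # who, if anyone, reviewed the recommendation
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Even a simple record like this answers the questions certifiers are likely to ask: which model and version produced the output, what it saw, what rationale was surfaced, and whether a human reviewed it.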

The biggest problems have yet to be solved

Model drift, institutional accountability, and health equity remain unresolved. Continuous learning models complicate recertification schedules. Governance boards inside most hospitals lack clear authority or resources to review deployed tools. And bias detection at population scale has no shared methodology.
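To illustrate why drift complicates recertification, the sketch below computes a population stability index (PSI), one common statistic for comparing a model's current score distribution against its validation-time baseline. This is a minimal illustration, not a prescribed method; the roughly 0.2 alert threshold mentioned in the comment is an informal rule of thumb, not a certification requirement.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray,
                               bins: int = 10, eps: float = 1e-6) -> float:
    """Compare current model scores to the validation-time baseline distribution."""
    # Bin edges come from quantiles of the reference (validation-time) scores.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch scores outside the original range
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference) + eps
    cur_frac = np.histogram(current, bins=edges)[0] / len(current) + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Example: scores logged at go-live vs. scores from the most recent month.
# A PSI above ~0.2 is a common informal trigger for review or recalibration.
```

A continuously learning model can cross a threshold like this between scheduled recertification cycles, which is exactly the governance gap the new framework will have to address.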

These are both technical and structural problems. The White House Office of Science and Technology Policy has published a Blueprint for an AI Bill of Rights that calls for protections against algorithmic discrimination, but it carries no enforcement authority. The new Joint Commission certification may offer the clearest pathway yet to making those protections real inside care environments.

Health systems should not wait for playbooks to arrive

While the first tranche of guidance is expected later this year, institutions should already be working to establish the basic architecture of AI governance. That includes creating a cross-functional AI review board, inventorying active algorithms, and assessing risk tiers using emerging frameworks such as the NIST AI Risk Management Framework.
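As a starting point for that inventory, the sketch below shows one possible record structure a review board could maintain. The fields and the three-tier scale are illustrative assumptions, not terminology from the NIST AI Risk Management Framework or the forthcoming playbooks.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers; actual tiers should follow whatever framework the board adopts."""
    LOW = "low"            # administrative or back-office use
    MODERATE = "moderate"  # informs but does not direct clinical decisions
    HIGH = "high"          # directly influences diagnosis, treatment, or triage

@dataclass
class AlgorithmRecord:
    """One entry in a health system's AI inventory (hypothetical fields)."""
    name: str
    vendor: str
    clinical_use: str          # where in the workflow the model is used
    training_data_source: str  # provenance of the data the model was trained on
    responsible_owner: str     # named person accountable for monitoring performance
    risk_tier: RiskTier
    last_performance_review: str | None = None

# Example entry (all values hypothetical):
inventory = [
    AlgorithmRecord(
        name="sepsis-early-warning",
        vendor="ExampleVendor",
        clinical_use="ED triage alerting",
        training_data_source="vendor-supplied multi-site EHR cohort",
        responsible_owner="Chief Quality Officer",
        risk_tier=RiskTier.HIGH,
    )
]
```

Simply being able to produce a list like this, with a named owner and a documented data source for every deployed model, addresses the gap noted earlier: most health systems today cannot say which models they run or who is accountable for them.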

For vendors, certification will separate mature companies from speculative ones. Claims about AI safety and fairness will need to be substantiated in a regulatory-style dossier that can survive external scrutiny.

The Joint Commission and CHAI are not writing a rulebook for the future. They are defining a new operational baseline. For any health system planning to use AI as a clinical tool rather than a marketing asset, this is no longer optional. The governance era has arrived.