Why Responsible AI Starts Before the First Algorithm

There is no shortage of AI in healthcare, but there is a shortage of discipline in how it is applied. As health systems accelerate adoption of machine learning tools for documentation, triage, imaging analysis, and patient engagement, many are also hitting the hard ceiling that appears when innovation outpaces governance.
The stakes are rising. Nearly 80 percent of health executives plan to increase investments in generative AI over the next year, according to a 2024 Accenture Digital Health Tech Vision report. Yet fewer than half of those organizations have formal policies in place to evaluate algorithmic bias, manage explainability, or define appropriate use boundaries in clinical decision support.
In other words, AI is scaling, but without scaffolding.
And regulators are noticing. In April 2025, the Office of the National Coordinator for Health IT (ONC) released new guidance urging all certified health IT developers and implementers to adopt risk-based frameworks for AI safety and equity. The agency’s emphasis on “transparency, testability, and user comprehension” marked a sharp pivot toward accountability infrastructure, not just functionality. It also set the stage for deeper federal scrutiny into how clinical AI tools are used, and misused.
This is no longer a theoretical debate. In real-world deployments, fairness failures in AI have already led to disparities in access and outcomes. A landmark 2019 study by Obermeyer et al. in Science found that an algorithm widely used in U.S. hospitals systematically underestimated the care needs of Black patients, allocating them fewer resources than white patients with comparable health status received. That algorithm had reached millions of patients before the bias was uncovered.
The industry is slowly learning the lesson: transparency and governance must be baked into AI strategy from day one, not retrofitted in response to harm.
That is the subject of next week’s Q&A with Jim Younkin, MBA, Senior Director at Audacious Inquiry, a PointClickCare company. Younkin has spent nearly three decades navigating the operational, ethical, and compliance implications of health IT, including interoperability, software development, and now AI governance. In this interview, he lays out a five-part model for responsible AI in healthcare: accountability, fairness, transparency, human oversight, and privacy.
His approach cuts through the hype and speaks directly to the C-suite tension: how to embrace automation without surrendering clinical judgment, operational control, or patient trust.
Younkin also addresses the practical implementation questions many leaders now face. How do you balance AI-enabled efficiency with auditability? What metrics should you set before pilot deployment? And how do you ensure clinicians, staff, and even patients understand where the machine ends and human care begins?
These questions are front and center in the race to scale AI responsibly.
As federal agencies harden their posture on safety and equity, as patients grow more aware of automation in their care, and as internal stakeholders demand clarity on risk, the organizations that lead will not be those with the flashiest models. They will be the ones that operationalize trust at scale.
And that begins with governance.
Next week, we share Younkin’s full interview. The week after, we will follow up with an editorial conclusion that looks at where responsible AI policy is heading, what infrastructure is still missing, and how health systems can begin building now for the regulations that are almost certainly coming.