
Jim Younkin Explains What Responsible AI in Healthcare Really Requires

June 18, 2025
Image credit: ID 354316818 © Retrosesos | Dreamstime.com

Jim Younkin, Senior Director, Audacious Inquiry, a PointClickCare company

AI is moving fast, but governance is moving slow. That tension is at the heart of this week’s conversation with Jim Younkin, MBA, senior director at Audacious Inquiry, a PointClickCare company. Younkin brings nearly three decades of experience in health IT strategy and federal programs, and today he is one of the few voices speaking plainly about what responsible AI should mean in practice, not just theory.

In this Q&A, Younkin outlines five essential pillars of responsible AI in healthcare: accountability, fairness, transparency, human oversight, and privacy. But he doesn’t stop there. He walks through how these principles translate into real implementation decisions, like how to flag AI risk before it enters the workflow, or how to set metrics for ethical performance in patient-facing tools.

For provider organizations rushing to deploy AI solutions in documentation, triage, and clinical decision support, this interview is both a warning and a roadmap. Younkin argues that patient trust, regulatory readiness, and financial integrity all hinge on building AI into operations with a purpose-built foundation. And the time to do that is now, before regulators, auditors, or patients force the issue.

What are the components of a responsible AI policy, and why are they important in healthcare?

The five components of responsible AI use in healthcare are:

Accountability – Organizations are responsible for ensuring that the AI systems they create, deploy, or sell operate effectively and ethically. This goes beyond routine monitoring and requires clearly defined roles and accountability structures. Key elements include appointing leaders directly accountable for AI outcomes, keeping thorough and transparent records of how AI decisions are made, regularly assessing performance across varied patient groups, putting swift error-response protocols in place, and creating clear pathways for clinicians and patients to provide feedback.

Fairness – AI systems must be built to ensure equitable care and outcomes for every patient, regardless of race, gender, age, income level, or location. Achieving fairness involves understanding how existing healthcare disparities may be embedded in training data, consistently testing system performance across diverse populations, and making necessary adjustments when discrepancies arise (a simple stratified check of this kind is sketched after this list). Organizations should define specific fairness criteria and implement review procedures to help ensure that AI tools work to mitigate – rather than perpetuate – inequities in healthcare delivery.

Transparency – People have a right to understand how AI systems influence decisions – especially those related to the care they or their loved ones receive. Healthcare organizations must be open and transparent about how AI tools function, including the data they rely on and how they generate conclusions. Transparency also involves tailoring explanations to different audiences, i.e., providing technical details for IT teams and regulators, operational guidance for clinicians, and clear, accessible explanations for patients about how AI factors into their care. Organizations should maintain and share documentation on data sources, system limitations, performance benchmarks, and the specific role AI plays in clinical workflows. This level of openness helps build trust, supports effective oversight, and promotes ongoing refinement of AI systems.

Human oversight – AI systems should be designed to assist and strengthen human decision-making, rather than take its place. Healthcare providers must retain the authority to assess, confirm, or disregard AI-generated recommendations, relying on their clinical judgment and knowledge of each patient’s unique circumstances.

Privacy and security – AI systems bring unique privacy and security challenges that go beyond those of traditional healthcare data protection. Because they rely on large volumes of data to operate effectively, these systems expand both the amount of information collected and the potential risk of exposure. AI can also unintentionally uncover patterns that may compromise patient privacy in ways conventional systems don’t. Moreover, the same advanced pattern-recognition features that make AI powerful can, without proper security, increase the risk of re-identifying individuals from anonymized data. To safeguard patient information, organizations must implement dedicated protections and governance frameworks tailored specifically to the risks posed by AI – covering every stage from development to day-to-day operation.
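To make the subgroup testing mentioned under Fairness concrete, here is a minimal sketch in Python. It assumes model predictions are available in tabular form with a demographic grouping column; the recall metric, the column names, and the five-point disparity threshold are illustrative choices for demonstration, not requirements Younkin specifies.

```python
# Minimal sketch of stratified performance checks across patient subgroups.
# Assumes a pandas DataFrame with columns "group", "y_true", "y_pred";
# the 0.05 disparity threshold is illustrative, not a regulatory standard.
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_recall(df: pd.DataFrame, disparity_threshold: float = 0.05) -> pd.DataFrame:
    """Compute recall per subgroup and flag groups that trail the best-performing one."""
    rows = []
    for group, part in df.groupby("group"):
        rows.append({
            "group": group,
            "n": len(part),
            "recall": recall_score(part["y_true"], part["y_pred"]),
        })
    report = pd.DataFrame(rows)
    best = report["recall"].max()
    report["flagged"] = (best - report["recall"]) > disparity_threshold
    return report

if __name__ == "__main__":
    # Toy data: group B trails group A and gets flagged for review.
    data = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B"],
        "y_true": [1, 1, 0, 1, 1, 0],
        "y_pred": [1, 1, 0, 1, 0, 0],
    })
    print(subgroup_recall(data))
```

In practice, an organization would select metrics and thresholds that match the clinical task and its own documented fairness criteria.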

These pillars are particularly important in healthcare because of the high stakes involved and the necessity that people be able to trust the care they receive. AI does tremendous good when applied correctly, but it also can cause harm when misused. Following these guidelines allows AI to be highly effective while avoiding the sort of errors that could cause users and patients to lose faith in the technology.

How do you ensure human oversight remains effective when AI tools are integrated into clinical decision-making?

Effective human oversight requires both proper system design and ongoing training. AI tools should present recommendations with confidence levels and supporting rationale, allowing clinicians to quickly assess the quality of suggestions. Healthcare organizations should establish clear protocols for when AI recommendations can be accepted versus when additional human review is required. Regular training should help providers understand AI capabilities and limitations, ensuring they maintain appropriate skepticism while leveraging AI’s analytical power. For example, in diagnostic imaging, AI might flag potential abnormalities, but radiologists retain full authority to interpret findings within the clinical context.
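As an illustration of the kind of protocol described here, the following is a minimal sketch of confidence-based routing. The field names and the 0.90 auto-accept threshold are hypothetical, and even "auto-surfaced" recommendations remain subject to clinician override.

```python
# Minimal sketch of a confidence-based review protocol for AI recommendations.
# The dataclass fields and the 0.90 threshold are hypothetical, illustrating
# the idea that low-confidence outputs always route to a clinician.
from dataclasses import dataclass

@dataclass
class Recommendation:
    finding: str        # e.g., "possible nodule, right upper lobe"
    confidence: float   # model-reported confidence, 0.0 to 1.0
    rationale: str      # supporting evidence shown to the clinician

def route(rec: Recommendation, auto_accept_threshold: float = 0.90) -> str:
    """Decide whether a recommendation can be surfaced as-is or needs mandatory review."""
    if rec.confidence >= auto_accept_threshold:
        # Even high-confidence items remain suggestions; the clinician retains final authority.
        return "surface with rationale; clinician may confirm or override"
    return "flag for mandatory human review before any action"

if __name__ == "__main__":
    rec = Recommendation(
        finding="possible nodule, right upper lobe",
        confidence=0.62,
        rationale="density pattern similar to prior confirmed cases",
    )
    print(route(rec))  # -> flagged for mandatory human review
```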

What considerations should healthcare organizations have when implementing AI solutions?

AI should be implemented only as part of a well-thought-out plan with clear goals and measurable metrics. All affected groups should have input into how it will be used and should be trained in its use, including the relevant guidelines and safety measures. It's best to start with small pilot programs that offer a clear ROI if successful; that makes it easier to make any necessary adjustments before scaling up. Organizations should choose scalable technology that works with both departmental and enterprise-wide platforms as implementation expands. And, of course, they should use an AI architecture that meets healthcare compliance requirements. Organizations should also assess their data readiness and staff preparedness before implementation. This includes evaluating data quality, ensuring proper governance structures are in place, and identifying potential resistance to change. Success often depends as much on organizational culture and change management as on the technology itself.

To what extent are patients or the public involved in understanding how AI influences their care?

Patients and the public are cautious about AI involvement in their care, particularly when it comes to making decisions. They want a real-life clinician to make those calls, even if that clinician is informed by AI. Some states have passed or are considering legislation that requires disclosure of AI use. Providers should be transparent with patients about how they use AI in clinical decision-making and how it improves care. They'll probably find that many patients are already using AI themselves to research their conditions and treatments.

Patients seem less concerned about AI involvement in the non-clinical aspects of care, such as coding, billing, setting appointments, sharing data, streamlining processes, etc. If it improves their experience – and it does – I think they will support it. Patients want effective, accessible healthcare, and AI helps deliver that.

What are some of the AI use cases you’re seeing, and how effective have they been?

Many healthcare organizations have had success using AI for clinical documentation enhancement. Healthcare generates a tremendous amount of data, but it's often poorly structured, making it difficult for providers to find the most relevant information quickly, such as during an appointment.

Fine-tuned generative AI can extract and summarize the most relevant information from records, giving providers what they need to make informed care decisions while saving them the considerable time and effort it would have taken to search themselves.

Another example is the use of AI on help desks. Help desks are often a primary point of contact between patients and healthcare organizations, but the volume of calls and emails can be overwhelming. Delays in resolution can lead to frustration on the part of patients and even impact clinical operations.

AI has improved this situation considerably. Natural language processing automatically categorizes and prioritizes incoming tickets while AI search finds similar past tickets and their successful resolutions. A recommendation engine suggests responses based on past solutions and escalates complex cases for human review. This AI-powered system has reduced average resolution time by 40% for common issues.
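As an illustration of the "similar past tickets" step described above, here is a minimal sketch that uses TF-IDF cosine similarity as a stand-in for whatever retrieval a production help-desk system actually uses; the example tickets, resolutions, and escalation threshold are invented for demonstration.

```python
# Minimal sketch of the "find similar past tickets, else escalate" step.
# TF-IDF cosine similarity stands in for the production retrieval method;
# the 0.3 escalation threshold and sample tickets are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_tickets = [
    "password reset for patient portal",
    "cannot upload lab results to chart",
    "appointment reminder emails not sending",
]
resolutions = [
    "send self-service reset link",
    "clear browser cache and retry upload",
    "re-enable notification job in scheduler",
]

def suggest_resolution(new_ticket: str, threshold: float = 0.3) -> str:
    """Suggest a past resolution if a similar ticket exists; otherwise escalate."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(past_tickets + [new_ticket])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    best = sims.argmax()
    if sims[best] < threshold:
        return "no close match; escalate to human agent"
    return f"suggested resolution: {resolutions[best]} (similarity {sims[best]:.2f})"

if __name__ == "__main__":
    print(suggest_resolution("patient portal login keeps failing"))
```

A production system would add ticket categorization and prioritization up front and route low-similarity or sensitive cases to human agents, consistent with the escalation step described above.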