AI in Healthcare Is Moving Fast, but Trust Is Moving Slowly

Artificial intelligence is no longer a theoretical capability for healthcare systems. It has moved beyond pilot projects and vendor demonstrations into live, day-to-day workflows. From pre-visit patient history summarization to automated claims processing, AI is showing up in both the exam room and the back office.
Yet while the technology is maturing quickly, adoption at scale is not. The gap is increasingly defined by trust, or the lack of it, among clinicians, patients, and regulators. Many healthcare leaders now face a paradox: they hold tools with proven efficiency potential, yet encounter deep skepticism about those tools' safety, reliability, and purpose.
The most valuable use cases right now
In 2024, several AI applications have emerged as credible, implementable solutions that address persistent operational and clinical challenges. Ambient listening technology, for example, is moving from early adoption to mainstream deployment. Instead of typing notes during a patient encounter, physicians can allow AI to transcribe the conversation, extract structured data, and generate a clinical note for review. This not only reduces documentation time but also supports more natural patient interactions, something that aligns with the American Medical Association’s finding that excessive EHR work is a leading driver of burnout.
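To make that workflow concrete, here is a minimal Python sketch of the transcribe-extract-review loop described above. The function names and note fields are hypothetical illustrations, not any vendor's API, and the draft is explicitly held for clinician sign-off rather than filed automatically.

```python
# Hypothetical sketch of an ambient documentation pipeline:
# transcript -> structured fields -> draft note awaiting clinician review.
from dataclasses import dataclass


@dataclass
class DraftNote:
    structured: dict
    text: str
    status: str = "pending_clinician_review"  # AI output is reviewed, never auto-signed


def extract_structured_fields(transcript: str) -> dict:
    """Toy extraction; a production system would use a clinical speech/NLP model."""
    fields = {"chief_complaint": "", "plan": ""}
    for line in transcript.splitlines():
        speaker, _, utterance = line.partition(":")
        if speaker.strip().lower() == "patient" and not fields["chief_complaint"]:
            fields["chief_complaint"] = utterance.strip()
    return fields


def draft_clinical_note(transcript: str) -> DraftNote:
    fields = extract_structured_fields(transcript)
    text = f"Chief complaint: {fields['chief_complaint']}\nPlan: {fields['plan'] or 'TBD'}"
    return DraftNote(structured=fields, text=text)


if __name__ == "__main__":
    sample = "Patient: I've had a dry cough for two weeks.\nPhysician: Any fever or chills?"
    note = draft_clinical_note(sample)
    print(note.status)
    print(note.text)
```

The key design point is the `status` field: the pipeline produces a draft, and the physician remains the author of record.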
Pre-visit preparation is another high-value area. AI-powered summarization tools can process large volumes of patient history, including records from external sources, to provide a concise, clinically relevant overview before the encounter. That means providers spend less time combing through fragmented documents from health information exchanges or other providers, and more time preparing for targeted, high-quality conversations.
On the administrative side, AI can be deployed to automate low-risk, repetitive tasks such as responding to common patient inquiries or managing billing workflows. According to a McKinsey analysis, automating even a portion of these functions could save U.S. health systems billions annually, freeing up staff capacity for higher-value work.
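As a sketch of how "low-risk" can be made operational, the snippet below routes incoming patient messages: anything matching a clinical-risk term goes to a human queue, and only clearly administrative questions receive an automated reply. The keyword lists are illustrative assumptions, not a validated triage model.

```python
# Hypothetical triage for patient inquiries: automate only clearly low-risk,
# administrative questions; escalate anything that looks clinical to staff.
ESCALATION_TERMS = {"chest pain", "bleeding", "suicidal", "shortness of breath"}  # illustrative
ROUTINE_TOPICS = {"billing", "appointment", "refill", "insurance", "statement"}   # illustrative


def route_inquiry(message: str) -> str:
    text = message.lower()
    if any(term in text for term in ESCALATION_TERMS):
        return "human_queue"          # clinical risk: never auto-answer
    if any(topic in text for topic in ROUTINE_TOPICS):
        return "automated_reply"      # low-risk administrative request
    return "human_queue"              # default to a person when unsure


if __name__ == "__main__":
    print(route_inquiry("Can I get a copy of my billing statement?"))   # automated_reply
    print(route_inquiry("I have chest pain after my procedure."))       # human_queue
```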
The adoption barrier is cultural, not just technical
Despite this progress, adoption is far from universal. Clinicians remain cautious about integrating AI into their workflow. In some cases, they worry about accuracy — particularly with general-purpose AI models that can produce incorrect or irrelevant results, a problem often referred to as “hallucination.” Others fear that reliance on AI could erode clinical judgment or be used to justify staff reductions.
Patient trust is also fragile. A Pew Research Center survey found that 60% of Americans would feel uncomfortable if their healthcare provider relied on AI for diagnosis or treatment recommendations. Even when AI is used behind the scenes, transparency about how it informs care decisions is essential to avoid perceptions of secrecy or overreach.
Building adoption therefore requires a deliberate cultural change strategy. Healthcare organizations need to engage all stakeholders early, listen to their concerns, and incorporate their feedback into deployment plans. As ONC guidance has emphasized, participatory design (involving clinicians and patients in system development) can accelerate adoption by fostering a sense of ownership.
Regulatory complexity adds friction
Beyond trust, regulatory fragmentation poses a significant challenge. AI development and deployment must navigate a patchwork of state-level rules, some of which conflict with federal policy. This increases development costs, complicates implementation, and slows innovation.
For example, privacy and security requirements under HIPAA intersect with new state-level consumer privacy laws, creating overlapping obligations that can be difficult to reconcile. Developers and health systems must maintain compliance across all jurisdictions where they operate, which can mean maintaining multiple versions of the same tool.
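One common way to avoid forking the product per state is to keep a single codebase and externalize jurisdiction-specific obligations into configuration. The sketch below layers per-state overrides on a baseline policy; the field names and values are illustrative placeholders, not legal guidance.

```python
# Hypothetical jurisdiction policy table: one codebase, per-state settings
# layered on top of a baseline policy. All values are placeholders only.
BASELINE = {
    "audit_logging": True,
    "ai_disclosure_to_patient": False,
    "retention_days": 2190,  # illustrative retention window
}

STATE_OVERRIDES = {
    "CA": {"ai_disclosure_to_patient": True},  # illustrative
    "TX": {"retention_days": 2555},            # illustrative
}


def effective_policy(state: str) -> dict:
    """Merge the baseline with any state-specific overrides."""
    policy = dict(BASELINE)
    policy.update(STATE_OVERRIDES.get(state, {}))
    return policy


if __name__ == "__main__":
    print(effective_policy("CA"))
    print(effective_policy("NY"))  # falls back to the baseline
```

The benefit is operational: compliance changes become configuration updates rather than parallel versions of the same tool.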
Several policy experts have called for a more unified, federally led framework for AI in healthcare. Such an approach could set consistent standards for safety, transparency, and accountability while enabling faster innovation. The FDA has already taken steps toward regulating certain AI-based medical devices, but broader guidance for non-device applications remains under discussion.
What’s at stake for leadership
For healthcare executives, AI adoption decisions are high-stakes investments. The right deployment can reduce burnout, improve patient satisfaction, and create measurable efficiency gains. The wrong one can lead to wasted capital, low utilization, and reputational risk.
Success will hinge on embedding AI in ways that respect workflow realities. That means choosing use cases that address immediate, high-friction pain points rather than chasing novelty. It also means ensuring AI is explainable, interoperable, and demonstrably accurate in real-world conditions.
Next week, Ben Scharfe, EVP of AI Initiatives at Altera Digital Health, will share his perspective on where AI delivers the most value today — and the risks healthcare leaders must anticipate. His insights will illustrate how targeted, domain-specific AI can support clinical decision-making and operational efficiency without undermining trust.