Why Healthcare’s AI Future Still Needs a Blueprint

Despite record levels of experimentation and investment, most health systems remain underprepared to operationalize artificial intelligence at scale. New data from a joint report by the Healthcare Financial Management Association (HFMA) and Eliciting Insights shows that while 88% of organizations are using AI in some form, only 18% have reached maturity, a status the report defines as having both a late-stage governance framework and a defined enterprise AI strategy.
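That two-part definition is concrete enough to state as logic. Below is a minimal Python sketch of the report's maturity test; the `HealthSystem` type and its field names are hypothetical stand-ins, and only the two criteria themselves come from the report.

```python
from dataclasses import dataclass

@dataclass
class HealthSystem:
    """Illustrative record of an organization's AI posture; fields are hypothetical."""
    name: str
    uses_ai: bool                      # any AI use, as in the report's 88% figure
    governance_stage: str              # e.g., "none", "early", or "late"
    has_enterprise_ai_strategy: bool   # a defined enterprise-wide AI strategy

def is_mature(org: HealthSystem) -> bool:
    """The report's two-part maturity test: a late-stage governance
    framework AND a defined enterprise AI strategy."""
    return org.governance_stage == "late" and org.has_enterprise_ai_strategy

# An organization piloting AI without governance fails the test.
pilot = HealthSystem("Example Health", uses_ai=True,
                     governance_stage="early",
                     has_enterprise_ai_strategy=False)
assert not is_mature(pilot)
```

The conjunction is the point: either element alone, a strategy document without governance or governance without an enterprise strategy, leaves an organization in the 70% that use AI but do not meet the bar.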
This gap reveals a crucial friction point in healthcare AI: adoption is outpacing alignment. Clinical and administrative functions are increasingly experimenting with AI-driven tools, but the foundational structures needed to ensure safe, scalable, and compliant deployment are still missing in most organizations. The implications are not merely operational—they are strategic, financial, and regulatory.
Widespread Pilots, Shallow Strategy
The report’s findings highlight a paradox in AI development. Nearly three-quarters of surveyed organizations have launched pilots or implemented AI in finance, revenue cycle, or clinical functions. Yet fewer than one in five health systems have built the governance scaffolding needed to manage risk, define priorities, or scale solutions effectively.
Much of this dissonance stems from the nature of AI implementation in healthcare. Unlike a traditional IT upgrade or EHR rollout, AI is not a singular product. It is a capability that cuts across operational, clinical, and compliance domains, requiring cross-functional leadership and infrastructure alignment from the start. Without that alignment, organizations face siloed experimentation, redundant investments, and increased exposure to regulatory risk.
This is especially problematic given healthcare’s complex regulatory landscape and the fragmented nature of patient data. As JAMA has emphasized, AI governance in healthcare must address not only technical integration but also data provenance, patient privacy, and algorithmic transparency. Without mature frameworks in place, the potential for unintended clinical or legal consequences grows with every deployment.
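To make those governance concerns concrete, here is a minimal sketch of what a pre-deployment record addressing them might look like. The `ModelGovernanceRecord` type, its fields, and the `review_gaps` checks are hypothetical illustrations; only the three concerns (data provenance, patient privacy, algorithmic transparency) come from the article.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelGovernanceRecord:
    """Hypothetical pre-deployment record covering the three concerns the
    article cites: data provenance, patient privacy, and transparency."""
    model_name: str
    intended_use: str
    training_data_sources: list[str]         # data provenance
    contains_phi: bool                       # patient privacy
    deidentification_method: Optional[str]   # e.g., "HIPAA Safe Harbor"
    validation_population: str               # algorithmic transparency
    last_reviewed: date

    def review_gaps(self) -> list[str]:
        """Flag missing governance artifacts before a tool goes live."""
        gaps = []
        if not self.training_data_sources:
            gaps.append("no documented data provenance")
        if self.contains_phi and not self.deidentification_method:
            gaps.append("PHI present with no de-identification method")
        if not self.validation_population:
            gaps.append("no documented validation population")
        return gaps

record = ModelGovernanceRecord(
    model_name="sepsis-risk-v1", intended_use="early sepsis alerting",
    training_data_sources=[], contains_phi=True,
    deidentification_method=None, validation_population="",
    last_reviewed=date(2025, 1, 1))
print(record.review_gaps())  # all three gaps are flagged
```

A mature governance framework would maintain and audit a record like this for every deployed algorithm; an immature one discovers the gaps after an incident.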
Resource Gaps and Vendor Dependence
Even among those with a declared AI strategy, few organizations are equipped to execute. The HFMA report reveals that more than 80% of health systems lack sufficient resources to implement AI effectively, a shortfall that spans both technical staff and operational bandwidth.
In the absence of internal capabilities, most organizations lean heavily on vendors for guidance. According to the report, 70% of hospitals still working toward AI maturity rely on external partners to identify opportunities and support deployment. While vendor relationships can accelerate implementation, they also introduce dependency risks, particularly when multiple, uncoordinated platforms are introduced into a complex care environment.
This fragmentation creates what Ensemble Health Partners CEO Judson Ivy calls “non-value-added work,” particularly in revenue cycle operations, where payer-provider friction already drains clinical and financial resources. Without coordinated AI architecture, every new solution introduces potential for misalignment—separate contracts, divergent data models, and redundant interfaces.
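To see why divergent data models translate into redundant work, consider a minimal Python sketch: two hypothetical vendor payloads describing the same claim denial, each needing its own mapping layer before the data is usable. Every field name and value here is invented for illustration.

```python
from datetime import datetime

# Two hypothetical vendor payloads describing the same denial event.
vendor_a = {"claimId": "C-001", "denialReason": "CO-197", "deniedOn": "2025-01-15"}
vendor_b = {"claim_number": "C-001", "carc": "197", "denial_date": "01/15/2025"}

def normalize_a(payload: dict) -> dict:
    """Map vendor A's field names and formats onto a shared internal model."""
    return {"claim_id": payload["claimId"],
            "carc_code": payload["denialReason"].removeprefix("CO-"),
            "denial_date": payload["deniedOn"]}

def normalize_b(payload: dict) -> dict:
    """A second, near-duplicate mapper for vendor B: the 'non-value-added work'."""
    parsed = datetime.strptime(payload["denial_date"], "%m/%d/%Y").date()
    return {"claim_id": payload["claim_number"],
            "carc_code": payload["carc"],
            "denial_date": parsed.isoformat()}

# Both mappers converge on the same record; the duplication is the cost.
assert normalize_a(vendor_a) == normalize_b(vendor_b)
```

Multiply this glue code by every uncoordinated platform a hospital contracts for, and the maintenance burden compounds with each addition.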
The cumulative impact is a rising administrative burden masked as innovation. Hospitals seeking efficiency gains through AI could inadvertently expand complexity, cost, and compliance risk if governance is not addressed in parallel.
Strategic Hesitation at the EHR Layer
A notable finding from the HFMA survey is that only 10% of health systems plan to wait for their electronic health record (EHR) vendor to offer AI solutions. This suggests growing impatience with the pace and scope of EHR-native AI capabilities, but it also exposes a governance challenge: how to manage third-party AI tools alongside EHR platforms that remain the central source of clinical truth.
Some systems are now hedging by piloting solutions only with existing vendors or companies partnered with them. While this strategy reduces integration risk, it does little to solve the broader architecture problem. AI’s value lies in its ability to synthesize, predict, and optimize across systems. Constraining innovation to existing vendor silos may solve short-term interoperability issues while stalling long-term transformation.
As the Office of the National Coordinator for Health Information Technology (ONC) continues to emphasize interoperability and algorithmic transparency in its regulatory agenda, health systems will need to resolve this tension between innovation and control. Fragmented experimentation without a centralized governance framework risks violating emerging AI safety standards and undermining trust in clinical tools.
Revenue Cycle as a Launchpad, but Not a Destination
Unsurprisingly, most CFOs cited revenue cycle as the most promising area for AI investment. The appeal is clear: administrative waste is high, processes are rules-based, and the potential for automation is well understood. Ambient listening and clinical documentation improvement also rank high among use cases, further underscoring how hospitals are leaning on AI for operational leverage rather than clinical transformation.
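The rules-based character of revenue cycle work is exactly what makes it an approachable automation target. As a minimal illustration (not any vendor's actual product), the Python sketch below encodes a hypothetical pre-submission claim scrub; the `Claim` fields and the prior-authorization rule are assumptions for demonstration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    """Hypothetical claim fields; real 837 claims carry far more."""
    claim_id: str
    payer: str
    cpt_code: str
    diagnosis_codes: list[str]
    prior_auth_number: Optional[str]

# Hypothetical payer rule: these CPT codes require prior authorization.
REQUIRES_PRIOR_AUTH = {"70553", "97110"}

def scrub(claim: Claim) -> list[str]:
    """Return rule violations to fix before submission (a 'claim scrub')."""
    errors = []
    if not claim.diagnosis_codes:
        errors.append("missing diagnosis code")
    if claim.cpt_code in REQUIRES_PRIOR_AUTH and not claim.prior_auth_number:
        errors.append(f"CPT {claim.cpt_code} requires prior authorization")
    return errors

print(scrub(Claim("C-001", "ExamplePayer", "70553", ["G43.909"], None)))
# -> ['CPT 70553 requires prior authorization']
```

Production scrubbers apply far larger, payer-specific rule sets, but each rule is deterministic and auditable, which is why CFOs see such legible ROI here.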
But this strategy risks stalling AI's potential in healthcare. While revenue cycle automation offers clear ROI, it is not where AI will reshape care delivery, reduce disparities, or enable value-based models. Limiting investment to back-office efficiency may deliver cost savings, but it will not advance diagnostic precision, personalized medicine, or longitudinal population health, the areas where AI could be most transformative.
The challenge, then, is not technical but institutional. As Health Affairs has noted, AI implementation in healthcare often fails not because of flawed algorithms, but because of misaligned incentives and fragmented accountability. A mature AI program demands clarity around data governance, clinical leadership, risk frameworks, and cross-functional integration.
The Cost of Waiting Is Growing
The current state of AI in healthcare is a middle phase: optimism remains high, but structure and strategic discipline have not caught up. As cost pressure mounts, some health systems may be tempted to delay governance development in favor of quick-turn pilots or vendor-led rollouts. But the data suggest that such decisions will come at a cost.
Organizations with mature AI programs, though still a minority, are more likely to be reaping early returns on investment, scaling solutions, and integrating capabilities across service lines. They also tend to have higher net patient revenue and larger operational scale, giving them more flexibility to absorb risk and learn iteratively.
For mid-sized and smaller health systems, the path forward will require not just vendor support but internal alignment. Without intentional governance, AI will remain a fragmented set of tools, not a transformational capability. Leaders must decide whether to continue experimenting on the periphery or begin building the strategic scaffolding needed to embed AI across clinical and operational domains.
That scaffolding includes not only policies and processes, but clarity around accountability, ethics, transparency, and impact. In a space as high-stakes as healthcare, these are not theoretical concerns. They are table stakes for sustainable, system-wide innovation.