# VA’s AI Strategy Shifts from Ambition to Infrastructure

The Department of Veterans Affairs (VA) has released an expansive roadmap for operationalizing artificial intelligence, articulating a five-priority strategy to embed AI across clinical, administrative, and support functions. While many federal AI initiatives remain aspirational, the VA’s plan marks a notable shift: from proof-of-concept pilots toward foundational infrastructure, system-wide governance, and workforce-scale enablement.
The question is no longer whether AI can improve Veteran services, but whether the agency can operationalize those gains without introducing new layers of risk, inequity, or fragmentation.
## From Pilot Culture to Platform Thinking
Many public sector agencies continue to treat AI as a skunkworks function: isolated teams conducting isolated experiments. The VA’s latest strategy breaks from that mold, recognizing that real transformation requires more than high-impact use cases. It requires scaled platforms that enable safe, repeatable, and governed AI implementation.
At the heart of this shift is the Summit Data Platform (SDP) and broader enterprise data modernization. These platforms are structured to support reusable AI services, auditable model pipelines, and secure access for authorized developers and analysts. The VA is explicitly using live AI deployments to inform future standards rather than freezing development until standards are set, embracing an adaptive governance model that aligns with OMB’s M-25-21 and M-25-22.
This is a pragmatic decision. As tools like GPT-4 and generative AI APIs evolve monthly, waiting for universal standards would delay impact. By embedding AI efforts within evolving infrastructure and federated governance, VA is attempting to balance innovation with accountability.
## Rethinking the Role of the EHR
Perhaps the most consequential aspect of the strategy is its redefinition of the Electronic Health Record (EHR). Instead of framing the EHR as the destination for AI integration, the VA positions it as a node in a larger network of interoperable, AI-augmented applications.
This has direct implications for ongoing federal EHR modernization efforts. Rather than relying solely on embedded vendor features, the VA anticipates layering context-aware AI assistants across both legacy and emerging platforms. That includes ambient documentation tools, clinical decision support, and dynamic information retrieval, all of which are designed to reduce cognitive load and documentation time for clinicians.
Clinical efficiency is a persistent challenge across the Veterans Health Administration (VHA). According to a recent Health Affairs study, administrative burden remains the top contributor to physician burnout in federal health systems. AI tools that streamline note-taking and highlight latent health trends may offer modest but meaningful relief if they’re integrated carefully.
## Claims and Benefits: Targeting Time, Not Just Accuracy
The Veterans Benefits Administration (VBA) is targeting AI not just for accuracy improvements, but for time compression. In theory, AI-supported document classification, fraud detection, and eligibility adjudication could reduce benefit processing from months to minutes. That’s a bold goal, but not without precedent.
CMS and private insurers have already experimented with real-time claims processing and algorithmic adjudication. However, outcomes vary widely based on model transparency, input quality, and error remediation capacity. The VA’s challenge will be to apply these lessons while preserving Veteran trust and ensuring human oversight in complex or ambiguous cases.
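That human-oversight requirement can be made concrete with a simple routing rule: automate only high-confidence model outputs and queue everything else for an adjudicator. The sketch below is purely illustrative; the threshold, labels, and data shape are assumptions for this example, not VA specifications or part of any published system.

```python
# Hypothetical sketch of confidence-based routing for AI-assisted claims
# triage. All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ClaimDecision:
    claim_id: str
    label: str         # e.g. "eligible" or "ineligible" (assumed labels)
    confidence: float  # model confidence in [0, 1]


def route(decision: ClaimDecision, auto_threshold: float = 0.95) -> str:
    """Auto-process only high-confidence decisions; defer the rest to a human."""
    if decision.confidence >= auto_threshold:
        return "auto_adjudicate"
    return "human_review"


# A clear-cut case can move in minutes; an ambiguous one stays with a person.
print(route(ClaimDecision("C-1", "eligible", 0.98)))  # auto_adjudicate
print(route(ClaimDecision("C-2", "eligible", 0.71)))  # human_review
```

The design choice worth noting is that the threshold, not the model, encodes the agency’s risk tolerance: tightening it shifts more cases to human review without retraining anything.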
One standout tactic: aligning procurement and deployment pathways around use case utility rather than novelty. The VA explicitly defaults to buying commodity AI tools when they meet functional needs, reserving custom development for truly unique problems. This lowers total cost of ownership and improves scalability, particularly as generative AI tools become increasingly interchangeable across vendors.
## Trust as a Design Constraint
The VA’s AI ambitions are tempered by one critical constraint: public trust. A recent user research session with 75 Veterans found most were comfortable with AI scribes in clinical settings, but concerns about privacy and misuse remain. This echoes broader public sentiment. A 2024 KFF survey found that 59% of Americans support AI in healthcare “with guardrails,” but only 21% believe the government currently has those guardrails in place.
To its credit, the VA is taking steps to close that perception gap. All high-impact AI use cases are subject to formal review, and the agency publishes an annual public-facing AI use case inventory. The goal is not just compliance, but intelligibility: ensuring Veterans and stakeholders understand what the technology does, where it’s used, and how data is protected.
Crucially, AI systems must meet the same privacy and security thresholds as other VA IT systems. That means encryption, auditability, access controls, and integration into enterprise risk management frameworks. But procedural compliance alone may not suffice. Trust is relational, not just technical. As tools like AI-powered virtual assistants and fraud detection systems become more visible, VA must proactively address concerns about bias, explainability, and recourse.
## The Workforce Equation
A final pillar of the strategy is workforce readiness. The VA’s 400,000+ employees span clinical, administrative, and technical domains, with wildly different levels of AI exposure. The agency aims to close this gap by pairing central “hub” teams (in OIT and VHA’s Digital Health Office) with “spoke” groups across VISNs and program offices.
This distributed model is critical. AI cannot be operationalized from a central command center alone. Clinical champions, frontline innovators, and regional data stewards must all play a role. That’s why the VA is investing in an “AI Corps” of technical specialists, a clinician AI innovators program, and enterprise-wide training programs on generative AI.
Still, success will hinge on more than content delivery. Upskilling must be paired with platform access, workflow integration, and time allowances to experiment. Otherwise, the promise of “AI for all” risks becoming a slogan rather than a strategy.
## From Strategy to Systemic Change
The VA’s five-priority AI framework, focused on access, workflow reimagination, infrastructure, workforce, and trust, reflects a system-level view of transformation. This stands in contrast to vendor-centric AI roadmaps that overemphasize feature sets and underplay integration costs.
By anchoring its AI efforts in infrastructure, governance, and workforce enablement, the VA is attempting something rare in the federal space: not just deploying AI tools, but reshaping how those tools fit into public service delivery. The stakes are high, and so is the complexity, but the strategy is credible, not performative.
Whether this vision materializes will depend less on model sophistication and more on execution discipline. That includes procurement reform, change management, regulatory alignment, and most of all, willingness to adapt the system, not just the software.