
Predictive AI in the ED Must Focus on Flow, Not Just Forecasting

August 19, 2025

Mark Hait, Contributing Editor

Emergency department (ED) overcrowding is not a new problem, but artificial intelligence may finally be offering new tools to address it. A large-scale, multi-hospital study from the Mount Sinai Health System shows that AI can accurately predict which ED patients will require hospital admission, hours earlier than traditional workflows allow. The results, published in Mayo Clinic Proceedings: Digital Health, offer a real-world case for embedding predictive models into clinical operations.

Trained on data from more than 1 million past patient visits and evaluated across 50,000 real-time encounters, the machine learning model demonstrated high accuracy in predicting hospital admissions shortly after triage. Perhaps most notably, the AI outperformed the predictive value of nurse assessments and maintained performance across multiple hospital sites. But while the technology shows promise, its value will ultimately be judged not by statistical performance, but by how effectively it helps health systems address capacity stress and care delays.
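For readers who want a concrete sense of what a triage-time admission model involves, the sketch below is purely illustrative: a classifier trained on synthetic triage features and evaluated by AUC. The feature names, data, and model choice are assumptions made for the example, not details of the Mount Sinai model.

```python
# Minimal sketch (not Mount Sinai's actual model): a triage-time admission
# classifier trained on synthetic data with scikit-learn. All features,
# labels, and parameters here are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000  # stand-in for the study's ~1 million historical visits

# Hypothetical features available shortly after triage.
visits = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "esi_level": rng.integers(1, 6, n),       # Emergency Severity Index (1 = most acute)
    "heart_rate": rng.normal(88, 18, n),
    "arrived_by_ambulance": rng.integers(0, 2, n),
    "prior_admissions_12mo": rng.poisson(0.4, n),
})

# Synthetic label: admission more likely with age, acuity, and prior admissions.
logit = (-2.0 + 0.02 * (visits["age"] - 50)
         - 0.6 * (visits["esi_level"] - 3)
         + 0.7 * visits["arrived_by_ambulance"]
         + 0.5 * visits["prior_admissions_12mo"])
visits["admitted"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    visits.drop(columns="admitted"), visits["admitted"],
    test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.3f}")
```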

For emergency departments grappling with resource constraints, boarding, and throughput bottlenecks, prediction must be tethered to action.

AI as a Planning Instrument, Not a Triage Tool

Mount Sinai’s study involved seven hospitals and more than 500 emergency nurses. It sought to understand whether AI-generated forecasts, based on historical data and triage inputs, could supplement or improve existing decision-making processes. While combining human and machine predictions did not materially improve performance, the AI model alone demonstrated consistent accuracy, prompting researchers to recommend its integration into operational workflows.

The distinction is important. The goal of such AI is not to override clinical judgment or change triage protocols. Rather, it is to enable earlier downstream planning: preparing beds, notifying inpatient teams, and mobilizing support services before an official admission order is placed.

As ED boarding rates remain high and hospital capacity continues to tighten, such anticipatory intelligence could shift operations from reactive to proactive. But deploying this model at scale requires more than an accurate algorithm. It demands reconfiguration of how and when hospitals make resource allocation decisions.

From Prediction to Systemic Relief

Mount Sinai’s AI model was built specifically to forecast admission needs, not discharge probability or care intensity. This is a crucial targeting decision. Hospital admissions are high-cost, high-disruption events that touch nearly every operational domain: bed management, care coordination, staffing, and documentation.

By moving the “signal” for probable admission earlier in the ED process, hospitals can begin adjusting workflows before formal orders are in place. This allows for dynamic load-balancing, earlier bed assignments, and more efficient use of inpatient resources. In high-volume EDs, even modest reductions in boarding time can improve throughput and patient experience.
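To make that idea concrete, here is a hedged sketch of how a triage-time probability might be mapped to anticipatory steps before any admission order exists. The thresholds, action names, and data structure are hypothetical assumptions, not elements of the published study.

```python
# Illustrative only: turning a predicted admission probability into early
# operational signals. Thresholds and actions are invented for this example.
from dataclasses import dataclass

@dataclass
class EDVisit:
    visit_id: str
    predicted_admit_prob: float  # output of a triage-time model

def plan_downstream_actions(visit: EDVisit) -> list[str]:
    """Map a probability to anticipatory steps, before an admission order."""
    actions = []
    if visit.predicted_admit_prob >= 0.80:
        actions += ["request provisional bed assignment",
                    "notify inpatient charge nurse",
                    "alert transport team"]
    elif visit.predicted_admit_prob >= 0.50:
        actions.append("flag for bed-huddle review")
    return actions

print(plan_downstream_actions(EDVisit("V-001", 0.87)))
```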

However, as the study authors note, this benefit hinges on real-time workflow integration. Mount Sinai’s next phase will test live deployment of the model into active care settings, where its true operational impact will be measured not in accuracy metrics, but in reduced length of stay, shorter wait times, and improved capacity utilization.

Limitations of a Single-System Study

While the study’s scale and design are impressive, its external validity remains untested. All participating hospitals belong to a single, unified health system. Variability in electronic health records, triage standards, and resource availability may affect performance in more heterogeneous environments.

Additionally, the study’s two-month timeline captures only a narrow operational window. Seasonal fluctuations in ED demand, such as winter respiratory surges or summer trauma patterns, may introduce new pressures that affect model utility. Longer-term, multi-system validation will be needed to confirm reliability.

Nevertheless, the research sets a high bar for what real-world AI evaluation should look like. It moved beyond retrospective model building to involve frontline nurses in a prospective design, and it focused on a concrete use case with immediate system-level consequences.

Predictive Accuracy Is Not a Standalone Metric

While enthusiasm around predictive AI continues to grow, health systems must resist the temptation to adopt tools based solely on model performance. A high area under the curve (AUC) score is only as valuable as the workflow it enables. For EDs, this means tying prediction to capacity coordination, transport readiness, and admission timing, not just generating a likelihood percentage.
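One way to illustrate that distinction: the operating threshold for raising an alert can be chosen from what the hospital can actually act on, rather than from the AUC itself. The sketch below is an assumption-laden example; the probability values and the per-shift capacity figure are invented for illustration.

```python
# Sketch under stated assumptions: AUC alone does not set a decision point.
# Here the alert threshold is chosen so expected alert volume matches a
# hypothetical operational capacity (beds that can be pre-staged per shift).
import numpy as np

rng = np.random.default_rng(1)
probs = rng.beta(2, 5, 200)      # stand-in model outputs for one shift
capacity_per_shift = 20          # assumed number of pre-stageable beds

# Alert only on the highest-risk patients the system can actually act on.
threshold = np.sort(probs)[-capacity_per_shift]
alerts = probs >= threshold
print(f"Threshold: {threshold:.2f}, alerts raised: {alerts.sum()}")
```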

A 2024 National Academy of Medicine report cautioned against deploying AI tools without clear downstream actionability, noting that predictive models without aligned intervention pathways can introduce alert fatigue, workflow fragmentation, or even worsen disparities if outputs are poorly contextualized.

Mount Sinai’s design avoids these pitfalls by targeting a discrete, operationally meaningful output (hospital admission likelihood) and planning for real-world integration. But the challenge of AI implementation remains: aligning the predictive engine with the practical mechanics of hospital logistics.

Building Predictive Infrastructure Around Human Expertise

One of the study’s most important takeaways is that nurses remain central to digital transformation. More than 500 participated in the research, not as data entry points, but as co-assessors and operational stakeholders. The model may predict, but nurses triage, advocate, escalate, and execute.

Framing AI as a support tool, not a substitute, aligns with emerging best practices in responsible digital deployment. It also builds the cultural trust necessary for adoption. As Eyal Klang, MD, Chief of Generative AI at Mount Sinai, noted, the value of AI lies in “freeing [clinicians] up to focus less on logistics and more on delivering the personal, compassionate care that only humans can provide.”

That sentiment reflects the broader imperative facing health systems today. As demand increases and capacity tightens, efficiency must not come at the cost of empathy. Tools that help forecast care needs, if well-implemented, can preserve the time and space clinicians need to deliver that care.