Health Systems Must Brace for Quantum and AI Threats by Modernizing Cyber and Governance Infrastructure

The convergence of quantum computing and artificial intelligence has introduced not just transformative opportunities but mounting threats to healthcare cybersecurity and digital governance. With post-quantum vulnerabilities looming and AI adoption accelerating across clinical and administrative workflows, hospital IT leaders are entering a new era of strategic risk, one that requires urgent recalibration of infrastructure, policy frameworks and operational safeguards.
Philip Bradley, digital health strategist at HIMSS, warns that quantum computing, though still nascent, could eventually dismantle the cryptographic foundations that health systems rely on to secure sensitive data. Unlike classical computers, quantum systems could factor large integers and solve the discrete-logarithm problems underpinning current encryption protocols, such as RSA and ECC, at speeds that render those protocols fundamentally obsolete. As a result, protected health information (PHI), financial data, and even the integrity of connected clinical devices are all at theoretical risk.
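To make the stakes concrete, the toy sketch below (illustrative only, with deliberately tiny primes) shows why factoring is equivalent to breaking RSA: once the public modulus is factored, the private key falls out in a single line of arithmetic. Shor's algorithm is expected to make that factoring step feasible at real key sizes on a sufficiently large quantum computer.

```python
# Toy illustration only: why factoring n breaks RSA.
# Real keys use 2048-bit (or larger) moduli; these primes are deliberately tiny.

p, q = 61, 53                 # secret primes (what Shor's algorithm would recover)
n = p * q                     # public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # Euler's totient, computable only if p and q are known
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)        # "encrypt" with the public key
recovered = pow(ciphertext, d, n)      # decrypt with the private key
assert recovered == message

# An attacker who can factor n into p and q can rerun the lines above and
# derive d directly -- exactly the step a large-scale quantum computer is
# expected to make practical for real key sizes.
print(f"n={n}, d={d}, ciphertext={ciphertext}, recovered={recovered}")
```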
According to Bradley, the HIMSS INFRAM (Infrastructure Adoption Model) will play a key role in aligning health systems’ digital maturity with post-quantum security imperatives. INFRAM, originally designed to help hospitals benchmark and optimize IT infrastructure, is being adapted to integrate more granular cybersecurity metrics in response to quantum-era threats. Its alignment with evolving NIST and ISO frameworks for post-quantum cryptography will give providers a strategic lens to future-proof their networks, endpoints, and encrypted communications as new standards emerge.
This is not an abstract risk. In 2022, NIST announced the first four algorithms selected through its Post-Quantum Cryptography Standardization process to replace vulnerable cryptographic systems. Health systems with long-term data retention obligations, such as academic medical centers, large IDNs, and life sciences organizations, must assume that encrypted archives are already susceptible to a “store now, decrypt later” approach from malicious actors. The time to prepare is now, not when quantum tools reach general accessibility.
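A practical first step, ahead of any algorithm migration, is a cryptographic inventory that maps where quantum-vulnerable RSA and ECC keys live today. The sketch below is a minimal illustration, assuming certificates have been exported as PEM files to a hypothetical directory, and uses the widely available Python `cryptography` package to flag the key types a post-quantum transition plan would need to replace.

```python
# Minimal sketch of a cryptographic inventory pass; the directory path is
# hypothetical, and real environments would also need to cover TLS endpoints,
# device firmware, and archived key material.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

CERT_DIR = Path("/etc/pki/inventory")   # hypothetical location of exported certs

def classify(cert: x509.Certificate) -> str:
    """Label a certificate's public key as quantum-vulnerable or not."""
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECC-{key.curve.name} (quantum-vulnerable)"
    return type(key).__name__            # anything else, e.g. future PQC hybrids

for pem_file in sorted(CERT_DIR.glob("*.pem")):
    cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
    print(f"{pem_file.name}: {cert.subject.rfc4514_string()} -> {classify(cert)}")
```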
On a parallel front, AI governance is also becoming a mission-critical pillar of digital health strategy. As health systems rush to deploy generative AI tools in clinical documentation, decision support, and patient communications, executives are confronting a policy vacuum that leaves room for ethical misfires and regulatory exposure.
Abhinav Shashank, CEO of Innovaccer, argues that CIOs and CMIOs must insist on guardrails that go beyond performance metrics. In an environment where algorithmic opacity and bias are active threats, AI deployment cannot be governed solely by efficiency outcomes. According to Shashank, tools must be rigorously tested not just for accuracy but for fairness, explainability, and safety. This includes aligning model behavior with institutional policies and the evolving AI Risk Management Framework released by NIST in early 2023.
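What testing for fairness looks like in practice varies by tool and vendor, but even a basic disaggregated check can surface problems that aggregate accuracy hides. The sketch below is an illustrative example, not Innovaccer's or NIST's method: it compares a model's positive-prediction rate across hypothetical patient groups, a common demographic-parity-style disparity check, and routes large gaps to human review.

```python
# Illustrative fairness check: compare positive-prediction rates across patient
# groups. Group labels, predictions, and the 0.1 threshold are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions per group."""
    counts = defaultdict(lambda: [0, 0])          # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def max_disparity(rates):
    """Largest gap in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical triage-model outputs (1 = flagged for follow-up) and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                                      # e.g. {'A': 0.5, 'B': 0.83}
gap = max_disparity(rates)
if gap > 0.1:                                     # governance threshold is a policy choice
    print(f"Disparity {gap:.2f} exceeds threshold; route to review before deployment")
```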
Moreover, AI tools introduced into clinical care environments, such as decision support or triage assistants, must be validated against known sources of bias and clinically tested before full deployment. As recent warnings from the FDA on adaptive AI in medical devices highlight, iterative learning systems cannot be treated like static tools. Their impact shifts over time, and so must their oversight.
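Operationally, that means ongoing monitoring rather than one-time validation. One simple drift signal is the population stability index (PSI) between a model's score distribution at sign-off and in current production; the sketch below is an illustrative implementation, with the bin count and alert threshold shown as conventional rules of thumb rather than regulatory guidance.

```python
# Illustrative drift monitor for a deployed model's output scores, using the
# population stability index (PSI). Bin count and alert threshold are common
# rules of thumb, not regulatory guidance.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between two score distributions; higher means more drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    lo, hi = edges[0], edges[-1]
    # Clip both samples into the baseline range so every score lands in a bin.
    base_counts, _ = np.histogram(np.clip(baseline, lo, hi), bins=edges)
    curr_counts, _ = np.histogram(np.clip(current, lo, hi), bins=edges)
    base_pct = base_counts / base_counts.sum() + 1e-6   # epsilon avoids log(0)
    curr_pct = curr_counts / curr_counts.sum() + 1e-6
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
validation_scores = rng.beta(2, 5, size=5_000)     # score distribution at sign-off
production_scores = rng.beta(3, 4, size=5_000)     # score distribution this month

drift = psi(validation_scores, production_scores)
print(f"PSI = {drift:.3f}")
if drift > 0.2:                                    # common rule of thumb for significant shift
    print("Score distribution has shifted; trigger clinical re-validation")
```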
Ultimately, digital leaders must build dual-track resilience: proactively preparing for quantum disruption while actively governing AI tools already in production. That means evolving from one-off implementations to a systemic strategy that combines cryptographic modernization, ethical AI policy, and digital infrastructure benchmarking into a unified roadmap for long-term security and trust.
Health systems that fail to adapt may soon find themselves caught between the risk of outdated protection and the liability of unchecked innovation. Those that lead will set the model for operationalizing trust in a rapidly shifting digital frontier.