Can Deregulation Coexist With Safety and Trust? What EHRA’s Proposals Mean for Providers

EHRA’s deregulatory wish list, while developer-focused, presents an under-examined risk for clinicians and health systems. It raises a vital question: when federal guardrails shrink, who picks up the oversight slack?
The Association’s recommendations to ONC suggest eliminating or easing numerous certification components — from Real World Testing and AI transparency to interoperability and safety-focused usability checks. While each change is framed as a way to reduce administrative burden, the net result may be a dramatic shift in where, how, and by whom digital safety and reliability are maintained.
Who’s Responsible When “Certified” Doesn’t Mean “Usable”?
EHRA wants to replace Real World Testing (RWT) — the one method requiring EHRs to actually demonstrate functionality in live clinical environments — with basic attestations. But for clinicians, this isn’t a technicality. It’s one of the few regulatory signals that a tool will work in practice, not just in theory.
Removing RWT raises questions: Will software be “certified” even if it fails to integrate cleanly into workflows? Will usability or performance breakdowns continue to go unnoticed until they affect patient care?
AI Transparency: Optional, or Essential?
EHRA’s call to scale back AI-related certification — including requirements for source transparency and predictive intervention review — runs counter to growing calls for explainable AI in healthcare.
If these standards are weakened, clinicians may be asked to trust decision support tools that:
- Don’t reveal their data sources
- Can’t be audited or adjusted
- Don’t clearly signal when a recommendation is AI-generated
This erodes clinician autonomy and undermines trust in new digital tools just as health systems begin large-scale AI deployments.
From Developer Burden to Provider Risk
EHRA repeatedly argues that certification imposes unnecessary burden on developers. But what’s missing is a serious analysis of the downstream effect: What happens when oversight is relaxed but clinical responsibility stays the same?
As formal certification processes recede, more scrutiny shifts to care teams, CMIOs, and informatics leads. They become the de facto quality control, expected to vet systems, assess usability, and ensure AI safety — on top of managing day-to-day care delivery.
For small or resource-constrained organizations, this is an unsustainable shift. Even for large systems, it poses staffing and legal challenges that increase the hidden cost of technology adoption.
What a Balanced Path Could Look Like
Deregulation doesn’t have to mean abandoning clinical safety. A middle path could include:
- Risk-tiered certification: Less burden for low-risk, admin-only systems; higher standards for clinical decision support.
- Standardized model transparency: Public documentation or model cards for AI tools, instead of embedded UI disclosures.
- Shared vetting frameworks: ONC-certified templates or checklists to help providers evaluate uncertified components.
Deregulation Can’t Ignore the End User
EHRA’s intentions — reducing waste, speeding innovation — are not wrong. But implementation matters. If ONC adopts these proposals without strong compensatory measures for providers, it risks accelerating the very burnout and mistrust that digital health reform aims to solve.
In a healthcare system already burdened by workflow fragmentation and information overload, less regulation without more support isn’t liberation. It’s abandonment — and frontline clinicians will feel it first.