Skip to main content

The Doximity–OpenEvidence Lawsuit Could Redefine AI IP Boundaries in Healthcare

September 23, 2025
Image: [image credit]
ID 327364816 © Alexandersikov | Dreamstime.com

Brandon Amaito, Contributing Editor

A legal feud between Doximity and OpenEvidence has pulled into sharp relief the growing pains of healthcare artificial intelligence, particularly where innovation meets intellectual property. At the center of the case are allegations that Doximity used fake physician accounts to “prompt hack” OpenEvidence’s AI platform, extracting proprietary inputs and underlying logic. Doximity denies the claims and has countersued for defamation.

The dispute, though in its early stages, has the potential to set lasting precedents for how AI models, prompts, and data interactions are treated under trade secret and cybersecurity law. For digital health executives, compliance officers, and AI governance leaders, the case reveals the fragile architecture underpinning competitive advantage in the AI-for-clinicians space.

Prompt Hacking and the Question of Ownership

At issue is whether AI prompts, specifically engineered language strings that guide large language models (LLMs) in producing useful medical outputs, can be considered proprietary intellectual property. OpenEvidence argues that its carefully constructed prompt libraries, built from exclusive content deals and domain-specific workflows, represent a defensible trade secret.

Doximity, by contrast, claims that its interactions with OpenEvidence’s AI were both legal and limited to publicly available materials. Its core argument: prompt structures built from publicly accessible medical knowledge do not meet the threshold for legal protection under trade secret law.

The technology behind the dispute is not hypothetical. Prompt injection and reverse engineering of AI models are well-documented vulnerabilities. A Stanford University study in early 2025 found that LLMs are particularly susceptible to prompt leakage when engaged in extended user interactions, especially when prompts are reused across users and exposed via insecure session management.

This raises complex questions: If a prompt is derived from public data but fine-tuned for performance, does it become proprietary? If it can be extracted by a third party through clever input engineering, does it lose protection? The federal court hearing this case may be among the first to answer.

High Stakes in a Rapidly Consolidating Market

The AI-for-doctors sector has evolved quickly from experimental chatbots to production-level tools integrated into EHRs, telehealth platforms, and clinical workflow engines. Doximity’s suite of AI offerings now spans secure messaging, clinical search, and document drafting tools. OpenEvidence, a younger company backed by Sequoia Capital and academic partnerships, positions itself as a deep-tech innovator with exclusive content licensing from journals like JAMA and NEJM.

The timing of the lawsuit is not coincidental. With healthcare AI startups seeking scale and incumbents acquiring or replicating features at pace, product differentiation has narrowed. The lawsuit also follows Doximity’s acquisition of Pathway Medical, a move OpenEvidence claims was intended to replicate its core AI functionality.

According to Digital Health Wire, the case could influence how M&A due diligence is handled when AI capabilities are part of the target’s asset profile. It may also spark more aggressive IP enforcement strategies from digital health startups seeking to protect algorithmic advantages.

Regulatory and Privacy Overlap Adds Complexity

Unlike general-purpose AI, healthcare models operate within a matrix of privacy, licensing, and ethical guardrails. Prompt content that pulls from clinical guidelines, peer-reviewed literature, or patient-specific data must be managed with precision. If prompts contain or reference protected health information (PHI), even indirectly, HIPAA and state privacy laws may come into play.

OpenEvidence has not alleged PHI misuse, but the case nonetheless spotlights an emerging concern: Can interactions with AI platforms be used to extract not only intellectual property, but also confidential or regulated data? The Office for Civil Rights (OCR) has recently signaled that AI data pipelines may fall within the scope of breach reporting requirements if misuse leads to unauthorized access or inference of protected content.

This raises the stakes for companies training or deploying AI models on sensitive medical datasets. It also suggests that AI governance must now account not only for model bias and explainability but for adversarial prompt interactions, an area many compliance programs have yet to address.

The Precedent Problem in AI IP Enforcement

One of the key challenges in this case is the lack of established legal doctrine around AI prompt structures and reverse engineering via LLM interaction. Unlike software code, which is traditionally protected by copyright or trade secret law, prompts are harder to isolate, define, or watermark.

Courts have historically been cautious about granting IP protections to inputs or behaviors derived from public knowledge. Yet as AI platforms become more complex, and as competitive advantage increasingly derives from nuanced prompt design and response optimization, companies are looking for new legal frameworks to protect their work.

A 2025 analysis by Legal.io notes that courts may soon be asked to decide whether the act of engaging an AI model in a manner intended to expose its inner workings constitutes unauthorized access under the Computer Fraud and Abuse Act (CFAA). If the answer is yes, it could open the door to criminal and civil penalties for AI prompt hacking. If not, it may signal to competitors that probing AI tools, within limits, is legally permissible.

Implications for Digital Health Stakeholders

For hospital systems, payer organizations, and provider networks evaluating AI-enabled solutions, the Doximity–OpenEvidence case highlights several urgent takeaways:

  • Vendor evaluation must now include IP posture. Organizations should understand whether vendors are developing proprietary AI models or relying on fine-tuning third-party models using prompts that may be at legal risk.
  • Contracts need updated clauses. Procurement and compliance teams should include language that defines IP ownership, prompt engineering practices, and data integrity responsibilities.
  • AI governance frameworks must expand. AI risk management cannot stop at accuracy and bias; it must address adversarial prompt injection, model leakage, and trade secret stewardship.
  • Legal departments should prepare for cross-functional coordination. AI-related disputes are no longer siloed in R&D—they now touch cybersecurity, marketing, product development, and strategic partnerships.

A Legal Fight with Industry-Wide Implications

Regardless of outcome, the litigation between Doximity and OpenEvidence will shape how companies build, protect, and challenge AI-enabled tools in healthcare. If courts side with OpenEvidence, expect a wave of lawsuits as startups move to enshrine their prompt structures and data interactions as trade secrets. If Doximity prevails, the boundaries of AI tool engagement may be redrawn to permit more aggressive competitive analysis.

Either outcome is likely to influence investment patterns, partnership diligence, and how health systems vet AI vendors for originality and defensibility.

As more of clinical decision support becomes AI-mediated, the industry will need not only better models, but better boundaries, technical, ethical, and legal. The Doximity–OpenEvidence case is a signal flare from the front lines of AI commercialization in healthcare. governance?