
In an unexpected and troubling move, the U.S. Department of Health and Human Services (HHS) has disbanded the Secretary's Advisory Committee on Human Research Protections (SACHRP), the two-decade-old panel that served as the nation's leading source of ethical guidance on human subject research. While HHS cited budget streamlining as part of broader agency reductions, the dissolution of SACHRP raises serious concerns, especially amid the growing use of artificial intelligence in clinical research settings.
The decision to disband SACHRP comes just as health systems, biotech firms, and digital therapeutics companies are rapidly integrating machine learning into trials. Yet with no clear replacement for SACHRP’s oversight, researchers are left in a regulatory gray zone. This poses both ethical and financial risks for healthcare technology stakeholders and clinical trial sponsors.
A Missing Compass at a Critical Juncture
Since 2003, SACHRP had offered nuanced, forward-looking recommendations to HHS and the Office for Human Research Protections (OHRP). Topics included research involving children, decentralized trials, and, increasingly, the use of artificial intelligence in human subject research.
“The loss of SACHRP means the loss of a critical ethical and legal compass,” said a recent Holland & Knight policy brief. “With AI applications in clinical studies accelerating, the timing could not be worse.”
Indeed, AI-driven research—such as predictive algorithms for patient enrollment or machine-learning models evaluating trial efficacy—poses novel ethical dilemmas that traditional research frameworks were only beginning to address. Who bears responsibility for harm if an AI model misclassifies a patient’s eligibility? Should AI outputs be considered investigational devices under FDA guidance? These are no longer hypothetical questions.
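To make the misclassification question concrete, consider a minimal, purely hypothetical sketch in Python. Every element here (the Candidate class, the 0.8 cutoff, the screen function) is invented for illustration and reflects no actual vendor's system: a model score above a hard threshold auto-enrolls a patient, and anything below it is silently excluded.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    patient_id: str
    model_score: float  # hypothetical model-estimated probability of eligibility


ENROLL_THRESHOLD = 0.8  # arbitrary cutoff, chosen only for illustration


def screen(candidates: list[Candidate]) -> tuple[list[str], list[str]]:
    """Split candidates into auto-enrolled and auto-excluded groups.

    A false negative here silently excludes an eligible patient, which is
    exactly the kind of harm the liability question above is asking about.
    """
    enrolled = [c.patient_id for c in candidates if c.model_score >= ENROLL_THRESHOLD]
    excluded = [c.patient_id for c in candidates if c.model_score < ENROLL_THRESHOLD]
    return enrolled, excluded


if __name__ == "__main__":
    cohort = [
        Candidate("p-001", 0.91),
        Candidate("p-002", 0.79),  # misses the cutoff by 0.01
        Candidate("p-003", 0.45),
    ]
    enrolled, excluded = screen(cohort)
    print("enrolled:", enrolled)
    print("excluded:", excluded)
```

The point of the toy: patient p-002 is excluded by a margin of 0.01, and nothing in this pipeline flags that borderline call for human review, which is precisely where the responsibility question bites.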
AI in Clinical Trials Is Already Here
From automated trial matching to virtual control arms, AI is already embedded in research infrastructure. Companies like TrialJectory and Medable are using AI to match patients with oncology trials in real time, while platforms like Unlearn.AI generate synthetic patient data to reduce the size and cost of randomized trials.
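Unlearn.AI's actual models are proprietary, but the basic idea behind a synthetic control arm can be sketched in a few lines. The toy below is a loose illustration under a deliberately crude assumption (that historical control-arm outcomes are normally distributed; all data and variable names are invented): it fits that distribution to a historical cohort and samples synthetic control patients from it instead of recruiting real ones.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical historical control-arm outcomes (e.g., change in a lab value).
historical_controls = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0])

# Fit the simplest possible generative model: a single normal distribution.
mu = historical_controls.mean()
sigma = historical_controls.std(ddof=1)

# Sample a synthetic control arm from the fitted distribution.
n_synthetic = 20
synthetic_outcomes = rng.normal(mu, sigma, size=n_synthetic)

print(f"historical mean={mu:.2f}, sd={sigma:.2f}")
print(f"synthetic arm mean={synthetic_outcomes.mean():.2f} (n={n_synthetic})")
```

Real systems use far richer generative models conditioned on patient covariates, but the regulatory question is the same: a synthetic arm is only as trustworthy as the historical data and modeling assumptions behind it.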
In a December 2024 industry report, CB Insights noted that global investment in AI-enabled clinical trials reached $2.7 billion, a 42% year-over-year increase. Yet no single federal framework currently governs the responsible deployment of these technologies.
“We are building the aircraft while we’re flying it,” said Dr. Michael Buckley, a clinical trial ethicist at Johns Hopkins, in a January 2025 webinar hosted by the NIH Department of Bioethics. “The regulatory infrastructure hasn’t caught up to the sophistication—or the risks—of these tools.”
Private Sector Fills the Ethical Vacuum—For Now
In the wake of SACHRP’s closure, private coalitions like the Coalition for Health AI (CHAI) have tried to fill the gap. CHAI, which includes Mayo Clinic, Microsoft, and UCSF, released its Blueprint for Trustworthy AI in March 2025. But these are voluntary frameworks, not enforceable law.
Meanwhile, state governments are scrambling to regulate AI in health care. Colorado, Utah, and California are among the first to enact or propose legislation affecting algorithmic accountability in clinical decision-making. But this patchwork approach could pose compliance risks for sponsors conducting multi-state trials.
“You now have a situation where trial protocols need to be customized not just for each IRB, but potentially for each state’s AI laws,” said Valerie Rogers, senior director of government relations at the Healthcare Information and Management Systems Society (HIMSS) (Axios, April 2025).
Financial Implications for Sponsors and Developers
Without uniform ethical guidelines, clinical trial sponsors may face mounting legal exposure and increased insurance premiums. According to an analysis by Willis Towers Watson, 68% of surveyed biotech CFOs said they expected trial insurance costs to rise in 2025 due to AI-related liability concerns.
Technology vendors operating in the decentralized trial space may also face regulatory bottlenecks. “It’s not just about compliance anymore. It’s about investor risk,” said Rachel Kumari, managing director of digital health at L.E.K. Consulting. “Companies that lack a transparent ethical roadmap may find themselves on the wrong side of public scrutiny or litigation.”
What Comes Next?
So far, HHS has not announced a successor to SACHRP. OHRP remains operational but understaffed, and it is unclear whether it will issue new AI-focused guidance. In the meantime, stakeholders await further direction from the FDA and CMS, both of which have recently reorganized internal offices to accommodate AI policy development, though timelines remain vague.
The real question, then, is not whether AI will be used in clinical research—it already is. The question is whether the healthcare system can regulate it fast enough to prevent harm, protect privacy, and maintain public trust. Without a credible, centralized ethics body, that challenge becomes harder by the day.