
Q&A with Dr. Jay Anders, CMO of Medicomp Systems: How LLMs and AI Are Reshaping Healthcare—And Where They Fall Short

April 24, 2025
Image credit: ID 292740998 © Nils Ackermann | Dreamstime.com

Jay Anders, MD, Chief Medical Officer, Medicomp Systems

Artificial intelligence and large language models (LLMs) are rapidly reshaping the healthcare landscape, offering the potential to streamline administrative workflows, augment clinical decision-making, and improve patient outcomes. Yet alongside these advancements come critical challenges—from unreliable outputs and flawed data to operational inefficiencies and cost concerns.

In this exclusive Q&A, HIT Leaders & News speaks with Dr. Jay Anders, Chief Medical Officer at Medicomp Systems, to explore both the promise and the pitfalls of deploying AI and LLM technologies in real-world clinical settings. Dr. Anders discusses why these tools must be implemented with caution, how data quality remains a fundamental hurdle, and what providers can do to ensure AI supports—rather than undermines—quality care.

What are some of the positive benefits we are seeing from LLMs and other AI-based tools?

Healthcare organizations that adopt large language models (LLMs) and artificial intelligence (AI) technology can gain greater overall efficiency, stronger support for clinical diagnosis and decision making, and relief from mundane administrative tasks. Locating equipment or managing scheduling – these are things AI does well on its own. AI can also improve ambient listening systems and enhance certain clinical algorithms, such as helping to identify a patient with sepsis or predicting the likelihood that a patient might need intensive care services. As LLMs and AI platforms become increasingly sophisticated, look for even more benefits for healthcare providers.

What are some of the unexpected challenges with these technologies?

It is important to keep in mind that, despite all the positive potential of LLMs and AI, there are also some limitations. These technologies generate voluminous amounts of data for clinicians to manage – but that is just one challenge. A bigger concern is that AI and LLMs lack intelligence and reasoning, so the quality of information they generate is only as good as the training data. Even the most advanced algorithms cannot overcome limitations of poor-quality clinical data – which means that sometimes the output they produce is flawed. When working with these tools, it’s important to ask whether the output is trustworthy, whether the system has been vetted, and whether the text being generated is error-free.

There is a perception that AI can replace a mid-level clinician like a PA or an NP or even a physician, but that has not yet been vetted. In fact, for some health systems, an unexpected challenge is figuring out just how much supervision an AI or LLM system needs. In addition, you must also evaluate if the tools are really streamlining your processes or just creating another workflow to review.

Adding to the challenges, too few AI developers ask providers what they actually want. Vendors are trying to throw this technology at everything. But a trained clinician doesn’t need AI to suggest a diagnosis or treatment plan for every patient who comes into their office with a runny nose and a low-grade fever.

Finally, the cost to acquire these tools is a concern. Many healthcare organizations – especially most rural hospitals – are already running on razor-thin budgets, so the initial cost to implement these technologies can be prohibitive.

What are some of the financial and operational impacts of bad clinical data?

One point that I think cannot be stressed enough is that AI is not a magic bullet. This is especially true when you are struggling to manage bad clinical data. AI, for example, cannot fix the problem of incorrectly coded patient visits – which occurs about half the time. Coding inaccuracies create downstream problems with everything from reimbursement and workflows to ensuring patients receive proper follow-up care. And the major source of coding inaccuracy is bad clinical data.

To improve the quality of clinical data, organizations must have tools to validate the data and resolve issues stemming from such things as bad mappings, missing or duplicate data, or incorrect items. And, ultimately it is up to clinicians to advocate for good, clean, and trustworthy clinical data because they are the ones responsible for delivering quality patient care.

What can and should providers do to ensure clinical data is valid, cleaned and optimized for patient care?

Providers should not automatically assume the accuracy and trustworthiness of data received from outside sources, including records and notes from other providers or an HIE. Some steps that providers can take to ensure valid, clean, and optimized data include:

Validate and Normalize Data
Seek technologies that can validate clinical data and fix issues stemming from duplicate or incorrect items, bad mappings, missing data, and inadequate codes. This includes tools that process structured, semi-structured, and unstructured data.
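To make the validation step above concrete, here is a minimal sketch of the kind of pass such a tool performs. The record fields, the duplicate key, and the review rule are all hypothetical, chosen only to illustrate deduplication and flagging of missing codes:

```python
def validate_records(records):
    """Drop exact duplicate clinical records and flag items with missing codes."""
    seen = set()
    clean, flagged = [], []
    for rec in records:
        # Hypothetical duplicate key: same patient, date, and code
        key = (rec.get("patient_id"), rec.get("encounter_date"), rec.get("code"))
        if key in seen:
            continue  # skip exact duplicate entries
        seen.add(key)
        if not rec.get("code"):
            flagged.append(rec)  # missing code: route to a human reviewer
        else:
            clean.append(rec)
    return clean, flagged

records = [
    {"patient_id": "p1", "encounter_date": "2025-01-02", "code": "J06.9"},
    {"patient_id": "p1", "encounter_date": "2025-01-02", "code": "J06.9"},  # duplicate
    {"patient_id": "p2", "encounter_date": "2025-01-03", "code": None},     # missing code
]
clean, flagged = validate_records(records)
print(len(clean), len(flagged))  # 1 valid record, 1 flagged for review
```

In practice this logic lives inside dedicated data-quality tooling and handles far messier input (near-duplicates, free-text notes), but the pattern – validate, deduplicate, and keep a human in the loop for anything questionable – is the same.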

Enhance Clinical Terminology
Embrace technologies that resolve inconsistencies in local codes and legacy systems, including mappings that are inconsistent across systems and localities; custom terms that are improperly validated or maintained; historical concepts that are insufficient for current care coordination; and terminology standards that aren’t adequately maintained and updated.
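The mapping problem described above can be pictured with a small sketch: each site's local shorthand is resolved to one shared concept through a lookup table. The site names, local terms, and target codes below are illustrative assumptions, not real mappings:

```python
# Hypothetical crosswalk from (site, local term) to a standard code.
LOCAL_TO_STANDARD = {
    ("site_a", "HTN"): "I10",             # local shorthand for hypertension
    ("site_b", "HYPERTENSION"): "I10",    # different local term, same concept
    ("site_a", "DM2"): "E11.9",           # local shorthand for type 2 diabetes
}

def normalize_term(site, local_code):
    """Return the standard code for a site's local term, or None if unmapped."""
    return LOCAL_TO_STANDARD.get((site, local_code.upper()))

print(normalize_term("site_a", "htn"))           # I10
print(normalize_term("site_b", "hypertension"))  # I10 (two sites, one concept)
print(normalize_term("site_a", "XYZ"))           # None – unmapped term needs curation
```

The unmapped case is the important one: terms that fall through the crosswalk are exactly the improperly validated custom terms the text warns about, and they need ongoing curation rather than silent pass-through.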

Leverage AI Together with Evidence-Based Algorithms
Healthcare organizations can further normalize historical data by combining AI technologies with evidence-based algorithms. These tools can also help to match related diagnoses, recategorize inappropriate items, and fix inadequate or missing codes.

Finally, providers should remember that LLMs and AI systems are tools; they do not replace human interaction and expertise, and they require a “human-in-the-loop” to ensure the trustworthiness of data for patient care.