Ben Scharfe of Altera Digital Health on Targeted AI Adoption in Healthcare
Artificial intelligence is moving rapidly into healthcare workflows, but as explored in last week’s HIT Leaders & News editorial, “AI in Healthcare Is Moving Fast but Trust Is Moving Slowly,” technology readiness does not guarantee successful adoption. While ambient listening, automated pre-visit summaries, and specialty-specific AI recommendations are already reshaping clinical and administrative processes, organizational trust, stakeholder buy-in, and regulatory clarity remain decisive factors in determining whether these tools deliver meaningful value.
In this second installment of our three-part series, Ben Scharfe, EVP of AI Initiatives at Altera Digital Health, offers a direct look at how targeted AI solutions can streamline provider workflows, improve patient engagement, and reduce clinician burnout, without displacing the human expertise that drives care decisions. Scharfe addresses both the potential and the pitfalls of AI adoption, from overcoming resistance among clinicians to navigating fragmented regulatory landscapes.
Scharfe argues that by situating AI where it can deliver immediate, practical impact, such as reducing documentation burdens or enhancing specialty-specific clinical insights, healthcare leaders can unlock efficiency gains while making care more personal. His perspective builds on the stakes outlined in last week’s editorial and sets the stage for our concluding piece next week, which will examine what it will take for AI to mature into a trusted, scalable part of healthcare’s infrastructure.
What do you see as some of the best use cases for AI in healthcare right now?

AI can make sense of vast volumes of data, making it a powerful tool before, during and after clinical encounters. Providers can leverage AI for pre-visit preparation by generating summaries of a patient’s history, letting them review the most clinically relevant information without combing through charts or long documents shared by health information exchanges (HIEs) and other healthcare organizations. While in the room with the patient, point-of-care analytics can give providers an easily referenceable snapshot of trends in the patient’s health and identify care gaps to address.
Ambient listening AI is another use case rapidly growing in adoption. This technology transcribes conversations during patient visits and creates structured documentation from the data points collected. After the visit, the provider only has to review the note for accuracy rather than manually entering all that data, freeing up time and reducing clicks.
AI can also be used to safely automate routine, low-risk administrative tasks that eat into a clinician’s day, such as answering common patient inquiries and streamlining billing processes.
Additionally, multi-specialty practices can integrate AI into the EHR to improve efficiency and precision. AI agents trained for specific specialties can provide more relevant clinical insights and recommendations, minimizing risks like inaccuracies or irrelevant suggestions that often accompany general-use AI models. By tailoring AI training to the depth of clinical domain knowledge each specialty requires, multi-specialty practices can mitigate risks and give clinicians reliable, contextually aware data.
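To make the idea of specialty-scoped agents more concrete, here is a minimal Python sketch of how a request might be constrained to a clinician’s specialty before it ever reaches a model. The profile names, record types and `build_request` helper are illustrative assumptions for this article, not a description of Altera’s implementation or any particular vendor API.

```python
# Hypothetical sketch only: SpecialtyProfile, SPECIALTY_PROFILES and
# build_request() are illustrative names, not a real product API.
from dataclasses import dataclass


@dataclass
class SpecialtyProfile:
    name: str
    system_prompt: str
    allowed_record_types: tuple  # limits chart context to specialty-relevant data


SPECIALTY_PROFILES = {
    "cardiology": SpecialtyProfile(
        name="cardiology",
        system_prompt="Clinical decision-support assistant scoped to cardiology. "
                      "Cite the chart data you rely on and flag uncertainty.",
        allowed_record_types=("echo_reports", "ecg", "lipid_panels", "medications"),
    ),
    "endocrinology": SpecialtyProfile(
        name="endocrinology",
        system_prompt="Clinical decision-support assistant scoped to endocrinology. "
                      "Cite the chart data you rely on and flag uncertainty.",
        allowed_record_types=("a1c_results", "thyroid_panels", "medications"),
    ),
}


def build_request(specialty: str, question: str, chart: dict) -> dict:
    """Assemble a model request constrained to the clinician's specialty."""
    profile = SPECIALTY_PROFILES[specialty]
    # Only specialty-relevant records are passed as context, which reduces
    # the chance of irrelevant or misleading suggestions.
    context = {k: v for k, v in chart.items() if k in profile.allowed_record_types}
    return {
        "system_prompt": profile.system_prompt,
        "context": context,
        "question": question,
        "requires_clinician_review": True,  # the output assists; it never decides
    }
```

The point of the sketch is the constraint rather than the model: limiting context to specialty-relevant records and requiring clinician review is what keeps an agent in an assistive role.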
Amidst the excitement about the potential of AI in healthcare, what are some of the risks associated with AI adoption?
A major risk is resistance to AI adoption by clinicians, staff members and patients. For emerging technologies like AI to deliver real outcomes for organizations, they must be fully embraced by everyone these tools touch.
To mitigate this risk, organizations must understand and address the psychological barriers that may hinder acceptance. Listening to the concerns of various stakeholders and implementing their suggestions when possible is vital to building trust and transparency, which ultimately pave the way for successful adoption.
Another risk is that AI could be used to unilaterally make diagnoses and treatment recommendations. This would be dangerous because AI is prone to mistakes, including confidently producing inaccurate output, a phenomenon known as “hallucination.” AI is a powerful tool that should assist clinicians, rather than replace them.
Additionally, new IT systems can introduce new security and privacy risks that must be managed to protect sensitive health information. Data leaks open the door to identity theft, as well as bias from insurance carriers, potential employers and others against those affected by a breach. Healthcare organizations implementing AI must do their due diligence to safeguard their IT systems, their patients and their data.
What do you think clinicians most want from AI technology and why?
I consistently hear from clinicians that the AI benefits they’re most excited about are connecting more with patients and gaining back time. With fewer technology-induced distractions, clinicians can really listen to and connect with patients. Building that trust is crucial to care plan adherence.
Clinicians also want help with tedious administrative tasks that consume time and energy — both at work and at home during “pajama time.” In particular, they want AI to ease the burdens of documentation, searching for patient records and interacting with payers.
Naturally, clinicians also want patient and population data that provides real-time insights at the point of care to guide treatment plans and inform decision-making. Clinicians know that when they have the right data at the right time and are able to spend more time focused on patients, the end result will be better outcomes and a more satisfied healthcare consumer.
It may sound counterintuitive, but AI’s promise is in its ability to make healthcare feel more human. It’s not about replacing people. It’s about offloading the work that chips away at productivity and efficiency while also compromising patient and provider experiences. Documentation, regulatory reporting, prior authorization — these are all necessary to support care delivery. But with the power of automation, they do not have to come at the expense of connections between people.
What impact do you think ambient listening and other AI-based technologies could have on clinician burnout?
Nearly half (48.2%) of physicians surveyed last year by the American Medical Association (AMA) reported experiencing at least one symptom of burnout. EHR systems were supposed to ease clinicians’ administrative burdens and greatly improve efficiency and data-sharing. Yet a recent large study on how EHR systems affect burnout shows that excessive documentation work is a major factor in physicians’ desire to leave a provider organization.
Clinicians didn’t get into medicine so they could stare at a computer screen all day and take home charting work at night. They’d much rather be focusing attention on their patients. AI and ambient listening enable faster documentation, giving clinicians more time to spend with patients face-to-face.
By reducing major contributors to clinician burnout, ambient listening and other AI-based technologies can help improve patient safety and outcomes while driving down costs.
How do you see AI technologies benefiting patients?
Patients have been using tools to better inform their health and care decisions for years. As internet adoption grew, many turned to search engines for answers to their healthcare-related questions, and today, patients are seeking information from generative AI chatbots.
Healthcare organizations have an opportunity to empower patients and enhance patient education by integrating generative AI into patient-facing self-service applications that have access to EHR data. For example, instead of asking a public chatbot like ChatGPT about diabetes management, a patient using a chatbot within a patient engagement platform could get more accurate, more personalized guidance based on their historic A1C levels and receive care guide information curated by the practice.
FHIR (Fast Healthcare Interoperability Resources) APIs will also allow patients to extract their own health data more readily, so they can see it where and when they want it and take greater ownership of their healthcare decisions.
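As a rough illustration of what that access could look like in practice, the following Python sketch queries a FHIR R4 Observation endpoint for a patient’s recent hemoglobin A1c results (LOINC code 4548-4) and shapes them into context a practice-curated chatbot could ground its answers on. The base URL, token and patient ID are placeholders; the exact endpoint and authorization flow would depend on the EHR and its SMART on FHIR implementation.

```python
# Rough sketch: a patient-facing app pulling its own A1C history over a
# standard FHIR R4 API. Base URL, token and patient ID are placeholders;
# LOINC 4548-4 identifies hemoglobin A1c results.
import requests

FHIR_BASE = "https://fhir.example-ehr.org/r4"   # placeholder endpoint
TOKEN = "patient-access-token"                  # typically obtained via SMART on FHIR / OAuth2
PATIENT_ID = "example-patient-id"

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={
        "patient": PATIENT_ID,
        "code": "http://loinc.org|4548-4",  # hemoglobin A1c
        "_sort": "-date",
        "_count": 5,
    },
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()

a1c_history = [
    {
        "date": entry["resource"].get("effectiveDateTime"),
        "value": entry["resource"].get("valueQuantity", {}).get("value"),
        "unit": entry["resource"].get("valueQuantity", {}).get("unit"),
    }
    for entry in resp.json().get("entry", [])
]

# A practice-curated chatbot could ground its answer in this record-level
# context instead of generic web content.
prompt_context = "Recent A1C results: " + "; ".join(
    f"{r['value']} {r['unit']} on {r['date']}" for r in a1c_history
)
print(prompt_context)
```

Because each returned Observation carries its own date, value and unit, the patient (or an app acting on their behalf) sees the same structured data the practice does rather than a generic web answer.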
What impact might government regulation and guidelines have on the use of technology in healthcare?
For health IT developers, the current fragmented patchwork of rules poses significant challenges. Developers must customize and maintain different versions of their software to account for state laws, many of which directly contradict each other and federal regulations. This regulatory complexity inflates development costs and slows innovation, as developers must ensure the tools they deliver meet requirements wherever they are used.
Fragmented health IT regulation also strains the healthcare organizations that rely on these systems. Compliance burdens stretch administrative capacity thin, drive up costs and slow down implementation of new technologies.
A more unified, federally driven approach to AI regulation could help the healthcare ecosystem strike a balance between scalable, effective adoption and robust guardrails to ensure safe, responsible use of AI.