
Dictation making a comeback with speech recognition enhancements just in time for ICD-10

Marsha Taicher, Vice President, Director of Sales, Speech Processing Solutions North America

During the transition from paper charts to electronic health records (EHRs), many physicians were required to abandon their proven, preferred documentation method, dictation, and switch to typing on a keyboard or choosing from numerous drop-down menus on a screen.

What organizations may have overlooked during the transition is that voice technology, such as digital dictation devices and speech recognition software, is not only compatible with EHRs, but can be a more efficient documentation method, especially in complex, high-patient volume healthcare environments. With the arrival of more value-based payment programs coupled with the transition to ICD-10, dictation allows healthcare organizations to capture a greater level of clinical detail in physician notes in less time than typing or pointing-and-clicking.

Voice technology systems, however, vary in durability and suitability for healthcare environments. Hospitals and practices need equipment that can withstand heavy usage in busy healthcare facilities and appreciate added benefits like antimicrobial properties to fight the spread of bacterial and viral infections. Most importantly, these devices need to clearly capture the physician’s voice for accurate speech recognition transcription.

Greater clinical detail requires more documentation

The ICD-10 Clinical Modification (ICD-10-CM) set alone includes approximately 69,000 codes, compared to ICD-9-CM’s 14,000. The ICD-10 Procedure Coding System (ICD-10-PCS), which hospitals will use, contains nearly 72,000 codes, compared to the roughly 3,800 procedure codes in ICD-9-CM Volume 3.

This enhanced level of coding detail – which organizations will be required to report to payers to earn reimbursement in just a few short months – begins with ensuring all information is captured by providers at the point of care. Typing or pointing-and-clicking to enter this additional data will likely frustrate physicians, but allowing them to simply speak the information and have speech recognition software or a transcriptionist capture the detail in the EHR is proving to be an effective solution for many organizations.[1] Physicians can complete their notes in seconds instead of minutes, the organization gains ample information to submit an accurately coded ICD-10 claim, and the necessary data is in place for clinical quality performance analysis and reporting.

Dictation can offer patient safety benefits as well. In a recent study at two medical centers, the authors observed one group of emergency physicians who typed all their notes while another group dictated theirs. Although the study authors found no statistical difference in the physicians’ time spent at the computer, they did discover that the dictating physicians were interrupted nearly half as often as the physicians who typed their notes.[2] Interruptions have been shown to contribute significantly to the “cognitive workload” of healthcare providers, which has been associated with errors and provider burnout.[3]

Designed for healthcare environments

Speech recognition technology can certainly improve the speed of clinical detail capture and reduce transcription costs, but the software’s accuracy can be limited if the organization does not choose a dictation microphone designed for healthcare environments.

Hallways and common areas in hospitals and physician practices can be noisy. Dictation microphones used in these areas should filter background sounds and distortions such as pop and hiss, and block other distracting touch, click, air or structure-borne sounds, so that the physician’s voice remains clear and easily understandable to the speech recognition software.

An intuitively positioned trackball and buttons on the microphone should allow physicians to edit their observations, comments and other dictation as necessary without setting the microphone down. Once the microphone is set down, an embedded sensor should detect the new position and shut off the microphone to prevent unwanted recording. The microphone’s buttons should also be programmable, allowing the physician to pause and resume recording mid-dictation.

As hospitals face increasing regulatory pressure and financial penalties to reduce the number of hospital-acquired conditions, another essential feature for dictation microphones is antimicrobial capability. Dictation microphones are available that not only withstand frequent cleanings with hospital disinfectants, but are also built with an antimicrobial substance embedded in the housing material. Microorganisms such as bacteria (e.g. pneumococcal bacilli and multi-resistant organisms such as MRSA), viruses (e.g. HIV, influenza), fungi (e.g. Aspergillus niger) or algae can be virtually eliminated from the device’s surface for at least five years.

Dictation 2.0

While dictation has been used in healthcare for decades, the digital recording technology and speech recognition software advancements of recent years are making the practice just as essential in the era of value-based payment and ICD-10. Organizations, however, should not select just any voice technology, but rather equipment and software designed for healthcare environments. Not only will physicians be relieved from typing and pointing-and-clicking, but patient satisfaction could also rise due to improved face-to-face communication.

[1] Williams, Megan. “New Approaches to Voice Recognition Technology In Healthcare.” Business Solutions. July 8, 2014.

[2] dela Cruz, Jonathan E. et al. “Typed Versus Voice Recognition for Data Entry in Electronic Health Records: Emergency Physician Time Use and Interruptions.” Western Journal of Emergency Medicine. July 2014.

[3] Tucker, A. & Spear, S. “Operational Failures and Interruptions in Hospital Nursing.” Health Services Research. June 2006.