
April 2015

Marrying Voice and Text Within the EHR
By Selena Chavis
For The Record
Vol. 27 No. 4 P. 22

As speech recognition ingratiates itself into documentation workflows, industry professionals weigh in on the pros and cons.

The role of speech recognition technology within EHR clinical documentation workflows has been debated for years. And the verdict appears to be in.

Industry professionals agree that while speech recognition is no longer a product differentiator for the average EHR, it has carved out a noteworthy niche within today's evolving technology-driven workflows—one that is likely here to stay.

Richard Garcia, MD, MPP, MHA, medical director of the emergency department at Beverly Hospital, a stand-alone community facility in Montebello, California, notes that the advantages of speech recognition have traditionally been associated with efficiency. As the technology has advanced and matured, he now believes it also makes for a better documentation product.

"The documentation is more thorough. You can say a lot more," Garcia explains. "We all prioritize our documentation. … If I have more time, I may document more of a history, more nuanced information about a visit."

Speech recognition technology has been in use in some form for more than 15 years at Beverly Hospital. Garcia acknowledges that the organization has historically been well ahead of the HIT adoption curve. "We're very fond of [speech recognition] and have been leveraging it for efficiency for a long time now," he says, adding that Beverly Hospital's success with the technology can be traced to a thoughtful approach taken over time to customize its use in tandem with the facility's Meditech EHR, making workflows as seamless as possible.

Few would deny the tremendous strides the vendor community has made in recent years to advance speech recognition's effectiveness. And while the technology may be better than ever, user satisfaction varies widely. In fact, according to Wayne Crandall, president and CEO of NoteSwift, satisfaction is often directly tied to expectations.

"[A few years back] when speech recognition was really being proliferated into the EHR community, it was still a relatively new field. People were adopting it thinking it was going to be the saving grace," Crandall explains, adding that some in the industry expected natural language speaking advances to enable easy population of fields and checkboxes throughout an EHR. This expectation led to some dissatisfaction.

Alternatively, those who adopted speech recognition with the goal of simply getting narrative data into the EHR have likely been much more approving of the technology.

Regardless of expectation, Crandall says speech recognition, when leveraged in a thoughtful, strategic manner, has the potential to cut documentation time in half—an important achievement to counter physician angst over increased time commitments for EHR documentation.

Nick van Terheyden, MD, chief medical information officer of clinical language understanding with Nuance, agrees, noting that not only is speech recognition an important tool for relieving the workflow burden initially placed on physicians with the introduction of EHRs, but it has advanced in such a way that it now impacts quality. "It's moved past the tipping point and is in wide use," van Terheyden says. "[As a physician], I can spend less time looking at the computer. I can capture high-quality information, and I'm not tied to this method of keyboard and mouse. And from a quality standpoint, I get better quality notes."

The overall impact on quality is one that is still under debate throughout the industry. Outlining the dramatic shift in documentation models that has occurred over the past decade, Dale Kivi, MBA, director of business development with Future Net, suggests that the progression from using traditional dictation with manual transcription to a physician-centric documentation model can increase the potential for errors.

"On paper, one can be led to believe such a change improves the quality and efficiency of the process. In practice, usually, the opposite is true," Kivi says. "Inevitably this results in physicians looking for ways to save time, and that's when document quality really starts to suffer."

Clinical Documentation: Opportunities and Challenges
Garcia points out that physicians appreciate speech recognition because it allows them to include more patient narrative in the medical record rather than addressing only the structured data fields featured within an EHR template. Because they no longer have to turn their backs to patients to key data into a computer, physicians can be more engaged during the visit.

"Speech recognition is absolutely turning that around," van Terheyden says. "Every physician on this planet walks into work with the same ideas: 'I want to give great care to my patients; that's why I got into this industry, that's why I do what I do.'"

In an evolving regulatory landscape, the challenge to using speech recognition effectively continues to be whether structured, discrete data can be captured from free text. Heightened performance and reporting requirements have made it essential to balance the need for efficient physician workflows with the desire to capture structured data elements, circumstances that have placed many health care organizations in difficult situations.

To overcome this conundrum, Beverly Hospital adopted a hybrid approach to discrete data. For a particular procedure, structured elements exist inside the medical record to capture key data alongside a free-text template. "There are ways of playing with your EHR to enhance that functionality," Garcia says. "Not everything has to be discrete data. It becomes a balance between discrete data, free-text data, and the formatted stuff that comes in from lab and radiology."
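
To make that balance concrete, here is a minimal sketch of what such a hybrid note record might look like; the field names and structure are illustrative assumptions, not Meditech's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical hybrid note: discrete fields capture templated normals,
# free text carries the nuance, and formatted results arrive from
# lab and radiology systems.
@dataclass
class ProcedureNote:
    procedure: str
    discrete: dict = field(default_factory=dict)    # structured elements
    narrative: str = ""                             # dictated free text
    imported: dict = field(default_factory=dict)    # lab/radiology feeds

note = ProcedureNote(
    procedure="laceration repair",
    discrete={"anesthesia": "local", "closure": "simple interrupted"},
    narrative="2 cm linear laceration, edges well approximated after repair.",
    imported={"radiology": "No radiopaque foreign body identified."},
)
```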

Garcia points out that while structured EHR elements are great for handling normal findings, they are not equipped to address abnormal or nuanced findings. "What I have is discrete data for normals, which goes very quickly," he says, emphasizing that any structured data that can be easily pulled through the EHR from visit to visit must be leveraged to its fullest. "The nuanced information that is applied to make the record individual, that is done using speech recognition."

Although approaches vary by institution and provider, Garcia notes that his own documentation draws on discrete data, free text, and formatted data from radiology and the laboratory in roughly equal parts.

NoteSwift is addressing the free-text question by taking the output from medical speech recognition products and automatically populating the patient note's structured fields, eliminating the need to cut and paste narrative text. Pointing out that the promise of speech recognition in EHRs rests on its ability to minimize the documentation burden placed on physicians, Crandall notes that NoteSwift's technology virtually eliminates the EHR clicks required to complete a patient note while still ensuring that structured fields are populated.

"We did our own study, and one EHR takes 100 clicks to create a single patient note," Crandall says. "Through NoteSwift, we now can completely navigate through the entire EHR infrastructure. We can input not just narrative, but also structured information."

Nuance also is leveraging advancements in natural language processing, medical artificial intelligence, and speech recognition to address the need for structured data to support health care informatics. The vendor's clinical language understanding technology can derive patient information from free text and produce actionable data to improve patient care and streamline workflows.

As natural language processing continues to advance, Garcia believes that capturing discrete data through speech recognition will become less of an issue. The ability to capture critical data to support quality metrics and regulatory requirements will be much more easily enabled on the front end, he says.

Even in light of technological advances, Kivi cautions that numerous studies have found that physician-centric documentation models increase documentation time by as much as 60 to 90 minutes per day. "The problems caused by this shift in workflow scheme and the corresponding increased time commitment are quite notable, especially when it comes to document quality," he says. "Inevitably, this results in physicians looking for ways to save time, and that's when document quality really starts to suffer."

The bottom line, Kivi adds, is that checks and balances must be built into the greater workflow to prioritize quality. This means having a second set of eyes to edit and proofread documentation before errors have the opportunity to move downstream.

He suggests that one alternative to integrating front-end speech recognition into a certified EHR is to utilize traditional transcription—often supported by back-end speech recognition. "We have a number of large clients that use this model for the narrative components of structured reports in Epic and other EHRs," Kivi says.

In this scenario, physicians identify where they want the transcribed output to appear in the structured reports. Through an intelligent interface, the content is pushed to those designated fields upon completion. "Obviously, unlike self-type or front-end speech, it takes a while for the transcribed content to appear, but it achieves the physician's objectives of saved time while fulfilling the requirements for meaningful use," Kivi says.

Another alternative is a blended approach in which physicians dictate reports the same way they did before moving to a structured EHR. The resulting narrative documents are run through a natural language processing engine that recognizes the discrete data and segments those components to populate the structured EHR report.
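
As a rough sketch of that blended pipeline, the example below runs a dictated narrative through simple pattern matching and segments the recognized values into structured fields. The patterns, field names, and vitals chosen here are hypothetical stand-ins for what a real natural language processing engine would do.

```python
import re

# Hypothetical mapping from narrative patterns to structured EHR fields.
FIELD_PATTERNS = {
    "blood_pressure": re.compile(r"blood pressure (?:of |is )?(\d{2,3}/\d{2,3})"),
    "heart_rate": re.compile(r"heart rate (?:of |is )?(\d{2,3})"),
    "temperature_f": re.compile(r"temperature (?:of |is )?(\d{2,3}(?:\.\d)?)"),
}

def extract_structured_fields(narrative: str) -> dict:
    """Segment discrete data out of a dictated narrative for structured fields."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(narrative.lower())
        if match:
            fields[name] = match.group(1)
    return fields

note = "Blood pressure of 128/82, heart rate 76, temperature 98.6."
print(extract_structured_fields(note))
# -> {'blood_pressure': '128/82', 'heart_rate': '76', 'temperature_f': '98.6'}
```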

Van Terheyden acknowledges that success with front-end speech recognition in EHRs requires a thoughtful approach to deployment and use. "Where it's not successful, it just got dropped in," he says. "Health care is complex, and applying it in a way that makes sense to the clinician's needs requires some understanding of their workflows, their specialties, and some of the requirements of their specialties. If you do it right, the satisfaction is off the scale."

To date, Garcia notes that Beverly Hospital has been pleased with its front-end speech recognition infrastructure—specifically its impact on productivity and the quality of physician notes. "In our region, we looked at how productive our physician assistants were, and, by far, our hospital had the highest charges per hour," he says. "Not only were they the fastest, but they also were very complete. You have to be efficient. You have to capture everything you are doing, and you have to use technology to move forward."

An Evolving Landscape
Speech recognition technology continues to advance to meet the workflow needs of today's physicians and health care organizations. It's no secret that more physicians are going mobile, a trend that speech recognition technology must take into account if it is to satisfy industry needs, says Seth Flam, DO, CEO and director of HealthFusion. Mobile devices enable physicians to document more conveniently while still spending face-to-face time with patients at the bedside.

As health care becomes more patient-centric, this relational model of care delivery will become increasingly critical, Flam notes. However, there are significant challenges to bedside documentation on mobile devices. For example, Flam says EHRs and speech recognition software typically do not support mobile platforms well, and it's difficult to key data into an EHR from a mobile device.

HealthFusion hopes to address these factors through its MediTouch EHR by augmenting fingertip touch with iPad dictation to improve the chances that provider chart notes can be entered in real time at the point of care.

Nuance is exploring innovative ways to make speech recognition smarter in order to support clinical documentation improvement initiatives, according to van Terheyden. For instance, if a cardiologist is documenting using speech recognition, the application boasts specialty-specific intelligence to analyze what is being said.

Consider the challenges physicians face when attempting to meet the documentation requirements of multiple and ever-fluctuating regulatory initiatives.

For example, from a coding perspective, congestive heart failure must be categorized as either acute or chronic as well as systolic or diastolic. If these descriptions are not included in the documentation, the organization could lose justification for the severity of the illness or complex care.
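
A toy version of such a specificity check, assuming hypothetical phrase lists rather than Nuance's actual analysis, might flag a heart failure note that omits those descriptors:

```python
import re

# Hypothetical specificity rule: heart failure documentation should state
# acuity (acute/chronic) and type (systolic/diastolic) to support coding.
HEART_FAILURE = re.compile(r"\b(congestive heart failure|chf|heart failure)\b", re.I)
ACUITY = re.compile(r"\b(acute|chronic)\b", re.I)
HF_TYPE = re.compile(r"\b(systolic|diastolic)\b", re.I)

def chf_specificity_gaps(note_text: str) -> list:
    """Return suggested clarifications for an underspecified heart failure note."""
    gaps = []
    if HEART_FAILURE.search(note_text):
        if not ACUITY.search(note_text):
            gaps.append("Specify acuity: acute or chronic?")
        if not HF_TYPE.search(note_text):
            gaps.append("Specify type: systolic or diastolic?")
    return gaps

print(chf_specificity_gaps("Patient admitted with CHF exacerbation."))
# -> ['Specify acuity: acute or chronic?', 'Specify type: systolic or diastolic?']
```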

"Instead of training the physician to use technology, we train the technology to learn the physician," van Terheyden explains, adding that the application not only records the documentation but also analyzes it and makes suggestions for improvements.
It's another step toward what the industry hopes is a blissful marriage between speech recognition and the EHR.

— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications covering everything from corporate and managerial topics to health care and travel.