
August 2, 2010

Selling Physicians on the Value of Speech Recognition
By Lisa A. Eramo
For The Record
Vol. 22 No. 14 P. 14

To avoid a lackluster reception, experts recommend several steps healthcare organizations can implement to obtain staff buy-in.

At Rockingham Memorial Hospital (RMH) in Harrisonburg, Va., all the tell-tale signs to justify using speech recognition were present: a failing end-of-life legacy dictation system, increasing dictation demands, escalating costs, and overall physician dissatisfaction with transcription services. The hospital knew it needed to make the switch, but first it needed to figure out how to get physicians on board.

“Physician buy-in is critical to the success of any clinical documentation process,” says Mike Rozmus, chief information officer (CIO) and vice president of information services at RMH. In 2008, he organized a transcription task force to examine alternatives to what he says was a failing transcription model that included a combination of insourced and outsourced services.

RMH achieved rapid adoption of speech recognition by focusing on physicians’ needs and cultivating buy-in throughout the rollout, says Rozmus. “We’ve got a much greater satisfaction with our transcription and dictation services. The turnaround time has exceeded their expectations, quality has improved and, overall, they have been very pleased with the results,” he says. Today, 90% of the hospital’s dictation is speech recognized.

There are many reasons healthcare organizations should consider speech recognition, particularly if they are looking for ways to ease the burden of an EMR implementation, says Peter Durlach, senior vice president of marketing and product strategy for Nuance Communications. Speech recognition relieves physicians of the cumbersome need to point and click through structured EMR templates to convey a complicated patient narrative, he says.

“Speech-recognition technology has always been looked at as a great way to reduce costs and improve turnaround time. With EMRs, it has really transitioned to being a fundamental element,” he adds.

Because of its capacity to enhance physician satisfaction, speech recognition can also help hospitals achieve meaningful use, says Durlach. “If you don’t give physicians the tools and choices to make the transition from the traditional way of documenting to an EMR, they’re not going to use [the EMR]. You can’t get meaningful use if there’s no use at all,” he says.

Know Your Options
The first step toward achieving physician buy-in is to understand your choices and their associated pros and cons. There are two types of speech-recognition technology:

Back end: A physician dictates into a digital dictation system that records an audio file. The file is then sent to a centralized system, where the speech-recognition engine converts it to draft text that is subsequently edited by a transcriptionist.

Front end: A physician dictates into a speech-recognition engine in which words are recognized and displayed immediately after they are spoken. The dictator is responsible for editing and signing off on the document.

“The benefit of using a front-end solution is that physicians have complete control of the report-generation process from beginning to end,” says Joel Fontaine, vice president of business development for M*Modal Technologies. “However, not all physicians have the time or desire to complete and sign off on the document and instead would find the option of sending the document to a medical transcription editor for completion beneficial. Back-end use is beneficial because physicians are blind to knowing they are using the technology. They continue their normal dictation style, and the technology works in the background.”
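In rough terms, the two models differ in who edits and signs the recognized text. The short Python sketch below is a vendor-neutral illustration only; every function and field name is hypothetical and does not represent Nuance’s, M*Modal’s, or any other vendor’s actual software.

# Vendor-neutral sketch of the two workflows. Every name here is hypothetical
# and exists only to illustrate who edits and signs the recognized text.

from dataclasses import dataclass


def recognize_speech(audio: str) -> str:
    # Stand-in for a speech-recognition engine converting audio to draft text.
    return f"<draft text recognized from {audio}>"


@dataclass
class Report:
    text: str
    dictator: str
    signed_by_dictator: bool = False


def front_end_workflow(audio: str, physician: str) -> Report:
    # Text appears as the physician speaks; he or she self-edits in real time
    # and signs off on the finished document.
    report = Report(text=recognize_speech(audio), dictator=physician)
    report.text = report.text.replace("<draft", "<physician-corrected")
    report.signed_by_dictator = True
    return report


def back_end_workflow(audio: str, physician: str) -> Report:
    # Audio is recorded first, recognized on a central server, then edited by
    # a medical transcription editor before returning for signature.
    report = Report(text=recognize_speech(audio), dictator=physician)
    report.text = report.text.replace("<draft", "<editor-corrected")
    return report


if __name__ == "__main__":
    print(front_end_workflow("ed_note.wav", "Dr. Example"))
    print(back_end_workflow("ed_note.wav", "Dr. Example"))

In the front-end path, the physician owns the corrections and the signature; in the back-end path, a transcription editor cleans up the draft before it returns for signature.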

Although front-end technology has traditionally been geared toward outpatient physician practice settings, there has been a dramatic increase in hospital adoption of this model in light of the HITECH Act and EMR incentives, says Durlach. Nuance has noticed an increase in adoption of its front-end speech-recognition product over the last year as well as an overall increased interest in speech recognition in general over the last three years, he notes.

Identify the Barriers
Experts say although physicians are beginning to understand the benefits of speech recognition, some hospitals still have an uphill climb in terms of getting them to actually use the technology on a daily basis—or at all.

Some physicians who were frustrated with older versions of speech-recognition technology that were plagued with inaccuracies may be reluctant to use today’s highly developed and sophisticated solutions simply out of fear that they will have a repeat bad experience, says Nick van Terheyden, MD, Nuance’s chief medical information officer (CMIO), clinical language understanding, and a member of the Medical Transcription Industry Association board of directors.

Another barrier to adoption is that physicians must take the time—often without compensation—to develop new competencies such as cognitive-oral skills, says Michael Bliss, MA, a speech-recognition trainer and national consultant. Bliss, who has more than a decade of experience training hundreds of physicians nationwide, says these skills allow a doctor to move inside a document or program solely using his or her voice. Unlike traditional point-and-click methods, speech recognition uses specific vocal commands that trigger certain actions. Physicians using front-end recognition must also learn to self-edit, which includes noticing mistakes when they occur and fixing them in real time, he adds.

Devise Buy-In Strategies
What can hospitals do to overcome these barriers and achieve buy-in? Experts say there are plenty of ways to usher in the new technology while minimizing workflow disruptions and maximizing benefits.

Consider a Pilot Program
In December 2008, RMH piloted Nuance’s front-end solution with three of its emergency department (ED) physicians. It chose the ED because of its high patient volume (the department processes more than 75,000 patients annually) and because it generates more than 30% of the hospital’s total annual transcription. ED physicians, many of whom have high computer literacy, were also interested in using the technology to improve quality and reduce escalating costs.

During the eight-week pilot, RMH learned that each of the three participating physicians had a distinctly different workflow. “One documented after discharging each patient; one documented on multiple patients concurrently, keeping various documents open at a time; and one documented in batch after several patients were seen,” says Rozmus. The pilot highlighted the fact that the speech-recognition technology must be able to adapt to each workflow to increase the likelihood that these and other physicians would use it, he says.

Front End vs. Back End
Experts say there often isn’t a one-size-fits-all solution, particularly when hospitals are in a hybrid state and employ physicians with varying degrees of computer literacy and willingness to change.

Some users are better candidates for front-end speech recognition, while others could better adapt to back-end technology, says Rozmus. For example, the radiologists at RMH use back-end technology because it integrates better with the PACS and provides a more conventional dictation workflow. Good candidates for front-end solutions are those who have predictable dictation patterns and work types, such as ED physicians, he adds.

Bliss says he has worked with all types of physicians, including those who are technologically savvy and others who have never turned on a computer. “Some physicians are embarrassed when they need to learn a new technology, especially when they can’t type quickly,” he says, recalling one physician with whom he worked one on one for 13 hours, providing basic computer literacy training before moving on to the actual speech-recognition technology.

Other physicians may have the technological skills but aren’t interested in learning about speech recognition, Bliss adds. “It all depends on whether you’re willing to change. Some people are creatures of habit and don’t want to learn new things; other people are just more open,” he says.

Give Physicians a Choice
This is extremely important and a crucial part of the buy-in process, says Fontaine. M*Modal offers physicians the choice—at the time of dictation—of whether to self-edit the speech-recognized document or send it to a transcription editor. This hybrid workflow, part of a single integrated system, is unique because it caters directly to the physician and empowers him or her to make the decision, he says.

Nuance also offers physicians a choice at the time of dictation through its progressive blend model that includes both back-end and front-end options. The company started to offer this model when HITECH began spurring EMR adoptions. Because many providers are currently in a hybrid state, they can use the progressive blend model to migrate from back-end to front-end dictation commensurate with their EMR deployment and other specific dynamics within the organization, says Durlach.

“It’s the carrot vs. the stick. It’s giving them that choice that really allows for the success of the implementation. If you force them to use it, it won’t work,” adds van Terheyden.

At one of Nuance’s client sites, physicians using the blended approach began with back-end recognition and by the third day of usage were on board with using the front-end technology, says van Terheyden. “With this continuum of back end to front end—and offering the choice—you don’t force physicians into anything,” he notes.

RMH uses Nuance’s progressive blend model in its ED because it realized that certain workflows are better suited for front-end vs. back-end solutions, says Rozmus. “We understood that in a busy ED such as ours, more than one method of documentation needed to be available,” he explains. ED physicians have the option of choosing front-end dictation at a computer workstation or back end via phone, depending on the complexity of the patient case. In the hospital’s fast-track ED (an area dedicated to routine and minor cases), physicians are also able to use dictation templates that were created for added efficiency.

Provide Adequate Training
Today’s robust speech-recognition solutions oftentimes learn and adapt to physicians as they dictate, minimizing the need for extensive training. However, training is still typically required to enroll users in the system and teach them how to incorporate the technology into their workflow.

M*Modal’s philosophy is to further reduce the training burden on physicians by getting them to buy into the technology. “Our approach is to not ask the physician to do anything; physicians don’t have to change the way they dictate,” Fontaine says, adding that the technology automatically adapts to users’ dialect, speed, and volume. Dictations are also compared with aggregate data that includes users with similar speech profiles to enhance accuracy.

Bliss says although vendors generally provide baseline training, many don’t go into enough detail. On average, he provides four hours of training per physician. This includes follow-up observations that assess cursor mobility (how well physicians can move around a document using their voice), how quickly and consistently physicians can correct recognition errors, and how well they use shortcut commands. One year after adopting speech recognition, Bliss says, most physicians should be using at least 50 (and in some cases as many as 200) commands effectively to get the most out of the technology.

For ease of use, consider providing physicians with cheat sheets (eg, laminated cards) that they can either keep in their lab coats or post at their workstations, says Bliss. These cards should include widely used commands and shortcuts that can make life easier.
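As a rough illustration of what such a command vocabulary does, the short Python sketch below maps a few spoken phrases to editing actions; anything that doesn’t match a command is treated as dictated text. The phrases and handlers are invented for this example and are not any product’s actual commands.

# Hypothetical mapping of spoken commands to editing actions, the kind of
# shortcuts a laminated cheat sheet might list. Phrases and handlers are
# invented for illustration only.

COMMANDS = {
    "new paragraph": lambda doc: doc + "\n\n",
    "scratch that": lambda doc: " ".join(doc.split()[:-1]),  # drop the last word
    "insert normal chest exam": lambda doc: doc + " Chest: clear to auscultation bilaterally.",
}


def handle_utterance(document: str, utterance: str) -> str:
    # Apply a matching voice command; otherwise append the utterance as dictated text.
    action = COMMANDS.get(utterance.lower().strip())
    if action:
        return action(document)
    return f"{document} {utterance}".strip()


if __name__ == "__main__":
    doc = ""
    for spoken in ["Patient presents with cough.", "insert normal chest exam", "scratch that"]:
        doc = handle_utterance(doc, spoken)
    print(doc)  # -> "Patient presents with cough. Chest: clear to auscultation"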

RMH’s physicians received plenty of one-on-one training tailored to their specific workflow needs. For example, separate and tailored trainings were provided for radiologists, pathologists, and attending physicians using back-end recognition and ED physicians using front-end recognition.

Ensure Accuracy
Before buying into speech recognition, physicians want to know their dictations are going to be accurate, says Bliss. One way to help ensure this is to create a personalized language model (PLM) for each physician. This proprietary model, which Bliss designs using previously dictated documents that he inputs into the speech engine, helps produce the results physicians are looking for because it takes into account unusual physician names, specific terminology, geographic-specific insurance company names, and more.
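The inner workings of a PLM are proprietary, but the general idea of mining prior dictations for vocabulary a stock engine would miss can be sketched in a few lines of Python. The example below is a simplified, hypothetical illustration, not a description of Bliss’ actual process or of any vendor’s software.

# Simplified illustration of harvesting site-specific vocabulary from prior
# dictations. Real personalized language models are proprietary and far more
# sophisticated; this only collects recurring terms the base vocabulary lacks.

import re
from collections import Counter
from typing import Iterable, List

BASE_VOCABULARY = {"the", "patient", "presents", "with", "reviewed", "of", "findings"}


def harvest_custom_terms(prior_documents: Iterable[str], min_count: int = 2) -> List[str]:
    # Count words that appear in prior dictations but not in the stock vocabulary.
    counts = Counter(
        word
        for doc in prior_documents
        for word in re.findall(r"[a-z][a-z'-]+", doc.lower())
        if word not in BASE_VOCABULARY
    )
    return sorted(term for term, n in counts.items() if n >= min_count)


if __name__ == "__main__":
    docs = [
        "Patient of Dr. Alvarez-Smith presents with esophagogastroduodenoscopy findings.",
        "Dr. Alvarez-Smith reviewed the esophagogastroduodenoscopy with the patient.",
    ]
    # Recurring unusual names and specialty terms would be added to the
    # physician's personalized vocabulary before enrollment.
    print(harvest_custom_terms(docs))  # -> ['alvarez-smith', 'dr', 'esophagogastroduodenoscopy']

In practice, the harvested terms would be reviewed and then loaded into the engine’s custom vocabulary before the physician’s enrollment.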

Bliss says although he has no affiliation with Nuance, his consulting business focuses on PLMs that can be input into the company’s front-end solution. His training sessions also focus on a more in-depth look at how to use the product, which he says is helpful in terms of obtaining buy-in and ensuring long-term use and retention. “I don’t just hand them a microphone, tell them how to use it, and say good luck. I believe in overtraining them so they know what’s happening inside the software,” he says.

Simply showing physicians the intricacies, shortcuts, and power of speech recognition is sometimes enough to get them to buy into using the technology, says Bliss. His training technique takes into account the complexity of human speech as well as information he has gleaned from speech-language specialists, programmers, and microphone developers. Training sessions begin with an overview of the technology and what it can do, followed by two voice enrollments. During the first, six-minute enrollment, physicians read into the dictation device a passage about speech recognition itself. Next, Bliss inputs the PLM into the speech engine. During the second, 10-minute enrollment, physicians read aloud another passage, this one about the journey toward successful use of the technology.

Bliss says accuracy rates improve significantly after the second enrollment, bolstering physician support of the technology. “This is the psychological hook that shows doctors the technology is capturing exactly what they say,” he adds.

Address Assumptions
Although some physicians may harbor concerns based on previous versions of speech recognition, explain that the technology has come a long way in terms of accuracy and adaptability, says van Terheyden. “The technology is so much more advanced than what it once was,” he explains.

Get Clinical Leadership On Board
If clinical leadership doesn’t see the value of speech recognition and encourage physicians to adopt it, the effort will likely not succeed, says Durlach. “Clinical leadership has to care,” he says.

RMH created a project charter that committed key internal and external stakeholders to its goals, says Rozmus. “We established a project team that was motivated and committed, and we overcommunicated progress and expectations throughout the implementation,” he says. “We sold the solution, listened to feedback, delivered on our promises, and held our vendors accountable.”

Provide a Feedback Loop
Allow physicians several avenues for providing feedback and suggestions for improvement, says Rozmus. For example, at RMH, physicians can contact the CMIO, CIO, HIM director, or physician leadership.

Overcome Technology Hurdles
At RMH, ED physicians using front-end recognition have dual-monitor workstations to display the EMR and the speech-recognition tool, meaning they can look at lab results or radiology interpretations while interacting with the speech-recognition tool.

Identify Physician Champions
“They become the standard bearer for excellence,” says van Terheyden. At RMH, the CMIO (who was also an ED physician) was part of the pilot program and also provided support and advice to colleagues during the early stages of adoption.

— Lisa A. Eramo is a freelance writer and editor in Cranston, R.I., who specializes in healthcare regulatory topics, HIM, and medical coding.