
February 19, 2007

Shifting Gears — MTs and Speech Recognition
By Selena Chavis
For The Record
Vol. 19 No. 3 P. 12

Slowly but surely, speech recognition is becoming more prominent in healthcare organizations across the country. With that in mind, it’s vital that medical transcriptionists be ready to make necessary changes to their skill set.

A long time in the making, speech-recognition technology is finally becoming a viable approach to medical transcription, many HIM professionals and software vendors believe. Improvements in the technology have steadily expanded computers' ability to understand voice commands, and the benefits achieved through increased productivity cannot be denied.

“We are probably realizing about 35% to 40% increases in productivity,” says Carol Weishar, MBA, RHIA, director of medical information and transcription officer with Milwaukee-based Advanced Healthcare. A group of 14 clinics comprising the largest physician-led, multispecialty clinic group in southeastern Wisconsin, the organization began transitioning to speech-recognition technology in 2004 and is currently approximately 80% complete with the process.

Software vendors agree that, typically, organizations will see increases in productivity from 25% to 50% once the introductory phase is complete and transcriptionists become comfortable with the new technology. “We certainly see extremes on both ends of the spectrum,” notes Jay Vance, CMT, technical and special projects consultant with the American Association for Medical Transcription (AAMT) and president and CEO of Vance Digital, a medical transcription service company. “The question often lies in whether HIM is setting up reasonable goals and benchmarks.”

Recognizing advances in speech-recognition technology over time, Connecticut-based Gartner Group, a technology industry research firm, listed speech recognition as one of the top 10 technologies likely to have the greatest impact on business in 2007.

“Speech recognition use in our organization has tremendously increased the value of dictated and transcribed reports by improving turnaround times and allowing us to provide relevant patient information much more efficiently than ever before,” says Jefferson Howe, MSA, CMT, assistant director of HIM at Maine Medical Center. “As a result, the transcribed report is an expected component of the electronic health record [EHR] and the medical transcriptionist [MT] is more closely linked to the process and ultimately a more valuable asset to the entire healthcare documentation process.”

While the use of speech-recognition solutions for transcription is on the rise, experts suggest that the transition phase will be here for a while. “Transcription as we know it is not going away in the next 24 months,” emphasizes Don Fallati, senior advisor at Dictaphone, a division of Massachusetts-based Nuance. “We have a number of clients at 50% speech recognition, but very few [organizations] are operating at 100%.”

Experts agree that the protracted transition is influenced by numerous factors, including the learning curve associated with new technology as well as past perceptions and resistance by some traditional medical transcription professionals.

Overcoming Resistance
According to Ben Chigier, CEO and chairman of speech-recognition vendor eScription, many transcription professionals have biased perceptions about new technology, which cause incorrect assumptions about its role in the broader picture of the profession. “One thing that has happened in the last 10 to 20 years is that speech recognition has been oversold and underdelivered,” he says, adding that often, clients were dissatisfied with the outcomes. “People got disappointed because their expectations were set too high, but fortunately, that is now changing.”

Chigier notes that software vendors frequently must help professionals overcome two extreme views about speech-recognition technology—either that it’s perfect and MTs are no longer needed or that it’s useless. “We have to find a balance between these two views,” he says.

Compounding unrealistic expectations about new technology is a certain amount of skepticism and fear on the part of MTs that “this will replace me,” Fallati says. “Typically, the HIM professional is facing a range—some degree of interest and some degree of skepticism.”

Fallati suggests that organizations can achieve the least amount of resistance from MTs by involving them in the decision-making process. Chigier concurs, noting that “it’s important to communicate that they have a very critical role in producing documents in conjunction with the speech-recognition technology. It’s really important to recognize that this is a significant change for the medical transcriptionist.”

While industry professionals agree that, fundamentally, the responsibilities of an MT using speech-recognition technology don’t change, the mindset with which they approach the work changes dramatically. “The primary difference is that [the MT] becomes an editor rather than a typist,” Fallati notes. “It’s a slightly different skill set.”

Editing speech-recognized drafts is a brand-new behavior pattern for the MT, as opposed to creating a transcribed report from scratch, says Howe. “One has to listen differently, often ahead of the cursor in order to make adjustments quickly and efficiently,” he notes, adding that the best editors are those who have the ability to read ahead and take in the text quickly.

In his experience, Vance suggests that critical thinking skills are essential to making the transition. “Is what I’m seeing as well as what I’m hearing matching up?” Vance says, adding that MTs need to be able to look at the whole context of a report and make a judgment as to whether all the pieces make sense.

Differentiating the process from speed reading, where one learns to omit words and focus on subjects and verbs, Howe says each word recorded by speech-recognition technology needs to be viewed, recognized, and placed appropriately in context. “This is challenging and only for those with strong knowledge of medical terminology and a keen focus on details,” he says.

A willingness to learn something new is also crucial to making the transition phase smoother, according to several HIM professionals. Weishar expects Advanced Healthcare’s transition will stop short of 100% use, recalling significant resistance to speech-recognition technology from some MTs. “There are some of these guys who won’t ever do it. They are too set in their ways and disorganized with what they do,” she says.

“I believe that in most cases, a good transcriptionist will make a good editor,” Vance says. “Not everyone may be cut out for editing, though. If it’s something you don’t enjoy doing, you’re not going to be as productive.”

Realistic Expectations
Don’t expect productivity gains to reach the 50% mark during the first month—or the second or third, for that matter, say many industry professionals. “Clearly, there is going to be a productivity dip when it is introduced,” Fallati says. “People tell us that they can train someone to [regain their productivity margins] in a few weeks.”

Vance concurs, emphasizing that organizations should not underestimate the time and energy it will take to make the transition to speech-recognition technology successful.

In the case of Advanced Healthcare, Weishar says it took three to four months before the organization realized measurable benefits from the transition. The organization trained its transcriptionists in phases, beginning with a group that included a balanced mix of high producers and low producers. “We deliberately didn’t put all of our high producers in there,” Weishar recalls, adding that with the first group, there was a struggle to get production up to par. After working out the kinks of the training process with several groups, Weishar says the organization has witnessed improvements in both productivity and turnaround times, with the last group making gains in just a few weeks.

Appropriate training is the key, says Fallati. “We typically say that in terms of training, this is just like exercise. You can’t do just a few jobs a day and expect to get proficient,” he says. In the case of Dictaphone, management recommends that transcriptionists spend three to four hours per day with the new product in a “real world” setting. Fallati suggests coordinating the training process into the regular workflow, which will allow an adequate amount of volume to be transcribed via speech-recognition methods. “You can’t just do only those jobs that make sense at the time,” he says.

Acknowledging that the primary benefits of speech-recognition technology rest in productivity gains, Chigier warns that organizations should not solely focus on the production aspect of the job. “It’s most important to produce a high-quality document that is accurate,” he says. “Transcriptionists are often so caught up in production work that not enough time is spent on training.”

Experts also agree that production margins will not increase at the same rate with all transcriptionists. Weishar notes that in her experience, the transcriptionists who were traditionally the organization’s high producers showed the lowest increases—only 10% to 15%. “Where we’ve really made it up is with the lower to middle-of-the-road producers,” she says.

Howe warns that he has witnessed some MTs translating the training process into an overediting mode that won’t achieve the desired outcomes of efficiency. “Instead, we need to balance the results of the recognition with a consistency of editing that is focused on efficiency and effective documentation,” he suggests.

Offering a real-world example, Howe notes that if a comma is recognized where a verb was dictated, as in “heart sounds, normal,” and no meaning has changed, transcriptionists at Maine Medical Center leave it as recognized and move on. “In our organization, we have coined this as a ‘Chevy’ version of an edited report rather than a ‘Cadillac’ version with complete sentences and punctuation,” he says. “The providers are trained that they will get more of what they say and less of what they want to say because this level of editing is simply not cost-effective.”

Howe says the organization also emphasizes consistency in editing because if the team doesn’t edit in unison, the technology will not be able to improve its results due to a lack of understanding as to what is preferred. “In addition, we work to orient new physicians to the fact that they will be speech recognized and use it as a ‘talking point’ to encourage better dictation habits,” Howe says, adding that the Dictation Best Practice Tool Kit from AAMT helped his organization facilitate that dialogue.

With the advent of EHR systems, speech-recognition technology, and the vast resources available over the Internet, Howe believes the scope of medical transcription is broadening to the point that traditional forms of training will eventually not be adequate. “We simply can no longer afford to train on the job and therefore are looking for RMT [registered medical transcriptionist] credentialed candidates when doing a position search,” he says. “Job responsibilities come with higher expectations, are more technical and systems-based, and require a heightened level of independent problem solving.”

Considerations for Compensation
Under debate is how best to pay transcriptionists a fair and equitable salary for editing work, especially in the training phase when production levels dip. “I believe that it is reasonable that transcriptionists who are learning this new skill should be able to share in the financial benefits,” Vance says. “At least to begin with, most MTs will be better off financially if they are paid by salary or salary plus incentive.”

Weishar notes that Advanced Healthcare is currently set up under a base pay plus incentive structure. Expecting that transcriptionists would not meet their incentives during the training process, the organization averaged each employee’s bonus from the previous months and paid them at that rate for the first two months of the transition. “Medical transcriptionists are hard to find,” she says. “You don’t want them to get mad and leave.”

Chigier concurs, adding that organizations would be wise to implement a compensation strategy during the training process that motivates MTs to learn the tools and gives them adequate time to do it.

Vance emphasizes that compensation becomes a major issue in this process because, depending on the compensation structure, increases in productivity could mean decreases in pay. “If compensation is cut by 50%, what happens if productivity does not double?” he asks.
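To make the arithmetic behind Vance’s concern concrete, here is a minimal sketch using hypothetical per-line rates and daily volumes; the figures are illustrative assumptions, not drawn from the article or from any particular pay plan.

    # Hypothetical break-even arithmetic for edited-report pay (illustrative figures only)
    typing_rate = 0.10     # assumed pay per line when typing from scratch, in dollars
    editing_rate = 0.05    # assumed per-line rate after a 50% cut for edited work
    lines_per_day = 1000   # assumed daily line count under traditional transcription

    baseline_pay = typing_rate * lines_per_day    # 100.0 dollars per day
    required_lines = baseline_pay / editing_rate  # 2000 lines needed just to break even
    gain_needed = required_lines / lines_per_day  # 2.0: productivity must double

    print(f"Break-even requires {required_lines:.0f} lines per day, "
          f"a {gain_needed:.1f}x productivity gain.")

Under these assumed numbers, anything short of a doubling in output leaves the MT earning less than before the rate change.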

Acknowledging that this is a quandary faced by many HIM managers when it comes to staff considerations, Fallati suggests two solutions. In larger organizations, it often means there is more time to deploy personnel to other tasks. In the case of an organization where increased productivity means that workloads will no longer justify the need for current staffing, he suggests “that’s part of the payback” for the investment in the technology. “Every HIM manager is being asked to cut costs,” he says.

— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications covering everything from corporate and managerial topics to healthcare and travel.


From the Front Lines: One Transcriber’s Perceptions of Speech-Recognition Technology
“A step in the right direction” is how Marion Rathbone, master medical transcriptionist (MT) with Maine Medical Center, describes the transition from traditional medical transcription to the use of speech-recognition technology. “A great benefit is that it is not as physically demanding on the hands and arms. My production has also increased,” notes the 30-year medical transcription veteran, who found the transition to the new technology went more smoothly than anticipated.

Familiar with traditional dictators, Rathbone found that speech recognition requires more focused concentration and emphasis on the visual word than traditional transcription. “The most difficult adjustment has been trying to be mouse-free and learning the shortcuts of the system,” she says. “As long as you have a solid computer background, the transition should not be difficult.”

Of particular concern for some MTs is whether speech-recognition technology will impact current pay rates. “Our current method of compensation—that is hourly wage plus incentive—is fair, provided there is consideration for length of service,” Rathbone says. “I know some places pay by production, but I am not familiar with the details of those plans.”

In making the transition, Rathbone believes she has expanded her skill level and ability to contribute to the clinical process. “At the end of the day, I have a greater sense of accomplishment,” she says. “MTs have to step up to the plate and be willing to accept the challenge.”

— SC