The Proof Is in the Proofing
By Susan Chapman
For The Record
Vol. 29 No. 11 P. 20
Despite the improvements made in speech recognition technology, a robust quality assurance effort remains essential.
Speech recognition technology has come a long way, especially in recent years. For example, the time it takes to train speakers has been reduced, the cloud has become a viable option, and the technology's "learning" capabilities continue to expand. Nevertheless, it would be foolish for health care organizations to take speech recognition at its word, so to speak. Whether back- or front-end, the technology requires a robust editing process to avoid potential harm to patients and the organization's bottom line.
Advances in Speech Recognition
Over the last two or three years, speech recognition has improved significantly, according to Jonathon Dreyer, senior director of solutions marketing at Nuance. "The underlying technology that has been driving speech recognition's move to the cloud has improved the technology from an accuracy and physician experience perspective," he explains. "Using the cloud, Nuance has ways of making our users more productive, and we have more technology and insights that can help us address users that may need targeted education or training."
In the past, most speech recognition software was installed locally on a user's workstation. At the site level, there was a central server where the speech recognition processing occurred. When a new version of the software was released, local servers and workstations would be updated, typically with some lag time due to resource availability and deployment schedules.
As EMRs have moved to web-based and mobile applications, some larger speech recognition providers have followed suit. "Dragon Medical is now cloud based, and everything is managed through a single secure cloud platform," Dreyer says. "This enables us to offer access to the latest technology, which is immediately available to the user as updates are released. This means less work for IT staff and less hassle for clinicians. Additionally, the technology uses the latest in machine learning and artificial intelligence, which significantly improves system responsiveness and accuracy, as it's adaptive and continues to get more accurate over time."
Beyond more accurate speech recognition, the cloud-based platform delivers computer-assisted physician documentation capabilities that offer real-time advice at the point of care. "This further improves documentation quality and ensures that the most appropriate information and pertinent details are captured as early in the workflow as possible," Dreyer says.
While Dragon Medical users no longer have to spend time training a voice profile or calibrating the software itself before using it, Nuance does recommend that users undergo brief customized training based on their workflow and documentation needs.
"Performance data … indicate that those who have gone through this optimization training are 33% more productive than clinicians who have not," Dreyer says. "The biggest challenge that clients face is clinician availability for this brief training due to busy schedules, internal coordination challenges, underemphasis on change management, and lack of executive sponsorship. However, our customers who have adopted Nuance's recommended best practices have successfully overcome these challenges and achieved a greater than 80% adoption rate and a 99% satisfaction rate."
Dreyer says the technology's effectiveness is influenced by how it is deployed. "It's like any software," he says. "There are a host of other efficiency gains that users can achieve. For that optimization, it really comes down to the training portion for the user: using voice to navigate the EMR or automate tasks. You can use voice to do more than simply generate speech-to-text. How do you make use of the other 80% of the software's capabilities for the user? Often with new technology we only scratch the surface of how to use the tools, but once we're properly educated, then we can use them more efficiently."
In terms of the editing process, Dragon Medical users must edit their notes should they change their minds about what they dictated or need to correct any errors that arise during the dictation process. However, Dreyer says that year-over-year analysis of usage data reveals that clinicians are spending 37% less time editing their notes, while capturing up to 20% more relevant content. "This creates a more complete patient story and translates into more time available for clinicians to care for patients, a quicker and more efficient capturing of the patient narrative, higher quality notes, and improvements in clinical documentation flow," he says.
Challenges on the Back End
While acknowledging the improvements made in speech recognition technology, Patricia King, manager of HIM transcription at Tucson Medical Center in Arizona, emphasizes that clinical documentation in this setting continues to face hurdles, especially during the back-end transcription process.
"In order for the speech recognition engines to learn the provider's speech patterns, every person on the transcription team must follow the same rules as to what they fix and how they fix it. Otherwise the speech engine can degrade," she explains. "For example, when an organization I recently worked with first began using a speech recognition product, our team all used the same rules, and we got good transcription drafts. Over time, though, as people drifted from the agreed-upon process and staffing wasn't available to identify the problems, we saw our speech engine-generated drafts degrade and require a lot more editing."
Both King and Dreyer acknowledge that speech recognition software has the capability to learn over time. Take, for example, when the system inserts "patient" while someone is actually dictating "the patient." Over time, the system can learn to insert "the" as transcriptionists repeatedly correct the error. "The things you have to look out for are homonyms: to, two, and too, for instance," King notes. "There are a number of medical words that sound the same. Also, sometimes people will insert commas or remove them. Correcting punctuation can throw the system off. If the punctuation is important, it needs to be corrected, but sometimes it need not be."
Stephanie Peck, president of Peachtree Transcription Associates, says vendors must stay on their toes. "With speech recognition you must inspect what you expect. Users have the power to make the changes within the speech engine, and it will constantly learn as long as the vendors are examining and tweaking the system, essentially performing system maintenance," she says. "We're laying the groundwork for what to expect from the vendor's system, and we have to have someone on the vendor side who is inspecting the automation."
Not every physician may be the ideal candidate for a speech recognition system. "There is no one-size-fits-all solution. Every speech recognition system is different, and the complexity of the health care area plays a role in how well a system performs," according to Jason Peck, sales director at Peachtree. "Speech recognition is great with the easy stuff. It speeds up physicians' turnaround time. When it comes to certain specialties, you can find a lot of errors. In such circumstances, it is more beneficial for the dictation to go through straight transcription."
Jason Peck believes physicians have been forced to accept speech recognition in order to meet meaningful use provisions. Consequently, if they haven't adopted speech recognition as their own, then their documents can contain a host of errors. "For the most part, time and technology are the consistent barriers that prevent physicians from improving the process and making it work for them," he says.
King points out that the quality of the final product has a great deal to do with the system, the selection of which often depends on cost. "There are many places that would like to use speech recognition systems but simply can't because of the cost," she explains. "For example, if an organization is smaller, they likely aren't going to be able to obtain an expensive system and will choose a front-end system that is more economical. They also may not have the staffing to ensure that editing is done properly. It can be difficult to get clinicians to review drafts if they don't have the time in their day. They sometimes will just sign off on the draft without editing it. If drafts aren't edited, then it affects how the system can learn. Ultimately then, such an expense becomes a waste of time and money.
"Some of the larger vendors have an initial cost, then a cost/license per user," King continues. "The biggest vendors offer access on a subscription basis in the cloud. Then they charge based on each provider's speech profile per month as long as they use it. So it's easy to see why some smaller organizations choose to go with cheaper front-end systems."
King believes a hybrid system featuring partial dictation with manual input can be a useful alternative. While the EMR gathers health information, the narrative is necessary in order to provide a full patient profile. To help address this issue, some EMR packages allow users to click on a spot in the record where they would like to dictate. The EMR records it, and the voice file is transcribed. This text can be edited before it is finalized with a signature.
"Still, physicians may not edit errors before signing off on the record. This produces an inaccurate record, and such poor results can impact coding, billing, reimbursement, and, of course, patient safety," King says.
If organizations are able to subscribe to a technology with natural language processing, providers have more flexibility. "Subscription services have better technology because of the income stream. They can also have tools to help the coding environment, as well as the physician's specificity. For instance, doctors can dictate a condition, such as renal failure, and then get prompted with a drop-down menu that provides the ICD-10 codes, which can help improve accuracy from the beginning of the process," King says. "However, if you're buying a product rather than subscribing to a system, you may or may not have access to that level of sophistication."
King believes the degree to which back-end technology can be challenging depends on the skill sets and innate talents of the transcriptionists. "And it helps tremendously if the transcriptionists can also have the voice files," she notes. "In order to do proper quality assurance, the editor needs to hear what was actually said so that it aligns with the draft. Some software vendors will provide this, but others will not."
Quality Assurance Best Practices
King says quality assurance (QA) efforts can vary depending on who created the documentation. Because QA is so important, and to address the absence of QA programs in the documentation process, the Association for Healthcare Documentation Integrity (AHDI) recently updated its "Healthcare Documentation Quality Assessment and Management Best Practices" toolkit to include information and resources from its "Clinician-Created Documentation: Reinstating Quality Assurance Programs to Safeguard Patients and Providers" resource kit.
"And it's not just the AHDI that is taking this step. There are a variety of organizations that are putting QA in place," King says. "The difference when you do QA on a note that has been created by a provider as opposed to one created by another staff member is you have to be more lenient with a provider in terms of grammar, spelling, and punctuation. It's important to have the backing of the administration, chief medical officer, and others who are the physicians' peers. Speech recognition software can learn, but it learns best with consistency. We need to have physician support in order to optimize the process."
Peachtree conducts an annual QA study, the results of which the company presents to its clients, who can be either hospital- or physician-practice based. "We believe we are the bridge, the voice between the provider and the technology," Stephanie Peck says. "We are taking the time to make speech recognition technology better. The software wasn't necessarily built for accuracy—it was built for creating digital documentation. Vendors and users must be willing to take the time and the necessary steps in order for the technology to improve."
The "Back-end Speech Recognition Implementation Best Practices" toolkit, which King helped produce for AHDI, can be a valuable resource for professionals attempting to address and correct the challenges presented by speech recognition software. According to AHDI, the toolkit "draws on the experiences of numerous health care organizations that have successfully implemented back-end speech recognition as well as some that were less successful but able to provide much-needed insight, solutions, and workarounds."
Among the many solutions the document suggests are the following:
• Be clear about the organization's goals and expected outcomes. Asking why the organization wants to implement speech recognition and what it hopes to achieve helps build a foundation for more specific inquiries as the project moves forward.
• Determine costs.
• Manage expectations, the process of which begins by having a realistic understanding of what the technology can do for the organization.
• Consider different vendors. Create a comprehensive request for proposal that includes the organization's needs, make a list of qualifying vendors, and arrange for product demonstrations.
• Establish and maintain good communication among all parties involved.
• Understand the support that will be provided.
• Test the product.
"Patient safety and organizational efficiency are the driving forces behind our striving for accuracy in the transcription process overall," King says. "So if we can adopt best practices within our respective organizations, including when we're working with speech recognition technology, then we can achieve these all-important goals."
— Susan Chapman is a Los Angeles-based freelance writer.