
June 22, 2009

Transcription’s Holy Grail
By Elizabeth S. Roop
For The Record
Vol. 21 No. 13 P. 14

In its seemingly never-ending quest for uniformity, the industry is taking steps to move closer to standard quality measures.

The establishment of recognized quality standards has taken on greater urgency in light of the expected acceleration of HIT adoption in the wake of the HITECH Act. The real concern is that widespread adoption of electronic medical records (EMRs) will fall short of the goal of higher quality, cost-effective healthcare without established standards to ensure that patient encounter data are captured accurately and in a manner that facilitates efficient exchange of health information.

But the heightened awareness of the need for established quality standards is driven by more than just greater prevalence of technology. It is also the result of the changing role of transcription within the greater HIM picture. Transcription is no longer a stand-alone function but rather one of many contributors of information into an EMR. Established standards are critical if all parties expect to work harmoniously to the benefit of the end user.

“The real issue is the orchestration of the people, processes, and technologies that enable interoperability through the full spectrum of the healthcare documentation process,” says Dale Kivi, MBA, director of business development with FutureNet Technologies Corporation. “You have to be aware of the contributions each makes to ‘quality’ in the mind of the end user: Is the document on time; does it have all the necessary supporting data; does it accurately reflect the physician’s encounter with the patient? That is the only way quality truly can be measured and managed. It all has to be based on the needs and expectations of the end user.”

What Constitutes “Accuracy”?
Perhaps the greatest issue confronting the transcription industry in terms of established standards is the lack of a cohesive voice to govern a confusing array of input sources. Without an established set of standards, quality measures are typically decided one on one between the client facility and transcription service. But the issue is often not whether a document has been accurately transcribed but the definition of accuracy according to the end users.

In the traditional sense, medical transcription is the conversion of dictated notes into a text-based document. But that simplicity is deceptive when end-user expectations come into play.

For example, some physicians want the document to reflect exactly what they’ve dictated, while others expect it to reflect their meaning. The latter often requires the medical transcriptionist (MT) to correct grammar, fill in gaps, and format according to facility requirements. In either case, the end result may be a document the end user considers inaccurate.

“Certainly, because our business deals with information that has to be accurate by definition, I think that quality always has been and will be an important issue. Anytime you say something is important, the next question is how do you measure it, set goals and standards, and know if you’re reaching them?” says Jay Vance, CMT, president and CEO of Global HDG, a medical transcription consulting company. “You can throw a number out there in terms of percentage of quality, but unless you define the methodology and have a clear understanding of how you arrived at that number, it can mean different things to different people. A percentage doesn’t tell the whole story. It’s important to not only have standards but also some common definition of how those standards were arrived at.”

For example, best practices established by the Association for Healthcare Documentation Integrity (AHDI) call for accuracy scores quantified with a numeric calculation that weighs varying degrees of error against the length of the report. The AHDI recommends 100% accuracy for critical errors, 98% accuracy for major errors, and 98% accuracy for all errors in the report, including minor ones.

While the AHDI does provide definitions of minor vs. major errors, much is still left to individual interpretation.

“There are some variations on the theme of measuring quality. If you look at the 98%, you really have to ask what the methodology was. What is the baseline number? How do you determine what 100% is? Do you deduct errors from that baseline number based on the length of the report?” asks Vance. “Is there some other measurement or another way of coming up with that baseline number? On top of that is actually identifying the relative weight of an error because not all are created equal. Some are minor, like punctuation that doesn’t alter the meaning, all the way up to major errors that will drastically affect the meaning of the report.”
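
To make the arithmetic concrete, here is a minimal sketch of one plausible reading of such a length-based formula, written in Python. The error categories, penalty weights, and word-count baseline below are illustrative assumptions, not the AHDI’s published methodology; they simply show how the answers to Vance’s questions change the resulting percentage.

```python
# One plausible length-weighted scoring method. The weights and the
# word-count baseline are assumptions for illustration only.
ERROR_WEIGHTS = {
    "critical": 3.0,   # e.g., wrong drug or dosage; affects patient care
    "major": 1.0,      # e.g., an omission that changes the report's meaning
    "minor": 0.25,     # e.g., punctuation that does not alter the meaning
}

def accuracy_score(word_count: int, errors: dict) -> float:
    """Deduct weighted error points from a baseline equal to the
    report's word count and return a 0-100 accuracy percentage."""
    penalty = sum(ERROR_WEIGHTS[cat] * n for cat, n in errors.items())
    return max(0.0, 100.0 * (1 - penalty / word_count))

# A 500-word report with one major and two minor errors:
errors = {"critical": 0, "major": 1, "minor": 2}
score = accuracy_score(500, errors)
print(f"{score:.2f}%")  # 99.70% under these assumed weights

# Mirroring the thresholds described above: any critical error fails
# the report outright; otherwise the overall score must reach 98%.
print(errors["critical"] == 0 and score >= 98.0)  # True
```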

When speech recognition software enters the picture, measuring accuracy becomes even less clear. Accuracy levels are often determined by the software’s developers and the individuals using the program. The client, working with the technology vendor, will typically set the threshold that determines which dictations can be managed using the software and which will require traditional transcription.
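
As a rough illustration, that routing decision might look like the minimal sketch below, assuming the recognition engine reports a per-dictation confidence score. The 0.85 cutoff, field names, and queue names are hypothetical stand-ins for whatever a given client and vendor agree on.

```python
# A minimal sketch of threshold-based routing. The cutoff and field
# names are hypothetical; each client/vendor pair sets its own.
SR_CONFIDENCE_THRESHOLD = 0.85  # agreed on by the client and vendor

def route_dictation(dictation: dict) -> str:
    """Send high-confidence dictations to the speech recognition
    draft-editing queue; route the rest to traditional transcription."""
    if dictation["sr_confidence"] >= SR_CONFIDENCE_THRESHOLD:
        return "sr_edit_queue"        # an MT edits the recognized draft
    return "manual_transcription"     # an MT transcribes the voice file

print(route_dictation({"id": "D-1024", "sr_confidence": 0.91}))  # sr_edit_queue
print(route_dictation({"id": "D-1025", "sr_confidence": 0.62}))  # manual_transcription
```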

“Since that is solely at the discretion of the client and technology vendor, there are a wide range of outputs you can get from the exact same technology, the exact same platform, used at different locations with different dictators,” Vance notes. “Are they willing to make any concessions or changes in their manner of speaking in order to optimize the quality of the speech recognition software? ... There are just too many factors that go into just the quality of the initial draft before it is ever seen by a human person.”

Multiple Touch Points Muddy the Waters
The emergence of speech recognition software isn’t the only change in the evolving medical transcription industry that has served to muddy the waters in terms of measuring quality. Outsourcing, both onshore and offshore, has added a layer of complexity by increasing the number of touch points before a document is seen by the client facility.

With the transcription process managed largely by a widely dispersed workforce, including documentation specialists working remotely from their homes and even overseas, it is not uncommon for each document to go through multiple reviews before it reaches the client facility.

“Anytime you add additional layers, additional touch points that require additional levels of QA [quality assurance], you make the whole process of measuring quality more challenging,” Vance says. “But it has to be done. There is no way to get around that.”

According to Judy Hinickle, president of TransCom Corporation, a medical transcription and consulting company, to accurately evaluate the quality of a transcribed document, it is critical that all transcription methods be judged based on the same criteria. This includes traditional dictation, speech recognition, handwriting recognition, structured text, interactive direct entry, and natural and intelligent language processing.

The evaluation, she notes, should be based on review criteria for the accurate presentation of any elements of patient safety, document integrity, or style in the documentation of the patient encounter.

“Without using the same quality standards for all methods of input and for all the transcriptionist locations, the resultant inconsistent statistics are meaningless in the inevitable comparative situation,” says Hinickle. “Alternative data capture methods are not immune from scrutiny and need to meet the same high standards required of transcribed documentation. Transcribed documents, no matter where they originate, must be measured with the same standardized criteria. Presently, we have problems in the industry with disparities in the quality expectations … or the claim of immunity from review standards by the alternative method.”

Hinickle notes that quality expectations are often tied to the reviewer’s bias. For example, a QA reviewer may hold contractors to far higher standards than in-house staff. Also, there is often no review of the work directly input into the record by a physician or by various software or hardware technologies.

She also notes that speech recognition vendors will often create their own quality standards relative to the number of words or phrases correctly translated, as opposed to the magnitude of the error.

“While technology contributes to the production of medical documentation, highly trained professionals exercising critical thinking skills and informed interpretive judgment are best suited to ensure quality in the production of medical records,” says Hinickle.

Taming the Quality Wild West
Because of the rapidly evolving nature of medical transcription, many consider the best practices from the AHDI to be too dated to do an adequate job in today’s healthcare environment. As a result, there is a growing demand from the industry for a standard set of quality measures against which all sources of transcription will be evaluated.

But the problem remains that the transcription industry is one in which commonly accepted definitions are few and far between.

“It is similar to the other [industry] issues of cost and turnaround time. Everyone had their own definition of what is a line and what is the required turnaround time for common work types. Everyone defined their own standard, which makes it difficult to compare and contrast performance between facilities,” Kivi says. “It’s too easy to stack the deck for quality audits to show that you are or aren’t meeting requirements. We have to come up with common sampling methods and measurement tools that are applicable to the entire industry today and that are driven by AHDI quality standards documents.”
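
To show what a uniformly applied sampling method might even look like, here is a minimal sketch that draws a fixed-percentage random sample of each transcriptionist’s reports for audit. The 3% rate and five-report floor are hypothetical illustrations, not figures drawn from the AHDI or from the workgroup’s draft.

```python
import random

# Hypothetical parameters: audit 3% of each MT's reports per period,
# with a floor of five so low-volume MTs are still reviewed.
SAMPLE_RATE = 0.03
MINIMUM_SAMPLE = 5

def audit_sample(report_ids: list) -> list:
    """Draw a random audit sample from one MT's reports for the period."""
    target = max(MINIMUM_SAMPLE, round(len(report_ids) * SAMPLE_RATE))
    return random.sample(report_ids, min(target, len(report_ids)))

reports = [f"R{i:04d}" for i in range(1, 241)]  # 240 reports this period
print(audit_sample(reports))  # seven randomly chosen report IDs
```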

To that end, the AHDI and the Medical Transcription Industry Association (MTIA) assembled a Quality Assurance Best Practices workgroup that has spent the past two years identifying metrics and definitions for a credible, measurable QA program standard for medical transcription. During the April MTIA Quality Assurance Summit, the committee released its draft paper, “Metrics for Measuring Quality in Medical Transcription,” for comment.

According to Kivi, a steering committee member of the QA workgroup, the final result will be an enhancement of existing AHDI best practices that will address the broader scope of the healthcare documentation process and how quality is measured and managed, from voice capture through the final distribution of the medical report, whether generated through traditional transcription, speech recognition, or an automated system.

In addition to input from MTs and QA specialists, the draft paper also incorporates recommendations from the AHIMA and medical transcription service organizations (MTSOs), as well as academic and industry statisticians, to ensure every possible variable is taken into consideration.

By inviting the AHIMA and MTSOs to participate in the standard-setting process, the hope is that the final program is comprehensive enough to meet the changing needs of today’s technology-driven healthcare environment and the end users.

Kivi notes that end-user input is particularly important because end users—physicians, coders, and researchers—are the ones who judge document quality based on its availability and usefulness for their respective responsibilities and needs. Those diverse expectations go beyond the traditional role of medical transcriptionists, who have been charged with converting dictation into an electronic record, and embrace an industry future in which medical language specialists ensure the integrity of health data for all end users.

“Therefore, quality in medical transcription, or more broadly, quality for the complete healthcare documentation process, can only be achieved through an orchestration of people, processes, and technology,” Kivi says. “Discussions on healthcare documentation quality must expand beyond the simple conversion of voice to text or generation of structured text through simple point-and-click solutions and include the workflow process and technology factors that influence quality from the perspectives of end users.”

Differing expectations along with variables over which documentation specialists have little control, such as workflow routing of the transcribed document, accuracy of admission/discharge/transfer information, and performance of technology and/or software such as speech recognition, are part of the reason why the transcription industry has wrestled with quality standards over the years.

The struggle has become more acute since the industry moved away from the majority of the work being done in house through traditional transcription. This evolution has made it “easier to agree upon the general categories of errors within the industry than it has been to agree upon a formula for use or the appropriate sampling techniques or whether technological document originators need to abide by the same standards as medical transcriptionists,” says Hinickle, past chair of the QA workgroup.

“It is very key that the standards be developed by the users who have the critical thinking skills and content expertise to recognize errors and their consequences, and that the statistical methods involved be developed by objective authorities in that statistical field rather than developed by vendors who have money at stake in the results,” she adds. “It is also key that [with] the implementation of the standards, [we] keep in mind that within the objective standards are areas of subjectivity. Also, provisions need to be made that safeguard transcriptionists or even contractors from tyrannical QA editors and yet safeguard the integrity of the record.”

— Elizabeth S. Roop is a Tampa, Fla.-based freelance writer specializing in healthcare and HIT.


Retrospective Audits Fill Critical Feedback Gap

One unfortunate side effect of outsourcing the majority of transcription has been the loss of the feedback that helps both documentation specialists and facilities improve the quality of the overall process.

“Back when transcription was all in house, the staff would have the opportunity to receive feedback. Those days are long gone,” says Jay Vance, CMT, president and CEO of Global HDG. “Much of whatever focus there is by the client on quality is on an anecdotal basis. When an HIM director or doctor is complaining about quality, the squeaky wheel will get the grease. The focus is going to be on putting out that fire. [Auditing] is about trying to avoid that fire in the first place.”

Retrospective audits are a valuable but largely underutilized tool in the transcription industry. They can reveal areas of weakness, both in the overall process and with individual transcriptionists or software applications.

SPi Healthcare, which provides onshore and offshore medical transcription solutions, conducts monthly retrospective audits and reviews the findings with both its internal transcription and operation teams and its clients. In addition to validating the quality of the services SPi provides to its clients, audits also help the facilities identify areas for improvement, such as problematic physicians or dictation policies that hinder the accuracy of final reports.

“We make sure that we audit every person that distributes a report to a client, whether they be an MT [medical transcriptionist] or a QA [quality assurance] editor. If they distribute one or 100 reports to a client, we make sure we touch every one of them,” says Christabel Campos, SPi’s director of audit and compliance.

When conducting its audits, SPi works with the version of the report that was delivered to the client, ensuring it evaluates the exact document the client received. That eliminates the possibility of working with a document that has since been altered by the client for any reason.

Audits are conducted using the original voice record and evaluated with the same scoring methodologies the company’s QA editors use during concurrent evaluations. SPi’s auditors use QA Navigator, a comprehensive tool designed by TransCom Corporation President Judy Hinickle that automates the evaluation process based on Association for Healthcare Documentation Integrity quality standards.

In addition to evaluating its own accuracy levels, SPi also takes into account any corrections made by the client after the report was delivered.

“On the client side, if and when they do edits postdistribution, we take a look at those reports and are able to identify which are corrections and which are edits. That way, we can [determine if] the client perceived the report as not being complete,” says Campos. “Also, when we incorporate what [clients] find as errors into our audits, we make it a 360-degree approach. It brings the client in as part of the process. When we declare the accuracy of the team working on that account, it becomes a joint effort.”
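
As a rough sketch of the postdistribution comparison Campos describes, the snippet below flags differences between the archived delivered report and the client’s current version using Python’s standard difflib. It assumes the service retains the exact text it delivered; deciding whether each flagged change is a correction or a client edit would still fall to a human auditor.

```python
import difflib

def post_distribution_changes(delivered: str, client_version: str) -> list:
    """Return the lines that differ between the delivered report and
    the version now in the client's record."""
    diff = difflib.unified_diff(
        delivered.splitlines(),
        client_version.splitlines(),
        lineterm="",
    )
    # Keep added/removed lines; skip the "---"/"+++" file headers.
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

delivered = "Patient denies chest pain.\nFollow up in two weeks."
current = "Patient denies chest pain.\nFollow up in three weeks."
for change in post_distribution_changes(delivered, current):
    print(change)  # queued for an auditor to classify: correction or edit?
```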

Involving the client in the audit process is key because it demonstrates the service provider’s willingness to accept responsibility for and correct any errors. But it also prevents misperceptions over the quality of the transcription services provided, which can sometimes be called into question when the end user’s expectations differ from what the contract itself calls for.

“Sometimes it is a battle with perception. You do two things wrong and a hundred things right, but all the good is negated,” says Campos. “We make sure both sides are on the same page.”

— ESR