
Special Showcase Edition April 2013

How to Measure Duplicate Rates
By Mike Bassett
For The Record
Vol. 25 No. 7 P. 18

Is there a singular solution? Industry experts discuss the various nuances involved in calculating “extra” medical records.

Most health care facilities have long recognized that their master patient indexes (MPIs) suffer from data quality problems, particularly duplicate records.

An MPI duplication rate is a number laden with significance. A high rate, for example, means a facility is likely to have problems with missing clinical information, duplicate tests and treatments, and billing and accounts receivable delays, as well as an increased risk of medical errors.

According to AHIMA, the average duplication rate in a hospital setting is approximately 10%; ideally, depending on the setting, the best practice duplication rate should be less than 5%. That raises two questions: How is the rate calculated, and what does the number represent?

According to Lou Ann Wiedemann, MS, RHIA, FAHIMA, CDIP, CPEHR, director of HIM practice excellence at AHIMA, the association’s guidelines for computing an actual duplicate record rate in a single database call for dividing the total number of individual duplicate patient records by the total number of patient records in the MPI database and multiplying by 100. The total number of individual duplicate records is the number of extra records as distinct from the originals. For example, if 50 patients each had two records, the number of duplicate records would be 50.
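
To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The 50 extra records come from the example above; the MPI size of 1,000 records is an assumption for illustration.

    # Minimal sketch of the AHIMA-style duplicate rate described above.
    # The MPI size (1,000 records) is an assumed figure for illustration.

    def ahima_duplicate_rate(extra_records: int, total_records: int) -> float:
        """Duplicate rate as a percentage: extra records / all records * 100."""
        return extra_records / total_records * 100

    # 50 patients with two records each yield 50 extra records.
    print(ahima_duplicate_rate(50, 1000))  # 5.0 (%)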

However, Robert Lynch, MBA, president of EvriChart, a medical records storage and management company, believes that most organizations and information management vendors don’t necessarily use the same method to determine duplicate records.

Victoria Wheatley, MS, RHIA, vice president of client services for patient access solutions at health care software provider QuadraMed, subscribes to a model that entails finding the records involved in any duplication. In a duplicate pair, one record is designated a “survivor” while the other is “retired” and counted as the duplicate for the purposes of calculating the rate. But she adds that some calculation methods count as duplicates the total number of records involved in the duplication, regardless of whether a record is deemed an original or a duplicate.
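
A rough sketch shows how far the two conventions can diverge; the record counts here are assumptions for illustration.

    # Contrast of the two counting conventions Wheatley describes.
    # An assumed MPI of 1,000 records contains 50 duplicate pairs:
    # 50 "survivor" records plus 50 "retired" records.

    total_records = 1000
    pairs = 50

    retired_only = pairs        # survivor/retired model counts 50 duplicates
    all_involved = pairs * 2    # counting every record involved yields 100

    print(retired_only / total_records * 100)  # 5.0 (%)
    print(all_involved / total_records * 100)  # 10.0 (%)

The same database thus reports a rate twice as high under the second definition, which is why comparing rates across organizations or vendors is hazardous without knowing the method behind them.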

Lynch points out that some scenarios raise the question of how to count an original record with multiple duplicates: If a record has two duplicates, should the original be counted once or twice? He says using a Six Sigma methodological approach to calculating the rate could bring some clarity to the process and also help measure a broader definition of data integrity.

In the Six Sigma approach, a health care organization specifies that each record in the MPI must represent one patient and be unique and distinguishable from all other records. Consequently, an MPI “defect” is any record that represents the same patient as another record. “Each HIM director wants every record in their database to represent one individual patient,” Lynch says. “But to the extent you have a pair of records that represent the same patient, there is a defect in both of them, not just one. By definition, a duplicate record creates a defect in the original.”
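
As a toy illustration of that defect counting, consider the following sketch; the five-record MPI is an assumption for illustration.

    # Sketch of the Six Sigma-style defect count Lynch describes:
    # every record that represents the same patient as another record
    # is a defect, originals included. The data are assumed.

    from collections import Counter

    # MPI rows as (record_id, true_patient_id); patient P1 has three records.
    mpi = [(1, "P1"), (2, "P1"), (3, "P1"), (4, "P2"), (5, "P3")]

    counts = Counter(patient for _, patient in mpi)
    defects = sum(1 for _, patient in mpi if counts[patient] > 1)

    print(defects)                   # 3, since all of P1's records are defective
    print(defects / len(mpi) * 100)  # 60.0 (%) defect rate in this toy MPI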

Using this kind of methodology has several benefits, Lynch says. “We believe that duplicate rates should not be the sole measure of quality that should be applied to the MPI and that there should be a broader definition of quality defined as data integrity,” he says. “And as you broaden the definition of what a defect is and you apply it to things like missing or incorrect Social Security numbers or imaging information, then this can get integrated into an overall data integrity measure. We think it’s a pretty interesting way of looking at the issue, and one that’s easy to understand.”

Susan Seams, EvriChart’s MPI project manager, agrees, pointing out that it’s also a better way of measuring the degree of inefficiency that exists within an MPI. For example, “How many lines of data are going to have to be examined in order to prove or disprove duplicity?” she asks. “I may think I have a 30% duplicate rate out of 100 items that’s based on AHIMA guidelines, but there’s actually 60 lines that need to be examined.” In other words, each of the 30 duplicates implicates an original, putting 60 records in play.

Seams believes the Six Sigma method provides a “truer look” at the number of lines of data and the percentage of data contained within an MPI that needs to be examined.
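
The two viewpoints can be laid side by side in a short sketch using the counts from Seams’ example.

    # The AHIMA-style rate versus the Six Sigma-style view of the same
    # MPI, using the counts from Seams' example above.

    total_records = 100
    extra_records = 30      # duplicates per AHIMA counting

    ahima_rate = extra_records / total_records * 100         # 30.0 (%)
    lines_to_examine = extra_records * 2                     # each duplicate implicates an original
    six_sigma_rate = lines_to_examine / total_records * 100  # 60.0 (%)

    print(ahima_rate, lines_to_examine, six_sigma_rate)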

A Truer Reading
Do health care organizations really know their duplication rates? Wheatley thinks not. “A lot of them think they do because they run reports from the hospital information systems,” she says. “But typically they will underreport the problem because the tools for duplicate detection and reporting are pretty rudimentary in most of the standard systems that are out there.”

That could be changing, though, Wheatley says, since health care organizations—particularly larger ones—are looking more closely at their duplication rates when they perform meaningful use attestations, connect to health information exchanges, or adopt new clinical systems.

There are other reasons facilities should have a firm handle on duplication rates. Because health care organizations invest heavily in health information systems, they want to be able to measure their return on investment, says Dan Cidon, chief technology officer for NextGate, a provider of enterprise identity management solutions. “You want to come up with a meaningful metric to show how the data has improved over time,” he notes.

An overabundance of duplicate records can have serious financial implications. Cidon says studies have shown that the cost to an organization can approach $20 per duplicate record because of the administrative burden it represents. Wheatley says one client conducted a software implementation cost justification exercise and found that it spent $250,000 annually on duplicate corrections. “We tell our clients that, depending on what we factor into the cost, it may be as little as $5 to $10 per duplicate if you’re just factoring in the clerical time [involved] and as expensive as thousands of dollars for a duplicate clinical record that has an adverse impact,” she says.

Cidon says that if a value can be assigned to a duplicate record, “multiply it by some factor representing how many dollars you are using [to fix the error], and you can clearly explain to management what kind of return on investment they’re getting by not having administrative staff hunt down records that are separated from each other.”
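
As a back-of-the-envelope sketch of that return-on-investment reasoning: the roughly $20-per-duplicate figure comes from the studies Cidon cites, while the duplicate counts below are assumptions for illustration.

    # Back-of-the-envelope ROI sketch following Cidon's reasoning.
    # Only the ~$20-per-duplicate figure comes from the article; the
    # duplicate counts are assumed for illustration.

    cost_per_duplicate = 20       # approximate administrative cost ($)
    duplicates_before = 25_000    # assumed backlog before cleanup
    duplicates_after = 5_000      # assumed backlog after cleanup

    avoided_cost = (duplicates_before - duplicates_after) * cost_per_duplicate
    print(f"${avoided_cost:,}")   # $400,000 in avoided administrative cost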

The rate also tells the provider the extent to which it has a data quality problem and can help it determine what kind of action plan it needs to institute. “It should tell an organization how much in the way of resources it needs to spend on its MPI and duplications,” Wiedemann says.

More specifically, a duplication rate will tell an organization the level of risk it’s operating under since duplicate records cause myriad problems in areas such as clinical care, reimbursement, and adoption of clinical systems, according to Wheatley. “There are all sorts of risks associated with duplication,” she says. “A really good rate means your risks are lower, while a high duplication rate creates all sorts of challenges, like a higher risk of clinical error.”

If an organization knows the magnitude of the problem, she adds, it can not only determine that some sort of intervention is needed but also prioritize a response so that certain problems, such as missing data elements, are resolved quickly.

Frequency of Error Checks
How often should duplication rates be calculated? Lynch says there is no concrete industry standard but recommends performing the calculation at least monthly, depending on the controls implemented at the patient registration level. “If you have really good controls in place and you’re not creating a lot of duplicates, then you need to do it less frequently than those facilities that don’t have such good controls in place,” he explains.

Cidon says some organizations monitor duplication rates on a weekly basis to drive process improvements and obtain immediate feedback on any of those subsequent efforts. It should be a fairly easy process, he says, pointing out that “in our system [the statistical data are] stored in a database that is very easily accessible with any number of reporting tools.”

At the executive level, Cidon says administrators want to see duplicate rate information on a quarterly basis to get a handle on the data quality in the MPI. As a result, they can determine whether their information management efforts are achieving the desired return on investment.

Wheatley says health care organizations always should have an eye on their duplication rates. “I’m biased since I work for a company that sells software and services, but it ought to be an ongoing process,” she says. “I once told a health information management department that whatever tool you have—whether it’s a bad one or the best that money can buy—you need to be using it to look at duplication rates so you can stay on top of it on a regular basis. If you don’t stay on top of it every day, duplicates are being created every day.”

— Mike Bassett is a freelance writer based in Holliston, Massachusetts.


MPI Miscue Gets Personal for HIM Supervisor
While duplicate records within a master patient index can have significant consequences for a health care organization, the impact on individual patients can be equally, if not more, severe. Susan Bailey, RHIT, an HIM supervisor at a hospital in the western United States, can speak on the topic both as a provider and a consumer. She has been the victim of patient misidentification four times over the past 10 years.

Most of the incidents were fairly minor in nature, such as receiving an explanation of benefits notification for a chiropractic visit that never occurred. But in one mix-up, a CT examination she never underwent, for a chronic condition she doesn’t have, showed up in her medical record.

While in the middle of a consultation, the physician noticed Bailey’s record indicated that she recently had undergone a CT exam. “I told him, ‘I haven’t set foot in a radiology department in years, except to give them grief,’” she says. “And he said, ‘Oh, that’s why I always ask patients about new exams, to make sure it’s theirs.’”

Not only was the information incorrect, it led Bailey to contemplate what might happen if she ended up in an emergency department and received medication for a condition she doesn’t have. “It’s kind of scary,” she says.

Bailey speculates that each of the problems occurred because of mistakes made during patient registration. That’s no surprise—ask HIM professionals about how most duplicate records are created, and they’ll start with problems associated with patient names.

“Certain data elements are inherently not well structured,” says Dan Cidon, chief technology officer for NextGate. “Maybe in the first encounter a patient gives her name as Liz Smith and in the second encounter it’s Elizabeth. The same thing goes for addresses. There are thousands of ways to represent an address. All of these eventually lead to records that are very hard to match together unless you have a very sophisticated, fine-tuned algorithm to deal with these kinds of data issues.”
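
A toy sketch illustrates why naive string comparison fails on such variants. The nickname table and similarity threshold here are assumptions for illustration; production matching engines are far more sophisticated, typically probabilistic and tuned per data source.

    # Toy sketch of the name-matching problem Cidon describes. The
    # nickname table and threshold are illustrative assumptions; real
    # MPI matching algorithms are far more sophisticated.

    from difflib import SequenceMatcher

    NICKNAMES = {"liz": "elizabeth", "beth": "elizabeth", "bob": "robert"}

    def normalize(name: str) -> str:
        name = name.strip().lower()
        return NICKNAMES.get(name, name)

    def likely_same(a: str, b: str, threshold: float = 0.85) -> bool:
        a, b = normalize(a), normalize(b)
        return a == b or SequenceMatcher(None, a, b).ratio() >= threshold

    print(likely_same("Liz", "Elizabeth"))       # True, via the nickname table
    print(likely_same("Elizabth", "Elizabeth"))  # True, via fuzzy similarity
    print(likely_same("Liz", "Victoria"))        # False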

While the cause of the problem is easily explained, the solution is anything but simple, Bailey says. “Getting these problems fixed is just a tremendous challenge,” she says. “I don’t think people really understand how difficult it really is. Much of the answer has to depend on getting accurate patient identification up front because when that [inaccurate] information gets in [the master patient index], it’s an absolute nightmare to get it out.”

— MB