October 8, 2012
The Value of Error Reports
By Mike Bassett
For The Record
Vol. 24 No. 18 P. 10
An exploration of coding accuracy can yield worthwhile riches for organizations seeking to maintain a strong financial foothold.
Every healthcare organization is expected to have in place an effective coding compliance program—one that will be continually evaluated and reevaluated. To accurately assess the program’s effectiveness, it’s necessary to measure several outcome indicators, including error rates, which many HIM experts consider to be the most significant.
There are various methods of determining chart error rates, but according to Rose Dunn, MBA, RHIA, CPA, FACHE, chief operating officer for healthcare consulting firm First Class Solutions, Inc., the two most common are the “code-over-code” and “record-over-record” methods.
In each approach, several charts are selected for review. The code-over-code approach divides the total number of correct codes by the total possible number of correct codes, while the record-over-record approach divides the number of records correctly coded by the total records in the sample.
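The two calculations described above can be sketched in a few lines of code. This is an illustration only; the chart data and field names are hypothetical, and real audits would also define what counts as an error for each code.

```python
# Minimal sketch of the two chart-audit accuracy calculations:
# code-over-code divides correct codes by total codes across the sample;
# record-over-record divides perfectly coded charts by charts sampled.
# The sample data below is hypothetical.

def code_over_code(charts):
    """Accuracy = total correct codes / total possible correct codes."""
    correct = sum(c["correct_codes"] for c in charts)
    total = sum(c["total_codes"] for c in charts)
    return correct / total

def record_over_record(charts):
    """Accuracy = charts with no coding errors / charts in the sample."""
    clean = sum(1 for c in charts if c["correct_codes"] == c["total_codes"])
    return clean / len(charts)

audit_sample = [
    {"correct_codes": 9, "total_codes": 10},
    {"correct_codes": 10, "total_codes": 10},
    {"correct_codes": 7, "total_codes": 10},
]

print(code_over_code(audit_sample))      # 26/30, about 0.867
print(record_over_record(audit_sample))  # 1/3, about 0.333
```

Note how the same sample yields very different rates: one chart with several wrong codes barely moves the code-over-code figure but counts as a full failure under record over record, which is part of why the two methods suit different purposes.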
According to the AHIMA publication Benchmarking to Improve Coding Accuracy and Productivity, the more widely recognized record-over-record method provides the advantage of being less labor intensive. On the other hand, it is considered to be more subjective because it may not include a definition of what counts as an error. In addition, some organizations may adjust the error rate based on the perceived impact of the errors.
The more specific code-over-code approach is deemed more objective because errors are more clearly defined. According to the AHIMA publication, it is also more effective in identifying trends for educational purposes or process improvement. The disadvantages of code over code are that it takes longer, represents a change in thinking, and requires more of a learning curve for auditors.
“Organizations prefer [record over record] for its simplicity,” Dunn says. “However, a gauge of coding accuracy is probably better reflected with the [code-over-code approach].”
Lori-Lynne Webb, CPC, COBGC, CCS-P, CCP, CHDA, a coder III at Saint Alphonsus Regional Medical Center in Boise, Idaho, recalls that when she performed staff audits for one particular healthcare provider, she would randomly pull 25 charts per coder twice per year. She used a 100-point scale and would simply deduct a point for an error, such as a missed procedure. The organization set a passing rate of 95 for individual charts but also required coders to achieve a passing rate in 80% of the 25 reviewed charts.
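Webb's scheme can be expressed as a short sketch. The thresholds come from her description; the function names and the choice to floor scores at zero are assumptions for illustration.

```python
# Sketch of the audit scheme Webb describes: each chart starts at 100
# points, loses one point per error, and passes at 95 or above; the
# coder passes overall if at least 80% of the audited charts pass.
# Names and the zero floor are illustrative assumptions.

CHART_PASS_SCORE = 95
REQUIRED_PASS_FRACTION = 0.80

def chart_score(error_count):
    """Score one chart: 100 minus one point per error, floored at zero."""
    return max(0, 100 - error_count)

def coder_passes(error_counts_per_chart):
    """True if at least 80% of the audited charts score 95 or better."""
    passing = sum(1 for e in error_counts_per_chart
                  if chart_score(e) >= CHART_PASS_SCORE)
    return passing / len(error_counts_per_chart) >= REQUIRED_PASS_FRACTION

# A coder with 20 clean charts and 5 charts of 6 errors each just passes:
print(coder_passes([0] * 20 + [6] * 5))  # True (20 of 25 charts pass)
```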
Error rates can also reflect the severity of the error involved. There are five levels of care within evaluation and management (E/M) CPT codes, with one being the lowest and five the highest. When Jacqueline Thelian, CPC, CPC-I, a healthcare consultant with Medco Consultants, Inc, performs audits to determine error rates for E/M charts, she separates errors into those requiring a one-level service change and those involving changes greater than one level.
For example, if a provider codes a level 4 and Thelian determines it to be a level 3, that would represent a one-level change. If it had been coded as level 5, it would have represented a two-level change. “The seriousness of the error is greater when it’s more than a one-level change or it involves a category change,” she says. “And that coder is going to need a higher degree of education.”
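The distinction Thelian draws can be captured in a small classification function. This is a sketch in the spirit of her approach, not her actual audit tool; the function name and return labels are assumptions.

```python
# Illustrative classification of an E/M audit finding by the size of the
# level change between what was billed and what the audit supports.
# E/M levels run 1 (lowest) through 5 (highest); per Thelian, a change
# of more than one level signals a more serious error and a greater
# education need. Names and labels here are hypothetical.

def em_error_severity(billed_level, audited_level):
    """Classify an E/M coding error by the size of the level change."""
    change = abs(billed_level - audited_level)
    if change == 0:
        return "correct"
    if change == 1:
        return "one-level change"
    return "multi-level change"  # flags a higher degree of education needed

print(em_error_severity(4, 3))  # one-level change
print(em_error_severity(5, 3))  # multi-level change
```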
What Is an Appropriate Error Rate?
There is no one answer when it comes to determining what constitutes an acceptable number of miscues. Dunn says most facilities attempt to maintain an accuracy rate between 94% and 96%, while Thelian points out that an error rate of 5% or lower is considered acceptable by the Office of Inspector General.
“That’s a very high standard. I’ve been doing this for 27 years, and I’ve had maybe five clients who fall into that category,” Thelian says. “If the healthcare provider has a good compliance program and does a compliance review every year, then at some point they may be able to achieve it.”
While the federal standard is 95%, acceptable error rates within hospitals and practices can vary greatly. “There’s really no standard in place,” Webb says. “What is an adequate passing grade? One of the problems with error rates is that the factors involved are always different, which makes it difficult to come up with a hard and fast way to do things.”
The 95% accuracy rate is “pretty stringent and difficult to accomplish,” says Peggy Stilley, CPC, CPMA, CPC-I, COBGC, ACS-OB, director of audit services for AAPC Physician Services in Salt Lake City, pointing out that practices and hospitals have developed their own compliance plans with their own accuracy standards. “What’s important,” she adds, “is that a good compliance plan will have a standard of what is acceptable and will spell out what is required [if a coder drops below that standard].”
How to Use Error Rates
Dunn says determining error rates serves several purposes, the most important of which is to identify educational needs for individual coders as well as the entire coding team.
During the course of an audit, Webb says if she determines a coder has a problem with E/M codes, she assigns that coder nothing but those codes for several weeks and then reevaluates his or her performance. “So I would expect to see improvement in that time frame because there had been so much concerted education,” she says.
Dunn says determining coding accuracy rates is useful to an organization in the following ways:
• They can help identify coders who are achieving a high level of accuracy, allowing the organization to consider them for advanced positions such as lead coders, supervisors, or internal auditors.
• They can be a tool on which to base merit/performance pay increases for deserving coders.
• They can be part of any coding incentive plan, which should always require a minimum coding quality level before allocating any bonus payments.
Keeping tabs on error rates can also have financial implications. While working in an obstetrics and gynecology practice, Stilley perceived the office was purchasing too many intrauterine devices (IUDs). At a cost of $500 to $700 per device, it was an expensive habit. What’s more, Stilley was concerned the practice wasn’t billing for all the IUDs it was purchasing.
“So I did a targeted audit of all my providers,” she says. “I ran the two codes—one for insertion of the IUD and one for purchase of the device—went through every chart that popped up on the radar, compared it against the log [of devices issued] we were keeping, and identified $30,000 that hadn’t been billed. Needless to say, heads rolled.”
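The reconciliation Stilley describes is essentially a set difference between the device log and the billing records. The sketch below is hypothetical: the patient identifiers are invented, and the per-device cost assumes a midpoint of her $500-to-$700 range.

```python
# Hypothetical sketch of a targeted audit like Stilley's: compare the
# log of IUDs issued against the charts where the device was actually
# billed, and total the unbilled devices. All data here is invented.

device_log = {"P001", "P002", "P003", "P004"}   # patients issued an IUD
billed_for_device = {"P001", "P004"}            # patients billed for the device
DEVICE_COST = 600                                # assumed midpoint of $500-$700

unbilled = device_log - billed_for_device        # set difference finds the gap
print(sorted(unbilled))                          # ['P002', 'P003']
print(len(unbilled) * DEVICE_COST)               # 1200 (unbilled dollars)
```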
In the end, error rates have a symbolic value, Webb says. “I’ve never had to have what I call the ‘men in black’ show up at my doorstep,” she says, “but [calculating error rates] gives you the opportunity to assess your compliance plan. You are able to say, ‘Our plan is being carried out, and we are conscientiously looking at this all the time.’ So that if you have those RACs [recovery audit contractors] visit, you can demonstrate you are acting with good intent and that your intent is not to defraud—it’s that you want to make sure a patient’s visit is well documented.”
Stilley says the extent to which errors should be monitored and coders and physicians reeducated depends on the severity of the error rate and the types of errors being committed. “A physician who has a 90% accuracy rate in documentation probably doesn’t have to be looked at for another year,” she says. “But a physician with 60% or 65% accuracy needs to be looked at semiannually and maybe even quarterly if he is undercoding by two levels.”
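Stilley's rule of thumb can be written as a simple lookup. The 90% and mid-60s thresholds are taken from her examples; the exact cutoff between tiers and the function name are assumptions.

```python
# Sketch of the re-audit cadence Stilley suggests: high accuracy earns
# an annual review, low accuracy a semiannual one, and undercoding by
# two levels escalates to quarterly. Cutoffs between her examples
# (90% vs. 60-65%) are assumed, not stated in the article.

def review_interval(accuracy, undercoding_two_levels=False):
    """Return how often a provider's coding should be re-audited."""
    if accuracy >= 0.90:
        return "annually"
    if undercoding_two_levels:
        return "quarterly"
    return "semiannually"

print(review_interval(0.92))        # annually
print(review_interval(0.62))        # semiannually
print(review_interval(0.62, True))  # quarterly
```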
Determining the kind of education a coder needs can be just as precise an exercise as measuring an error rate. Thelian is familiar with one healthcare organization that would assess error rates by assigning a letter grade to the coder, which would then be used to define what kind of education that coder needed.
If the documentation errors are “particularly egregious” compared with the billing, Thelian says, auditors can also perform prospective reviews, which means “nothing leaves the office until someone has actually reviewed it first.”
All coding is based on the depth and quality of clinician documentation, Dunn says, pointing out that “if the doctor doesn’t document it, it can’t be coded. However, if there are clues noted in the chart, whether in diagnostic test results, physician orders, nursing notes, or cryptically in the physician notes, the coder should query the physician. Obtaining just a bit more documentation from the physician can be the difference between one DRG [diagnosis-related group] and another.”
Computer-assisted coding applications promise an improvement in quality, Dunn says, but those gains are sometimes minimal. “Computer-assisted coding does help with the challenge of note bloat that frustrates coders today,” she says. “To that extent, it may identify conditions that may have been overlooked by even the most experienced coders.”
Stilley says the pertinent issue is whether physicians have a good grasp of what constitutes a complete chart. “And it doesn’t matter whether they like something or not, they have to understand it’s what the government wants,” she says. “I want them to capture the actual work they are doing without writing down more stuff than is necessary.”
“Technology is a double-edged sword. On the one hand, it’s amazing; on the other side, it can really stink,” says Webb, adding that while using encoder software can be “great, the bottom line is that if you put garbage in, you’ll get garbage out. If your physicians aren’t documenting well or aren’t well trained in the software, it’s going to perpetuate an error stream all down the line.”
Stilley points out that with its autofunctionality, an EHR can create some accuracy problems. “Sometimes you’re weeding through six or seven pages of information that really isn’t pertinent,” she says. “On the other hand, if I can’t read what the physician wrote, it doesn’t matter anyway.”
“I think that if we took a poll today, we would find most coders still prefer working with paper records because they can [see] transitions in handwriting, are able to flip between many pages, and recognize different forms where pertinent data is recorded,” Dunn says. “The paper record with handwritten notes also doesn’t have the note bloat issues that come with electronic records and the freedom to cut and paste over and over again.”
Thelian points out that the Office of Inspector General realizes that document cloning has become a serious problem, warning that Medicare contractors are seeing an increased number of medical records across services with identical documentation. EHRs allow providers to basically point and click their way through bullet points, Thelian says, which invariably leads to duplicative and excessive documentation.
“So it appears that you have the documentation to support a level 5 visit, but there’s really no medical necessity for it,” she says. “And if you were to hold the notes up to the light, you would see that they are exactly the same.” These types of errors can lead to being audited, Thelian adds.
Stilley says the best technology for lowering error rates may be an old standby: dictation. “Doctors are taught in medical school how to dictate a note, so that makes some sense.”
These HIM experts say whether a provider decides to lean on technology is inconsequential as long as a sturdy compliance program is in place to monitor coding activity. If not, the consequences could be grave.
— Mike Bassett is a freelance writer based in Holliston, Massachusetts.