February 2017

Coding Contest Hints at ICD-10 Struggles
By Elizabeth S. Goar
For The Record
Vol. 29 No. 2 P. 22

The results contradict widespread reports of a painless transition to the new codes.

If the results of a nationwide ICD-10 coder contest are any indication, the ICD-10 transition hasn't gone as smoothly as the industry would like to believe. With average accuracy scores of just 55% for inpatient cases, 46% for ambulatory surgery cases, and 33% for emergency department (ED) cases, the performance data revealed by Central Learning's contest are raising red flags that a number of experts say should not be ignored.

"We all really patted ourselves on the back about our successful conversion to ICD-10, but maybe we should have waited," says Laura Legg, RHIT, CCS, CDIP, an AHIMA-approved ICD-10-CM/PCS trainer and executive director of revenue integrity and compliance for Healthcare Resource Group. "We should respond [to the findings] very cautiously, but we should also respond with training, auditing, and education. … Continuous feedback is what makes a good coder."

The Contest
More than 550 coders, of whom 59% were AHIMA certified and 28% were AAPC certified, participated in the first-of-its-kind contest; the remaining 13% of participants did not specify a certification. Inpatient coders had an average of 13 years of experience, followed by nine years for ambulatory surgery coders and eight for ED coders.

Central Learning, a web-based coding assessment application from AVIANCE Suite Inc that simulates a true ICD-10 coding production environment, was used to code 1,859 medical cases (54% inpatient, 20% ambulatory surgery, and 26% ED) over a 30-day period. Contestants were provided with the official coding guidelines for each patient type: inpatient, ED, and ambulatory surgery.

"Once coded, the cases were electronically graded against Central Learning's standardized answer keys to remove any subjective bias for accuracy scores," says Eileen Dano Tkacik, AVIANCE's director of operations and information. "Prior to the contest initiation, the Central Learning answer keys for each case were vetted and approved through a rigorous process that included a review and approval process by a forum of 14 certified coders and then verified by multiple external AHIMA-approved ICD-10-CM/PCS trainers."

Winners were determined by the highest average accuracy scores on their assigned cases. The $5,000 prize for each patient type went to the following:

• Inpatient: Linda Muchewicz, CCS, RHIT;
• ED: Amanda Compoe, RHIT; and
• Ambulatory surgery: Love Iovino, CCS, CCS-P.

The final report provides an important snapshot of revenue risks tied to coding quality, with performance findings revealing accuracy ratings well below the 95% accuracy standard touted under ICD-9, Tkacik says.

"Overall, the results indicate a much lower level of coder accuracy than previously 'self-reported' coding performance surveys had estimated. The metrics captured during the coding contest are reminiscent of findings from an early HIMSS pilot program originally testing coder accuracy which was also met with subpar initial accuracy ratings and even worse ratings in productivity," she says.

The contest revealed other interesting findings as well. "The contest also monitored DRG [diagnosis-related group] accuracy, with lower-than-expected accuracy percentages resulting in a potential loss of $1.149 million across 612 inpatient cases, or an average of $1,877 lost per inpatient case," Tkacik says. "Extrapolated across an organization's average number of inpatient discharges per month, this benchmark loss per case represents a significant financial red flag for health care providers."
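The arithmetic behind that red flag is straightforward; a quick back-of-the-envelope sketch follows, in which the monthly discharge volume is an illustrative assumption rather than a figure from the contest report.

# Figures reported from the contest
total_potential_loss = 1_149_000  # dollars, across the contest's inpatient cases
inpatient_cases = 612
loss_per_case = total_potential_loss / inpatient_cases
print(f"Loss per case: ${loss_per_case:,.0f}")  # about $1,877, matching the reported average

# Hypothetical extrapolation: a hospital with 800 inpatient discharges per month
monthly_discharges = 800  # illustrative assumption only
print(f"Projected monthly exposure: ${loss_per_case * monthly_discharges:,.0f}")  # roughly $1.5 million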

Revenue losses related to DRG 222 and DRG 455 were called out in the study as the most concerning. There are many possible reasons these DRGs carry the impact they do, Tkacik says. The coder may not have entered the correct DRG, or perhaps a procedure code was missing, a problem that can often be traced back to differences in procedure coding between ICD-9 and ICD-10: ICD-10 requires two codes where ICD-9 may have required only one.

"In order to handle this reporting difference, grouper logic for ICD-10 includes a number of procedure codes that result in a different DRG when reported alone vs when reported along with another procedure code," Tkacik says.

In addition, she notes several surprises in the results, one of which was that ED coders did not consistently assign external cause codes despite guidelines and contest rules calling for them to be coded. This inconsistency led to underperforming accuracy scores for ED coders.

It's a widespread issue, Tkacik says, explaining that "external cause codes were worked into ICD-10 to provide enhanced detail and streamline claims submission and payment adjudication. However, external cause code reporting is voluntary but it is highly encouraged. To make improvements in this area, physicians and coders must not only be directed to make these changes but also take the time to get familiar with coding guidelines and conventions to take advantage of this opportunity provided by ICD-10."

Other "low accuracy" code categories revealed during the scoring processes include the following:

• congenital malformations, deformations, and chromosomal abnormalities;

• certain infectious and parasitic diseases;

• diseases of the blood and blood-forming organs, and certain disorders involving the immune mechanism;

• diseases of the skin and subcutaneous tissue; and

• pregnancy, childbirth, and the puerperium.

In terms of productivity, inpatient coders averaged 1.8 cases per hour, while ambulatory coders averaged 4.7 cases and ED coders averaged 5.5. Coders with the highest productivity compared with ICD-9 benchmarks scored the lowest on ICD-10 coding accuracy.

The findings "clarified the issue of pushing for productivity, especially with ICD-10. Ultimately it's going to cost you more than it's worth," says Seth Avery, president and CEO of AppRev, a business intelligence technology company that helps hospitals improve billing accuracy, develop pricing strategies, and understand denials. "It confirms the view that accuracy is going to be much more enriching than productivity."

Denials on the Rise
According to Avery, the results jibe with what he is hearing from AppRev customers: While ICD-10 had limited impact on productivity, there has been a marked increase in coding errors related mostly to cardiology, especially around the placement of cardiac devices—errors that will likely lead to an uptick in claim denials or underpayments.

"We are just starting to see October data, so I do expect an uptick," he says, cautioning that "payers are so inconsistent with the codes they use for denials that sometimes it's hard to uncover the uptick."

Legg concurs, noting that while she hasn't "really looked at the data to see accuracy vs nonaccuracy, I would say yes, we will see a jump in denials."

Legg also expresses surprise at the contest's low performance and accuracy scores—especially since she's not seeing the same in her daily interactions directing large auditing teams. "We aren't finding those low scores in our audits," Legg says. "The ED scores weren't an entire surprise because they use external cause codes. Many coders aren't proficient with those."

The ED scores were also no surprise to Rhonda Buckholtz, CPC, CPMA, CPC-I, CGSC, COBGC, CPEDC, CENTC, vice president of strategic development at AAPC, which boasts more than 155,000 members worldwide and provides education and 32 professional certifications to physician-based medical coders. Buckholtz notes there is a logical reason for the poor ED results.

"In the ED, information is sometimes limited on the patient condition and sometimes all results are not back at the time of coding," she says, adding that the trouble spots also came as no surprise. "These all have specific sequencing or code first rules that sometimes coders forget to take into account or they code from cheat sheets/tools that do not have the sequencing on them."

According to Buckholtz, if the issue is sequencing, then an increase in denials should be expected. In fact, "many payers sent notices to providers this fall notifying them they would start to enforce the sequencing on claim submission and deny if there were sequencing issues that had coding rules," she says.

In addition to denials, Tkacik notes that failure to address accuracy issues may also expose a health care organization to recovery audits. An organization's chances for success in today's value-based care environment could also be put at risk.

"Provider organization chief financial officers and HIM leaders need to maintain a watchful eye on coder accuracy. Coding accuracy is especially critical as increases in payer coding denials and recovery audits due to ICD-10 coding errors are expected to climb in 2017. In addition, accuracy will become increasingly important under value-based payment models as ICD-10 codes form the foundation of accurate DRG assignment and quality reporting," Tkacik says.

Takeaways
Buckholtz says that the contest results indicate that coders cannot become lax when it comes to keeping their skills sharp. "Coders need to continue their education, their learning path. They need to have a solid understanding of pathophysiology for coding," she says.

The need to keep learning was a common refrain among the experts, all of whom emphasized the importance of continuing education, audits, and feedback to ensure a comprehensive understanding of ICD-10. For example, Tkacik identified the following six key takeaways from the contest results that can help coding departments better manage ICD-10 coding:

• In 2017, target the bottom five diagnosis-specific coding areas for ongoing coder knowledge assessment, training, and monitoring initiatives.

• Continue assessing coder knowledge in ICD-10.

• Provide targeted ICD-10 education and training to address knowledge gaps.

• Supplement internal coding reviews with monthly external coding audits.

• Balance coding productivity and accuracy performance metrics.

• With coding denials predicted to increase in 2017, step up monitoring efforts.

Avery agrees with the need to strike the proper balance between accuracy and productivity. "The community needs to continue to push for accuracy over productivity. If you think about it, it's a balance. Life is a series of compromises. You just have to get the dial to the right balance," he says.

Legg points to her own career when discussing the importance of continuous feedback. She credits her coding skills to her time working at a hospital that placed a high value on auditing and continuous feedback.

"That continuous feedback is what made me a good coder," she says. "Another thing … you have to understand the rationale for coding things the way you are. Understand guidelines and the rationale. That's how you get to the clinical truth of the way you code. You can't just code numbers."

— Elizabeth S. Goar is a Tampa, Florida-based freelance writer specializing in health care and HIT.