
December 2017

HIM Challenges: CAC's a Tool, Not a Panacea
By Erica E. Remer, MD, FACEP, CCDS
For The Record
Vol. 29 No. 12 P. 8

Technology is meant to be a tool that helps us become more productive and efficient. However, overreliance on technology is counterproductive and makes us less effective. Computer-assisted coding (CAC), when used as intended, can be a great technology, but it demands caution. We must not cede too much power to the computer.

Coding reduces the patient encounter to a set of codes. Ideally, a medical record review consists of more than reading the actual documentation. A between-the-lines approach helps ferret out any implied conditions that were not "codably" documented but which nevertheless help tell the patient's story.

Next, a set of idealized codes is compiled and sequenced. If these differ from the submitted codes, one of several things occurred: there was a provider documentation gap compounded by a missed clinical documentation improvement (CDI) opportunity, or the coder or the record reviewer made errors. In some cases, mistakes occurred because the coder was led astray by CAC.

Setting aside CAC derived from structured input, let's concentrate on the functionality in which software analyzes health care documentation and, through a natural language processing algorithm, presents coding options. Danger presents itself when coders treat the coding process as a series of discrete choices (accepting, rejecting, or ignoring offered codes) rather than as the holistic assembly of a complete picture of the encounter.
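To make that accept/reject dynamic concrete, consider a deliberately oversimplified sketch of phrase-driven code suggestion. The phrase-to-code table and the suggest_codes function below are hypothetical stand-ins for a vendor's natural language processing engine, which is far more sophisticated; the point is only that the output arrives as a list of discrete suggestions for the coder to act on one at a time.

# Illustrative sketch only; real CAC engines are far more sophisticated.
PHRASE_TO_CODE = {
    "acute kidney injury": "N17.9",  # Acute kidney failure, unspecified
    "hypertension": "I10",           # Essential (primary) hypertension
    "type 2 diabetes": "E11.9",      # Type 2 diabetes without complications
    "weakness": "R53.1",             # Weakness
}

def suggest_codes(note_text: str) -> list[tuple[str, str]]:
    """Return (phrase, code) suggestions found in the documentation."""
    text = note_text.lower()
    return [(phrase, code) for phrase, code in PHRASE_TO_CODE.items()
            if phrase in text]

note = "Admitted with generalized weakness; labs show acute kidney injury."
for phrase, code in suggest_codes(note):
    # Each suggestion is presented as a discrete accept/reject decision,
    # which is exactly the dynamic that can crowd out holistic review.
    print(f"Suggested {code} based on the phrase '{phrase}'")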

There is precedent for computer assistance in medicine. At one point, medical school training consisted of manually reading an electrocardiogram by systematically assessing rate, rhythm, P wave morphology, PR interval, QRS complex, and so on. In the late 1980s, it became commonplace for a computer to give a "reading." The measured rates and intervals were virtually infallible, but the automated interpretations were less accurate than an experienced cardiologist's reading. The computer cannot take into account the patient's history, background, and symptomatology.

Providers were implored to do their own independent reading, and only then review the computer's. Providers have the capacity to attend to both content and context.

But convenience is compelling, and having the machine interpretation readily available often lulls providers, especially novice ones, into a false sense of security. It can be distracting and confounding and, at worst, misleading. Because the expert overread lags behind, treatment may be guided by erroneous data.

Computer-assisted diagnosis is utilized in radiology and pathology. What you will not find at this juncture is complete automation without human review and monitoring.

Evidence to Date
There is not a plethora of literature regarding CAC. One of the most commonly cited studies asserts a 22% reduction in time per record with no loss in accuracy. However, the study, conducted at the Cleveland Clinic, was apparently performed with experienced coders who knew they were being scrutinized, a factor that may have skewed the findings.

Meanwhile, other literature suggests there is some reduction in accuracy when CAC is used, which leads to the question: Is accuracy an acceptable trade-off for speed?

"From Novice to Expert: Problem Solving in ICD-10-PCS Procedural Coding," a 2013 article in Perspectives in Health Information Management, distinguishes between CAC in the hands of novice and expert coders but predates ICD-10 implementation.

The article references the four characteristics that separate expert problem-solvers from novices. Of particular interest is the idea that "Experts devote more time to developing a global approach before attempting to sift through the details of the problem. They reflect on the nature of the problem … whereas novices often jump into immediate problem-solving responses without first examining the background or context."

This concept relates to CAC in that there are multiple issues possibly attributable to an overreliance on the technology, including incongruous principal diagnosis (PD), missed CDI opportunities, and superfluous secondary diagnosis codes. Are novices more prone to missing the forest for the trees?

For example, take acute kidney injury, which translates into Medicare severity diagnosis-related group (DRG) 683 or 684. In situations where a patient is admitted for a work-up of intractable signs or symptoms, a definitive PD may never be determined. Once a definitive diagnosis has been identified by CAC and accepted by the coder, the CAC no longer suggests sign or symptom codes that could be attributable to that diagnosis. If the only diagnosis the coder has hung her hat on is acute kidney injury, she may resort to using that as the PD even if, after study, the PD should be "weakness." This misstep is reinforced by generalized sign/symptom DRG aversion.
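As a purely hypothetical illustration of that suppression behavior, imagine the engine maintains a table linking a definitive diagnosis to the signs and symptoms it explains. The codes below are real ICD-10-CM codes, but the linkage table and the function are invented for this sketch and do not describe any particular product.

# Hypothetical simplification of the suppression behavior described above.
SYMPTOMS_EXPLAINED_BY = {
    "N17.9": {"R53.1"},  # accepting acute kidney injury hides "weakness"
}

def remaining_suggestions(suggested: set[str], accepted: set[str]) -> set[str]:
    """Drop sign/symptom codes treated as integral to an accepted diagnosis."""
    hidden: set[str] = set()
    for dx in accepted:
        hidden |= SYMPTOMS_EXPLAINED_BY.get(dx, set())
    return suggested - hidden

print(remaining_suggestions({"N17.9", "R53.1"}, accepted={"N17.9"}))
# Prints {'N17.9'}: the weakness code no longer surfaces, so a coder leaning
# on the tool may never revisit whether R53.1 should have been the PD.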

If coders do not read the entirety of the documentation with an open mind, they may miss the point. There have been cases in which encounters converted from observation status fell into the wrong DRG. For example, take a patient being observed for rib fractures (the CAC- and coder-selected PD) who is then formally admitted for abnormal liver function that, after study, was due to choledocholithiasis, a diagnosis that didn't make the list because it was noted only in radiology and merely implied in a progress note. If the CAC generates a list of disjointed codes and the coder does not truly understand the progression and evolution of the admission, an inappropriate PD and DRG may be selected.

CAC can pluck verbiage from different parts of a record, an example of how computer assistance may prove even more valuable in future CDI efforts. If a provider documents pieces and parts of a more specific code, a CDI specialist may get the provider to reassemble them by querying for appropriate linkage. Further value can be obtained from CDI by reading between the lines to locate "unsaid" conditions, a nuanced task that a computer program may be unable to accomplish. After all, being confined to a box prevents the computer from thinking out of the box.

Other Trouble Spots
The engineering community has expressed concerns about systems intended to be substitutes for human operators. One reason for this concern is "automation bias," the propensity for humans to favor suggestions from automated decision-making systems. In these cases, humans ignore contradictory information or their own better judgment, convinced that the computer is better equipped to handle the task.

It's ironic that a human operator may make more errors when being assisted by a computerized device than when performing the same task alone. If operators are redirected to narrower tasks such as supervision, monitoring, and emergency intervention without first having the broader experience of performing hands-on tasks (which enables them to know what is involved and recognize perils and pitfalls), they won't have the familiarity or knowledge to troubleshoot and ensure success.

Examples of CAC errors include a misdirected focus on the wrong verbiage, verbiage in the wrong location in the document, and unidentifiable missing verbiage (for example, a misspelled diagnosis in free text mode). There also can be signal detection errors, such as flagging so many phrases/diagnoses that the coder overlooks an important one.
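The misspelling problem is easy to demonstrate. In the sketch below, exact phrase matching silently misses "hyponaremia"; the fuzzy comparison using Python's standard difflib module is only one conceivable mitigation, not a description of how any CAC engine actually handles free text.

from difflib import get_close_matches

KNOWN_TERMS = ["hyponatremia", "hypokalemia", "cellulitis"]
documented = "Labs notable for hyponaremia, started on fluid restriction."

# Exact matching finds nothing because of the typo.
print([t for t in KNOWN_TERMS if t in documented.lower()])  # []

# A fuzzy comparison flags the likely misspelling for human review.
for word in documented.lower().replace(",", " ").replace(".", " ").split():
    match = get_close_matches(word, KNOWN_TERMS, n=1, cutoff=0.85)
    if match:
        print(f"'{word}' may be a misspelling of '{match[0]}'")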

In addition, the computer may detect two conditions that meet Excludes1 criteria and present them to the coder as independent and mutually acceptable choices, or it may fail to present the correct one because the other option has already been accepted. There are also false positives in which the computer detects a condition out of context, for example, picking up "breast cancer" when the condition is really "history of breast cancer," or vice versa.
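A minimal sketch of what an Excludes1 check could look like appears below. The single pair in the table (type 1 and type 2 diabetes, which ICD-10-CM treats as mutually exclusive) is for illustration only; a real validator would draw on the complete tabular instructions rather than a hand-built table.

# Illustrative Excludes1 check; the table below is not an official edit set.
EXCLUDES1 = {
    frozenset({"E10.9", "E11.9"}),  # type 1 vs type 2 diabetes, unspecified
}

def excludes1_conflicts(accepted_codes: set[str]) -> list[frozenset]:
    """Return accepted code pairs that should not be reported together."""
    return [pair for pair in EXCLUDES1 if pair <= accepted_codes]

for pair in excludes1_conflicts({"E10.9", "E11.9", "I10"}):
    print("Excludes1 conflict:", " and ".join(sorted(pair)))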

Proceed With Caution
Vigilant, experienced coders may not be duped, but new coders may not have the depth of experience to vet diagnoses in context if they start coding using CAC without a period of de novo manual coding. Excessive trust in the computer can lead to accepting its suggestions without proper analysis.

Several highly regarded, experienced coders report that even they continually fight the urge to relinquish the coding to the computer, forcing themselves to consciously read all of the unhighlighted text. They note an increase in productivity using CAC, but they are unwilling to sacrifice accuracy to achieve it.

Each of them refers to CAC as being "in its infancy," adding that CAC is evolving and improving with each iteration. No one should view the technology as artificial intelligence, however.

Pre-ICD-10, many coders were concerned they would be made obsolete by CAC, which was designed to mitigate the transition's anticipated productivity loss. CAC is a tool, not a replacement for competent coding; the industry should never devolve into computerized coding with coder assist. As long as quality is judged by—and reimbursement is based on—the codes, coders can rest easy about their job security.

— Erica E. Remer, MD, FACEP, CCDS, is founder and president of Erica Remer, MD, Inc, which offers clinical documentation and ICD-10 consulting services.