Hit a CDI Roadblock? Maybe Your Success Metrics Need a Tune-Up
By Lisa A. Eramo, MA
For The Record
Vol. 34 No. 1 P. 16
Determining whether your documentation is running smoothly may not be as straightforward as you imagined.
Consider this scenario: On paper, a hospital’s clinical documentation improvement (CDI) program looks fantastic. CDI specialists review dozens of charts every day and send countless queries to which physicians respond in a timely manner. The only problem? The hospital’s volume of denials and recoupments continues to skyrocket.
Such a scenario raises the question, “How does a hospital with a successful CDI program actually lose money?”
Glenn Krauss, RHIA, BBA, CCS, CCS-P, CPUR, CCDS, C-CDI, PCS, FC, founder of Core-CDI.com, says the answer is simple: There’s no correlation between task-based measures (eg, number of charts reviewed, query rate, or physician response rate) and actual documentation improvement.
“The fact that you issued a query doesn’t mean you’re going to get paid for that diagnosis,” he says, noting that this is especially true for cases with only one major complication or comorbidity (MCC). Payers have edits in place to review these cases and often take the money back, says Krauss, who cites metabolic encephalopathy as a common example.
Erica Remer, MD, CCDS, a CDI consultant and cohost of the Talk Ten Tuesdays podcast, agrees that task-based metrics don’t paint an accurate picture of a CDI program’s overall effectiveness. “If you use query turnaround time as a metric, and the provider responds ‘unable to further clarify’ immediately after receiving the query, the turnaround time is minuscule, but it is a useless response,” she says. “Likewise, if you are judging your CDI specialists by how many queries they generate, but the queries are poorly constructed or noncompliant, is it advisable to encourage them to compose more queries to jack the numbers up? No.”
Krauss and Remer agree that hospitals with truly effective CDI programs should start to see a decrease in the number of queries.
“If you do a really good job at educating your providers, you should actually be reducing your queries,” Remer says. “If you educate a provider out of needing a query for every respiratory failure or you design a technological solution to eliminate malnutrition queries, that doesn’t mean your program is less valuable.”
The same is true for denial rates, diagnosis-related group (DRG) downgrades, postpayment recoupments, and the cost to collect, Krauss says. In a truly successful CDI program, these metrics should decrease over time while the rate of claims paid as billed and the denial overturn rate should simultaneously increase, he adds. (See the sidebar below for what Krauss deems as the top seven key performance indicators [KPIs] that every CDI program should monitor.)
Some CDI programs continue to struggle with a “productivity mindset,” according to Jennifer Eaton, RN, MSN, CCDS, CRCR, executive director of consulting services and education at Enjoin. “It’s difficult to evolve out of these productivity metrics into true success measures that resonate with a CFO, a vice president of revenue cycle, or the chief medical officer (CMO),” she says, adding that productivity measures are important, but they can’t be the only metrics organizations use to measure success.
Terrance Govender, MD, vice president of clinical affairs at ClinIntell, Inc., agrees. “CFOs, CMOs, and other senior leaders couldn’t care less about query response rates,” he says. “They want to know whether you’re shifting your severity reporting in the right direction based on your unique patient population.”
As health care organizations continue to expand CDI programs, experts agree they must dig into critical questions such as: How will we define CDI success? What metrics will we use, and how can we set realistic goals? For the answers, documentation experts offer the following suggestions.
Define what success means for your organization. Experts agree that it’s fine to look at quality. However, a goal of simply improving quality ratings isn’t specific enough, Eaton says. For example, perhaps the hospital wants to specifically reduce its readmissions penalties. To accomplish this goal, CDI managers need to know which specific cohorts drive the majority of the hospital’s readmissions. Then they need to seek out coding and documentation opportunities as well as analyze care processes (eg, are patients seeing a cardiologist within seven days of discharge?).
“There’s a huge opportunity for coding and CDI to have an impact, but they need to be given some direction,” says Eaton, who encourages CDI managers to consider these questions: What are your organization’s top five strategic goals? What is the effect, if any, of clinical documentation and coded data on those goals?
Calculate true financial impact, but don’t let revenue alone determine success. Eaton says the biggest mistake CDI managers make when calculating the financial impact of a CDI program is that they don’t consider payer-specific reimbursement methodologies. Instead, they default to Medicare rates across the board.
“If you don’t have an understanding of payer variances, the financial numbers you’re reporting to hospital executives may not be 100% accurate,” she says. “You could either be downplaying the financial impact or overinflating it. It creates a very fictionalized picture that’s not as specific as it could be. You need to make sure you have the appropriate data before you start sharing numbers with the CFO.”
It’s critical for CDI to collaborate with someone in the revenue cycle who is familiar with contracts for each payer, Eaton says, adding that opening the lines of communication can greatly enhance data reporting capabilities.
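Eaton’s point about payer variances can be illustrated with a minimal sketch. All of the contract terms, rates, and weights below are hypothetical; the only point is that the same documentation-driven DRG shift produces different dollar impacts under different payer contracts, and none at all under some.

```python
# Hypothetical payer contracts. Some pay per MS-DRG relative weight at a
# negotiated base rate; others pay a percent of charges, in which case a
# DRG reweighting alone may not change payment. All figures are made up.
payer_rates = {
    "Medicare": {"method": "drg", "base_rate": 6000.00},
    "Commercial A": {"method": "drg", "base_rate": 7500.00},
    "Commercial B": {"method": "percent_of_charges", "pct": 0.55},
}

def impact(payer, weight_before, weight_after):
    """Reimbursement change from a documentation-driven DRG weight shift."""
    contract = payer_rates[payer]
    if contract["method"] == "drg":
        return (weight_after - weight_before) * contract["base_rate"]
    # Percent-of-charges payers: the weight shift itself changes nothing.
    return 0.0

# A query that moves a case from relative weight 1.0 to 1.2:
for payer in payer_rates:
    print(payer, impact(payer, 1.0, 1.2))
```

Defaulting every case to the Medicare line of this table is exactly the mistake Eaton describes: it overstates impact for some payers and misses that others pay on an entirely different basis.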
However, even with accurate data, financial impact shouldn’t be the single most important driver of a CDI program, says Cesar Limjoco, MD, CMO at T-Medicus. In fact, hospitals that let revenue alone guide their CDI programs are among those often hit with the biggest financial penalties, Limjoco says.
“If you are guided by the wrong north star, you can always go off the rails. The true north star is clinical documentation that shows what happened based on the clinical truth,” he notes.
Remer agrees, noting that many successful CDI programs not only bring in revenue but also mitigate potential recoupments, preventing organizations from losing hundreds of thousands of dollars or more. This contribution adds enormous value that organizations must consider when evaluating the overall success of their CDI programs, she adds.
Look beyond the case mix index (CMI). When organizations first roll out a CDI program, they’re likely to see an increase in CMI. However, that increase may level off over time as providers start to document more effectively, says Tony Oliva, DO, MMM, FACPE, CMO at Nuance Healthcare.
One caveat? COVID-19 had the opposite effect in many organizations. MedPar data from FY 2020 indicates that CMI jumped by an average of almost 4%, according to Oliva, who notes that it had been increasing by only 1% annually for the previous three years.
“How do you prove that your CDI program is doing well with this external force that’s actually driving the majority of your CMI? Also, remember what goes up must come down. As COVID hopefully resolves, CMI is naturally going to retract to a more normalized place,” Oliva says. “Your CDI program may still be performing well, but your CMI might come down.”
Govender agrees. “CMI can go up or down for reasons that have nothing to do with how physicians are documenting,” he says. “Using CMI as a metric to hold physicians accountable or even to determine short-term [return on investment] can be very misleading.”
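The mechanics behind both observations are simple: CMI is just the average MS-DRG relative weight across discharges, so a shift in patient mix moves it even when documentation practice is unchanged. The weights below are illustrative, not real MS-DRG values.

```python
# CMI = average MS-DRG relative weight over discharges. Adding a few
# high-weight cases (e.g., a COVID surge) raises CMI with no change in
# how physicians document. All weights here are hypothetical.

def case_mix_index(drg_weights):
    return sum(drg_weights) / len(drg_weights)

baseline = [0.8, 1.0, 1.2, 1.5]   # a typical pre-surge mix (hypothetical)
surge = baseline + [3.0, 3.0]     # same cases plus two high-weight admissions

print(case_mix_index(baseline))   # 1.125
print(case_mix_index(surge))      # 1.75
```

When the surge cases disappear, CMI falls back toward baseline, which is Oliva’s point: the drop says nothing about CDI program performance.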
Mortality is a great alternative to the CMI, Oliva says. “If you’re moving up in your mortality performance compared with your peers and/or you’re not falling off, that means you’re doing a good job of capturing the essence of your patient population. Mortality is the cleanest indicator of that,” he says.
“What’s important is whether the patient was expected to die,” Remer says. “Documentation needs to reflect patient severity of illness and risk of mortality so that if the patient does die, it’s clear why—because they were at greater risk.”
Set realistic goals. This includes setting a timeline—especially because there may not be an immediate return on investment, Eaton says. Medicare, for example, uses three years’ worth of data in its claims-based quality rating methodologies.
“If you start a quality initiative in January—and your quality reports come out in October—you’re not going to see the return on investment or realization of CDI impact in October,” Eaton says. “It may not be until two Octobers down the road. So again, it’s a different mindset when focusing on goals and KPIs outside of fee-for-service initiatives. The messaging around these initiatives must be clear so that realistic goals are established, understood, and accepted.”
Another aspect of setting realistic goals is drilling down into your own organization’s data and not relying entirely on national benchmarks, Govender says. For example, perhaps an organization is in the 60th percentile for pneumonia DRGs and wants to be in the 80th percentile so it can compete with peer organizations. Govender says most organizations in this scenario start comparing their DRG rates with national averages. What they often don’t, and frequently can’t, do is drill down into and compare the service lines that reported that DRG. They also forget that they have no control over variables affecting the probability of those patients having MS-DRG–related comorbid severity.
Another pitfall? Organizations fail to consider sample size, a problem that is only exacerbated at the provider level.
“DRG capture rates are what I call a vanity metric,” Govender says. “It may make you look good and sleep well at night, but it’s not telling you what’s really going on or where real opportunity lies within your patient population. No matter how carefully you choose a cohort, there is no proxy for your unique and constantly changing DRG mix and patient population. These factors inevitably influence your performance on capture rates.”
What national benchmarks do is satisfy the “curiosity itch,” Govender says. “They tell you where you stand compared to your peers. They don’t tell you if your peers’ CMI is higher because they’re better documenters of severity or because they have a higher severity patient population. You don’t have any insight into whether the peers in the cohort are overreporting severity, underreporting severity, or exactly where they need to be for their unique patient population.”
To set realistic goals, organizations must calculate population-specific severity reporting opportunities as observed-to-expected ratios with the understanding that you will have variances even among physicians within the same service line, Govender says.
“This means the goals you set will differ based on the service line and patient populations,” he says. “If we do this right as an industry, we should be able to tell physicians which clinical conditions they’re underreporting for their unique patient population. Benchmarking does not help us with that. It’s virtually impossible to set realistic, specific, and achievable goals for your organization if you’re using national benchmarks.”
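The observed-to-expected approach Govender describes can be sketched in a few lines. The risk-model probabilities below are hypothetical; in practice they would come from a model of each service line’s own patient population, which is precisely what a national benchmark cannot supply.

```python
# Sketch of an observed-to-expected (O/E) severity-capture ratio:
# compare the number of cases where severity was actually captured against
# the sum of per-case probabilities predicted for that service line's
# patient mix. All counts and probabilities below are hypothetical.

def oe_ratio(observed_count, expected_probs):
    """O/E ratio: observed captures vs. model-expected captures."""
    expected = sum(expected_probs)
    return observed_count / expected

# Service line A: 40 of 100 cases captured; the model expected ~50.
line_a = oe_ratio(40, [0.5] * 100)   # 0.8 -> likely underreporting
# Service line B: 30 of 100 captured; the model expected ~25.
line_b = oe_ratio(30, [0.25] * 100)  # 1.2 -> at or above expectation

print(line_a, line_b)
```

An O/E ratio below 1.0 flags a service line that is likely underreporting severity for its own patient mix, which is the population-specific goal-setting Govender advocates in place of percentile chasing.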
Factor in technology. Realistic and reasonable goals should also take technology into consideration. For example, technology can help identify clinical evidence to support a query or justify a prompt to the physician at the point of care for additional specificity, making it easier to optimize severity reporting, Govender says.
Performing faster, more effective reviews bodes well for organizations facing a direct care nursing shortage, Oliva says. “The cost of nursing is going up exponentially right now. Most CFOs do not believe this is a short-term problem. That’s going to put a downstream effect on staffing for CDI programs. That’s not because nurses don’t want to go into CDI; it’s because hospitals won’t be able to move them into those roles,” he explains.
The shortage puts pressure on existing CDI specialists to take on as much as possible. They capture the CC and MCC and move on instead of looking at severity of illness (SOI) and risk of mortality (ROM), Oliva says. “But it’s those second, third, and fourth diagnoses that increase your severity risk and allow your mortality to be comparable to your peers,” he notes.
Automating physician queries can free CDI specialists to perform deeper, more complex reviews, Oliva says. “You’re not going to take people out of CDI; you’re just maybe not going to fill a position or add a position,” he says.
As organizations look ahead, experts agree that focusing on medical necessity will be paramount.
Krauss references the 2021 Comprehensive Error Rate Testing report that cites insufficient documentation and lack of medical necessity as the top two reasons for improper payments.
“Without medical necessity, CDI is irrelevant,” he says. “It doesn’t matter how many CCs and MCCs you get if you don’t ultimately get paid. Do you want short-term gain, or do you want a program that allows you to keep the money you receive?”
“If you’re going to exaggerate what the patient has to rationalize an inpatient stay, right from the get-go you’re already off on the wrong foot,” Limjoco says. “If you don’t have processes in place to make sure you’re admitting patients only when they need to be admitted—and then capture severity of illness in those patients—you’re going to keep having the same problems.”
It all comes back to the basic principle of letting the right motivations drive your CDI program. “Do what’s best for the patient and everything else will fall into place,” Limjoco says. “Manipulating the data isn’t going to get you there. Some people say this is too idealistic, but it’s actually the only way.”
— Lisa A. Eramo, MA, is a freelance writer and editor in Cranston, Rhode Island, who specializes in HIM, medical coding, and health care regulatory topics.
MONITOR THESE KEY PERFORMANCE INDICATORS
The problem with many clinical documentation improvement (CDI) programs? They aren’t measuring the right data, says Glenn Krauss, RHIA, BBA, CCS, CCS-P, CPUR, CCDS, C-CDI, PCS, FC, founder of Core-CDI.com. Moving away from traditional productivity metrics can help CDI programs expand and deliver maximum return on investment.
Krauss says to consider the following seven key performance indicators:
• Average monthly discharged not final billed dollars attributable to query clarifications
• Average overturn rate for nontechnical denials
• Clinical validation denials (volume and dollar amount) by diagnosis and payer
• Diagnosis-related group downcodes (volume and dollar amount) by payer and discharging physician
• Gross vs actual CC/MCC capture rates (subtract CCs/MCCs refuted by payers)
• Medical necessity denials (volume and dollar amount) by payer and physician
• Net patient revenue vs gross revenue (subtract nontechnical denials)
Jennifer Eaton, RN, MSN, CCDS, CRCR, executive director of consulting services and education at Enjoin, says to also focus on the following three additional key performance indicators related to value-based care:
• HCC capture rates
• Quality cohort accuracy rates
• Risk adjustment capture rates (particularly for CMS claims-based measures that dictate reimbursement penalties/incentives)