Hitting the Mark on Coder Performance
By Selena Chavis
For The Record
Vol. 31 No. 2 P. 22
Industry experts weigh in on how analytics and benchmarking are driving improvements.
Value-based care demands the best performance from all areas of today’s hospitals and health systems. HIM is no exception.
Accurate, compliant, and complete coding ensures health systems receive optimized quality reporting scores used for new and evolving reimbursement models. Consequently, a health care organization’s profitability hinges on how well its coders execute in these areas, according to Chetan Parikh, CEO of ezDI.
While HIM directors have long measured coder performance to inform process improvement strategies, many are finding that advances in analytics are opening new insights and opportunities for improvement. “Thanks to EHRs, the medical coding industry today has massive chunks of data,” Parikh says. “But it remains utterly useless if health systems do not make sense of data. This is where analytics comes into play.”
Parikh notes that analytics tools help HIM leaders visualize, analyze, and impact coder performance in real time. “Through dashboards and analytics, one can monitor coder productivity, conduct root cause analysis, and identify key improvement opportunities,” he says. “The same tools can be used to look at trends and compare with other peer group hospitals. This empowers health systems leaders with a crucial competitive edge to make data-driven informed decisions.”
Andrea Romero, RHIA, chief operating officer of himagine solutions, notes that EHRs make it easier for hospitals to track coder productivity by providing access to such information as the number of charts completed during the timeframe that a coder is actively logged into the system. “Those can then be parsed out by patient type,” she says. “For those using EHRs or encoder software, those types of reports are easily accessible now.”
Due to these advancements, Romero believes the use of reporting and analytics to measure coder performance has become fairly mainstream, even though there is often a cost associated with deploying these capabilities. For instance, she points to the broad uptake of the Epic system, a platform that, like many other EHRs, offers “tons of reporting and analytics.”
For organizations not leveraging this kind of functionality through an EHR or third-party software, Romero says that, at a minimum, all hospitals receive the Program for Evaluating Payment Patterns Electronic Report, or PEPPER, which provides at least some baseline information for measuring performance.
What to Measure
Coder performance is typically measured across two categories: productivity and quality. While EHRs and other analytics tools provide a streamlined method for measuring productivity, quality is a different story, Romero says.
“EMRs and encoders do not automatically calculate a quality percentage for each coder. This is often a manual process that’s performed separately by an auditing team to determine quality scores,” she explains, noting that these scores can vary widely based on the type of audit.
For example, she says some hospitals may opt for a quick compliance check while others dive deeper, looking at such areas as procedures, all patient refined diagnosis-related groups (DRGs), risk of mortality, severity of illness, and discharge disposition. “While most hospitals aim for a 95% compliance rate, the data to compute this rate often varies,” Romero says.
While the industry standard for quality is widely accepted as 95%, identifying standards for productivity has proven to be much more difficult.
“It can be challenging to create coder performance metrics because every health system is different and there are multiple factors to be taken into account,” Parikh says. “However, there are basic metrics you can regularly monitor that provide a good baseline, such as coder productivity and accuracy.”
Parikh breaks these down further into number of charts per day or per hour, number of cases coded per day or per hour, number of claims submitted, and number of denials appealed.
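As a minimal sketch, the first two of these baseline metrics can be computed from a simple completion log (the log format, field names, and figures here are hypothetical, not tied to any particular EHR or encoder):

```python
from collections import defaultdict

# Hypothetical chart-completion log: (coder, date, active minutes on chart).
log = [
    ("coder_a", "2024-05-01", 25), ("coder_a", "2024-05-01", 30),
    ("coder_a", "2024-05-02", 28), ("coder_b", "2024-05-01", 40),
    ("coder_b", "2024-05-02", 35), ("coder_b", "2024-05-02", 45),
]

def baseline_metrics(log):
    """Per-coder charts per day and charts per active hour."""
    charts = defaultdict(int)
    minutes = defaultdict(float)
    days = defaultdict(set)
    for coder, date, active_min in log:
        charts[coder] += 1
        minutes[coder] += active_min
        days[coder].add(date)
    return {
        coder: {
            "charts_per_day": charts[coder] / len(days[coder]),
            "charts_per_hour": charts[coder] / (minutes[coder] / 60.0),
        }
        for coder in charts
    }

metrics = baseline_metrics(log)
```

Claim and denial counts, the other two metrics Parikh names, would come from a separate billing or clearinghouse feed rather than from coder activity logs.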
Real-Life Application for Coder Performance
Parikh says analytic tools fall into the following broad categories:
• descriptive analytics, which shows what has happened in the past (eg, a coder’s productivity for the last six months);
• diagnostic analytics, which explains why it happened (eg, the causes of coder productivity trends);
• predictive analytics, which forecasts the likely performance of coders in the upcoming months; and
• prescriptive analytics, which describes what corrective actions need to be executed to improve coder performance.
In a case study presented at the 2018 AHIMA Convention and Exhibit, Parikh demonstrated how a 270-bed hospital in New York with six inpatient coders used prescriptive analytics to optimize coder performance. The data analysis was conducted using regression, median ranking, and linear programming techniques.
“Most of these tools are bundled in an easy-to-use and easy-to-implement software so that the hospital does not need a statistician to improve the coder performance,” Parikh points out.
Two data sets were collected—one from 2016, the other from 2017—using analytics software that captured the primary metrics of length of stay (LOS), DRG, service line, coder name, and coding time. Coding time was measured based on active time in the system from the time a case was opened until the DRG was submitted for billing, excluding breaks and time for query responses.
Parikh says the regression analysis revealed that the LOS and service line metrics were the most dominant factors affecting coder productivity. In other words, different coders had different productivity based on the LOS and service line.
“Subsequently, a simplex linear programming method was utilized to optimize the coder performance,” he explains, noting that the algorithm identifies the most efficient coder in a specific service line and LOS bucket and allocates those cases to that team member.
As a result of the tool recommending the most favorable way to allocate cases, coder productivity improved by 45% at the hospital.
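The allocation step can be sketched as a small linear program (a hedged illustration, not the case study’s actual model or data: the rates, hours, and demand below are made up, and SciPy’s HiGHS-based solver stands in for whatever engine the vendor tool uses). Each variable is the number of hours a coder spends on one (service line, LOS) bucket; the objective minimizes total coding hours while covering every case:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative inputs: rates[c][b] = charts/hour coder c achieves in
# (service line, LOS) bucket b; all numbers are hypothetical.
rates = np.array([[2.5, 1.2],
                  [1.8, 2.0],
                  [2.2, 1.5]])
hours = np.array([40.0, 40.0, 40.0])   # weekly hours available per coder
demand = np.array([150.0, 80.0])       # charts to code per bucket

n_coders, n_buckets = rates.shape
# Decision variable x[c, b] = hours coder c spends on bucket b (flattened
# row-major by coder); minimize total hours worked.
c = np.ones(n_coders * n_buckets)

A_ub, b_ub = [], []
# Coverage: -sum_c rates[c, b] * x[c, b] <= -demand[b] for each bucket.
for b in range(n_buckets):
    row = np.zeros(n_coders * n_buckets)
    for cd in range(n_coders):
        row[cd * n_buckets + b] = -rates[cd, b]
    A_ub.append(row)
    b_ub.append(-demand[b])
# Capacity: sum_b x[c, b] <= hours[c] for each coder.
for cd in range(n_coders):
    row = np.zeros(n_coders * n_buckets)
    row[cd * n_buckets:(cd + 1) * n_buckets] = 1.0
    A_ub.append(row)
    b_ub.append(hours[cd])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
alloc = res.x.reshape(n_coders, n_buckets)  # hours per coder per bucket
```

In this toy instance the solver routes each bucket toward the coders with the highest rates for it, which mirrors the case study’s idea of assigning cases to the most efficient coder per service line and LOS bucket.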
Romero notes that when considering coder productivity, many facilities focus only on time-to-case completion ratios. To fully optimize coder performance, she suggests taking it a step further by looking at coder utilization. “Many organizations say they are able to code X number of charts an hour, but how many hours out of their 40-hour week are their coders really being utilized?” Romero asks.
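Romero’s utilization point is simple arithmetic; a minimal sketch with hypothetical figures:

```python
def weekly_output(charts_per_hour, utilized_hours):
    """Charts actually produced in a week, given real utilization."""
    return charts_per_hour * utilized_hours

# A coder rated at 3 charts/hour looks capable of 120 charts across a
# 40-hour week, but at 25 utilized hours the real output is only 75.
nominal = weekly_output(3, 40)
actual = weekly_output(3, 25)
utilization = 25 / 40
```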
Benchmarking is the systematic process of measuring and comparing an organization’s performance against such metrics as industry standards, recognized peers, and an organization’s past performance. “The objective is to identify best practices which, over time, help you reach and sustain a remarkable performance,” Parikh says, adding that trusted sources for benchmarking coder performance can be found in AHIMA journals, through the Healthcare Financial Management Association, and in other national benchmarking studies.
Parikh says benchmarking coder performance helps hospitals accomplish the following:
• monitor and evaluate coder performance and understand health systems’ strengths and weaknesses from a revenue cycle management point of view;
• identify weak areas within revenue cycle management processes to help improve the bottom line;
• observe and improve where a health system stands in terms of value-based reimbursement;
• measure and compare with peers to identify opportunities for improvements; and
• translate benchmark results into data, trends, and insights to convince the C-suite, HIM and revenue cycle leaders, physicians, and coders about the change required for the betterment of the health system.
While benchmarking is an important component of coder performance improvement, Romero emphasizes that it is also an area where many hospitals struggle, especially as it relates to productivity.
“When it comes to productivity, there is not an [industry] standard because it depends on systems and processes,” she says. “You will have a critical access hospital where coders can perhaps code three to four inpatient charts an hour, and then you might have a large academic facility with a big trauma center where productivity might be 1.5 charts an hour.”
At the 2018 AHIMA Convention and Exhibit, Romero presented national coder productivity statistics drawn from data collected from January 2017 through April 2018 within himagine’s database. Across all inpatient cases (882,766), the average productivity achieved by the organization’s clients was 2.02 cases per hour. Productivity also differed between academic medical centers and community hospitals: 2.11 and 2.75 cases per hour, respectively.
Pointing to one case in which the number of disparate systems had a notable impact on productivity, Romero recalls how an inpatient coder at one hospital had to touch seven systems in order to complete a chart because the organization lacked integration. “They may be able to code only one-half chart per hour vs a coder in another facility of the same size who has to access only one EMR and an encoder and can code 2.5 charts an hour,” she points out.
Bylaws governing processes such as physician querying can also impact productivity. Romero says that at some facilities, querying is still completed via e-mail and Word documents, which can slow down response times. In addition, some hospitals allow as much as 10 days or more for a physician to complete medical record documentation.
himagine data also point to disparities in coder productivity as it relates to LOS and the number of beds in a facility. Average productivity for LOS of four days or less ran 2.22 charts per hour, while productivity dropped to 1.87 for an LOS of greater than five days. Hospitals with fewer than 200 beds saw lower productivity (2.18 charts per hour) while coders at facilities with more than 500 beds achieved a rate of 2.5 charts per hour.
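Figures like these can seed a simple peer-group check (a minimal sketch: the bucket keys and the 10% flag threshold are assumptions for illustration, while the benchmark values are the himagine figures quoted above):

```python
# Peer-group benchmarks (charts/hour) from the himagine figures cited
# above; the keys and the flagging threshold are illustrative choices.
BENCHMARKS = {
    "all_inpatient": 2.02,
    "academic": 2.11,
    "community": 2.75,
    "los_le_4_days": 2.22,
    "los_gt_5_days": 1.87,
    "beds_lt_200": 2.18,
    "beds_gt_500": 2.5,
}

def flag_against_peers(observed_rate, peer_group, threshold=0.10):
    """True if observed productivity trails its peer-group benchmark
    by more than the threshold fraction."""
    benchmark = BENCHMARKS[peer_group]
    return observed_rate < benchmark * (1 - threshold)

flag_against_peers(1.7, "academic")   # well below the academic benchmark
flag_against_peers(2.1, "academic")   # within range of the benchmark
```

The point of keying the lookup by facility type, LOS, and bed count is exactly Romero’s apples-to-apples caution: comparing a level 1 trauma academic center against the all-inpatient average would flag a problem that may not exist.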
To ensure accurate benchmarks, Parikh suggests organizations follow a structured approach that applies Six Sigma methodology across five steps:
• define goals and critical success factors related to coder performance;
• measure and monitor current performance to establish a baseline;
• analyze how existing performance measures up to those goals, digging deeper with root cause analysis when disparities exist;
• improve performance by deploying strategies that address identified gaps and opportunities; and
• control processes through ongoing assessment, which is critical to sustainable outcomes.
Parikh says the control step can be accomplished through feedback processes that encourage input from all stakeholders to improve benchmark performance.
Benchmarking numbers must be verifiable, Parikh adds. “Always identify the sources, make sure it is credible, and, if needed, contact the relevant authority to help you understand the assumptions and methods they used while setting a benchmark,” he says, noting that the first step must always be centered on identifying the problem transparently. “This mitigates any risk and ensures that your benchmarking process … paints an accurate picture of your health system against your peers.”
Romero emphasizes that the process of measuring coding productivity is not a one-size-fits-all proposition, noting that it’s important to leverage data that will lead hospitals to make apples-to-apples comparisons. “If I am a level 1 trauma academic facility, I do not want to compare myself to a 100-bed hospital down the street,” she says. “Even though we may have 100 hospitals using Epic in our database, it’s amazing how different processes can be within the same environment.”
— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications, covering everything from corporate and managerial topics to health care and travel.