
Spring 2023

Audit Alley: RADV Medical Record Reviewer Guidance and Preparedness Utilizing AI
By April Russell
For The Record
Vol. 35 No. 2 P. 28

Audits are universally stressful. The knowledge that someone we do not know will be scrutinizing, criticizing, and invalidating our work is enough to make anyone lose a few hours of sleep. According to the Contract-Level 15 Risk Adjustment Data Validation (RADV) Medical Record Reviewer Guidance, effective January 10, 2020, the Centers for Medicare & Medicaid Services (CMS) can conduct RADV audits annually to “ensure risk-adjusted payment integrity and accuracy.”1

With the penalties associated with the dreaded audit, it’s understandable that risk revenue directors, HIM managers, and medical coders employed at Medicare Advantage (MA) organizations often include the acronym RADV on the obscenity list along with words and phrases like “extrapolation” and “would apply to the payment year 2011 contract-level audits and all subsequent audits.”2

In November 2018, CMS announced in a proposed rule its plan to recover “overpayments based on extrapolated audit findings through the use of statistically valid random sampling techniques” going back to review year 2011.2 Extrapolation was originally proposed in 2012 but never implemented, and the timeline for enacting the rule has been extended several times. On January 30, 2023, CMS finalized a policy to collect only nonextrapolated overpayments for payment years (PY) 2011 through 2017.3 CMS will collect extrapolated overpayments starting with PY 2018. While this significantly decreases the overpayment penalties organizations will have to repay compared with what they would have owed had CMS started with PY 2011, MA organizations must be vigilant in planning for larger recoupment amounts than in past years, beginning with PY 2018.

The implications are losses that could amount to millions of dollars per year for some MA organizations. Since there’s no way to avoid RADV audits, for which most large MA organizations are selected each year, the best way to prevent financial loss is to be overly prepared for the inevitable.

There are precautions MA organizations can take. An important one is ensuring that provider contracts include language allowing payer organizations to retrieve clinical documentation to validate that submitted ICD-10-CM codes are supported. Some providers agree to send copies of progress notes with each claim, while others allow representatives of the MA organization onsite at the office or facility to make physical copies of member documentation. Other providers even agree to give payers access to their EHR systems.

MA organizations are willing to be flexible and agree to almost any method that will allow them to retrieve this precious documentation. They also employ teams of HIM managers, nurses, and coding professionals who specialize in hierarchical condition category (HCC) coding. Their main job is to review medical records throughout the year to verify that all RADV standards are met. If those standards are not met, these professionals can work with provider relations departments to create education programs that teach providers how to compose complete documentation that will pass not only a RADV audit but any audit.

According to the RADV guidance document, a valid encounter is a face-to-face visit with a provider type allowable under CMS rules, documented in a note that is complete and unambiguous, bears a valid signature, and includes all necessary dates, including admission, discharge, and visit dates. And, of course, the documentation must fully support the ICD-10-CM code that maps to the HCC in question. HCC coding professionals are trained to look for all of this information during a thorough HCC review.
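
As a rough illustration only, the following Python sketch encodes those validity checks as a simple checklist. The field names and the list of allowable provider types are hypothetical placeholders; the authoritative criteria remain those spelled out in the CMS guidance.

from dataclasses import dataclass, field

# Illustrative subset only; CMS defines which provider types are allowable.
ALLOWABLE_PROVIDER_TYPES = {"MD", "DO", "NP", "PA"}

@dataclass
class EncounterRecord:
    face_to_face: bool
    provider_type: str
    note_complete: bool
    signature_valid: bool
    visit_date: str | None
    admission_date: str | None = None   # inpatient stays only
    discharge_date: str | None = None   # inpatient stays only
    inpatient: bool = False
    supported_icd10_codes: set[str] = field(default_factory=set)

def validity_issues(enc: EncounterRecord, audited_code: str) -> list[str]:
    """Return the reasons an encounter would fail the checks described above."""
    issues = []
    if not enc.face_to_face:
        issues.append("not a face-to-face visit")
    if enc.provider_type not in ALLOWABLE_PROVIDER_TYPES:
        issues.append(f"provider type {enc.provider_type!r} not allowable")
    if not enc.note_complete:
        issues.append("note incomplete or ambiguous")
    if not enc.signature_valid:
        issues.append("missing or invalid provider signature")
    if enc.visit_date is None:
        issues.append("missing visit date")
    if enc.inpatient and not (enc.admission_date and enc.discharge_date):
        issues.append("missing admission or discharge date")
    if audited_code not in enc.supported_icd10_codes:
        issues.append(f"documentation does not support {audited_code}")
    return issues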

RADV Guidance Rules Regarding HCC/ICD-10-CM Validation
Of note, in the most current RADV guidance document, which is 72 pages long, the footnote on every single page reads as follows: “The general guidance in this document is not exclusive. In addition to this guidance, all other rules, requirements, and instructions relating to medical record documentation substantiation of diagnoses and the coding of diagnoses apply, including, but not limited to, that the supporting medical records be clear and unambiguous, the requirements set forth in Chapter 7 of the Medicare Managed Care Manual, the requirements of the International Classification of Diseases (ICD) Clinical Modification Guidelines for Coding and Reporting (ICD-9-CM), and all requirements set forth in Medicare regulations, the Parts C and D contracts, and the Electronic Data Interchange Agreements.”1

The RADV guidance document’s rules for ensuring ICD-10-CM codes are substantiated by documentation follow common themes found in the ICD-10-CM guidelines. These are the same guidelines that payer organizations depend on to ensure that the codes providers submit on claims for reimbursement are fully supported. But knowing that many provider organizations understand the importance of hiring experienced and credentialed coders isn’t enough for many payers to feel safe and prepared for RADV.

Leaders of MA organizations have implemented programs that run throughout the year, in which on-staff clinical documentation improvement nurses and HCC coders concurrently and retrospectively compare claims with encounters to make sure all ICD-10-CM codes are supported before they are submitted to CMS. Conducting such reviews can prevent overpayments and the need for recoupment. This same group is also responsible for preparing the MA organization to submit records to CMS when the RADV audit comes.
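
As a hedged sketch of that comparison step, the Python snippet below flags any ICD-10-CM code that went out on a claim line but is not among the codes reviewers found supported for that member and date of service. The data structures and field names are hypothetical.

from collections import defaultdict

def unsupported_codes(claim_lines, supported_by_dos):
    """claim_lines: iterable of (member_id, date_of_service, icd10_code) tuples.
    supported_by_dos: dict mapping (member_id, date_of_service) to the set of
    codes the reviewed documentation supports."""
    findings = defaultdict(set)
    for member_id, dos, code in claim_lines:
        if code not in supported_by_dos.get((member_id, dos), set()):
            findings[(member_id, dos)].add(code)
    return findings  # codes to resolve before submission to CMS

# Example: one supported code and one that needs review before submission.
claims = [("M001", "2023-03-14", "E11.22"), ("M001", "2023-03-14", "I50.9")]
supported = {("M001", "2023-03-14"): {"E11.22"}}
print(dict(unsupported_codes(claims, supported)))
# {('M001', '2023-03-14'): {'I50.9'}}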

Finding documentation that supports HCCs can be a manual process. Depending on how patients’ medical records are retrieved, it could mean reading through physical charts in the form of paper or scanned images on a computer or paging through the providers’ EHR systems. It’s not unheard of for providers to have a mix of documentation, some being in paper form and others being in the EHR.

Some chronically ill patients can have medical records that are thousands of pages long, making it difficult for reviewers to pinpoint what they need, even with the use of advanced EHR systems. And if reviewers find ICD-10-CM codes that are not supported on the dates of service that went out on claims, it becomes necessary to find support for those codes in documentation from another visit within the review year, which can be challenging. The difficulty surrounding retrieval and review processes often means MA organizations target only member charts with the highest risk scores. Ideally, MA organizations would do a thorough review of all contracted providers and member charts, but time constraints associated with submission deadlines can make this goal unattainable. Technology utilizing artificial intelligence (AI) and natural language understanding (NLU) could be the solution for helping payer organizations review charts for all MA members.

Using AI to Find the Evidence MA Organizations Need
Claims systems can find admission dates, discharge dates, dates of service, and billing codes. EHRs store clinical documentation by patient medical record number along with other pertinent information such as provider signatures, provider specialties, problem lists, and lab results. Neither system is designed specifically for the medical code substantiation review task. An ideal solution for this important job is a system that seamlessly ties patient encounters to claims data, then annotates evidence within the document narrative and links it directly to the relevant ICD-10-CM codes.
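
One way to picture that linkage, purely as an assumption-laden sketch, is a record that ties a claim line to its encounter note and carries evidence annotations pointing at the ICD-10-CM codes (and mapped HCCs) they substantiate. The class and field names below are hypothetical; a production system would also carry signatures, provider specialties, problem lists, and similar details.

from dataclasses import dataclass, field

@dataclass
class EvidenceAnnotation:
    start: int              # character offsets into the note narrative
    end: int
    text: str               # the highlighted evidence snippet
    icd10_code: str         # code the evidence supports
    hcc: str | None = None  # HCC the code maps to, if any

@dataclass
class LinkedEncounter:
    member_id: str
    claim_id: str
    date_of_service: str
    note_text: str
    annotations: list[EvidenceAnnotation] = field(default_factory=list)

    def evidence_for(self, icd10_code: str) -> list[EvidenceAnnotation]:
        """Return the annotated evidence supporting one audited code."""
        return [a for a in self.annotations if a.icd10_code == icd10_code]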

In recent years, several AI-enabled products have appeared on the market that can be configured to offer such a solution. AI tools that specialize in identifying documentation that supports HCC coding must have a robust implementation period with an experienced team of technical and clinical professionals who will help providers set up a solution that meets their needs. These professionals include architects who will work with providers to set up interfaces that accept only the pieces of patient documentation that are clinically relevant.

Computational linguists and NLU engineers will review documentation trends to learn which rules will work best for each client. NLU engineers will work closely with content analysts, such as medical coders and clinical documentation improvement nurses, who will conduct focused evaluations to ensure that AI output is reasonable and accurate compared with what is documented and what codes went out on claims. With large enough bodies of documentation and human-verified feedback, machine learning techniques can be utilized to further improve NLU output.
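
A focused evaluation of that kind can be as simple as comparing, chart by chart, the codes the NLU engine found evidence for against the codes credentialed coders verified. The sketch below, with hypothetical inputs, reports precision and recall for one such comparison.

def precision_recall(nlu_codes: set[str], coder_codes: set[str]):
    """Precision: how much of the NLU output coders confirmed.
    Recall: how much of the coder-verified set the NLU engine found."""
    true_pos = len(nlu_codes & coder_codes)
    precision = true_pos / len(nlu_codes) if nlu_codes else 0.0
    recall = true_pos / len(coder_codes) if coder_codes else 0.0
    return precision, recall

nlu_output = {"E11.22", "I50.9", "J44.9"}       # codes with NLU-found evidence
coder_verified = {"E11.22", "I50.9", "N18.32"}  # codes coders substantiated
print(precision_recall(nlu_output, coder_verified))  # roughly (0.67, 0.67)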

After initial analysis and testing are complete, and the AI solution stabilizes following the period of iterative improvement that new technology adoption often requires, providers are left with a tool that can process thousands of patient records, highlighting only the evidence and concepts that are relevant for their use case, such as ICD-10-CM codes that map to HCCs. Some providers may choose to find what the industry commonly refers to as monitored, evaluated, assessed/addressed, and treated, or MEAT, criteria: evidence that the condition behind a given code was monitored, evaluated, assessed, or treated.
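
For illustration, a MEAT check can be approximated as tagging each evidence snippet with the element it suggests. The trigger phrases below are invented examples, not a production rule set; real NLU engines use far richer linguistic context.

MEAT_TRIGGERS = {
    "monitored": ["follow up", "monitoring", "recheck"],
    "evaluated": ["reviewed", "exam shows", "a1c"],
    "assessed": ["assessment", "stable", "worsening"],
    "treated": ["continue", "prescribed", "titrate", "refer"],
}

def meat_elements(snippet: str) -> set[str]:
    """Return which MEAT elements a snippet of evidence suggests."""
    lowered = snippet.lower()
    return {element for element, phrases in MEAT_TRIGGERS.items()
            if any(phrase in lowered for phrase in phrases)}

print(meat_elements("A1c reviewed today; continue metformin, recheck in 3 months."))
# Prints a set such as {'monitored', 'evaluated', 'treated'}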

NLU dictionaries can be extended to look for procedures, lab tests, vital signs, signs and symptoms, and provider plans, surfacing concepts suggestive of MEAT for a particular ICD-10-CM code. Some organizations may choose to focus on a particular population, such as patients with diabetes, because of a known provider weakness in thoroughly documenting diabetic complications. Through reporting and query features, the rich NLU markup and output can be used to return exactly what providers need.
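
As a final hypothetical sketch, a diabetes-focused report over records shaped like the LinkedEncounter example above might simply filter annotations to the ICD-10-CM diabetes code families and return each member's evidence rows.

# ICD-10-CM diabetes mellitus categories.
DIABETES_PREFIXES = ("E08", "E09", "E10", "E11", "E13")

def diabetes_evidence_report(encounters):
    """Yield (member_id, date_of_service, code, evidence_text) rows for
    annotations whose code falls within the diabetes code families."""
    for enc in encounters:
        for ann in enc.annotations:
            if ann.icd10_code.startswith(DIABETES_PREFIXES):
                yield (enc.member_id, enc.date_of_service,
                       ann.icd10_code, ann.text)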

MA organizations working with trailblazing provider organizations can reap the benefits of such technology. A tool that allows them to query for members and relevant encounters, along with all the evidence that goes with them, could be the power drill on their toolbelt for RADV preparedness.

— April Russell is a natural language understanding content manager for 3M Health Information Systems.

 

References
1. Contract-level 15 risk adjustment data validation medical record reviewer guidance. Centers for Medicare & Medicaid Services website. https://www.cms.gov/files/document/medical-record-reviewer-guidance-january-2020.pdf. Published January 10, 2020.

2. Medicare and Medicaid programs; policy and technical changes to the Medicare Advantage, Medicare Prescription Drug Benefit, Program of All-Inclusive Care for the Elderly (PACE), Medicaid Fee-for-Service, and Medicaid Managed Care programs for years 2020 and 2021. Federal Register website. https://www.federalregister.gov/documents/2018/11/01/2018-23599/medicare-and-medicaid-programs-policy-and-technical-changes-to-the-medicare-advantage-medicare. Published November 1, 2018.

3. Medicare Advantage Risk Adjustment Data Validation final rule (CMS-4185-F2) fact sheet. Centers for Medicare & Medicaid Services website. https://www.cms.gov/newsroom/fact-sheets/medicare-advantage-risk-adjustment-data-validation-final-rule-cms-4185-f2-fact-sheet. Published January 30, 2023.