
September 2019

Chart Conundrums: AI Initiatives Extract Value From Data Assets
By Chris Funk, PhD
For The Record
Vol. 31 No. 8 P. 26

Payers and providers are using clinical natural language processing to reduce administrative costs and improve the accuracy of analytics.

Artificial intelligence (AI) success stories are beginning to flood the health care industry as stakeholders attempt to accelerate labor-intensive processes and realize the promise of advanced technology. Data are central to these initiatives, yet health care organizations are all too familiar with the challenges associated with getting maximum value from the rapidly expanding repositories of information now housed in EHRs and other clinical systems.

AI is an umbrella term for a vast set of technologies that can learn, reason, and adapt based on data. AI exists on a continuum: narrow AI automates simple, well-defined tasks that assist humans, while artificial general intelligence would make decisions and mimic human cognition more broadly.

For the purposes of health care, AI is typically used to augment human workflows by helping clinical staff focus on things that need their subject matter expertise. Examples include robotics, machine learning, image recognition, speech technology, and clinical natural language processing (cNLP).

The tedious process of reviewing patient records is a prime example of AI technology providing notable value. Medical record reviews are conducted to inform a variety of organizational initiatives and drive patient care. Often characterized by manual processes that entail well-paid subject matter experts plowing through records, these reviews aim to transform raw data into information, then knowledge, and, ultimately, the wisdom that informs analytics initiatives and decision making.

In addition, the time-consuming efforts associated with manual patient record review often fall short of the end goal: making data actionable. To unlock the promise of AI, health care organizations must consolidate, standardize, and enrich their data assets. This requires bringing disparate sources together in a centralized repository, while normalizing all forms of data—including unstructured notes—to an identified standard to support interoperability. It also involves categorizing data into clinical concepts that support mission-critical activities such as quality measures reporting, risk stratification, medical necessity, and predictive analytics. Without this foundation, even the most advanced AI tool will produce subpar insights and have limited impact.
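
As a rough illustration of what consolidation, normalization, and concept categorization can look like in practice, the Python sketch below maps records from two hypothetical feeds into one coded, categorized structure; the field names, source systems, and code values are illustrative assumptions rather than any particular vendor's schema.

```python
# A minimal sketch of the consolidate/normalize/categorize step described above.
# All field names, source systems, and code mappings are hypothetical illustrations.

RXNORM_LOOKUP = {"lisinopril 10 mg oral tablet": "314076"}   # assumed example mapping
CONCEPT_CATEGORY = {"314076": "medication"}                   # drives downstream analytics

def normalize_record(source: str, raw: dict) -> dict:
    """Map a source-specific record into one common, coded representation."""
    if source == "ehr_medication_list":
        term = raw["med_name"].lower()
        code = RXNORM_LOOKUP.get(term)          # normalize free text to RxNorm
    elif source == "claims_feed":
        code = raw.get("ndc_mapped_rxnorm")     # claims may arrive already coded
        term = raw.get("description", "")
    else:
        code, term = None, str(raw)
    return {
        "patient_id": raw["patient_id"],
        "source": source,
        "term": term,
        "rxnorm_code": code,
        "concept_category": CONCEPT_CATEGORY.get(code, "unmapped"),
    }

print(normalize_record("ehr_medication_list",
                       {"patient_id": "P001", "med_name": "Lisinopril 10 MG Oral Tablet"}))
```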

Challenges to Creating Actionable Data
Disparate data reside in a variety of locations across the health care landscape, meaning providers must be prepared to extract data from sources ranging from claims and clinical systems to emerging data repositories. Presenting in the form of ICD-10, CPT, HCPCS, and diagnosis-related group codes, billing data found in claims are typically already clean, coded, and ready for AI. Unfortunately, this data source is not clinically rich enough on its own to power high-level AI projects such as data science and quality measures reporting.
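
The hypothetical claim line below illustrates the point: the billing codes are clean and unambiguous, yet they say little about severity, trends, or what the clinician actually observed.

```python
# A hypothetical claim line; the values are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ClaimLine:
    patient_id: str
    icd10_dx: List[str]      # diagnoses, eg, ["E11.9"]
    cpt: str                 # procedure code, eg, "99214"
    hcpcs: Optional[str]     # supplies or drugs billed outside CPT
    drg: Optional[str]       # inpatient diagnosis-related group

claim = ClaimLine("P001", ["E11.9"], "99214", None, None)
# The codes are unambiguous, but nothing here says how severe the diabetes is,
# what the lab trends look like, or what the physician observed in the note.
print(claim)
```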

Clinical data found in EHRs, whether labs, drugs, imaging, behavioral health, or other clinical notes, improve this outlook but are much more disordered and not clean enough for machine learning/AI models. These data may come from semistructured EHR drop-down menus or may be entirely unstructured, such as free-text fields, an area of clinical documentation that few organizations are tapping in a meaningful way.

Notably, some estimates suggest that unstructured text accounts for as much as 80% of clinical documentation.

Emerging data sources include telehealth, genomics, patient-reported lifestyle tools, and social determinants of health. These sources arrive in a variety of forms that health care organizations must be prepared to handle if AI outputs are to be accurate and complete.

Current methods for extracting valuable patient information in a complete and accurate manner are marred by various pitfalls. Because many health care organizations still approach patient record review via manual processes, they run into resource challenges. Simply put, it is an expensive proposition, as the review process is commonly done by teams of physicians who utilize keywords to comb through patient charts and documents. It’s not unusual for a single patient record review to take anywhere from 30 minutes to three hours, depending on the information being collected.

In addition, manual processes, by their very nature, are error prone. It is impossible for humans to consistently maintain a high level of accuracy when confronted with the volume and demands associated with record review. AI can help address these concerns by efficiently identifying key clinical information and presenting it to the reviewer in a unified manner that combines both structured and unstructured data.

Extracting Greater Value From Free Text
Different from traditional NLP, cNLP addresses the complexities and nuances of clinical data, such as physician jargon, negated and ambiguous statements, phrases vs complete sentences, duplicative data, and temporality (ie, when a finding occurred).
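
A deliberately naive sketch of just one of these problems, negation, gives a sense of what a cNLP engine must handle; the trigger list and word window below are illustrative assumptions, and production engines use far more sophisticated context models.

```python
# A deliberately simplified sketch of negation detection. The trigger list and
# window size are illustrative assumptions, not a complete algorithm.
import re

NEGATION_TRIGGERS = ("no", "denies", "without", "negative for", "ruled out")

def is_negated(sentence: str, concept: str, window: int = 6) -> bool:
    """Return True if a negation trigger appears shortly before the concept."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    try:
        idx = tokens.index(concept.lower())
    except ValueError:
        return False
    preceding = " " + " ".join(tokens[max(0, idx - window):idx]) + " "
    return any(f" {trigger} " in preceding for trigger in NEGATION_TRIGGERS)

print(is_negated("Patient denies chest pain or dyspnea.", "dyspnea"))      # True
print(is_negated("Chest pain began two hours before arrival.", "pain"))    # False
```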

Because it turns unstructured documentation into more computable information that can be analyzed and acted upon, this form of AI is becoming increasingly important. Many of these rich data are missing from present-day analytics strategies because infrastructures lack the capabilities needed to extract key information from free text.

cNLP can add value by decreasing review times, increasing staff efficiency, reducing administrative costs, and improving the accuracy of analytics by unlocking data and making them actionable. It is easily integrated with other AI technology and can be deployed directly into a data warehouse or analytics engine.

When cNLP is applied to a physician-documented encounter, it can identify clinical concepts and help provide suggestions for coding data to industry standards such as LOINC (labs), RxNorm (drugs), and SNOMED CT and ICD-10 (problems/diagnoses).
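
The toy example below illustrates the idea of attaching standard codes to concepts an extraction engine has surfaced; the handful of codes shown is illustrative and should be verified against the source terminologies, which in practice are far larger and centrally governed.

```python
# A toy illustration of coding suggestions. The terminology content in a real
# engine is vastly larger; the codes below are illustrative examples only.

SUGGESTED_CODES = {
    "hemoglobin a1c":       {"system": "LOINC",     "code": "4548-4"},
    "lisinopril 10 mg":     {"system": "RxNorm",    "code": "314076"},
    "type 2 diabetes":      {"system": "SNOMED CT", "code": "44054006"},
    "type 2 diabetes (dx)": {"system": "ICD-10-CM", "code": "E11.9"},
}

def suggest_codes(extracted_concepts):
    """Attach standard codes to concepts a cNLP engine pulled from the note."""
    return [
        {"concept": c, **SUGGESTED_CODES[c]}
        for c in extracted_concepts
        if c in SUGGESTED_CODES
    ]

note_concepts = ["hemoglobin a1c", "type 2 diabetes"]   # output of an upstream extractor
for suggestion in suggest_codes(note_concepts):
    print(suggestion)
```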

It can also group elements by context to ensure complete capture of information. For example, consider the challenge of accurately identifying the severity of a diabetic patient's condition for hierarchical condition categories (HCCs), one way of measuring patient risk for risk adjustment. Information on the patient can come from the structured ICD-10 code in the EHR or claim and present as E11.9, type 2 diabetes without complications. By understanding HCC content and structure, cNLP can be applied to help determine whether additional noncoded complications, such as dry mouth, are present in the notes.

Once identified, this information supports the correct designation of HCC 18, diabetes with chronic complications. The process not only improves the accuracy of HCC reporting but also brings reimbursement to the level appropriate for the condition's severity.
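
A simplified sketch of this review workflow might look like the following, with the complication terms and HCC suggestion serving only as illustrations; actual risk-adjustment logic follows the CMS-HCC model and coding guidelines, and a human coder makes the final determination.

```python
# A simplified sketch of the HCC review workflow described above. Complication
# terms and the HCC 18 suggestion are illustrative; a coder makes the final call.

COMPLICATION_TERMS = ("neuropathy", "nephropathy", "retinopathy", "chronic kidney disease")

def review_diabetes_hcc(structured_dx: list, note_text: str) -> dict:
    """Flag charts where notes suggest complications not captured by E11.9."""
    note = note_text.lower()
    documented = [t for t in COMPLICATION_TERMS if t in note]
    needs_review = "E11.9" in structured_dx and bool(documented)
    return {
        "coded_dx": structured_dx,
        "note_evidence": documented,
        "suggested_category": "HCC 18 (diabetes with chronic complications)" if needs_review else None,
    }

print(review_diabetes_hcc(
    ["E11.9"],
    "Assessment: type 2 diabetes. Worsening peripheral neuropathy in both feet.",
))
```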

Quality measures reporting is another area where cNLP can deliver value. PQRS 116 (NQF 58), the quality measure for avoidance of antibiotic treatment in adults with acute bronchitis, is a good example. This national measure within the Physician Quality Reporting System will have lower performance scores when patients receive antibiotics for acute bronchitis, since evidence suggests that this approach to treatment does not improve the condition and may cause harm.

Yet, exclusion criteria exist for patients who have a secondary condition, such as cystic fibrosis or HIV. Often, documentation demonstrating the secondary diagnosis is found in free text rather than in structured areas of the EHR. Without cNLP-enabled data mining, providers have no way of identifying patients who meet these exclusions short of manually reviewing charts. In this case, cNLP improves accuracy, which helps health care organizations calculate correct scores and avoid negative payment adjustments.
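
Schematically, the exclusion logic might resemble the sketch below, in which a finding documented only in free text keeps a patient out of the measure denominator; the field names and matching rules are hypothetical.

```python
# A schematic sketch of how free-text findings can feed exclusion logic for a
# measure like PQRS 116 (NQF 58). Field names and matching rules are hypothetical.

EXCLUSION_TERMS = ("cystic fibrosis", "hiv")

def measure_population(patients):
    """Split patients into denominator, excluded, and antibiotic-given groups."""
    results = {"denominator": [], "excluded": [], "antibiotic_given": []}
    for p in patients:
        if not p["acute_bronchitis"]:
            continue
        note = p["note_text"].lower()
        if any(term in note for term in EXCLUSION_TERMS) or p["excluded_dx_coded"]:
            results["excluded"].append(p["id"])
            continue
        results["denominator"].append(p["id"])
        if p["antibiotic_prescribed"]:
            results["antibiotic_given"].append(p["id"])
    return results

patients = [
    {"id": "P001", "acute_bronchitis": True, "excluded_dx_coded": False,
     "antibiotic_prescribed": True, "note_text": "Hx of cystic fibrosis, chronic cough."},
    {"id": "P002", "acute_bronchitis": True, "excluded_dx_coded": False,
     "antibiotic_prescribed": True, "note_text": "Acute bronchitis, supportive care advised."},
]
print(measure_population(patients))
```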

Looking to the Future
The health care industry has only scratched the surface in terms of realizing the full potential of AI strategies. Emerging use cases that are delivering value include the following:

• building intelligent content libraries (eg, search indices for articles and pharmacovigilance use cases);

• reducing readmission risk (eg, use of postdischarge care management data with EMR data to assess risk and intervention opportunities);

• enabling differential diagnosis (eg, use of patient-reported symptoms at intake to discover potential diagnosis);

• providing clinical decision support (eg, combining structured and unstructured patient data to customize clinical decision support);

• predicting onset of disease (eg, improving the speed and accuracy of sepsis detection); and

• targeting population health initiatives (eg, opioid abuse and intervention services and support for social determinants of health).

Getting Started With AI
Health care organizations looking to employ AI strategies are wise to take a measured approach. As a first step, clinical and financial leaders should ensure systems are in place to enrich their data so that AI is powered by a complete and accurate aggregation of all mission-critical information.

A multifaceted strategy that engages technology, expertise, and the right processes is essential to ensuring a framework of clean, enriched data. Comprehensive strategies must address terminology management and data governance from three vantage points: establishing a single source of truth through reference data management, automated mapping of nonstandard data to support data normalization, and addressing unstructured data through cNLP.
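
For the automated mapping of nonstandard data named above, a minimal sketch, assuming made-up local lab codes, could look like the following, with anything unmapped routed to a terminologist's work queue.

```python
# A minimal sketch, under assumed local code values, of mapping nonstandard lab
# codes against a governed reference table; unmapped codes go to human review.

LOCAL_TO_LOINC = {
    "LAB_A1C": "4548-4",        # hemoglobin A1c
    "LAB_GLU_FAST": "1558-6",   # fasting glucose
}

def normalize_lab_codes(local_codes):
    """Return standard codes plus a work queue of codes needing human mapping."""
    mapped, needs_review = {}, []
    for code in local_codes:
        loinc = LOCAL_TO_LOINC.get(code)
        if loinc:
            mapped[code] = loinc
        else:
            needs_review.append(code)
    return mapped, needs_review

print(normalize_lab_codes(["LAB_A1C", "LAB_HOMEGROWN_PANEL"]))
```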

When providers implement practices to achieve high data quality, they can more effectively extract value from their data assets and advance AI initiatives. As a result, it’s easy to make the business case for leveraging terminology and data management solutions that use structured data processes to extract, normalize, and map needed data to appropriate industry standards in tandem with cNLP.

— Chris Funk, PhD, is a senior medical informaticist at Wolters Kluwer, Health Language.