
April 2020

Can AI Turn the Corner Into Clinical Settings?
By Selena Chavis
For The Record
Vol. 32 No. 3 P. 22

Industry professionals weigh in on challenges and opportunities.

There is no question that artificial intelligence (AI) is taking health care by storm. Across all functions and stakeholders, the use cases for AI are seemingly endless in terms of how it can improve efficiencies and inform decision making.

While the opportunities to extract value from AI strategies are many, one area that remains somewhat nascent in terms of use is the clinical setting (not to be confused with the application of AI in the academic setting for clinical research). Cynthia Burghard, research director with IDC Health Insights, notes that health care organizations are realizing the benefits of AI from an operational and administrative perspective, but its use in the clinical setting has yet to mature.

“It’s making progress,” Burghard says. “Where you do begin to see the application of AI in clinical settings is for things like identifying sepsis early. That’s an area where its use can save lives, save days in the hospital. That’s where it has gained pretty wide acceptance.”

Chris Funk, PhD, a senior medical informaticist with Wolters Kluwer's Health Language solutions, says that the value of AI is found in its ability to power efficient analysis of large amounts of data. "The amount of data generated in health care today is exploding," he says, pointing out that previous estimates from IBM suggest that the amount of data health care amasses doubles every three years—and by 2020, it was projected to double every 73 days. "Health care organizations are sitting on a treasure trove of information that can improve decision making in clinical settings. When AI is applied in an optimal way, it can elevate the concepts of clinical decision support and predictive analytics."

Using sepsis as an example, Burghard says it’s nearly impossible for clinicians to comb through large amounts of patient data manually to get ahead of the condition’s progression. With the right AI algorithm, data are mined in near real time to quickly produce an alert, prompting the clinician to make a decision.
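The alerting pattern Burghard describes can be sketched in miniature. The example below is a hypothetical, deliberately simplified rule based on the classic SIRS screening criteria; production sepsis-detection systems use far richer models and many more data streams, and the field names and thresholds here are illustrative assumptions, not any vendor's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """A snapshot of patient data pulled from the record in near real time."""
    temp_c: float          # body temperature, Celsius
    heart_rate: int        # beats per minute
    resp_rate: int         # breaths per minute
    wbc_k_per_ul: float    # white blood cell count, thousands per microliter

def sirs_criteria_met(v: Vitals) -> int:
    """Count how many of the four SIRS screening criteria are met."""
    count = 0
    if v.temp_c > 38.0 or v.temp_c < 36.0:
        count += 1
    if v.heart_rate > 90:
        count += 1
    if v.resp_rate > 20:
        count += 1
    if v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0:
        count += 1
    return count

def sepsis_alert(v: Vitals) -> bool:
    """Flag the patient for clinician review when two or more criteria fire."""
    return sirs_criteria_met(v) >= 2
```

Note that the function only raises a flag; consistent with Burghard's point, the decision about what to do with the alert stays with the clinician.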

Jonathan Linkous, MPA, FATA, CEO and cofounder of PATH (Partnership for Artificial Intelligence and Automation in Healthcare), cites other clinical areas where AI is making inroads, particularly when it comes to reading X-rays, MRI scans, EKGs, and even pathology results.

“Those are areas where it’s probably the easiest to understand how AI can look at an image and do a better interpretation of it than a human can—and do it a lot faster than a human can,” Linkous explains. “There are a number of applications that are now in development, and some have been adopted. I think that [imaging] is the lowest-hanging fruit from the perspective of where AI is being used the most. It’s kind of a no-brainer.”

Challenges to Advancing AI’s Use in Clinical Settings
Experts agree that the industry has only scratched the surface in terms of realizing AI’s potential in clinical settings. Nevertheless, health care organizations must overcome a number of obstacles before use of these advanced tools can become commonplace.

Linkous points to health care’s ongoing general reluctance to adopt new technology, noting that AI is a prime example of this tendency. “Some of it is called for, but it’s an issue and a challenge,” he says, pointing out that while the use of AI across other industries has been mainstream for some time, health care stakeholders hold on to a certain amount of skepticism.

Along with health care’s historically slow pace of technology adoption, Burghard notes clinician angst over how it is used. For instance, how a clinician uses data generated through AI could come under scrutiny.

“I think you have to separate what part of care is science and what part is art,” Burghard suggests. In this equation, AI becomes part of the science in that it works with data and creates alerts for clinicians, but Burghard emphasizes that its role ends there. Ultimately, the clinician must make the final decision about how to use the information.

Regulation of AI is still evolving, which presents another challenge, as do the fundamentals of how to pay for it, Linkous says. For example, at a recent encounter with a specialist, Linkous observed a form of AI being used to read an EKG, yet the physician still came in and read the test—in essence, duplicating the workflow.

When asked the reason for this approach, the physician indicated, “This is the way we get paid,” Linkous says.

Burghard points out that deploying AI in the clinical setting is no small undertaking. It requires the right technology and expertise from data scientists for building out algorithms and analyzing data. “You need a whole process around how you frame the algorithms,” Burghard says, noting that it’s a resource-intensive effort that only the largest organizations can adequately support.

To help health care organizations better leverage AI, Burghard notes that a number of HIT vendors have come on the scene in recent years providing ready-made algorithms for specific initiatives such as sepsis. She cautions that there is still a lot of work that must go into the process of tweaking algorithms to align with a health care organization’s policies for acceptable practice.

In particular, Burghard points to a case in which a Midwest health care system worked with an AI software vendor to look for variations in prescribing patterns to identify ways to reduce costs. The vendor built the algorithm to meet the health system's needs and delivered a data-driven analysis projecting substantial savings. While sound in theory, the health system still had to collaborate with pharmacy staff and physicians to determine the best way forward.

“The pharmacist took a look at the data and said, ‘Hold on, you identified a single source drug,’ or ‘You identified the sacred cows of the orthopods—they like this particular medication. For me, as a pharmacist, it would be hard to get that kind of change through, but here are three or four things that are reasonable,’” she says. “It’s not as simple as pushing a button.”

The Data Quality Conundrum
Undoubtedly, one of the greatest challenges to using AI in a clinical setting rests with data quality, an ongoing issue that continues to plague the industry in terms of its ability to fully leverage big data initiatives. “Many health care organizations struggle to extract deeper insights from AI due to data quality issues. It’s a significant challenge amid the exponential data expansion underway in the industry, yet the success of AI is dependent on access to accurate and complete data,” Funk says.

Burghard agrees, noting that the industry has seen “so many false starts with data,” and there is generally a “lack of trust” when it comes to any kind of analytics initiatives in clinical settings.

Much of the problem can be attributed to the proliferation of disparate data spread across health networks that reside in a variety of locations and formats, Funk says. “For example, data may be structured in the form of an industry standard such as ICD-10 or CPT. It may be semistructured within an EHR drop-down menu such as local labs and medications, or it may be free text,” he explains, noting that industry estimates suggest that unstructured free text makes up as much as 80% of clinical documentation.

To get ahead of the problem, Funk says health care organizations must first address the monumental task of bringing disparate information sources together in a centralized repository, normalizing all forms, including unstructured notes, to an identified standard to support interoperability. Then, data must be categorized into clinical concepts that support mission-critical activities in the clinical setting.

“Without this foundation, even the most advanced AI tool will produce subpar insights and have limited impact,” Funk says. “We suggest a multifaceted strategy that engages technology, expertise, and the right processes is essential to ensure AI is drawing from enriched, clean data.”

A comprehensive strategy, he adds, must address terminology management and data governance from the following vantage points:

• Establishing a single source of truth through reference data management. Reference data are made up of industry HIT standards, such as ICD-10, RxNorm, LOINC, and SNOMED CT, and other proprietary content, helping health care organizations establish interoperable communication lines that support the free flow of information between systems. Funk says an optimal reference data management strategy encompasses oversight and ongoing maintenance of these terminology standards.

• Normalizing clinical data to standards. Data normalization addresses the need for mapping nonstandard clinical data such as local labs and drugs to standard terminologies that are maintained as part of a reference data management strategy. Funk notes that this process bridges the gap between disparate systems by establishing semantic interoperability of data across the health care enterprise.

• Unlocking unstructured data with clinical natural language processing. Use of clinical natural language processing solutions can improve the capturing of unstructured data by automatically searching out identified clinical data and provider-specific synonyms and acronyms related to valuable data such as problems, diagnoses, labs, medications, and immunizations.

Looking Ahead
Advancing the use of AI in clinical settings will require evidence and demonstrated best practices, Linkous says.

“That’s when you get into the field of translational activities, where you take the studies completed and the research that shows that [application of AI] works on a consistent basis. Then, you translate that into the day-to-day operation,” he explains. “Health care, probably as much as anything, relies on research to prove that certain procedures, certain activities will actually come out and they’ll do exactly what they say they do.”

Once AI is proven, Linkous notes that the next hurdle will be getting providers and consumers to overcome fears and understand how the technology can impact practice and patient care. He says consumers in particular tend to look for a “trusted brand—is it something they know or know something about? Can they recognize how it can improve their lives in a direct way?”

Best practice guidelines that demonstrate ethical practices are important to the trust equation going forward, Linkous says. It’s one reason PATH recently released a set of ethical guidelines for implementing AI (See sidebar).

“It’s really not so much the technology that doesn’t have ethics or not; it’s a computer program,” he emphasizes. “It’s about the application of how you use it and what you do with it.”

Industry professionals agree that, while its clinical use is still in its early stages, AI holds great promise in that setting. Burghard believes that best practices will emerge as health care organizations learn how to best apply AI within the art and science of practice.

“It’s a way of distilling large amounts of data and providing insights for clinicians,” she says. “Then they must draw on critical judgment and their experience, and all that art part of practice to make final decisions.”

— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications, covering everything from corporate and managerial topics to health care and travel.


PATH’S GUIDELINES FOR ETHICAL IMPLEMENTATION OF AI
The following principles for the use of artificial intelligence (AI) in health care were developed by PATH (Partnership for Artificial Intelligence and Automation in Healthcare) members, including PATH’s Ethics Work Group. Some of the principles have been adapted from existing literature, such as the Asilomar AI Principles, while others were drafted by PATH.

First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient’s well-being is the primary consideration.

Human Values: Advanced technologies used to deliver health care should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Safety: AI systems used in health care should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.

Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.

Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

Responsibility: Designers and builders of all advanced health care technologies are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Personal Privacy: Safeguards should be built into the design and deployment of health care AI applications to protect patient privacy, including their personal data. Patients have the right to access, manage, and control the data they generate.

Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

Shared Benefit: AI technologies should benefit and empower as many people as possible.

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Evolutionary: Given constant innovation and change affecting devices and software as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

— SOURCE: PATH