Words Have Power
By Susan Chapman, MA, MFA, PGYT
For The Record
Vol. 34 No. 3 P. 22
A study raises questions about how EHR notes portray Black patients.
In the recent study “Negative Patient Descriptors: Documenting Racial Bias in the Electronic Health Record,” third-year medical student Michael Sun and his fellow researchers at the University of Chicago Pritzker School of Medicine used machine learning to search for negative descriptors within health records. For the study conducted at a Chicago academic medical facility, the coauthors “analyzed a sample of 40,113 history and physical notes (January 2019–October 2020) from 18,459 patients for sentences containing a negative descriptor … of the patient or the patient’s behavior.” Included were 15 descriptors for analysis: “(non)adherent, aggressive, agitated, angry, challenging, combative, (non)compliant, confront, (non)cooperative, defensive, exaggerate, hysterical, (un)pleasant, refuse, and resist.”
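For readers curious about the mechanics, the study used a natural language processing pipeline trained on clinical notes; the following is only a minimal, hypothetical sketch of the core idea of flagging sentences that contain one of the 15 descriptor stems, using simple pattern matching rather than the authors' actual model. All function and variable names here are illustrative assumptions.

```python
import re

# The 15 descriptor stems analyzed in the study; the (non)/(un) prefixes
# indicate that both polarities were matched (e.g., "compliant" and
# "noncompliant"), which the optional prefix below approximates.
DESCRIPTOR_STEMS = [
    "adherent", "aggressive", "agitated", "angry", "challenging",
    "combative", "compliant", "confront", "cooperative", "defensive",
    "exaggerate", "hysterical", "pleasant", "refuse", "resist",
]

# Match a stem as a whole word, with an optional "non"/"un" prefix and
# common suffixes ("refused", "resisting"), case-insensitively.
PATTERN = re.compile(
    r"\b(?:non|un)?(?:" + "|".join(DESCRIPTOR_STEMS) + r")\w*\b",
    re.IGNORECASE,
)

def flag_sentences(note: str) -> list[str]:
    """Return the sentences in a note that contain a descriptor term."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    return [s for s in sentences if PATTERN.search(s)]
```

A naive matcher like this cannot distinguish, say, "pleasant" from "unpleasant" in context or handle negation ("not aggressive"), which is why the researchers relied on a trained model rather than keyword search alone.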
The study’s coauthors discovered that “[c]ompared with white patients, Black patients had 2.54 times the odds of having at least one negative descriptor in the history and physical notes.” The team noted that their “findings raise concerns about stigmatizing language in the EHR and its potential to exacerbate racial and ethnic health care disparities.”
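The "2.54 times the odds" figure is an odds ratio. The published estimate came from a regression model adjusting for patient characteristics, but the unadjusted version of the statistic is simple arithmetic on a 2×2 table, as this sketch with purely hypothetical counts shows:

```python
def odds_ratio(group_a_with, group_a_without, group_b_with, group_b_without):
    """Unadjusted odds ratio from a 2x2 table of counts.

    "with"/"without" = whether at least one negative descriptor
    appeared in the patient's notes; groups A and B are the two
    patient populations being compared.
    """
    odds_a = group_a_with / group_a_without
    odds_b = group_b_with / group_b_without
    return odds_a / odds_b

# Hypothetical counts for illustration only (not the study's data):
# 200 of 1,000 group A patients flagged vs. 85 of 1,000 in group B.
print(odds_ratio(200, 800, 85, 915))  # → about 2.69
```

Note that an odds ratio is not the same as a risk ratio ("times more likely"); for uncommon outcomes the two are close, but they diverge as the outcome becomes more frequent.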
The Study’s Origins
Sun conceived the project after taking a health care disparities course during his first year of medical school. “The course lecturer, a physician, sent her patient for a referral, and nothing came of it. When the lecturer asked her colleague what had transpired with this patient, the colleague painted a picture of the patient that puzzled her, as she’d always known this patient to be proactive and engaged,” Sun explains. “My instructor pried a little more and discovered that this colleague was oblivious to the patient’s poor medical literacy, particularly in the English language, and had not met the patient’s communication needs. The colleague had described the patient as difficult and nonadherent, but there were a lot of important socio-demographic factors that came into play and impacted the patient’s medical care.”
That story prompted Sun to consider the descriptors used by physicians and other clinicians, what biases they contain that may influence patient care, and how best to research this important issue. As Sun crafted the study, he looked at where he and his coauthors could best capture the necessary information. It was determined that EHRs were the ideal source.
Sun and the team selected machine learning because of Sun’s previous academic experience and the technology’s ability to analyze text data. “Essentially, we taught a program to identify negative descriptors and pull out the associated data that can find patient characteristics for this information,” Sun says.
The Issue’s Reach
Research on race-centered health care disparities has been taking place for many years. In fact, the study’s coauthors cite a nationally representative study of the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System that found “robust evidence of unequal treatment by race in the US health care system and of its negative impact on patients. During 2005–2013, 12.3% of Black respondents reported discrimination in health care compared with 2.3% of white respondents.”
However, Sun believes the issue has been gaining steam as a research topic over the last five years.
He says many colleagues have had the same reaction to the study’s findings: This type of documentation is not uncommon. “It happens a lot,” Sun says. “While I don’t have the data on how far-reaching the problem is beyond the facility we studied, the problem is very real and beyond our institution. Our paper brought evidence to what we knew anecdotally to be true. We all have biases, anywhere in health care, and there are bound to be negative medical descriptors. And we’re concerned about the stigmatizing effects of these descriptors. How do these negative descriptors impact medical treatment? We hope to replicate our results in the broader health care system to enhance our understanding of this issue.”
Suranga Kasturi, PhD, a research scientist at Regenstrief Institute’s Clem McDonald Center for Biomedical Informatics and an assistant professor of pediatrics at Indiana University School of Medicine, notes that biased care delivery is a pervasive problem across the United States.
“Such biases impact many underserved and minority populations, but especially Black patients,” Kasturi says. “These biases have a far-reaching impact beyond the care of the individual patient. My expertise is in machine learning and data science, and the rule in data science is garbage in/garbage out. So, if there are harmful biases inherent in the datasets that are used to train machine learning models, then the models will inherit the same biases and will propagate them into the future. As such, it’s very important for us to monitor for, and be very transparent about, these biases in health care.”
Robert Bart, MD, chief medical information officer at the University of Pittsburgh Medical Center, believes that health care does better than most industries at addressing inherent bias. What is observed in the industry reflects what is happening in broader society, he says, but health care is better at mitigating the problem and at seeing individual patients for who they are and the care they need.
Having previously worked in Los Angeles County and at the University of Southern California Medical Center, Bart has observed this issue from several perspectives. “There can be quite a bit of difference between different types of facilities,” he notes. “I was involved in implementing the EHR across LA County Health Services. There was a certain type of person who worked there who was more interested in being mission-driven and focused on serving a higher-risk patient population for many reasons. You do see some of that focus as you move about the country. However, the study represents an academic health center perspective. We deliver care in different environments for different reasons. There are those who are drawn to an academic health care environment and those who are drawn to the community service hospital. It takes all types of people to make a successful health care system, and I would trust my fellow clinicians to have the same dedication to a high level of care in either system.”
Potential Solutions and the Benefits of Training
According to Kasturi, there are two different ways to address racial biases and create sustainable solutions. “As a long-term solution, we must look at the actual root cause, the biases inherent in providers and health systems. We need awareness, accountability, and education to highlight disparities and, thereby, create an environment where providers will be more aware of this situation and judge more fairly,” he says. “As a more immediate solution, we would also need to understand how models are informed by bad data and fix those areas programmatically within the models themselves.”
Sun believes the health care system must reevaluate how providers in every position compose their notes. “In my medical school, we had a few different dedicated workshops and, when you go on to clinical work, you get feedback on your notes,” he says. “That said, there are a lot of best practices on patient care, medicine, and concepts in how we write our notes. We have to consider what that implies about our patient. That is the first step and readily attainable.
“The underlying cause, racial biases, that gets a little bit trickier,” Sun continues. “I believe implicit bias training is viable but still limited. Some of this training has various levels of efficacy in reducing implicit bias. We can’t expect one course to be that impactful. Studies have shown that additional stress and burnout influence how people use stereotypes, including racial stereotypes. It can be a mental shortcut, a pattern of human behavior. So, to address racial bias, it’s more than education. It has to include addressing burnout.”
“There must be a system-level approach that has diversity and inclusion training which extends beyond documentation, so that providers are not documenting one way but providing care in another,” Bart says. “We don’t just want physicians who document appropriately—we also want them to be providing care in that way. It needs to be in every school—medical, nursing, and pharmacy schools. We want the care to be the same irrespective of socioeconomic or racial factors or the payer. During my training, we never knew who the payer was. We want clinicians to make the best decisions for the person who is in front of you regardless of other factors.”
Kasturi shares Bart’s concern about a potential disparity between documentation and treatment. “Sure, we could train providers not to use negative descriptors in their notes. But when we do that, are we basically training providers to keep their inherent biases from making it to paper while they go on to potentially provide bad care? If a provider believes a person is negative, doesn’t document it, but then provides bad care based on believing the person is negative, we’re then training the providers to hide their biases,” he says.
“People might have inherent biases, but they may not necessarily be malicious,” Kasturi adds. “The training has to be focused more on empathy and awareness, with an understanding of how people have been treated. This awareness should also exist across other areas, including disparities across gender and socioeconomic lines. We need to be honest and transparent about the mistakes that occurred in the past. And when providers see study results like these, they are motivated to make some actionable changes.”
The EHR’s Role in Mitigating Racial Bias
Given the EHR’s central role in health care, experts consider how the technology can be used to mitigate racial bias.
Leigh Burchell, chair of the EHR Association’s Public Policy Workgroup, ex officio EHR Association chair, and vice president of policy and government affairs with Altera Digital Health, believes that while this study was based on free-text entries and not on preconfigured EHR characteristics such as drop-down menus or radio buttons, the EHR can still aid the clinician in reducing this problem.
“Based on the analyses in this study and elsewhere, the EHR itself isn’t contributing codified data to the documentation at hand, which is instead largely being created through free text entries. EHRs may be able to help through analysis of data captured within the system, which could then be extrapolated to support documentation best practices. Ultimately, however, it is the clinician who authors the documentation included within a patient’s record,” she says. “As the industry attempts to solve this problem, the EHR is making available the data that can help identify potential patterns of biased language. This, in turn, helps the industry to study the problem of racial bias more deeply.”
Nevertheless, Burchell draws attention to the nature of medical terminology itself, particularly the language the study authors sought for analysis. “It’s also important to note that some of the phrases included in this study, such as ‘noncompliant’ or ‘agitated,’ have meaning as well-accepted medical terminology,” she says. “As such, it’s important we are thoughtful in our analysis of the presence of bias and consider the context, the user, and the technology when otherwise valid medical terminology is used. For example, ‘the patient is nonadherent to medication protocols’ would be a factual and an important note within the context of the patient record but could be misconstrued if identified through a search for the term ‘nonadherent’ and viewed through a nonclinical lens. Insofar as the terms the study included are codified within the EHR, they appear within the context of clinically relevant and medically accepted vocabulary. Their elimination, because they can potentially be biased when viewed through a nonmedical lens, could be problematic. EHR developers work regularly with our user communities to assess their interactions with our products, and this is an area where clinicians will have an opportunity to provide guidance as to how we can best support their efforts to reduce bias in clinical documentation.”
In response, Sun explains that the team coded the use of the selected descriptors in the medical context as clinicians. “The issue is not that these descriptors are untrue, but that they are being used to disproportionately describe Black patients. Writing a negative descriptor is a conscious choice by a clinician, and by no means required,” Sun explains. “If use was not biased, then we would not see a difference between patients of different races and ethnicities. But with all other variables made equal, Black patients are being described by medical providers with words like ‘noncompliant’ or ‘agitated’ at 2.5 times the odds of white patients.
“Given this, why are these descriptors accepted in medical terminology if their use is biased? If these words are alarming in a ‘nonclinical’ lens, why should it be acceptable in a medical context? Health care providers are just as influenced by language and bias as anyone else. In addition to some clinical information, these descriptors may impart a negative impression onto the patient’s medical team, which may impact the patient’s care. Furthermore, the most clinically useful act would be to describe the underlying cause of a patient’s ‘noncompliance’ or ‘resistance’ beyond a label.”
A Surprising Result
One of the study’s outcomes that surprised Sun and his coauthors involved data gathered during the pandemic. “While we were still seeing a disproportionate use of negative descriptors with Black patients, we thought that the pandemic would have increased the odds, but, instead, it decreased the odds,” Sun says.
In the study, the coauthors offered their hypothesis for how these findings could have emerged: “The onset of the pandemic coincided with a historically defining moment of national response to racialized state violence (for example, the police murders of George Floyd and others) and revealed stark racial disparities in COVID-19 health access and outcomes. These social pressures may have sensitized providers to racism and increased empathy for the experiences of racially minoritized communities. Although such a shift may have contributed to reductions in negative descriptor use after March 1, 2020, additional research is required to understand which aspects of the COVID-19 pandemic affected physicians’ language. For instance, it may be that health care providers had less frequent interactions with patients, reducing opportunities for conflict to develop. Alternatively, patients being treated for COVID-19 may have been considered ‘less at blame’ for their illness compared with patients with other more chronic and lifestyle-associated conditions.”
Sun adds that despite the team’s observations during the COVID-19 pandemic, Black patients still had 2.5 times the odds of having a negative descriptor included in their health records. Nevertheless, he is optimistic about how the team’s findings can raise awareness of this important issue. “I do believe there can be changes to the medical record that can allow a more empathetic picture of the patient and the patient’s story, ultimately leading to improved patient care,” Sun says.
— Susan Chapman, MA, MFA, PGYT, is a Los Angeles–based freelance writer and editor.