
October 2018

Patient Matching: Are We Any Closer to a Solution?
By Sarah Elkins
For The Record
Vol. 30 No. 9 P. 18

While no one can quite get a grip on this pressing matter, there is no shortage of eager minds working to solve the equation.

The effect of poor patient matching on the entire health care continuum is old news. From patient satisfaction and safety to revenue cycle and HIM workload, the industry knows duplicate records and other matching errors bog down the system, cause lost revenue, and result in medical errors, some of which can lead to death.

Opinions on how big the problem is and what the right solution is vary depending on whom you talk to. Some point to the need for a national patient identifier (NPI). Others are less confident that an NPI will solve the problem but agree that a nationwide strategy, at the very least, would be a move in the right direction.

Timeline of Progress
Since the passage of the 21st Century Cures Act in December 2016—which, in part, called for the Government Accountability Office to "review policies and activities at ONC [Office of the National Coordinator for Health Information Technology] and other relevant stakeholders to ensure appropriate patient matching to protect patient privacy"—progress has, at times, seemed to find a rhythm and then slowed back to a familiar crawl.

In January 2016, almost a full year before the signing of the Cures legislation, the College of Healthcare Information Management Executives (CHIME) launched the private-sector Healthcare Innovation Trust National Patient ID Challenge. All in all, it was a promising year for data integrity. It seemed as though all players, private and public, were on the same page and moving lock-step toward a national solution to a serious patient identification problem.

In early 2017, the national conversation continued, with the ONC launching its own Patient Matching Algorithm Challenge in May. That same month, CHIME announced the finalists in its challenge. The proposals centered on biometric technology and blockchain capabilities. Later, in October, a bipartisan group of senators coauthored a letter to the Government Accountability Office asking for a review of the congressional ban on funding for the development of a unique patient identifier system, writing, "Patient misidentification can lead to inadequate, inappropriate, and costly care and, in the worst cases, patient harm or death."

Then, in November, CHIME announced the discontinuation of its National Patient ID Challenge. In a CHIME-issued press release, President and CEO Russell Branzell, FCHIME, CHCIO, said, "Though we've made great progress and moved the industry forward in many ways through the Challenge, we ultimately did not achieve the results we sought to this complex problem."

Mark LaRow, CEO of Verato, which offers cloud-based referential matching solutions, told For The Record in February that the CHIME challenge proved that biometrics and blockchain are not the end-all solution for the patient matching problem. "It also proved that patient identification is only a small part of a much larger challenge around patient matching—no patient identification solution, no matter how sophisticated, will be able to resolve the billions of duplicate records currently in existence," he said.

Regarding whether the industry was moving away from a solution to the patient matching problem, Steve Kotyk, director of health care business development at ARGO Healthcare Solutions, an analytical-science software company, says, "We are just stalled. It's more political than it is technical. My perspective is if that national patient identifier is self-attested like a Social Security number or driver's license number, it will always be prone to error. It's another piece of information to capture. It's not the silver bullet.

"The only thing bulletproof is biometric identifiers," he adds.

In May 2018, HIMSS' EHR Association penned a letter to Congress to request inclusion of patient identification and matching language in the House FY19 Labor, Health and Human Services, and Education and Related Agencies draft Appropriations Bill. To date, it doesn't appear such language was added.

A Growing Problem
While the federal government moves toward formally addressing the problem, the industry continues to face poor patient matching head on. Exacerbating the data integrity problem is the breakneck pace of mergers and acquisitions among hospitals and health care systems. In one fell swoop, the medical records from several separate health care organizations can find themselves under one umbrella with no easy way to link individual patient data across different platforms.

Companies such as LexisNexis are called in when acquisitions render a rapidly expanding health care system's records unruly. LexisNexis assigns a proprietary unique identifier, LexID, to patient records, linking disparate records for the same patient across facilities. Erin Benson, director of market planning for LexisNexis, explains the need to assign a unique identifier is often "driven by acquisitions and mergers, but also just wanting to help patients along the entire health care journey."

When organizations merge, the data may not merge so seamlessly. In fact, the job of identifying records that may be duplicated across several facilities and linking them together is humanly impossible. "All these different organizations store data in different ways. Someone might include a middle initial and someone else doesn't," Benson says.
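The linking Benson describes can be sketched in a few lines. LexID's internals are proprietary, so the reference table and normalization rules below are illustrative assumptions only: the idea is that records stored differently at different facilities (one with a middle initial, one without) reduce to the same comparison key and receive the same persistent identifier.

```python
import re

# Hypothetical reference table mapping normalized demographics to a
# persistent identifier, standing in for a referential service like LexID.
REFERENCE = {
    ("smith", "john", "1970-03-14"): "REF-0001",
}

def normalize(record):
    """Reduce a record to a comparison key, dropping volatile details
    such as middle initials, punctuation, and letter case."""
    last = re.sub(r"[^a-z]", "", record["last"].lower())
    first = record["first"].lower().split()[0]  # drop any middle initial
    return (last, first, record["dob"])

def link(record):
    """Return the shared identifier for a record, or None if no match."""
    return REFERENCE.get(normalize(record))

# Two facilities store the same patient differently; both map to one ID.
a = {"last": "Smith", "first": "John Q", "dob": "1970-03-14"}
b = {"last": "SMITH", "first": "John", "dob": "1970-03-14"}
```

Real referential matching is far more forgiving than this exact-key lookup, but the shape is the same: disparate source records resolve to one identity in an external reference set rather than to each other directly.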

Earlier this year, Butler Health System (BHS) called on LexisNexis and Occam Technologies to help them link 1.3 million records across six different EMRs. Together, Occam's eMPI platform and LexisNexis' unique identifier solution helped BHS reach a 97% match accuracy, leaving only 3% to be manually adjudicated.

"We knew we had a quality problem with our data. We had hundreds of thousands of records from different providers that needed to be linked to the right patients, who now all fell under the BHS umbrella," said Thomas McGill, MD, vice president of quality and chief medical information officer at BHS in a case study published by LexisNexis. "You can't manage populations of any kind or make decisions regarding how to allocate resources if you don't have data integrity."

McGill makes an important point, namely that BHS knew it had a problem.

Jon Case, a senior solutions architect at Verato, agrees that by the time a new client has contacted him, "They almost certainly have quantified the problem for themselves."

In short, the health care organizations addressing the patient matching problem most directly are those that know they have a problem. That raises the question: What about the organizations that don't know the extent of their duplicate rate problem?

Kotyk believes that's where the real challenge lies. "This is my drum beat on the issue," he says. "Most HIM directors don't know what their duplicate record rate is. They don't know what they're missing because they don't do an audit."

He continues, "A lot of this gets swept under the rug. I don't think it's with malice. Their index is accurate based upon the measuring stick they're using, but the measuring stick isn't as sophisticated as it needs to be."

Kotyk is adamant that a third-party audit is the only way to accurately measure duplicate rates and ascertain what the internal system is missing. Moreover, there are instances where data are not getting cleaned at the organizational level and duplicates are sent on to the health information exchange (HIE), he says.

"In a regional health exchange, the undetected duplicate becomes complex. Once there's a duplicate at the HIE level and future records are pushed up, they don't know which record to attach the new encounter to," Kotyk says.

On the plus side, according to Kotyk, "HIEs are leveraging more sophisticated technology, and they're allowing their algorithms to make the decision to match or not match duplicate records."

Duplicates in the HIE
HIEs have their eye on a different goal and take a different attitude toward duplicate records. After all, from the perspective of an HIE, duplicate records are expected and nothing to worry about as long as they can be identified and "roll up into one catalogue that clinicians can click once to access," says Beverly McKee, a solutions specialist with Great Lakes Health Connect (GLHC), an HIE serving the state of Michigan.

When asked if GLHC requires organizations, or data senders, to audit their records to ascertain the duplicate rate, McKee says, "Typically it has been our philosophy and company mission to meet the customer where they are. So, to be honest, we don't put a lot of requirements on the customer. There are some basic foundational requirements that are necessary. Like, if you're going to send us data, you have to send us the first name and last name."

GLHC is confident in its ability to identify duplicate records whether they exist within one master patient index or across systems. "When you step outside of an organization and you start looking at Hospital A and Hospital B, the medical record number doesn't matter anymore. We look more at other data points such as address and Social Security number, and telephone number," McKee says.

When pressed on whether patient matching is a problem at GLHC, the team is almost confused by the question. Brian Mack, manager of marketing and communications, jumps in.

"It doesn't keep us up at night," he says, adding, "When we're talking about patient matching, the context of that conversation is on a national scale. There is a myriad of dynamics associated with solving that problem. From our perspective at the state level, we have that issue licked."

Case explains it this way: "They [HIEs] are less interested in trying to find duplication within a single data source. That's not really their mission. Their mission is to connect data across those 10 hospitals. In those cases, the rate they're looking to measure is how much overlap they have between data sources. In that case, the bigger the number, the better. The more overlap there is between a few hospitals, the better that is for them as an HIE. They can say, 'Look, there's value in sharing data because 14% of your patients have been to another hospital.'"

Algorithms and Machine Learning
Regardless of which challenges are being offered up by which entities, vendors in the private sector are driven by the intrinsic challenge of improving their algorithms and delivering the most effective solution possible to an industry that desperately needs a fix. In that pursuit, each vendor takes a slightly different position on how best to accomplish the goal.

Kotyk emphasizes ARGO's commitment to continuously improving its algorithm. "Some people stopped their algorithm development in the '90s and 2000s. A lot of the vendors out there are in the 'good enough' phase of software development," he says.

ARGO is leveraging machine learning capabilities along with probabilistic algorithms. Kotyk says machine learning is effective in identifying relationships between fields. For example, a last-name change for a female patient at a particular age carries more significance.

He adds, "Senior citizens have a higher propensity to misreport their age. Mistakes appear in birthdates the older the population gets."

Benson notes that LexisNexis has "one of the largest, if not the largest, referential database" available on the market, adding that no one's data can remediate every error. "Some people don't have a public record footprint, or they have a really common name, or there's simply too much variation in their records to merge," she says.

She hasn't lost sight of the reason LexisNexis continues to improve their database: "There's more to it than it just being helpful from an administrative standpoint—we're actually improving patient safety by making sure your records all end up together so that your care provider can look at your health holistically."

Verato ensures referential data are orthogonal, or independently sourced, and trustworthy. Case says data from multiple sources that aren't orthogonal can mislead a person to believe a data element is more trustworthy than it is.

"If we were to go buy identity information from a credit card company, a bank, and a credit bureau, it's all the same information revolving around. We don't want to fool ourselves into thinking, 'Oh, wow. I've seen this piece of information in three places. It must be really good,'" she says.

Verato purchases publicly available data from three sources: credit header information, telephone utility information, and an aggregation of government and legal records for property taxes, voter and DMV registration, and the death index. "We believe those are better quality sources of information because they tend to be events in a person's life where it's in their best interest to give correct information," Case says.

According to Kotyk, vendors will continue to struggle to clean up errors made at registration until the industry solves the problem at the source. "The problem is the search tools used in registration are vintage 1960s," he says. "It's an exact match or a wild card match. If there's any mistake, they're not going to find that record.

"Put the best technology, the same level of sophistication, on the front end in the search tool to prevent creating the duplicate record in the first place," he continues. "Scan the license. Layer in biometrics. That's the sales pitch. That's how you solve the problem."

— Sarah Elkins is a freelance writer based in West Virginia.