
February 1, 2010

Do Privacy Hurdles Impede Data Exchange?
By Elizabeth S. Roop
For The Record
Vol. 22 No. 2 P. 14

The industry searches for middle ground in the debate over how to exchange protected health information without violating patients’ privacy rights.

The passage of the American Recovery and Reinvestment Act (ARRA) brought with it a bevy of new regulations governing the protection and use of protected health information (PHI) designed to strengthen HIPAA and sharpen the teeth of state-level privacy laws. It also added fuel to an already heated debate that pits some privacy advocates against those who champion the unfettered exchange of deidentified PHI to improve healthcare access, quality, and safety, as well as to advance medical research.

Central to this debate are questions regarding a patient’s right to privacy vs. the industry’s need to advance technologically and scientifically. Specifically, are privacy laws that restrict the sharing of PHI preventing effective data exchange and impeding the scientific research necessary to advance the practice of medicine?

“Privacy laws aren’t hindering anything. In fact, privacy laws are exactly what enable your information to be shared with the people and places you want it to be shared with,” says Deborah C. Peel, MD, founder and chairman of Patient Privacy Rights, a medical privacy watchdog organization. “The myth that consumer control over PHI is an obstacle was dreamed up by the insurance industry, the vast health data-mining industry, and those who want to use and sell data without our permission.”

But Don E. Detmer, MD, MA, a professor of medical education at the University of Virginia and senior advisor with the American Medical Informatics Association, says, “They [privacy laws] are [a concern] today, and I think that the question is probably one of whether we can find better policies and technical solutions so that they won’t be. But it’s clear to say this is an issue that bothers a number of experts in this space.”

While those advocating for stronger privacy protections and those politicking for greater PHI exchange appear to be diametrically opposed, the reality is that the debate is made up of many shades of gray and the two sides are actually in agreement on many points. Most agree that the ability to exchange data is beneficial to patients. They also agree that an effective consent system is needed that enables patients to control when, with whom, and for what purposes their PHI is shared.

The fissures don’t truly open until the conversation turns to the breadth of those consents and to whether exceptions should be made for the use of deidentified data in research. Those involved in the debate are also often at odds over whether there really is such a thing as deidentified PHI and, if there is, whether it retains any scientific value once it is stripped down to a truly anonymous form.

The Need for Stronger Consent
Most agree that a properly structured patient consent process could go a long way toward resolving the privacy vs. advancement debate. However, it would need to go beyond even the latest consent regulations contained within ARRA that allow patients to restrict the disclosure of PHI related to treatments or other services paid out of pocket.

To be effective, the consent process would also need to overcome the variances among the patchwork of state laws that currently govern PHI privacy. Doing so would eliminate the argument made by many healthcare organizations that it is impossible for their systems to electronically manage the varying restrictions placed on PHI.

“It will require compromise on both sides,” says Robert D. Belfort, a partner in the healthcare industry practice of Manatt, Phelps & Phillips, LLP. “We’ve seen those positions articulated even in New York, where we’re working with the New York eHealth Collaborative. On one side, you have physicians who take an in loco parentis view of their patients. Why would anyone object to sharing information between providers? On the other side, there [are] privacy advocates who believe patients don’t necessarily want their podiatrist to see their abortion record. There needs to be a fine level of control over the information flow.”
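To make that “fine level of control” concrete, here is a minimal sketch, in Python, of how an exchange might encode per-patient disclosure rules. Everything in it (the ConsentRule and Record names, the categories, the fallback policy) is a hypothetical illustration, not a description of the New York collaborative’s actual software.

from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRule:
    category: str        # e.g., "reproductive_health", "mental_health"
    recipient_role: str  # e.g., "podiatrist", "primary_care"
    allowed: bool

@dataclass(frozen=True)
class Record:
    patient_id: str
    category: str
    payload: str

def disclosable(record, rules, recipient_role, default_allow=False):
    """Permit a disclosure only if the patient's rules allow it.

    If no rule matches, fall back to the exchange's default policy
    (the opt-in vs. opt-out choice discussed elsewhere in the article).
    """
    for rule in rules:
        if (rule.category == record.category
                and rule.recipient_role == recipient_role):
            return rule.allowed
    return default_allow

# Belfort's example patient: block reproductive-health records from the
# podiatrist while leaving everything else at the exchange default.
rules = [ConsentRule("reproductive_health", "podiatrist", allowed=False)]
record = Record("patient-001", "reproductive_health", "...")
print(disclosable(record, rules, "podiatrist", default_allow=True))    # False
print(disclosable(record, rules, "primary_care", default_allow=True))  # True

In practice, of course, the hard part is not the lookup but agreeing on the categories and defaults across a patchwork of state laws, which is exactly the compromise Belfort describes.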

The challenge for both sides is to come up with practical solutions that allow patients to decide for themselves whether the benefits of releasing their information outweigh the risks, and to do so in a way that is technologically feasible. Assuming from the outset that some type of consent will be obtained is a reasonable starting point, although some problems, such as minor consent, defy simple resolution.

For example, Belfort notes the New York consortium is temporarily excluding the exchange of any information related to minors between the ages of 10 and 18. This allows the consortium to sidestep a complex, multifaceted issue that could drag down the entire process.

“It is one way to address the privacy issue and still have a robust exchange of information,” he notes.

Detmer suggests an effective consent process could be as simple as giving individuals the chance to opt in or out of data sharing, at least at the state level. As proof that such a basic system would be effective, he points to statistics from Utah, where just 3% to 5% of parents opt out of exchanging childhood vaccination histories, and Massachusetts, where 95% to 97% of the population agreed to join the state’s data exchange system.

“The fact of the matter is that we have evidence out there [that the process] is not an abstract thing. When talking about opt in or opt out, most people are pragmatic,” he says. “This is clearly not the black box that it used to be 20 years ago when we first started getting into these discussions.”

Peel argues the issue is not that a suitable process for granting informed electronic consent doesn’t exist. Rather, the problem is the illegal use of PHI by secondary parties to whom no consent has been granted.

She uses the pharmaceutical industry as an example. When an individual drops off a prescription at a local pharmacy, even if they pay out of pocket for the medication, their information is automatically available to multiple secondary organizations, including pharmaceutical data-mining companies and manufacturers.

“The problem arises because so many people want to use your information in ways you don’t agree to or permit. … The tremendous obstacles are the data feeds to secondary and tertiary users,” Peel says. “You should be able to tell your physician to send your information to anyone you want to see your records. The problem is that once your records leave your doctor’s office, you have no control over the vast illegal systems of providers and HIT vendors that mine and sell our information. That’s data theft.”

Such unauthorized yet largely uncontrollable releases of PHI can have a chilling effect on individuals who fear that seeking care may open them up to discrimination by potential employers and insurance companies. In fact, the Department of Health and Human Services estimates that nearly 600,000 Americans did not seek early treatment for cancer and 2 million did not seek care for mental illness because of privacy concerns. Likewise, the RAND Corporation found that 150,000 U.S. soldiers suffering from posttraumatic stress disorder did not seek help for the same reason.

For Peel, a psychiatrist, it’s a reality she faces every time she prescribes medication for a patient diagnosed with depression. “I feel like I have to give [my patients] a Miranda warning before writing them a prescription,” she says. “If you fill a prescription for an antidepressant, that information can and will be used to discriminate against you.”

PHI and Research
The other hot-button issue arises when the debate turns to the use of PHI for research. Again, the issue is far from black and white, with both sides acknowledging there is great medical and societal value to be derived from the sharing of data among researchers. Where they disagree is whether consent should be secured before data are shared, even when the data are deidentified.

HIPAA does permit partially deidentified data to be used without a patient’s consent, says Belfort. Researchers also have the option of requesting a waiver from the institutional review board overseeing their study in order to work with a full data set or of obtaining consent from individual patients. “Some researchers believe that the rules are still too strict,” he says. “I do hear a lot about HIPAA being an impediment to research, but I’m not sure why researchers believe that to be the case.”

According to Detmer, the problem for researchers is that despite the HIPAA exception, U.S. policy does not distinguish between the use of PHI for medical care and its use for research. Therefore, individual patients have no real means for specifying that utilizing their PHI for research is acceptable, especially if they have concerns about that same data being shared elsewhere.

“Other parts of the world with a more ‘social solidarity’ approach to healthcare have … separate policies governing research and related to care. Part of our problem in this country is that we have not made that distinction clear at all, so we’re paying the price,” he says. “There’s a pretty broad consensus that it’s hurting biomedical and all other kinds of research because we have not developed a policy that discriminates between research and care. Rather, we approach it as one issue.”

For privacy advocates, the real problem with regulations allowing the use of deidentified data is that there is no way to prevent those data from being reidentified, and no penalties apply if they are.

“Those claiming to deidentify data should have the burden of proof. They should release their methods for deidentification so outside scientific experts can verify whether the claim is accurate,” says Peel.

That is why some advocates for expanding access to PHI for research purposes say it’s time to stop pretending that data can be fully deidentified and still remain useful to researchers. Instead, they say the best approach is to give individuals the opportunity to opt out of allowing their full data to be used for research.

According to Jamie Heywood, chair and cofounder of PatientsLikeMe, the societal value of allowing researchers to access open data far outweighs the risk to the individual. Eliminating the fallacy of deidentification would allow individuals to make educated decisions regarding the potential value their PHI holds for advancing medicine.

“This is the dishonesty in the dialogue,” he says. “In order to deidentify data, you have to combine it in large enough groups so that you can’t identify that individual or you need to take out any meaningful data. Both place extreme compromises on the value of the data.”
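What Heywood describes is essentially the tradeoff formalized in the privacy literature as k-anonymity: records are generalized until every individual hides in a crowd of at least k look-alikes, at the cost of exactly the detail researchers want. The following minimal sketch in Python uses invented toy records; the fields and threshold are assumptions for illustration, not anything drawn from PatientsLikeMe.

from collections import Counter

def k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values is shared
    by at least k rows, so no individual stands out."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Invented toy records: each row is unique on ZIP + age, so releasing
# them as-is would single out individuals.
rows = [
    {"zip": "78701", "age": 34, "dx": "depression"},
    {"zip": "78701", "age": 35, "dx": "asthma"},
    {"zip": "78702", "age": 34, "dx": "diabetes"},
]
print(k_anonymous(rows, ["zip", "age"], k=2))  # False

# Generalize: truncate ZIPs and bucket ages by decade. The set now
# satisfies k=2, but the geographic and age detail a researcher might
# need is gone; Heywood's "extreme compromise" in miniature.
for row in rows:
    row["zip"] = row["zip"][:3] + "xx"
    row["age"] = (row["age"] // 10) * 10
print(k_anonymous(rows, ["zip", "age"], k=2))  # True
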
Heywood, whose Web site is a community where individuals diagnosed with life-changing diseases can post information and share experiences, likens the current regulatory environment to neighboring cities that are legally prohibited from sharing information on crime.

“How would we fight crime?” he asks. “We’ve really set up a tragic system. Realistically, 100,000 people die each year due to medical errors that, if we allowed data to flow effectively, could have been prevented.”

Heywood acknowledges that discrimination based on PHI does occur. But he says passing laws prohibiting data sharing rather than discrimination has “so perverted the concept of privacy and security that we have pursued secrecy as a goal rather than a right. It has fundamentally eroded part of the culture that makes a great society.”

He also makes the case that people are more than happy to openly exchange health information if it will be used to help others and themselves. Heywood points to the more than 40,000 members of PatientsLikeMe, spanning nine disease communities, to illustrate people’s willingness to be open about some of the most sensitive medical issues.

“Privacy is a right and, unless there is strong societal justification, we should not require people to give up that right,” he says. “In general, privacy is a right we should preserve. It’s an important value. What we’re advocating for is an option that becomes default saying that you’re willing to share this information so others can benefit. Stigmatization occurs because people don’t understand. Understanding comes through knowledge, and knowledge comes from data sharing.”

— Elizabeth S. Roop is a Tampa, Fla.-based freelance writer specializing in healthcare and HIT.

What Do Patients Want?
There’s one sector rarely heard from in the debate surrounding privacy laws and the advancement of medicine: the general public. Do patients want the opportunity to consent to having their protected health information (PHI) used for research? Do they care whether their information is shared among providers?

Though there has been little fanfare over the findings, there have been several studies on the issue. Most recently, the Agency for Healthcare Research and Quality held 20 HIT focus groups across the country. It found that participants, though conflicted, had some pretty clear views on who should decide what happens to their data.

For example, while a large portion of participants initially said consumers should not help in determining how HIT is designed and used, “upon further discussion, the participants tended to feel that they needed to have a say about health IT in order to protect the privacy and security of their medical information.”

The report, “Consumer Engagement in Developing Electronic Health Information Systems,” also noted that privacy and security were main concerns of a significant majority of focus group participants. According to the authors, “A substantial proportion felt that healthcare consumers owned their data and needed a role in ensuring that those data were secure and used only in ways that they authorized.”

In addition to concerns about hackers, participants also expressed fears that their data could be shared with people who wanted to use it for purposes other than the provision of care.

Finally, participants supported the idea that healthcare consumers should be asked for their consent before their medical data are stored electronically and that they should be able to elect to leave their data in paper format.

“The participants tended to feel that each individual provider should ask each patient for permission to store the patient’s data electronically and to share the data with other providers,” wrote the authors. “Patients should be able to grant permission to one provider but deny it to others in the opinion of many in the focus groups. In this way, the participants felt that health IT restrictions should be set individually for each consumer rather than by general rules applied to all consumers. The participants were divided on the issue of how electronically stored data could be used for medical research and for market research by pharmaceutical companies.”

— ESR