
Special AHIMA Edition September 2013

The New Omnibus Rule
By Susan Chapman
For The Record
Vol. 25 No. 13 P. 10

Experts shed light on a few of the gray areas capable of causing consternation.

The Department of Health and Human Services (HHS) has strengthened the privacy and security protections for protected health information (PHI) established under HIPAA. Its Omnibus Final Rule, which took effect September 23, not only enhances patient privacy protections but also provides individuals with new rights to their health information and reinforces the government’s ability to enforce the law.

The changes offer the public increased protection and control of PHI. In the past, HIPAA’s privacy and security rules focused on health care providers, health plans, and other organizations that process health insurance claims. The current changes extend many of the requirements to affiliated business associates, such as contractors and subcontractors, who have access to PHI.

With the new rule, penalties for noncompliance are tiered according to the level of negligence, with penalties of up to $50,000 per violation and an annual maximum of $1.5 million for violations of an identical provision. The changes also strengthen the HITECH Act breach notification requirements by clarifying when breaches of unsecured health information need to be reported.

The rule expands individual rights in important ways. For example, patients can request a copy of their medical records in electronic form. Additionally, when patients pay in full out of pocket for treatment, they can instruct the provider to keep treatment information private from their health plan. The rule also sets new parameters on how information can be used and disclosed for marketing and fund-raising purposes, prohibits the sale of an individual’s health information without permission, and streamlines patients’ ability to authorize the use of PHI for research purposes.

Although the Omnibus Rule does further clarify HIPAA, several gray areas remain, leaving some HIM professionals searching for answers.

Security Incident vs. Breach
The rule revised when a breach must be reported. However, there is confusion about the difference between a security incident and a breach.

“A breach means that someone accessed information, or possibly did, that he shouldn’t have,” says Rich Temple, a national practice director at Beacon Partners. “A security incident means that someone may have seen or taken something that he shouldn’t have. It could also mean that he didn’t know what he was looking at.”

“When it comes to a covered entity, an incident does not become a breach until the data are compromised,” says Chris Apgar, CISSP, CEO and president of Apgar & Associates. “If the data are encrypted, then the information has not been compromised. If a hard drive is not encrypted, for instance, then I have to notify somebody. If it’s paper, it’s never secure, according to guidance issued in 2009 by HHS.”

On the latter point, Temple points out that it would be challenging to find an organization that hasn’t left a paper file in an obvious location. “Usually, these areas are secure, with access limited to people who are authorized to see it,” he says. “Janitors, for instance, could still see something, and precautions should be taken to prevent this. Generally, common sense should prevail. The National Institute of Standards and Technology [NIST] has a guide that assesses the functional impact of an incident. It’s important to apply a robust set of parameters that make sense.”

Apgar says the new rule requires organizations to presume that a breach of unsecured PHI is reportable unless they can demonstrate otherwise. “If, after investigating the situation, it’s determined that PHI has been compromised, whether electronic or paper, the organization needs to assume the event is reportable,” he explains. “If the organization is the covered entity, it must conduct a four-factor risk assessment and, unless the risk is pretty low, the breach would need to be reported.”

Daniel W. Berger, president and CEO of Redspin, concurs, noting that the first determination organizations must make is whether the security incident involved, or may have involved, PHI. Under the Omnibus Rule, covered entities must conduct a risk assessment for every breach that involves unsecured PHI. Business associates also are required to report any breach of unsecured PHI to the covered entity, which is responsible for conducting the risk assessment.

“In determining the probability of compromise, four factors must be taken into consideration,” Berger says. “First, the nature and extent of the PHI involved, including identifiers and likelihood of reidentification. Second, the identity of the unauthorized person who used the PHI or to whom the PHI was disclosed. Next, whether the PHI was actually acquired or viewed and, finally, the extent to which the risk to that PHI has been mitigated. While this still leaves room for interpretation and subjectivity, such flexibility is probably necessary since in this area of governance, hard lines are often impossible to draw.”

Berger adds that if the covered entity concludes there was a very low probability that PHI was compromised, it may choose not to disclose the incident to the individuals affected or to the Office for Civil Rights (OCR). However, the organization still must maintain a record of how that determination was made. Unless otherwise delegated to the business associate through contract, the business associate does not determine risk and whether the breach needs to be reported.
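
For organizations that track these determinations internally, the four factors lend themselves to a simple checklist. The Python sketch below is a hypothetical illustration only; the class, the field names, and the all-or-nothing scoring rule are assumptions added for readability, not language from the rule, and a real assessment weighs the factors qualitatively rather than as simple yes/no flags.

```python
from dataclasses import dataclass

@dataclass
class BreachRiskAssessment:
    """Hypothetical record of the four-factor assessment described above."""
    phi_easily_reidentified: bool          # factor 1: nature/extent of the PHI, likelihood of reidentification
    recipient_unauthorized: bool           # factor 2: who received or used the PHI
    phi_actually_acquired_or_viewed: bool  # factor 3: whether the PHI was actually acquired or viewed
    risk_fully_mitigated: bool             # factor 4: extent to which the risk has been mitigated
    documented_rationale: str = ""         # the determination itself must be kept on record


def probability_of_compromise_is_low(a: BreachRiskAssessment) -> bool:
    # Illustrative rule of thumb only; real assessments are qualitative.
    return (not a.phi_easily_reidentified
            and not a.recipient_unauthorized
            and not a.phi_actually_acquired_or_viewed
            and a.risk_fully_mitigated)


def breach_is_reportable(a: BreachRiskAssessment) -> bool:
    # Incidents involving unsecured PHI are presumed reportable unless the
    # assessment demonstrates a low probability of compromise and the
    # reasoning behind that conclusion is documented.
    return not (probability_of_compromise_is_low(a) and a.documented_rationale)


# Example: a misdirected record that was actually viewed cannot be treated as low risk.
assessment = BreachRiskAssessment(False, True, True, True,
                                  documented_rationale="Returned immediately; recipient known.")
print(breach_is_reportable(assessment))  # True
```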

Whether the potential breach was the result of a hacking incident or a power outage, Berger says that risk assessment still must be carried out.

Shannon Hartsfield Salimone, cochair of Holland & Knight’s data privacy and security team, says it’s important to note security incidents even if they do not result in a breach. “A stolen laptop that is encrypted could suggest that even if the information can’t be accessed, there could be a physical vulnerability that must be looked at,” she says. “Is there a broken lock on a door or some other situation that leaves PHI vulnerable? It’s not only about the information itself; it’s also about keeping the environment secure overall.”

Bryan Cline, PhD, chief information security officer and vice president of development and implementation of the common security framework at the Health Information Trust Alliance, says security incidents must be carefully evaluated to determine whether they rise to the level of a breach. “These events must first be evaluated to determine if they pose a threat to an information asset,” he explains. “Some events we know will not rise to the level of a security incident. For instance, there are possibly thousands of scans of an organization’s firewall every day.

“But an event can easily become an incident,” he continues. “Suppose an organization receives a lot of spam e-mail. You train users not to respond, so the simple fact you receive spam does not constitute an incident. Now let’s suppose you discover that someone responded to a link in the e-mail. You now have a potential security incident, which requires further investigation since this could conceivably impact the security of PHI should it exist on the individual’s machine. If it can be shown that malware had been successfully downloaded onto that person’s computer, an investigation would be required to determine whether or not a breach occurred. Then you would go through the HITECH/Omnibus Rule evaluation to determine if the breach is reportable. If you can show that there is a low likelihood of harm, then you would not be required to report the incident.”
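
Cline’s spam example traces a path from routine event to security incident to reportable breach. The decision ladder below is a loose sketch of that escalation, again hypothetical: the stage names and the yes/no inputs (whether a user clicked the link, whether malware was confirmed, whether PHI was on the machine) simply mirror the scenario he describes.

```python
from enum import Enum, auto

class Stage(Enum):
    EVENT = auto()                  # routine noise, such as spam received or firewall scans
    SECURITY_INCIDENT = auto()      # a user clicked the link; investigate further
    BREACH_NOT_REPORTABLE = auto()  # PHI compromised, but the assessment shows a low probability of compromise
    REPORTABLE_BREACH = auto()      # notify per the HITECH/Omnibus breach notification requirements


def classify(user_clicked_link: bool,
             malware_confirmed: bool,
             phi_on_machine: bool,
             low_probability_of_compromise: bool) -> Stage:
    """Hypothetical escalation mirroring the spam example in the text."""
    if not user_clicked_link:
        return Stage.EVENT
    if not (malware_confirmed and phi_on_machine):
        return Stage.SECURITY_INCIDENT
    if low_probability_of_compromise:
        return Stage.BREACH_NOT_REPORTABLE
    return Stage.REPORTABLE_BREACH


# A user clicked, malware was found, PHI was on the laptop, and the risk
# assessment could not show a low probability of compromise: report it.
print(classify(True, True, True, False))  # Stage.REPORTABLE_BREACH
```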

Cline cites the example of an individual who received the wrong paper record but returned it immediately to its rightful place. In such a scenario, the incident may not be considered a breach if the information could not have been reasonably retained. If that’s the case, then the incident is not reportable.

Another risk that must be evaluated is an individual who has access to information that he or she is not authorized to view. For example, a patient bill is sent to the wrong address or a receptionist at a clinic has access to a patient’s mental health information. In past years, staff members at large hospitals have occasionally viewed PHI intentionally and without authorization. “Access should be the minimum necessary,” Apgar says. “If I give someone more information than they need to do their jobs, it becomes a breach. They didn’t have a business or clinical reason to see that information. Since the incidents in which staff were accessing celebrity information, policies and procedures should now be in place to ensure the privacy of VIPs is maintained.”

At most health care organizations, access rights are a burning—and sometimes confusing—issue. “Unauthorized access does have some nebulous aspects to it,” Temple says. “HIPAA mandates that each individual who has access is supposed to see the minimum necessary to do their jobs. If the individual’s user ID doesn’t allow her to see the [records of a] patient who is not in her care, then she shouldn’t be able to see it. Some folks also share passwords, which is unauthorized access. If you go somewhere that you don’t belong or access the system under false pretenses, those activities are considered unauthorized access. Even with increased awareness around HIPAA, people still leave their passwords on sticky notes on their computers. Both the sharer and sharee are on the hook then for password sharing.”

Salimone believes that while the Omnibus Rule provides guidance, it lacks clarity. “Overall, the rule is well thought out and helpful,” she says. “It will result in more reporting of breaches, but it still leaves the definition of a security incident open for interpretation.”

Apgar says the absence of a definition has always been part of HIPAA. “The HIPAA Security Rule does not include a definition of what a security incident is to support flexibility,” he says. “The HITECH Act defined a reportable breach in 2009: the compromise of PHI. It’s not completely and totally secure if it’s electronic and not encrypted per the NIST standards or it’s not totally and completely destroyed. If it’s nonelectronic, it’s not secure unless it’s totally and completely destroyed.”
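
Apgar’s description reduces to a short test for whether PHI counts as secured at all. Here is a minimal sketch of that test, assuming an organization records only whether the data are electronic, encrypted to NIST standards, or completely destroyed; the function and parameter names are illustrative.

```python
def phi_is_secured(is_electronic: bool,
                   encrypted_per_nist: bool,
                   completely_destroyed: bool) -> bool:
    # Per the 2009 HHS guidance as Apgar describes it: destroyed PHI is secured
    # in any medium; electronic PHI is otherwise secured only if encrypted to
    # NIST standards; paper that still exists is never secured.
    if completely_destroyed:
        return True
    return is_electronic and encrypted_per_nist


# A lost laptop with an unencrypted hard drive holds unsecured PHI,
# so the breach notification presumption applies.
print(phi_is_secured(is_electronic=True, encrypted_per_nist=False, completely_destroyed=False))  # False
```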

Business Associate Obligations
The Omnibus Rule compels business associates to “report to the covered entity any security incident of which it becomes aware, including breaches of unsecured protected health information as required…”

Many individuals and organizations fall under the title of business associate. “Business associates can be IT folks, lawyers, accounting firms, coders—anyone who uses, discloses, stores, or transmits PHI on behalf of a covered entity,” Apgar says. “There are a lot of cloud vendors out there. They have no intention of looking at the data, but they have the opportunity to look at it. I’m a subcontractor that contracts with a covered entity. I am in a position to be fined if a breach occurs and I don’t report it.”

Cline says business associates must determine whether an incident occurred. If so, they must report it to the covered entity, which then determines whether it rises to the level of a breach and needs to be reported to HHS. “If something almost happened and was remediated, the business associate would want to report it to the covered entity,” he says. “That said, it’s up to the covered entity to state in the business associate agreement what needs to be reported and what doesn’t. Each covered entity uses a different definition. It’s always helpful to be specific in the agreement about what is reportable and what is not.”

Salimone says many covered entities are documenting that business associates must report anything that results in an improper use or disclosure. Otherwise, the business associate may decide unilaterally that the incidents are harmless. “Both business associates and covered entities have to keep track of those things and make a determination,” she says. “Both have to appoint a security official and take whatever action is necessary.”

The process can become cumbersome, Temple notes. “It may be counterproductive to attempt a formal, full remediation process for every single incident to be reported because of the logistics and paperwork involved,” he says. “It takes time and resources away from those incidents that really need attention. All incidents, large or small, should be documented in accordance with the Omnibus security regulations.”

Apgar takes a stricter view. “If I’m a business associate and I’m aware of a security incident, I have to consider how stringent the company is,” he says. “We always recommend that business associates report to the covered entities and let them make the decision because that’s what the final breach notification rule requires.”

Guidelines
According to the experts, there must be guidelines in place to determine whether an incident is reportable. “I’m going to have things that happen on a day-to-day basis,” Apgar says. “If a laptop gets infected, I know what to do, all the way up to ‘I have a major disaster,’ or someone lost a laptop that wasn’t encrypted. I have to have a security incident response team in place. Usually that core [team] looks at it if it’s only IT; then they go to the next level and may involve marketing, communications, the attorney, and so on. One of the problems is that people have policies in place that will do these things, but if the plan is not tested, things fall apart when that large breach occurs.”

A typical test scenario involves creating a mock breach incident that engages the entire team. The test begins just as a real incident would, when not much information is yet known. Then, as the investigation continues, more people become involved. “People don’t test because they don’t think about it,” Apgar says. “Generally, there are not enough resources invested in security; it’s not seen as an insurance policy, but that is what it is.”

Cline adds that incident response has been around a long time. “It’s really not rocket science; it’s pretty simple,” he says. “Unfortunately, a lot of organizations don’t have dedicated people for incident response. It’s usually an ad hoc team, and they bring in other people as needed. They would have procedures to follow, but it’s not something they would do day to day [because] these people have other full-time jobs.”

Consequences of Not Reporting
Organizations that fail to properly report potential breaches can expect a phone call from the OCR. It shouldn’t signal panic, though. “As long as you’re compliant and can back up your decision making, you’re likely OK,” Apgar says. “The OCR may ask you to make changes if they disagree with your procedures. If you fix it in a timely fashion, you will not be fined. However, if you can’t demonstrate that you’ve done your risk analysis, haven’t implemented policies and procedures, and so forth, you will be fined. That’s called willful neglect.”

Lack of compliance can get expensive. For example, recent security breaches cost WellPoint and Affinity Health Plan $1.7 million and $1.2 million, respectively.

Breach consequences can extend beyond fines, Temple notes. “Companies should be concerned with criminal liability, and there should be sanctions for individuals who do not comply with organization security protocols,” he says.

To guard against such circumstances, Temple encourages organizations to take a hard line by immediately terminating any person involved in mishandling a security breach and making such actions public without disclosing identities. “Having a documented policy of zero tolerance and enforcing that policy makes it self-policing and much easier to stay in compliance with the new rule,” he says.

— Susan Chapman is a Los Angeles-based writer.