June 8, 2009
By Selena Chavis
For The Record
Vol. 21 No. 12 P. 20
The new Red Flags Rule requires healthcare institutions to be on their toes when it comes to identifying potential identity theft.
As predicted by many healthcare and security professionals, the U.S. government is continuing to take steps to enact measures that will more effectively protect consumers against fallout from security breaches and identity theft in the marketplace. The latest of these efforts to have a direct effect on healthcare institutions comes in the form of the Identity Theft Red Flags Rule, which goes into effect August 1.
An outgrowth of the Fair and Accurate Credit Transaction Act of 2003, the Red Flags Rule specifically focuses on an organization’s ability to prevent identity theft and medical identity theft.
Susan Gindin, a Denver-based attorney with Isaacson Rosenbaum, believes the action should come as no surprise since the Federal Trade Commission (FTC) estimates that 9 million Americans have their identities stolen each year, with medical identity theft accounting for 4.5% of that total. “Identity theft was the No. 1 consumer complaint lodged in 2008,” she says. “The Red Flag rules have wide-sweeping ramifications for thousands of businesses nationwide that allow consumers to make payments for services.”
Chris Apgar, CISSP, president of Apgar and Associates, a consulting firm specializing in healthcare information security, suggests that many consumers more readily equate identity theft with credit card or banking transactions, even though medical identity theft can also have far-reaching ramifications. He points out that victims of medical identity theft can suddenly find themselves facing huge medical bills or a cancellation of their insurance coverage. “It [preventing medical identity theft] is what organizations should have been doing all along,” he says. “The Red Flags Rule is proactive in nature as opposed to the identity theft laws, which are reactive in nature.”
Entities covered under the rules include businesses and organizations that offer credit or payment plans to consumers, including businesses as diverse as utility companies and charities. It also covers institutions, such as healthcare organizations, that allow consumers to pay in installments or require multiple transactions on a particular account, notes Sue Marre, RHIA, president of the Massachusetts Health Information Management Association and HIM director and HIPAA privacy officer at New England Sinai Hospital and Rehabilitation Center.
“Originally, healthcare organizations didn’t think we would be covered,” she says, adding that many institutions were caught off guard; as of March, 51% of healthcare institutions reported they would not be ready. “Because we allow payment plans, then we fall in the covered category as a creditor. I don’t think healthcare organizations were ever purposely excluded.”
The FTC extended its original compliance deadline from November 1, 2008, to August 1, 2009, because many entities, including healthcare facilities, were unsure whether they were covered, especially because many of them had not been required to comply with the FTC’s rules in other contexts. The extension gives institutions time to develop and implement written prevention programs.
While the ruling has left many organizations unprepared and scrambling to comply, Apgar says these rules are written to be taken seriously. With enforcement under the auspices of the FTC, he adds, in his experience, “If you are an entity under investigation, you are guilty until proven innocent. That’s the way the FTC tends to operate.”
What the Rules Entail
It comes down to having a written program in place and following it, according to Apgar, who adds that if it’s not documented, then it didn’t happen in the legal world.
“As a responsible healthcare organization, I may be doing the right things, but if I haven’t documented it, to an outsider, it doesn’t look like I have done anything,” he explains.
The rules state that an organization must have a “program” as opposed to just a set of policies and procedures, and while many professionals are unclear as to how that will be defined, Marre points out that the key to what a program should encompass relates directly to an organization’s size. “My program here at Sinai is going to be on a much smaller scale than at Massachusetts General,” she says, comparing her 212-bed facility to that of the largest hospital in New England.
Apgar says the plan is intended to cover all accounts, not just those that fall under the conditions of payment plans or multiple transactions. “The FTC has taken the stand that it doesn’t matter if it’s only 1% of accounts that fall under that definition,” he says.
The rules state that organizations are required to identify patterns, practices, and specific forms of activity—red flags—that indicate the possible existence of identity theft. “The rule provides an extensive list of potential red flags for organizations to use in identifying red flags,” Gindin notes. “For medical practices, an appropriate first step before providing medical services is to check photo IDs and insurance information or credit cards to ensure they match the person and name on the account.”
She adds that potential offenses could include the attempted use of a photocopied driver’s license, unusual account activity, or a suspicious address change request.
Organizations are also required to provide specific procedures for detecting potential threats in their day-to-day operations. Actions could include the monitoring of people accessing patient files and noting suspicious activity in patient accounts, Gindin says.
Procedures should also be in place for responding when red flags are detected. Gindin offers examples of possible responses, including ensuring that information relating to the thief is not commingled with that of the victim, contacting insurance providers to prevent further issues, and notifying the patient.
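The identify-detect-respond structure Gindin describes can be sketched as a simple lookup from a detected red flag to its documented response. This is an illustrative sketch only; the flag names and response text below are hypothetical examples, not language prescribed by the rule:

```python
# Hypothetical mapping of detected red flags to documented responses.
# A written Red Flags program would define its own flags and actions.
RESPONSES = {
    "mismatched_photo_id": "Verify identity before providing services",
    "suspicious_address_change": "Confirm the change with the patient directly",
    "commingled_records": "Separate the thief's information from the victim's file",
    "confirmed_theft": "Notify the patient and contact insurance providers",
}

def respond_to_flag(flag: str) -> str:
    """Return the documented response for a detected red flag."""
    # An unrecognized flag still requires review; a written program
    # should never silently ignore a detection.
    return RESPONSES.get(flag, "Escalate to the privacy/security officer for review")
```

The point of encoding responses this way is the one Apgar makes about documentation: every detection path leads to a written, reviewable action.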
Regular staff training is a requirement, as well as oversight of other service providers working with an institution. Marre notes that New England Sinai conducts regular staff development training and has put specific training for identifying red flags on the agenda for employee orientation.
Other provisions in the rules call for the regular monitoring of the program’s effectiveness and a well-documented policy that is signed, administered, and managed by the board of directors.
The Buck Stops at the Top
Apgar says the rules have been specifically written to require a top-down approach within a facility. “It requires senior management to adopt a policy and pay attention to what it’s doing or not doing,” he explains. “They are required to review the program on an annual basis.”
Gindin adds that the FTC wants to see documented proof that the board of directors, board committee, or managing physician is overseeing the program’s implementation and administration, which means senior stakeholders will be required to review reports regarding the program and approve any significant changes.
The day-to-day management of the program could fall under or include any number of departments, Marre suggests, including HIM, compliance, and IT. “At a lot of places, it will likely come under the privacy or security officer assigned to HIPAA,” she says. “The director of IT and I have been working together on our program. Most HIM staff have some experience based on what we had to do for HIPAA.”
Don’t Reinvent the Wheel
Gindin believes most facilities, if they have adopted appropriate policies to comply with HIPAA, should be well prepared for red flag ramifications. “I suspect most already have the procedures in place, and all they will have to do is document,” she says.
Apgar cautions organizations to avoid duplicating efforts when putting these plans in place. “You don’t need to go out and adopt a whole new set of Red Flags Rule policies,” he explains, pointing to provisions such as risk analyses that are already required by HIPAA. “Why do I want to do two risk analyses when I can do one?”
Pointing to another area where duplication can be avoided, Apgar notes that most facilities probably already have a security response team identified. “Those [risk analysis and a security response team] are two of the more significant areas that should already be in place,” he says. “Instead of looking at a new security response team, why not use what’s already in place?”
When it comes to red flag enforcement, it’s a whole new ball game compared with HIPAA—at least, that’s what many industry professionals are suggesting. When HIPAA went into effect six years ago, facilities braced themselves for fallout from the Office for Civil Rights (OCR). To date, there have been no official civil penalties levied.
Apgar notes that while many institutions may have let their guard down in relation to HIPAA, the FTC’s enforcement arm is much different. “The FTC tends to take things very seriously,” he says. “They are more into enforcement than OCR.”
Gindin concurs, noting that to date, the FTC has taken more than 30 enforcement actions against violators of various privacy laws. “The penalties against violators have been as high as $10 million,” she notes.
Healthcare organizations can be penalized $2,500 for each Red Flags Rule violation. In addition, Gindin says the tendency is that once a healthcare organization is cited, things seem to snowball. “If the FTC takes action against a medical provider, the provider may then face an HHS [Health and Human Services] or CMS [Centers for Medicare & Medicaid Services] investigation into its HIPAA procedures, and the state attorneys general may bring actions under their state laws,” she explains. “We are also seeing class-action lawsuits based on data breaches, so this is also a possibility.”
Apgar points to a recent settlement that originated as a dual investigation by the FTC and HHS of allegations that CVS Caremark pharmacies nationwide were disposing of a multitude of sensitive personal health and financial information into open dumpsters. The nation’s largest retail pharmacy chain agreed to a $2.25 million settlement and a corrective action plan to ensure it does not violate the privacy of its millions of patients.
“If you have one employee throw out 12 documents and that continues, it can get very expensive very quick,” Apgar says.
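Apgar’s arithmetic is easy to check: at $2,500 per violation, with each mishandled document potentially counting as a separate violation, exposure compounds quickly. A rough, illustrative calculation (the per-day document count is hypothetical):

```python
# Back-of-envelope illustration of Apgar's point: each improperly
# handled document can count as a separate $2,500 violation.
PENALTY_PER_VIOLATION = 2_500  # dollars per Red Flags Rule violation

docs_per_day = 12  # hypothetical daily lapse, per Apgar's example
exposure_per_day = docs_per_day * PENALTY_PER_VIOLATION
print(exposure_per_day)       # 30000  -> $30,000 of exposure per day
print(exposure_per_day * 30)  # 900000 -> $900,000 if it continues for a month
```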
However, Apgar says not to expect an army of FTC investigators to begin regular compliance audits. “The FTC does not have a huge enforcement arm, but they do respond strongly to complaints and headlines,” he notes.
Gindin explains that FTC enforcement is typically triggered by an event, such as a data breach or a major identity theft involving the organization. The key to the Red Flags Rule is that it demands a proactive effort to identify problems early, not a guarantee that every incident will be prevented.
“If the medical provider can demonstrate that it has had a well-documented program in place and, in spite of their program, the event took place, it is much less likely to face legal action,” Gindin says.
Apgar agrees that if organizations are following the guidelines, there shouldn’t be much concern about fallout. “If my staff isn’t looking for [red flags] and taking action appropriately, then my organization will find itself in trouble,” he says.
— Selena Chavis is a Florida-based freelance journalist whose writing appears regularly in various trade and consumer publications covering everything from corporate and managerial topics to healthcare and travel.
Common Red Flag Categories
In a how-to guide, the FTC outlines five categories of red flags to be included in a formalized program:
1. Alerts, notifications, and warnings from a credit reporting company: Changes in a credit report or a consumer’s credit activity may signal identity theft. Examples include a fraud or active duty alert on a credit report, a credit freeze notice in response to a request for credit, or a report of address discrepancy.
2. Suspicious documents: Sometimes paperwork has the telltale signs of identity theft, including identification that looks altered or forged or the person not resembling the photo ID or matching the physical description.
3. Suspicious personal identifying information: Identity thieves may use information that contains inconsistencies. Examples include an address that doesn’t match the credit report; the use of a Social Security number that’s listed on the Social Security Administration Death Master File or a number that hasn’t been issued; a date of birth that doesn’t correlate to the number range on the Social Security Administration’s issuance tables; information that has been used on a known fraudulent account; a bogus address or invalid phone number; a Social Security number that has been used by someone else opening an account; an address or telephone number that has been used by others; a person who omits required information and doesn’t respond to notices that the application is incomplete; or a person who can’t provide authenticating information.
4. Suspicious account activity: Sometimes the tip-off is how the account is being used. Red flag patterns include nonpayment when there’s no history of missed payments, mail sent to the customer that’s returned repeatedly as undeliverable although transactions continue to be conducted on the account, notice that the customer isn’t receiving account statements in the mail, or reports of unauthorized charges on the account.
5. Notice from other sources: Sometimes a red flag that an account has been opened or used fraudulently can come from a customer, a victim of identity theft, or law enforcement.
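As a rough illustration, the five categories above could be screened with a simple checklist run against a registration record. All field names and checks below are hypothetical simplifications invented for this sketch, not FTC-mandated logic; a real program defines its own criteria:

```python
# Hypothetical screening of a registration record against the FTC's
# five red-flag categories. Field names are invented for illustration.
def screen_registration(record: dict) -> list[str]:
    """Return the red-flag categories triggered by a registration record."""
    flags = []
    if record.get("credit_fraud_alert"):                              # category 1
        flags.append("credit_report_alert")
    if record.get("photo_id_matches") is False:                       # category 2
        flags.append("suspicious_document")
    if record.get("address") != record.get("credit_report_address"):  # category 3
        flags.append("inconsistent_identifying_info")
    if record.get("mail_returned_undeliverable"):                     # category 4
        flags.append("suspicious_account_activity")
    if record.get("external_theft_report"):                           # category 5
        flags.append("notice_from_other_source")
    return flags
```

Any non-empty result would feed into the program’s documented response procedures, consistent with the detect-then-respond structure the rule requires.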