June 11, 2007

The Proof Is in the Data
By Aggie Stewart
For The Record
Vol. 19 No. 12 P. 25

What good are performance initiatives without data collection and reporting standards? Several organizations in the quality arena are taking steps to establish needed data standards that will ensure more effective and efficient performance measurement.

Until recently, discussion about performance measurement in healthcare has focused on broad issues of what and how to measure, as well as issues of measure construction. While significant research time, energy, and resources have gone into advancing the science that undergirds performance measure development, comparatively little has gone into addressing data collection methodology and standardization.

As performance measurement’s application through pay-for-performance programs has become increasingly viewed as a means to increase the value and safety of healthcare and restructure its payment system, requests for performance data have risen exponentially. With no change in the forecast for requests from performance measurement initiatives, providers aren’t likely to see the rising wave of requests crest anytime soon.

Yet, a crest achieved through standardization and coordination is what many in the quality arena believe is necessary to prevent performance measurement from imploding. At a time when providers face increased costs and other burdens, escalating and disparate requests for performance data for various reporting purposes have pushed a broad range of data collection issues to the surface, not the least of which are data quality, duplication, and the costs of data collection itself.

Last November, to begin bringing order to uncoordinated performance measurement efforts and to the issues raised by burgeoning demands for performance data in a discordant measurement environment, the Agency for Healthcare Research and Quality (AHRQ) within the Department of Health and Human Services (HHS) partnered with AHIMA's Foundation of Research and Education (FORE) and the Medical Group Management Association (MGMA) Center for Research to convene a conference of more than 50 of the nation's most influential healthcare experts to address problems relating to performance measurement, data collection, and reporting. In preparation for that conference, the AHIMA and the MGMA formed an expert task force to lay out the array of issues in data collection, aggregation, and reporting requirements. The AHRQ published the task force and conference findings and recommendations in the joint report "Collecting and Reporting Data for Performance Measurement: Moving Toward Alignment." The accompanying sidebar summarizes six challenges in the current performance measurement data collection and reporting environment identified by task force members and conferees.

According to Jon White, MD, a senior portfolio manager for health information technology (HIT) at the AHRQ, the agency drew two overarching conclusions from the work presented. "The first conclusion is that there needs to be a single path forward in order to resolve issues related to variation in performance measurement requirements," explains White. "And the second conclusion is that health information technology and health information systems could be a great boon for automating data collection and reporting, but if that is also done in a disorganized fashion without a unified process, we'll just be taking bad processes and making them faster." White's comments refer to a similar state of disorganization in HIT development and implementation described in the report.

A Pressing Need to Establish and Maintain Order
The AHRQ’s first conclusion zeroes in on the report’s central recommendation—the creation and funding of a public-private oversight entity with the authority to set operating rules and standards governing performance measurement, including bringing standardization to measure development, measurement systems, and data content and collection, as well as harmonizing existing measures in and across care settings.

According to the report, such an authority is urgently needed to coordinate and organize performance measurement efforts to reduce the resource burden that discordant demands for performance data place on physicians and healthcare organizations, while more fully realizing the benefits performance measurement offers to improving the quality, safety, and efficiency of healthcare.

Problems from a lack of standardization and coordination among the various and increasing performance measurement initiatives drove the recommendation for a public-private oversight entity. “We can’t go on with multiple private, multiple public entities setting forth their requirements for [performance measurement] essentially in a vacuum,” says AHIMA CEO Linda Kloss, MA, RHIA, CAE. “And it isn’t just what’s going on nationally. It’s important to understand that there are quality and transparency initiatives going on at the state level.”

“We’ve raised this issue of how to achieve coordination between national initiatives and those at the state and local levels with the Office of the National Coordinator because we can’t compound the fragmentation,” continues Kloss, referring to a project FORE is completing for the Office of the National Coordinator.

With an increasing number of programs requesting data, the absence of alignment has amplified existing problems between programs, such as similar measures that vary across care settings or specialties and variation in the types of data requested (eg, administrative, clinical, patient survey). Further differences in data characteristics, such as data element definitions, formats for similar fields (eg, mm/dd/yy, mm/dd/yyyy, mm/yy), the medical documentation used as data sources (eg, progress notes, history and physical, physician or nurse practitioner notes), and submission guidelines and requirements, only compound the growing melee and complicate, if not frustrate, providers' attempts to streamline data collection and reporting processes.
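To make the date example concrete, consider a provider submitting to several programs, each expecting a different layout for the same field. The short Python sketch below (purely illustrative; the format list and function name are invented for this article) normalizes the three formats cited above into one canonical form before submission:

```python
from datetime import datetime

# The three layouts cited above; each hypothetical reporting program
# uses one of them for the same encounter-date field.
KNOWN_FORMATS = ["%m/%d/%Y", "%m/%d/%y", "%m/%y"]  # mm/dd/yyyy, mm/dd/yy, mm/yy

def normalize_date(raw: str) -> str:
    """Try each known layout; return a canonical ISO 8601 (YYYY-MM-DD) date."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue  # wrong layout; try the next one
    raise ValueError(f"unrecognized date format: {raw!r}")

print(normalize_date("06/11/2007"))  # -> 2007-06-11
print(normalize_date("06/11/07"))    # -> 2007-06-11
print(normalize_date("06/07"))       # -> 2007-06-01 (day defaults to 1)
```

Note what happens to the month-only value: the day is silently defaulted, a small example of the data quality risk that reconciling data across formats introduces.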

Unaligned measurement initiatives also raise issues related to the many facets of data quality. According to performance measurement expert Jerod Loeb, PhD, executive vice president for the Division of Research at The Joint Commission, good data quality starts with good measure construction, which involves good data content specifications. "Data quality is only as good as data construct," explains Loeb, adding that a "good measure needs to have very explicit attention paid to such things as standardized data element definitions and abstraction guidelines."
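As a rough illustration of what such explicit attention could look like, the sketch below models a measure specification as a small data structure with standardized element definitions. The measure, identifier, and wording are invented for this article (loosely patterned on aspirin-on-arrival measures of the era), not taken from any actual measure set:

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One standardized data element with its abstraction guideline."""
    name: str
    definition: str        # tells chart abstractors exactly what counts
    allowed_values: tuple  # constrains what collectors may record

@dataclass
class MeasureSpec:
    """A performance measure built from explicitly defined elements."""
    measure_id: str
    numerator: str
    denominator: str
    elements: list = field(default_factory=list)

# Hypothetical measure; the ID and definitions are for illustration only.
aspirin = DataElement(
    name="aspirin_on_arrival",
    definition="Aspirin received within 24 hours before or after hospital arrival",
    allowed_values=("Y", "N"),
)

example_measure = MeasureSpec(
    measure_id="EXAMPLE-AMI-1",
    numerator="AMI patients who received aspirin on arrival",
    denominator="AMI patients without documented aspirin contraindications",
    elements=[aspirin],
)
```

If every program that wanted this measure shared one such specification, abstractors would apply a single definition and value set rather than several near-identical variants.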

While data content specifications within a measure set may be consistent, consistency across programs remains absent. This creates hurdles for organizations collecting data for multiple programs, adding to the time and costs required to collect, aggregate, and submit the data. It also introduces risk to data accuracy due to the amount of manual intervention required to massage the data and prepare it for submission.

These and other problems introduced by a lack of coordination in the performance measurement arena have led many industry leaders to believe that continued absence of order will lead not only to uneven quality monitoring but, more importantly, to a stifling of the best-intended, best-funded HIT initiatives.

HIT’s Role in a More Rational Performance Measurement Environment
The AHRQ’s second conclusion targets HIT as a tool for addressing the complex set of issues surrounding current approaches to data collection, aggregation, and reporting. The report outlines a vision for data collection that is simple and streamlined: collect the data once per encounter as close to the point of care/service as possible in a format that will enable multiple secondary uses internally and externally. In other words, make data collection a by-product of the care process. This vision is linked to HIT, particularly electronic health record (EHR) systems and the need for these systems to support and facilitate performance measurement.
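A minimal sketch of that collect-once idea, with invented field names and two hypothetical downstream programs, might look like the following; the encounter is captured a single time, and each report is derived from it rather than re-abstracted:

```python
# Illustrative only: one encounter record captured at the point of care,
# then reused for two hypothetical reporting programs.
encounter = {
    "patient_id": "P001",
    "encounter_date": "2007-06-11",  # one canonical date format
    "diagnosis_code": "410.01",      # ICD-9-CM era example value
    "aspirin_on_arrival": True,
}

def to_hospital_report(rec: dict) -> str:
    """Render the record as a pipe-delimited row for one program's layout."""
    return "|".join([rec["patient_id"], rec["encounter_date"],
                     rec["diagnosis_code"]])

def to_payer_report(rec: dict) -> dict:
    """Render the same record for a second program's schema."""
    return {"id": rec["patient_id"],
            "dos": rec["encounter_date"],
            "asa": "Y" if rec["aspirin_on_arrival"] else "N"}

print(to_hospital_report(encounter))  # P001|2007-06-11|410.01
print(to_payer_report(encounter))     # {'id': 'P001', 'dos': '2007-06-11', 'asa': 'Y'}
```

The point is not the particular formats but the direction of flow: data recorded once during care feed every secondary use, instead of each request triggering a fresh round of abstraction.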

While EHRs hold the potential to be a significant part of the solution to the challenges in data collection and reporting, their development and implementation have been similarly piecemeal and uncoordinated. As the report highlights, EHR development, like performance measure development, lacks standardization, from basic components, such as data field lengths and labels, to more complex ones, such as functionality and data storage. The absence of data standards that include definitions and taxonomy makes EHR standardization more challenging.

At present, performance measure and EHR development occur in separate industry silos, which accounts, in part, for the scant support EHRs currently provide to data collection, aggregation, and reporting processes. Experts believe that the silo approach needs to change, and the development of each needs to be informed by the needs of the other.

“The definition of data elements and the specific requirements for how those data elements are to be collected and from what sources should be tremendously helpful to the [EHR] vendor community so that they can embed those standards and specifications in their products,” says MGMA President and CEO Bill Jessee, MD. Loeb sees future collaboration between performance measure and EHR developers as key to making more streamlined data collection possible. “Every time a new measure comes out that hasn’t been developed with electronic health vendors so that they can embed the measure specs, we just magnify the [data collection] problem,” he says.

Taking Steps Toward a More Ordered Future
In an effort to move the report's findings and recommendations forward, the AHIMA and MGMA presented them to the quality workgroup of the American Health Information Community (AHIC), a public-private advisory body within HHS focused on how to accelerate the development and adoption of HIT. "Our first objective was to raise the consciousness of the players in the quality and performance measurement arena that you can't just focus on measure development. You've also got to focus on how to collect the data to apply the measures: from what sources, at what frequency, and at what cost," says Jessee. "I think we accomplished that."

As the single largest purchaser of healthcare, the federal government has a vested interest in reducing the costs of care due to inefficiency, medical errors, and other factors responsible for poor quality. The AHIC formed the quality workgroup in August 2006 after it identified and prioritized several areas within HIT that could produce tangible value for healthcare consumers. The workgroup's charges, both broad and specific, include supporting HIT breakthroughs in quality monitoring, including measure development and data collection, and accelerating the use of clinical decision support. Recommendations made through the AHIC ultimately affect how HHS does business as a healthcare purchaser, and its actions, in turn, influence how business is done in the private sector.

In providing details about current data collection issues, as well as proposals for bringing order to the future of performance measurement, the report supports several quality workgroup recommendations to the AHIC, specifically the following:

• that the Quality Alliance Steering Committee identify a set of common data elements to be standardized, which would allow for the electronic collection and exchange of data for prioritized measures developed by the Ambulatory Quality Alliance and the Hospital Quality Alliance;

• that the Health Information Technology Standards Panel of the American National Standards Institute identify data standards for those data elements that would allow for automated collection; and

• that the Certification Commission for Health Information Technology develop criteria for the EHR products necessary to support reporting those data elements.

The AHIC’s approval of these and other related recommendations advances work that will essentially create process models for standardization and coordination, which can be implemented uniformly across all performance measurement efforts.

Public-Private Oversight: A Single Organization or Coalition Effort?
Quality and performance measurement experts know that getting to a unified process for performance measurement, including streamlined, automated data collection that occurs as a by-product of the care process, won't happen with a wave of a wand or by suspending current performance measurement efforts. There is general agreement, however, that the road to such a process can be paved only by some kind of public-private entity with the authority and funding to accomplish the broad range of functions necessary to harmonize existing efforts, then to create and apply standards to the data content required for meaningful performance measurement. "We believe it needs to be private sector, but it also needs to be public because it needs to be a source of authority," says Kloss. "That's a very important characteristic—that it's bringing the two together."

Although there is general agreement about the need for a source of public-private oversight authority, there is less agreement over the composition of that authority, whether it would involve creating something new or modifying an existing entity or group of entities. “Just the development and management of the measures, keeping those up-to-date, testing them, and so on, is a really major job when you look at the full scope of healthcare,” Kloss says. “To what extent could that same entity then take on the technical aspects of setting standards for data collection and that sort of thing? Do we need several organizations, and if so, how do we coordinate them?”

According to Jessee, the National Quality Forum (NQF), a not-for-profit membership organization specifically created to develop and implement a national strategy for healthcare quality measurement and reporting, was often referred to during discussions about the kind of oversight entity needed. “The NQF was mentioned frequently as the kind of model that people felt would be appropriate,” he says. “It’s open, it’s got public-private participation, and it’s pretty much the standard-setter in the measure development arena.”

The NQF has broad membership and participation from all healthcare segments in the public and private sectors and provides leadership in achieving voluntary consensus standards for performance measurement. Moreover, it was founded specifically to facilitate action on an integrated, national quality improvement agenda. “The NQF was formed to become the final common pathway that would be involved in setting the rules of the game [for quality measurement],” Loeb says. “Whether NQF can subserve all the functions identified in the AHRQ report is a question the NQF is going to have to consider and respond to.”

Whether the NQF, another entity, or a coalition of entities assumes the oversight role, funding will be a considerable part of the decision. While there are certainly practical issues surrounding the governance and operations of an oversight entity, they are inextricably intertwined with funding issues. “The question is, where are the resources going to come from to support the work needed on data collection and reporting?” Jessee wonders. According to Jessee and Kloss, it’s not simply a matter of where the dollars will come from, it’s also—and perhaps more importantly—a matter of how many dollars are needed to create and maintain order around data collection and reporting.

“Nobody knows how much it would take to have an entity like the NQF take the leadership role in data collection and reporting,” says Jessee, a point Kloss emphasizes when she talks about the need to conduct research on the costs of current data collection and reporting efforts. “Part of the reason for doing an analysis of cost is to demonstrate that we’re already spending so much money to gather data,” she says, “so [funding an entity to oversee data collection and reporting] won’t necessarily involve new costs.”

Actions for the Near Term
Kloss and Jessee see the work that led to the recent AHRQ report as having both near- and long-term value. While the kinds of policy and structural changes they and others in the quality arena would like to see in the data collection and reporting environment won't happen overnight, they believe the report's findings and recommendations give HIM and group practice professionals important information they can use to press for changes in data collection practices in their organizations.

Kloss advises HIM professionals to take a stand for quality data. “First, really put in place processes to make sure that before data sets are released, they are reviewed to ensure that data is accurate, verifiable, and reliable,” she says. “Secondly, look for opportunities to streamline [data collection]. And third, look for opportunities to use technology to help reduce costs for these efforts.”

Aggie Stewart is a freelance writer and editor, specializing in HIM and HIT. She also serves as consulting editor of Health Information Management Manual, 2nd edition. She can be contacted at s-p-s@earthlink.net.


Challenges in the Performance Measurement Data Collection and Reporting Environment
The following information is abstracted from "Collecting and Reporting Data for Performance Measurement: Moving Toward Alignment," the joint AHIMA/Medical Group Management Association report funded by the Agency for Healthcare Research and Quality.

1. Inefficiencies associated with performance measurement data collection and reporting:

• variation in data collection due to the application of different taxonomies and data definitions, leading to problems in data quality and additional costs related to validation of transmitted data and the need to update forms and systems;

• documentation and data quality related to organizational issues, such as incomplete clinical documentation, failure to understand coding and performance measurement requirements, dependence on manual abstraction, and inconsistent policies and practices for the secondary use of data as a source of quality information; and

• provider staff resource requirements, which often involve increasing staff to meet the multiple and different demands for performance and quality data.

2. Variation among performance measurement systems:

Providers are often asked to collect, process, and report data about the same medical conditions, perhaps even the same populations, multiple times in different formats.

3. Organizational and cultural issues:

An organization’s internal structures and culture determine how well it can adjust to changing external requests to provide data. Complicating factors include the inability to link performance data to individual care providers when multiple providers care for the same patient and provider perception of inconsistent, complex, and unstable processes for analyzing and reporting performance measures.

4. Technological barriers:

HIT implementation has been largely uncoordinated across providers in the same organization and between regional and national organizations. Technology initiatives, such as broad HIT implementation, must be better coordinated with performance measurement efforts. Interoperability must also improve, and EHR products must be capable of supporting broadly accepted performance measurement initiatives.

Many providers need to better understand how their use of and documentation in electronic systems can have a positive impact on reporting performance data on a national scale. Other EHR concerns related to performance measurement include the need to do the following:

• reduce start-up costs and uncertainties about the technology;

• manage HIT security and privacy issues;

• develop emergency management plans to safeguard stored data; and

• address ownership issues related to health and administrative data.

5. Economic pressures:

Data collection costs related to a rising tide of performance measurement reporting requirements, coupled with the costs of disseminating and interpreting performance data within an organization, are significant, especially in the face of rising costs of doing business, the expense of implementing IT solutions, and declining medical reimbursements.

6. Competing priorities:

Variation in measure sets, data metrics, and taxonomies across settings and between reporting deadlines presents a unique challenge to healthcare organizations given the mostly uncoordinated manner in which data collection and reporting requirements have been promulgated by public and private entities. Healthcare organizations continue to encounter the following:

• unclear guidance for prioritizing reporting data to mandated state and local performance measurement initiatives, payer and employer performance measurement initiatives, and national initiatives, few of which are aligned with each other; and

• absence of a national healthcare quality data set and report card that would provide defined categories of measures and measurement selection criteria or guidelines, such as defined measure sets.

The pressure to manage competing priorities is compounded by the following:

• concerns over upholding the privacy and security of personal health information that must be shared with performance measurement systems; and

• the need to keep up with multiple sets of reporting requirements that mature over time and independently change data elements, data definitions, deadlines, and analytic specifications.

AS