
December 2015

Hospital Care — Are We Getting What We Pay For?
By Gavin Miyasato
For The Record
Vol. 27 No. 12 P. 5

In recent years, media outlets (including The Washington Post and The New York Times) have reported on disparities between hospitals charging different prices for the same procedure. Hospitals less than one mile apart have reported charges for the same procedure differing by a factor of two to four. While pricing differences exist, it can be difficult to pinpoint exactly why. A number of factors can influence what a hospital charges, including how sick a patient is, length of stay, and general hospital overhead. These factors notwithstanding, consumers like to believe that when they pay more for something, they are getting a better product. In the case of health care, this means better outcomes. This article empirically tests this hypothesis; it aims to measure how closely hospital charges align with quality of care.

To assist in this effort, the Centers for Medicare & Medicaid Services (CMS) provides two key data sources: one for charge data and one for quality data. The charge data represent 100% of all Medicare inpatient claims; quality data rely on CMS' Hospital Value-Based Purchasing (HVBP) program quality scores. CMS established the HVBP in 2011 to promote better clinical outcomes for hospital patients. As a part of the HVBP, hospitals were assigned quality scores related to their treatment of inpatients. CMS provided financial incentives to hospitals that scored highly on its quality metrics.

Using these sources, it's possible to compare 2011 charge and quality data among a subset of Medicare inpatients: those suffering from acute myocardial infarction or heart failure. Rather than focusing on charges alone, it's important to examine the difference between charges and reimbursement amounts (the Delta) as the measure of economic benefit, given that charges are potentially irrelevant to what a patient or payer actually pays. Statistical models were fit separately for acute myocardial infarction and heart failure inpatients to adjust for the impact of potential confounders—patient or hospital characteristics that might obscure the true relationship between the Delta and quality.
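The approach described above—computing the Delta and regressing it on a quality score while adjusting for confounders—can be sketched in a few lines. This is an illustrative toy example, not the study's actual code: the data below are synthetic, and the variable names (quality score, length of stay) are stand-ins for the CMS fields the study would have used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic hospital-level records (all values hypothetical)
quality_score = rng.uniform(0, 100, n)       # HVBP-style quality score
length_of_stay = rng.uniform(2, 10, n)       # average length of stay, in days
charges = 20_000 + 3_000 * length_of_stay + rng.normal(0, 5_000, n)
reimbursement = 8_000 + 1_000 * length_of_stay + rng.normal(0, 2_000, n)

# The "Delta": difference between charges and reimbursement amounts
delta = charges - reimbursement

# Ordinary least squares via least squares solve: delta ~ quality + LOS.
# Including length of stay in the design matrix adjusts the quality
# coefficient for that confounder, as the article describes.
X = np.column_stack([np.ones(n), quality_score, length_of_stay])
coef, *_ = np.linalg.lstsq(X, delta, rcond=None)
intercept, b_quality, b_los = coef
print(f"quality coefficient: {b_quality:.2f}, LOS coefficient: {b_los:.2f}")
```

In this synthetic setup the quality coefficient is near zero while the length-of-stay coefficient is large, mirroring the article's finding that length of stay mattered more to the Delta than quality scores did.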

After controlling for all other patient and hospital characteristics, the statistical models reveal only a very weak association between the Delta—the difference between charges and reimbursement amounts—and quality scores. Across both conditions, average patient length of stay, geographic region, and hospital ownership type (government/nongovernment) had a larger impact on the Delta than the quality scores. One plausible conclusion is that the lack of strong associations between quality and the Delta indicates hospital charges were not driven exclusively by the expenses incurred as a result of providing high-quality services—essentially, the cost of care does not always determine the quality. Patients should not assume that higher cost equals higher quality, nor should they assume lower cost equals lower quality. A second plausible conclusion is that the quality scores were not accurate indicators of health outcomes and, therefore, we cannot accurately assess whether higher charges result in better patient outcomes.

Ultimately, no direct correlation between higher cost and higher quality can be found using the tools available. While the example study does not necessarily provide a definitive answer to the broader question of quality for the money, it does shed light on the larger issue: Better information is necessary, whether in the measurement of quality or in the transparency of cost and charge information from health care providers. CMS has taken steps to improve in both areas, launching several initiatives under the Affordable Care Act that aim to further develop performance measurement and make more financial information publicly available. As the United States increasingly moves toward pay-for-performance–based models, the accuracy and validity of this information will become even more important as we attempt to attribute clinical interventions to better patient outcomes.

— Gavin Miyasato is associate director of advanced analytics—statistics at Trinity Partners.