Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses?

Med Care. 1996 Aug;34(8):737-53. doi: 10.1097/00005650-199608000-00002.


Objectives: Many groups involved in health care are interested in using external quality indicators, such as risk-adjusted mortality rates, to examine hospital quality. The authors evaluated the feasibility of using mortality rates for medical diagnoses to identify poor-quality hospitals.

Methods: A Monte Carlo simulation model was used to examine whether mortality rates could distinguish 172 average-quality hospitals from 19 poor-quality hospitals (5% versus 25% of deaths being preventable, respectively), using the largest diagnosis-related groups (DRGs) for cardiac, gastrointestinal, cerebrovascular, and pulmonary diseases, as well as an aggregate of all medical DRGs. Discharge counts and observed death rates for all 191 Michigan hospitals were obtained from the Michigan Inpatient Database. Positive predictive value (PPV), sensitivity, and area under the receiver operating characteristic curve were calculated for mortality outlier status as an indicator of poor-quality hospitals. Sensitivity analyses were performed under varying assumptions about the time period of evaluation, quality differences between hospitals, and unmeasured variability in hospital casemix.
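A simulation of this kind can be sketched as follows. This is a minimal illustration, not the authors' model: the 172/19 hospital split and the 5%/25% preventable-death fractions come from the abstract, while the baseline death rate, per-hospital discharge volumes, and the normal-approximation outlier test are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hospital counts and preventable-death fractions are from the abstract;
# the baseline death rate and discharge volumes are hypothetical.
n_avg, n_poor = 172, 19
p_base = 0.15
# Assume poor hospitals have the same non-preventable death rate, but 25%
# of their deaths are preventable instead of 5%:
p_poor = p_base * (1 - 0.05) / (1 - 0.25)

def simulate_once():
    """Simulate one evaluation period and return (TP, FP, FN) counts."""
    n_hosp = n_avg + n_poor
    # Assumed discharge volumes: 50-500 cases per hospital.
    n_cases = rng.integers(50, 500, size=n_hosp)
    true_poor = np.zeros(n_hosp, dtype=bool)
    true_poor[n_avg:] = True
    deaths = rng.binomial(n_cases, np.where(true_poor, p_poor, p_base))

    # Flag "high-mortality outliers": death count improbably high under the
    # average rate, one-sided p < 0.05 via a normal approximation.
    expected = n_cases * p_base
    sd = np.sqrt(n_cases * p_base * (1 - p_base))
    flagged = (deaths - expected) / sd > 1.645

    tp = int(np.sum(flagged & true_poor))
    fp = int(np.sum(flagged & ~true_poor))
    fn = int(np.sum(~flagged & true_poor))
    return tp, fp, fn

tp, fp, fn = map(sum, zip(*(simulate_once() for _ in range(500))))
ppv = tp / (tp + fp)
sensitivity = tp / (tp + fn)
print(f"PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```

Even under these favorable assumptions (perfect casemix adjustment, large quality differences), the PPV stays modest because average-quality hospitals outnumber poor ones roughly nine to one, so even a low false-positive rate produces many false alarms.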

Results: For individual DRG groups, mortality rates were a poor measure of quality, even under the optimistic assumption of perfect casemix adjustment. For acute myocardial infarction, high mortality rate outlier status (using 2 years of data and a 0.05 probability cutoff) had a PPV of only 24%; thus, more than three-fourths of the hospitals labeled poor quality (high mortality rate outliers) actually would have average quality. If all medical DRGs are aggregated, still assuming very large quality differences and perfect casemix adjustment, the sensitivity for detecting poor-quality hospitals is 35% and the PPV is 52%. Even in this extreme case, the PPV is highly sensitive to the introduction of small amounts of unmeasured casemix difference between hospitals.
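The fragility of the PPV follows from base-rate arithmetic, which can be checked in a few lines. Only the 19/172 hospital split, the 35% sensitivity, and the 52% PPV come from the abstract; the perturbed false-positive rates below are hypothetical, standing in for unmeasured casemix differences.

```python
# Back-of-the-envelope check of the aggregate-DRG figures.
n_poor, n_avg = 19, 172  # hospital counts from the abstract

def ppv(sensitivity, fp_rate):
    """PPV = true positives / all positives, given a false-positive rate."""
    tp = sensitivity * n_poor
    fp = fp_rate * n_avg
    return tp / (tp + fp)

# False-positive rate implied by the reported sensitivity (35%) and PPV (52%):
implied_fpr = (0.35 * n_poor) * (1 / 0.52 - 1) / n_avg
print(f"implied false-positive rate ~ {implied_fpr:.3f}")

# A few extra percentage points of false positives (hypothetical casemix
# noise) sharply erode PPV, because average hospitals outnumber poor ones:
for fpr in (implied_fpr, implied_fpr + 0.02, implied_fpr + 0.05):
    print(f"FPR = {fpr:.3f} -> PPV = {ppv(0.35, fpr):.2f}")
```

Because roughly 90% of hospitals are average quality, each point of false-positive rate contributes about nine times as many flagged hospitals as a point of sensitivity does, which is why small casemix misadjustments swamp the signal.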

Conclusion: Although they may be useful for some surgical diagnoses, DRG-specific hospital mortality rates probably cannot accurately detect poor-quality outliers for medical diagnoses. Even when aggregated across all medical DRGs, hospital mortality rates seem unlikely to be accurate predictors of poor quality, and punitive measures based on high mortality rates frequently would penalize good or average hospitals.

MeSH terms

  • Cerebrovascular Disorders / mortality
  • Diagnosis-Related Groups*
  • Feasibility Studies
  • Gastrointestinal Diseases / mortality
  • Health Services Research / methods*
  • Heart Diseases / mortality
  • Hospital Mortality*
  • Hospitals / classification
  • Hospitals / standards*
  • Humans
  • Lung Diseases / mortality
  • Michigan / epidemiology
  • Monte Carlo Method
  • Quality of Health Care*
  • Reproducibility of Results
  • Sensitivity and Specificity