Objectives: Reports on hospital quality performance are being produced with increasing frequency by state agencies, commercial data vendors, and health care purchasers. Risk-adjusted mortality rate is the most commonly used measure of quality in these reports. The purpose of this study was to determine whether risk-adjusted mortality rates are valid indicators of hospital quality performance.
Methods: The sensitivity and predictive error of mortality rate indicators of hospital performance were estimated using an analytical model of random measurement error.
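As used here, and consistent with how the Results report them (the formal statements below are an interpretive assumption, not quoted from the model), the two accuracy measures are conditional probabilities over the hospital classification:

\[
\text{sensitivity} = \Pr\bigl(\text{flagged as a high mortality outlier} \mid \text{poor quality hospital}\bigr),
\qquad
\text{predictive error} = \Pr\bigl(\text{good quality hospital} \mid \text{flagged as a high mortality outlier}\bigr).
\]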
Results: The following six parameters were shown to determine accuracy: (1) the mortality risk of patients who receive good quality care, (2) the mortality risk of patients who receive poor quality care, (3) the proportion of patients (across all hospitals) who receive poor quality care, (4) the proportion of hospitals considered to be "poor quality," (5) the relative risk of receiving poor quality care in a "poor quality" versus a "good quality" hospital, and (6) the number of patients treated per hospital. Using the best available values for these parameters, the analyses demonstrated that in nearly all situations, even with perfect risk adjustment, identifying poor quality hospitals on the basis of mortality rate performance is highly inaccurate. Fewer than 12% of hospitals that delivered poor quality care were identified as high mortality rate outliers, and more than 60% of identified outliers were actually good quality hospitals.
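To illustrate how these six parameters jointly determine accuracy, the sketch below runs a small Monte Carlo approximation of the classification problem in Python. It is not the paper's analytical model: the parameter values, the number of simulated hospitals, and the outlier-flagging rule (a one-sided binomial test of each hospital's observed mortality against the overall expected rate at p < 0.05) are all illustrative assumptions.

```python
"""Monte Carlo sketch of the six-parameter accuracy model.

All parameter values and the outlier-flagging rule are illustrative
assumptions; the published study used an analytical derivation, not
simulation.
"""
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

# --- Illustrative parameter values (assumptions, not the paper's) ---------
p_die_good_care = 0.05      # (1) mortality risk with good quality care
p_die_poor_care = 0.10      # (2) mortality risk with poor quality care
p_poor_care_overall = 0.10  # (3) share of all patients receiving poor care
p_poor_hospital = 0.20      # (4) share of hospitals that are "poor quality"
relative_risk = 3.0         # (5) RR of poor care in poor vs. good hospitals
n_patients = 200            # (6) patients treated per hospital
n_hospitals = 10_000        # number of simulated hospitals

# Split the overall poor-care rate across hospital types so the mix
# reproduces it: overall = p_poor_hospital*rate_poor + (1-p_poor_hospital)*rate_good,
# with rate_poor = relative_risk * rate_good.
rate_good_hosp = p_poor_care_overall / (
    p_poor_hospital * relative_risk + (1 - p_poor_hospital))
rate_poor_hosp = relative_risk * rate_good_hosp

def expected_mortality(poor_care_rate):
    """Expected death rate for a patient mix with the given poor-care rate."""
    return poor_care_rate * p_die_poor_care + (1 - poor_care_rate) * p_die_good_care

mu_overall = expected_mortality(p_poor_care_overall)

# Simulate hospitals: draw a quality label, then deaths among n_patients.
is_poor = rng.random(n_hospitals) < p_poor_hospital
mu = np.where(is_poor,
              expected_mortality(rate_poor_hosp),
              expected_mortality(rate_good_hosp))
deaths = rng.binomial(n_patients, mu)

# Flag "high mortality outliers": observed deaths exceed the overall
# expectation at one-sided p < 0.05 (an assumed flagging rule).
p_values = binom.sf(deaths - 1, n_patients, mu_overall)  # P(X >= deaths)
flagged = p_values < 0.05

sensitivity = flagged[is_poor].mean()          # poor-quality hospitals flagged
predictive_error = (~is_poor[flagged]).mean()  # flagged hospitals that are good
print(f"sensitivity = {sensitivity:.2f}, predictive error = {predictive_error:.2f}")
```

Because the gap in expected mortality between "good" and "poor" hospitals is small relative to the binomial noise in death counts at realistic caseloads, and good quality hospitals greatly outnumber poor ones, the simulated sensitivity stays low and the predictive error stays high; varying n_patients or relative_risk in the sketch makes this dependence explicit.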
Conclusions: Under virtually all realistic assumptions for model parameter values, sensitivity was less than 20% and predictive error was greater than 50%. Reports that measure quality using risk-adjusted mortality rates misinform the public about hospital performance.