Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited

Crit Care Med. 2007 Sep;35(9):2052-6. doi: 10.1097/01.CCM.0000275267.64078.B0.

Abstract

Objective: To examine the Hosmer-Lemeshow test's sensitivity in evaluating the calibration of models predicting hospital mortality in large critical care populations.

Design: Simulation study.

Setting: Intensive care unit databases used for predictive modeling.

Patients: Data sets were simulated representing the approximate number of patients used in earlier versions of critical care predictive models (n = 5,000 and 10,000) and more recent predictive models (n = 50,000). Each patient had a hospital mortality probability generated as a function of 23 risk variables.

Interventions: None.

Measurements and main results: Data sets of 5,000, 10,000, and 50,000 patients were each replicated 1,000 times. Logistic regression models were evaluated for each simulated data set. This process was initially carried out under conditions of perfect fit (observed mortality = predicted mortality; standardized mortality ratio = 1.000) and then repeated with an observed mortality that differed slightly (0.4%) from predicted mortality. Under conditions of perfect fit, the Hosmer-Lemeshow test was not influenced by the number of patients in the data set. When observed mortality deviated slightly from predicted mortality, however, the Hosmer-Lemeshow test was sensitive to sample size. For populations of 5,000 patients, 10% of the Hosmer-Lemeshow tests were significant at p < .05, whereas for 10,000 patients, 34% were significant at p < .05. When the number of patients matched that of contemporary studies (i.e., 50,000 patients), the Hosmer-Lemeshow test was statistically significant in 100% of the models.

Conclusions: Caution should be used when interpreting the calibration of predictive models that were developed on a smaller data set and are applied to larger numbers of patients. A significant Hosmer-Lemeshow test does not, by itself, mean that a predictive model is inaccurate or not useful. While decisions about a mortality model's suitability should include the Hosmer-Lemeshow test, additional information must be taken into consideration: the overall number of patients, the observed and predicted probabilities within each decile, and adjunct measures of model calibration.
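Two of the adjunct quantities the conclusions point to, the observed and predicted mortality within each decile and the overall standardized mortality ratio, can be tabulated directly; this helper is a minimal sketch of that idea, not code from the study:

```python
import numpy as np


def calibration_summary(y, p, groups=10):
    """Observed vs. predicted deaths per risk decile, plus the overall SMR.

    Returns a list of (n, observed, expected) tuples, one per decile of
    predicted risk, and the standardized mortality ratio (observed/expected).
    """
    order = np.argsort(p)
    y, p = np.asarray(y, float)[order], np.asarray(p, float)[order]
    rows = []
    for idx in np.array_split(np.arange(len(p)), groups):
        rows.append((len(idx), y[idx].sum(), p[idx].sum()))
    smr = y.sum() / p.sum()  # standardized mortality ratio; 1.0 = perfect overall fit
    return rows, smr
```

Inspecting the per-decile rows shows *where* observed and expected mortality diverge, information a single Hosmer-Lemeshow p value cannot convey in a large sample.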

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Critical Care / standards*
  • Humans
  • Logistic Models
  • Models, Statistical*
  • Mortality
  • Sensitivity and Specificity