Background: Rapid response teams (RRT) are used to prevent adverse events in patients with acute clinical deterioration and to avoid the costs of unnecessary transfer in patients with lower-acuity problems. However, determining the optimal use of RRT services is challenging. One method of benchmarking performance is to determine whether a department's event rate is commensurate with its volume and acuity.
Study design: Using admissions to 18 distinct surgical services at a tertiary care center between 2009 and 2011, we developed logistic regression models to predict RRT activation, accounting for days at risk for RRT and for patient acuity, measured with claims modifiers for risk of mortality (ROM) and severity of illness (SOI). The model was used to compute observed-to-expected (O/E) RRT use by service.
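The following is a minimal sketch of this type of analysis, not the authors' actual code: it fits a logistic regression for RRT activation and sums predicted probabilities to obtain expected counts per service. The file name and column names (rrt_activated, days_at_risk, rom, soi, service) are hypothetical, and the formula is a simplified stand-in for the published model specification.

```python
# Illustrative sketch only; variable names and model formula are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

admissions = pd.read_csv("admissions.csv")  # hypothetical admission-level data

# Logistic regression: probability of >=1 RRT activation per admission,
# adjusting for days at risk and acuity (ROM and SOI claims modifiers).
model = smf.logit(
    "rrt_activated ~ days_at_risk + C(rom) + C(soi)",
    data=admissions,
).fit()

# Expected activations per admission = model-predicted probability.
admissions["expected"] = model.predict(admissions)

# Observed-to-expected (O/E) RRT use by surgical service.
oe = (
    admissions.groupby("service")
    .agg(observed=("rrt_activated", "sum"), expected=("expected", "sum"))
    .assign(oe_ratio=lambda d: d["observed"] / d["expected"])
)
print(oe.sort_values("oe_ratio"))
```

An O/E ratio near 1 indicates use commensurate with volume and acuity; values well above or below 1 flag services for further review.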
Results: Of 45,651 admissions, 728 (1.6%, or 3.2 per 1,000 inpatient days) resulted in one or more RRT activations. Use varied widely across services (0.4% to 6.2% of admissions; 1.39 to 8.73 per 1,000 inpatient days, unadjusted). In the multivariable model, the greatest contributors to the likelihood of RRT activation were days at risk, SOI, and ROM. The O/E RRT use ranged from 0.32 to 2.82 across services, with 8 services having observed use that was significantly higher or lower than predicted by the model.
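The abstract does not specify how significance of the O/E deviation was assessed; one common approach (shown below as an assumption, not the authors' stated method) is to treat the observed count for a service as Poisson-distributed with mean equal to the model-expected count and apply an exact two-sided test.

```python
# Assumed approach for flagging outlying services; the example counts are hypothetical.
from scipy.stats import poisson

def flag_outlier(observed: int, expected: float, alpha: float = 0.05) -> str:
    """Two-sided exact Poisson test of an observed count against the expected count."""
    lower_tail = poisson.cdf(observed, expected)      # P(X <= observed)
    upper_tail = poisson.sf(observed - 1, expected)   # P(X >= observed)
    p_value = min(1.0, 2 * min(lower_tail, upper_tail))
    if p_value >= alpha:
        return "as expected"
    return "higher than expected" if observed > expected else "lower than expected"

# Hypothetical service with 30 observed activations vs. 15.2 expected.
print(flag_outlier(30, 15.2))  # -> "higher than expected"
```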
Conclusions: We developed a tool for identifying outlying use of an important institutional medical resource. The O/E computation provides a starting point for further investigation into the reasons for variability among services, and a benchmark for quality and process improvement efforts in patient safety.
Keywords: AUC; O/E; ROM; RRT; SOI; area under the curve; observed-to-expected; rapid response team; risk of mortality; severity of illness.