Reproducibility of physiological track-and-trigger warning systems for identifying at-risk patients on the ward

Intensive Care Med. 2007 Apr;33(4):619-24. doi: 10.1007/s00134-006-0516-8. Epub 2007 Jan 18.


Objective: Physiological track-and-trigger warning systems are used to identify, as early as possible, patients on acute wards who are at risk of deterioration. The objective of this study was to assess the inter-rater and intra-rater reliability of the physiological measurements, aggregate scores and triggering events of three such systems.

Design: Prospective cohort study.

Setting: General medical and surgical wards in one non-university acute hospital.

Patients and participants: Unselected ward patients: 114 patients in the inter-rater study and 45 patients in the intra-rater study were examined by four raters.

Measurements and results: Physiological observations obtained at the bedside were evaluated using three systems: the medical emergency team call-out criteria (MET); the modified early warning score (MEWS); and the assessment score of sick-patient identification and step-up in treatment (ASSIST). Inter-rater and intra-rater reliability were assessed by intra-class correlation coefficients, kappa statistics and percentage agreement. There was fair to moderate agreement on most physiological parameters, and fair agreement on the scores, but better levels of agreement on triggers. Reliability was partially a function of simplicity: MET achieved a higher percentage of agreement than ASSIST, and ASSIST higher than MEWS. Intra-rater reliability was better than inter-rater reliability. Using corrected calculations improved the level of inter-rater agreement but not intra-rater agreement.
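The two simplest agreement statistics named above, percentage agreement and Cohen's kappa, can be sketched for a pair of raters as follows (a minimal illustration for two raters only; the study's intra-class correlation coefficients and multi-rater analyses are not reproduced here):

```python
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    """Proportion of paired ratings that match exactly."""
    return sum(x == y for x, y in zip(ratings_a, ratings_b)) / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, given each rater's marginal frequencies."""
    n = len(ratings_a)
    p_observed = percent_agreement(ratings_a, ratings_b)
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum(counts_a[k] * counts_b[k]
                     for k in set(ratings_a) | set(ratings_b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Example: two raters' trigger decisions (1 = trigger) for five patients
rater_1 = [1, 1, 0, 0, 1]
rater_2 = [1, 0, 0, 0, 1]
```

High percentage agreement with a lower kappa (as here: 0.80 vs. roughly 0.62) reflects the chance-agreement correction, which is why studies of this kind report both.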

Conclusion: There was significant variation in the reproducibility of different track-and-trigger warning systems. The systems examined showed better levels of agreement on triggers than on aggregate scores. Simpler systems had better reliability. Inter-rater agreement might be improved by electronic calculation of scores.
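The electronic score calculation suggested in the conclusion can be sketched as a simple band-lookup: each vital sign maps to points, the points are summed into an aggregate score, and a threshold decides the trigger. The bands and threshold below are illustrative examples only, not the published MEWS cut-offs:

```python
# Each parameter maps to a list of (low, high, points) bands,
# inclusive at both ends. These bands are illustrative, not the
# actual MEWS thresholds.
BANDS = {
    "heart_rate": [(0, 40, 2), (41, 50, 1), (51, 100, 0),
                   (101, 110, 1), (111, 129, 2), (130, 300, 3)],
    "resp_rate":  [(0, 8, 2), (9, 14, 0), (15, 20, 1),
                   (21, 29, 2), (30, 99, 3)],
    "sys_bp":     [(0, 70, 3), (71, 80, 2), (81, 100, 1),
                   (101, 199, 0), (200, 400, 2)],
}

def aggregate_score(observations):
    """Sum the points for every observed parameter; parameters
    without a band table are ignored."""
    total = 0
    for name, value in observations.items():
        for low, high, points in BANDS.get(name, []):
            if low <= value <= high:
                total += points
                break
    return total

def triggers(observations, threshold=4):
    """Trigger escalation when the aggregate score reaches the threshold."""
    return aggregate_score(observations) >= threshold
```

Because the arithmetic and band lookups are deterministic, such a calculator removes the scoring and addition errors that contribute to inter-rater disagreement, though it cannot remove variation in the underlying bedside measurements themselves.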

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Medical Staff, Hospital / statistics & numerical data*
  • Observer Variation*
  • Point-of-Care Systems*
  • Prospective Studies
  • Reproducibility of Results