Background: The Pennsylvania Trauma Systems Foundation's ad hoc Outcomes Committee developed the Pennsylvania Outcomes and Performance Improvement Measurement System (POPIMS), a software program that provides a consistent outcomes-reporting template for trauma centers in the state. This study was performed to evaluate the inter-rater reliability of POPIMS mortality classification.
Methods: All trauma centers in the state were instructed to submit one preventable (P), one potentially preventable (PP), and one nonpreventable (NP) POPIMS mortality report to the Pennsylvania Trauma Systems Foundation office. The reports were blinded, an equal number of P, PP, and NP classified mortalities were randomly selected, and a meeting of the trauma directors who had submitted cases was convened. Institutional classification (IC) was compared with that of the reviewing trauma directors (reviewer classification [RC]) to evaluate the inter-rater reliability of the software. The chi-square test was used to analyze differences, and inter-rater reliability among reviewers was assessed using Cronbach's alpha coefficient.
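The reliability statistic named above can be illustrated with a minimal sketch. Cronbach's alpha treats each rater as an "item" and each case as a "subject": alpha = k/(k-1) * (1 - sum of per-rater score variances / variance of per-case total scores). The function below and the small example matrix are illustrative only; the data are hypothetical and are not drawn from the study.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_cases x n_raters) matrix of scores.

    alpha = k/(k-1) * (1 - sum(per-rater variances) / variance(per-case totals))
    """
    ratings = np.asarray(ratings, dtype=float)
    n_cases, k = ratings.shape
    rater_vars = ratings.var(axis=0, ddof=1)     # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-case total scores
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Hypothetical example: 5 cases scored by 3 raters on a 1-3 preventability
# scale (1 = nonpreventable, 2 = potentially preventable, 3 = preventable).
scores = [[1, 1, 2],
          [3, 3, 3],
          [2, 2, 1],
          [1, 2, 1],
          [3, 2, 3]]
print(round(cronbach_alpha(scores), 2))  # → 0.82
```

Values near 1 indicate raters move together across cases; by common rules of thumb, an alpha in the 0.6-0.7 range (as reported in this study) is interpreted as only moderate agreement.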
Results: Twenty-eight trauma surgeons reviewed 34 cases (11 P, 12 PP, 11 NP), each case receiving a minimum of 10 reviews. RC differed significantly from IC (p < 0.001). In addition, the factors contributing to mortality differed between IC and RC reviews across the mortality preventability classes. Inter-rater reliability among reviewers was moderate, with a Cronbach's alpha coefficient of 0.64.
Conclusions: POPIMS is the first statewide performance improvement (PI) reporting system to share outcomes information between trauma centers. The significant differences between the IC of mortality and that provided by reviewers suggest that more objective criteria for mortality classification are needed. Given the limitations of preventability classification, additional outcome parameters should be pursued.