The primary goal of this study was to test the items in a rating system developed to evaluate anesthesiologists' performance in a simulated patient environment. A secondary goal was to determine whether the test scores could discriminate between resident and staff anesthesiologists. Two 5-item clinical scenarios covered patient evaluation and the induction and maintenance of anesthesia. Each item was scored on a 3-point scale: no response to the problem (score = 0), compensating intervention (score = 1), or corrective treatment (score = 2). Internal consistency was estimated using Cronbach's coefficient alpha, and scores between groups were compared using the Cochran-Mantel-Haenszel test. Subjects were 8 anesthesiology residents and 17 university clinical faculty. Cronbach's coefficient alpha was 0.27 for Scenario A and 0.28 for Scenario B; two items in each scenario markedly decreased internal consistency. When these four items were eliminated, Cronbach's coefficient alpha for the remaining six items was 0.66. Faculty anesthesiologists scored higher than residents on all six items (P < 0.001). A patient simulator-based evaluation process with acceptable reliability was thus developed.
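For readers unfamiliar with the reliability statistic used above, Cronbach's coefficient alpha for a k-item scale is k/(k-1) times one minus the ratio of the summed item variances to the variance of subjects' total scores. The study's data are not reproduced here; the sketch below is a generic illustration, and the helper name `cronbach_alpha` and the example score matrix are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha for a subjects-by-items score matrix.

    scores: 2-D array, one row per subject, one column per item.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 0/1/2 ratings for 4 subjects on 3 items:
ratings = [[0, 0, 1],
           [1, 1, 1],
           [2, 2, 2],
           [2, 1, 2]]
print(cronbach_alpha(ratings))
```

Dropping the columns for poorly performing items and recomputing alpha mirrors the item-elimination step reported in the study.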
Implications: This study assessed the reliability of an instrument for evaluating anesthesia clinical performance in a patient simulation environment. Four of the 10 items performed poorly; when they were removed, the reliability of the instrument improved to a level consistent with other studies. Because faculty scored higher than resident anesthesiologists, the instrument also showed discriminant validity.