Objective: Pediatric residency programs need objective methods of trainee assessment. Patient simulation can contribute to objective evaluation of acute care event management skills. We describe the development and validation of 4 simulation case scenarios for pediatric resident evaluation.
Methods: We created 4 pediatric simulation cases: apnea, asthma, supraventricular tachycardia, and sepsis. Each case comprises a scenario and an unweighted checklist. Case and checklist development began with expert consensus on case content, followed by 92 pilot simulation sessions used for content revision and rater training. After development, 54 first- and second-year pediatric residents participated in 108 simulation test cases to assess the validity of the data these tools yield for our population. We report outcomes for interrater reliability, discriminant validity, and the impact of potential confounding factors on validity estimates.
Results: Interrater reliability (kappa) ranged from 0.75 to 0.87. There were statistically and educationally significant differences in summary scores between first- and second-year residents for 3 of the 4 cases. Neither previous simulation exposure nor the order in which the cases were performed was a significant factor on multivariate analysis.
Conclusions: Simulation can be used to reliably measure and discriminate resident competencies in acute care management. Rigorous measurement development work is difficult and time-consuming. Done correctly, measurement development yields tangible and lasting benefits for trainees, faculty, and residency programs. Development studies that use systematic procedures and large trainee samples at multiple sites are the best approach to creating measurement tools that yield valid data.