Purpose: To examine the validity and reliability of an anesthesia simulator as a tool for performance assessment of undergraduate medical students.
Methods: After ethics approval and informed consent, 135 final-year medical students and 5 elective students participated in a videotaped simulator scenario with a Link-Med Patient Simulator (CAE-Link Corporation). Scenarios were based on published educational objectives of the undergraduate curriculum in anesthesia at the University of Toronto. During the simulator sessions, faculty followed a script guiding student interaction with the mannequin. Two faculty members independently viewed and evaluated each videotaped performance using a 25-point criterion-based checklist. Means and standard deviations of simulator-based marks were determined and compared with the clinical and written evaluations received during the rotation. Internal consistency of the evaluation protocol was determined using inter-item and item-total correlations and correlations of specific simulator items with existing methods of evaluation.
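The item-total correlations used here can be illustrated with a short sketch. The data below are hypothetical (randomly generated, not the study's data), with dimensions matching the abstract: 135 students scored on a 25-item pass/fail checklist. The corrected item-total correlation compares each item against the total of the remaining items:

```python
import numpy as np

# Hypothetical binary checklist data: rows = 135 students, columns = 25 items.
# This is simulated for illustration only; the study's data are not available here.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(135, 25))

totals = scores.sum(axis=1)

# Corrected item-total correlation: correlate each item with the total
# of the *other* items, so the item does not correlate with itself.
item_total = [
    np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
    for i in range(scores.shape[1])
]
```

With real checklist data, uniformly low or widely varying item-total correlations (as the Results report) suggest the items are not measuring a single underlying construct.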
Results: Mean reliability estimates for single and average paired assessments were 0.77 and 0.86, respectively. Mean simulator scores were low, and there was minimal correlation between the checklist and clinical marks (r = 0.13), between the checklist and written marks (r = 0.19), and between the clinical and written marks (r = 0.23). Inter-item and item-total correlations varied widely, and correlation between simulator items and existing evaluation tools was low.
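The abstract does not state how the average-paired reliability was derived, but a plausible reconstruction (an assumption, not confirmed by the source) is the Spearman-Brown prophecy formula applied to the single-rater estimate:

```python
def spearman_brown(single_rater_reliability: float, n_raters: int = 2) -> float:
    """Spearman-Brown prophecy: reliability of the mean of n independent ratings."""
    r = single_rater_reliability
    return n_raters * r / (1 + (n_raters - 1) * r)

# Single-rater reliability reported in the abstract
avg_pair = spearman_brown(0.77)
print(round(avg_pair, 2))  # → 0.87, close to the reported 0.86 (rounding aside)
```

The near-agreement suggests the two figures are consistent with averaging over two raters, though the original analysis may have used a direct intraclass correlation instead.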
Conclusions: Simulator checklist scoring demonstrated acceptable inter-rater reliability. The low correlations between different methods of evaluation may reflect reliability problems with the written and clinical marks, or may indicate that the methods assess different aspects of performance. The performance assessment demonstrated low internal consistency, and further work is required.