Background: Computer-based endoscopy simulators may enable trainees to learn and develop technical skills before performing procedures on patients. Simulators require validation as adequate models of live endoscopy before being used for training or assessment purposes.
Objective: To evaluate content and criterion validity of the CAE EndoscopyVR Simulator colonoscopy and EGD modules as predictors of clinical endoscopic skills.
Design: Prospective, observational, non-randomized, parallel cohort study.
Setting: Single academic center with accredited gastroenterology training program.
Participants: 5 novice first-year gastroenterology fellows and 6 expert gastroenterology attending physicians.
Intervention: Each participant performed 18 simulated colonoscopies and 6 simulated EGDs. The simulator recorded objective performance parameters, and participants then completed feedback surveys.
Main outcome measurements: The 57 objective performance parameters measured by the endoscopy simulator were compared between the two study groups. Novice and expert survey responses were analyzed.
Results: Significant differences between novice and expert performance were detected in only 19 of 57 (33%) performance metrics. Eight of these 19 (42%) were time-related metrics, such as total procedure time, time to anatomic landmarks, and time spent in contact with GI mucosa. Among the 49 non-time-related measures, the few additional statistically significant differences between novices and experts involved air insufflation, sedation management, endoscope force, and patient comfort; these findings are of uncertain clinical significance. Survey respondents found multiple aspects of the simulation to be unrealistic compared with human endoscopy.
Limitations: Small sample size.
Conclusion: The CAE EndoscopyVR Simulator displays poor content and criterion validity and therefore cannot predict skill during in vivo endoscopy.
Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.