Objectives: If virtual reality (VR) simulation is to serve as a valid means of assessing and improving proficiency in procedure-based medical skills, improvement in performance as measured by metric-based procedural errors must be demonstrated.
Background: The Food and Drug Administration requires completion of VR simulation training for physicians learning to perform carotid stenting.
Methods: Interventional cardiologists (n = 20) participating in the Emory NeuroAnatomy Carotid Training program underwent an instructional course on carotid angiography and then performed five serial simulated carotid angiograms on the Vascular Interventional System Trainer (VIST) VR simulator (Mentice AB, Gothenburg, Sweden). Of the subjects, 90% completed the full assessment. Procedure time (PT), fluoroscopy time (FT), contrast volume, and composite catheter handling errors (CE) were recorded by the simulator.
Results: An improvement was noted in PT, contrast volume, FT, and CE when comparing the subjects' first and last simulations (all p < 0.05). The internal consistency of the VIST VR simulator as assessed with standardized coefficient alpha was high (range 0.81 to 0.93), except for FT (alpha = 0.36). Test-retest reliability was high for CE (r = 0.9, p = 0.0001).
Conclusions: A learning curve with improved performance was demonstrated on the VIST simulator. This study represents the largest collection of such data to date in carotid VR simulation and is the first report to establish both the internal consistency of the VIST simulator and its test-retest reliability across several metrics. These properties are fundamental benchmarks in the validation of any measurement device. Composite catheter handling errors are measurable, dynamic metrics with the high test-retest reliability required for high-stakes assessment of procedural skills.