Objective assessment of manual skills and proficiency in performing epidural anesthesia--video-assisted validation

Reg Anesth Pain Med. Jul-Aug 2006;31(4):304-10. doi: 10.1016/j.rapm.2006.04.006.

Abstract

Background and objectives: Demand is growing for objective assessment of manual skills and competence in invasive procedures. The aim of this study was to validate an objective tool, consisting of a global rating scale and a 3-scale, 27-stage checklist, for assessing residents' skill in performing epidural anesthesia. We sought to demonstrate that this tool can differentiate operators at different levels of training.

Methods: Second-year anesthesia residents were recruited, and their previous experience was assessed by questionnaire. They were repeatedly videotaped performing epidural anesthesia over a 6-month period; videotaping was done in a blinded manner that masked the residents' identity and level of training. Three blinded, independent examiners graded each videotaped session by use of a specifically devised assessment tool consisting of a global rating scale and a 3-scale, 27-stage checklist.

Results: Twenty-one sessions by 6 residents were videotaped over 6 months. Interrater reliability for the individual checklist and global-rating form items showed a moderate to high degree of agreement for most stages. Total scores demonstrated almost perfect agreement between examiners (kappa/ICC +/- SE = 0.90 +/- 0.03 and 0.83 +/- 0.13, respectively; P < .0001). To test whether higher total scores are associated with greater experience, a series of repeated-measures ANCOVAs was performed. In both the global-rating form and the checklist, total scores were significantly related to the number of epidurals performed (checklist: P < .0001; global rating: P < .0001).
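The interrater agreement reported above is expressed as a kappa statistic. As a rough illustration of how such an agreement coefficient is computed (the ratings below are hypothetical and are not the study's data), a minimal Cohen's kappa for two raters over the same checklist items might look like this:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    Illustrative sketch only; assumes both raters scored the same
    items and that expected agreement is below 1 (no division guard).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary checklist scores (0 = not done, 1 = done)
# from two examiners rating the same ten checklist stages.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohen_kappa(a, b), 2))  # → 0.52
```

Kappa near 0 indicates chance-level agreement and 1 indicates perfect agreement, so the study's total-score values of 0.90 and 0.83 sit in the "almost perfect" range of the conventional interpretation scale.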

Conclusions: Scores on a system consisting of a global-rating form and a task-specific checklist were significantly related to the number of epidural insertions performed (i.e., experience), and the interrater reliability of these assessment tools was very strong. Evaluating technical skills with an objective tool under direct observation, as opposed to a laboratory setting, may create a more reliable standard of assessment. Furthermore, residency programs could use these evaluations to identify deficiencies in teaching programs and trainees who require extra instruction.

Publication types

  • Research Support, Non-U.S. Gov't
  • Validation Study

MeSH terms

  • Analysis of Variance
  • Anesthesia, Epidural / methods*
  • Clinical Competence / statistics & numerical data*
  • Educational Measurement / methods
  • Humans
  • Internship and Residency
  • Motor Skills / physiology*
  • Observer Variation
  • Ontario
  • Reproducibility of Results
  • Task Performance and Analysis
  • Videotape Recording*