Human Reliability Analysis of Cataract Surgery

Arch Ophthalmol. 2008 Feb;126(2):173-7. doi: 10.1001/archophthalmol.2007.47.

Abstract

Objective: To evaluate the use of the Human Reliability Analysis of Cataract Surgery tool to identify the frequency and pattern of technical errors observed during phacoemulsification cataract extraction by surgeons with varying levels of experience.

Design: Observational cohort study. Thirty-three consecutive phacoemulsification cataract operations were performed by 33 different ophthalmic surgeons with varying levels of operative experience: group 1, fewer than 50 procedures; group 2, between 50 and 250 procedures; and group 3, more than 250 procedures. Face and content validity were assessed by a panel of senior cataract surgeons. The tool was applied to the 33 randomized, anonymized videos by 2 independent assessors trained in error identification and correct use of the tool. Task analysis using 10 well-defined end points and error identification using 10 external error modes were performed for each case. The main outcome measures were the number of errors per task, the nature of the errors (executional or procedural), and the surgical experience of the operating surgeon.

Results: Analysis of the 330 constituent steps of the 33 operations identified 228 errors, of which 151 (66.2%) were executional and 77 (33.8%) were procedural. The highest overall error probability was associated with sculpting, followed by fragmentation of the nucleus; this was most evident in group 1. Surgeons in group 3 proportionally performed more errors during removal of soft lens matter than those in groups 1 and 2. Surgical experience had a significant effect on the number of errors, with a statistically significant difference among the 3 groups (P < .001).

Conclusions: The Human Reliability Analysis of Cataract Surgery tool is useful for identifying where technical errors occur during phacoemulsification cataract surgery. The study findings, including the high executional error rate, could be used to enhance and structure resident surgical training and future assessment tools. Face, content, and construct validity of the tool were demonstrated.

MeSH terms

  • Clinical Competence*
  • Humans
  • Medical Errors / statistics & numerical data*
  • Ophthalmology / standards*
  • Phacoemulsification / standards*
  • Quality Assurance, Health Care
  • Reproducibility of Results
  • State Medicine
  • Task Performance and Analysis
  • United Kingdom
  • Video Recording