Inter- and intra-observer variability in the qualitative categorization of coronary angiograms

Int J Card Imaging. 1996 Mar;12(1):21-30. doi: 10.1007/BF01798114.

Abstract

The ABC classification of the American College of Cardiology and the American Heart Association is a commonly used categorization for estimating the risk and success of intracoronary intervention, as well as the probability of restenosis. To evaluate the reliability of qualitative angiogram readings, we randomly selected 200 films from single-lesion angioplasty procedures. Repeated visual assessment (interval ≥ 2 months) by two independent observers yielded kappa values of inter- and intra-observer variability for the ABC lesion classification and for each of the separate items that comprise it. Variability in assessment is expressed as the percentage of total agreement and as the kappa value, a parameter of the agreement between two or more observations in excess of chance agreement. The percentage of total agreement and the kappa value were 67.8% and 0.33, respectively, for the ABC classification, indicating poor agreement. This is probably due to the lack of strict definitions. Further investigation must demonstrate whether improvement can be achieved using complete, detailed, and unambiguous definitions, together with consensus after panel assessment.
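To make the statistic concrete, the following minimal Python sketch computes Cohen's kappa as characterized in the abstract: observed agreement corrected for the agreement expected by chance. The function and the ten ABC readings below are hypothetical illustrations, not data or methods from the study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two raters in excess of chance agreement."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of agreement (the study reports 67.8%).
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the two raters' marginal
    # proportions, summed over the categories.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ABC classifications of ten lesions by two observers.
obs1 = ["A", "B", "B", "C", "A", "B", "C", "A", "B", "C"]
obs2 = ["A", "B", "C", "C", "A", "A", "C", "B", "B", "C"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```

With these hypothetical readings the raw agreement is 70%, yet kappa is only about 0.55, illustrating how correcting for chance deflates the percentage of total agreement, much as the study's 67.8% agreement corresponds to a kappa of just 0.33.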

MeSH terms

  • Coronary Angiography / statistics & numerical data*
  • Coronary Disease / diagnostic imaging*
  • Coronary Disease / epidemiology
  • Humans
  • Observer Variation
  • Reproducibility of Results