Aim: To validate a coding system used to summarise computed tomography colonography (CTC) findings in the detection of suspected colorectal cancer (CRC), by assessing interobserver variability and evaluating its weaknesses through qualitative analysis.
Materials and methods: All CTC investigations over a 6-month period (01/07/2016 to 31/12/2016) were analysed retrospectively. Each study was read initially by an advanced practitioner radiographer, with the final report issued by a consultant gastrointestinal radiologist. Interobserver agreement was quantified using the kappa statistic, and areas of poor agreement were identified for further qualitative assessment.
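The kappa statistic used here corrects the raw percentage agreement for agreement expected by chance, given each reader's marginal coding frequencies. A minimal sketch of Cohen's kappa for two readers is shown below; the code-label strings are hypothetical illustrations, not codes taken from the study.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance from marginal frequencies.
    """
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportion of cases where both raters assign the same code
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal code frequencies
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[l] * count_b[l] for l in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two readers coding four studies
print(cohen_kappa(["C1", "C1", "C2", "C2"],
                  ["C1", "C1", "C2", "C1"]))  # → 0.5
```

Note that kappa can be much lower than raw percentage agreement when one code dominates, which is one reason the extra-colonic figures below diverge more sharply on the kappa scale than on the percentage scale.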
Results: The study included 1,321 CTC procedures; the mean patient age was 68.4 years (range 28-96 years). Percentage agreement was 90% for colonic coding and 47% for extra-colonic coding, corresponding to kappa scores of 0.69 (substantial agreement) and 0.22 (fair agreement), respectively. Reasons for, and examples of, disagreement in colonic coding are highlighted.
Conclusions: High interobserver agreement was observed for C coding, suggesting it is a reproducible method of classifying intra-colonic CTC findings. Some of the variation in classifying extra-colonic findings reflects differences between readers in the perceived importance of incidental findings, as well as differences in skill set; however, recurring themes were identified in areas of disagreement, and recommendations for refining and improving the coding system are provided.
Copyright © 2019 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.