Introduction: The Breast Imaging Reporting and Data System (BI-RADS) was introduced in the Dutch breast cancer screening programme to improve communication between medical specialists. Following its introduction, substantial variation in the use of the BI-RADS lexicon for final assessment categories was noted among screening radiologists. We set up a dedicated training programme to reduce this variation. This study evaluates whether that programme was effective.
Materials and methods: Two comparable test sets were read before and after completion of the training programme. Each set contained 30 screening mammograms of referred women, selected from screening practice. The sets were read by 25 experienced and 30 new screening radiologists. Cohen's kappa (κ) was used to calculate inter-observer agreement. The 2003 version of BI-RADS was implemented in the screening programme, as the 2008 version requires diagnostic work-up, which is not available in the screening setting.
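For readers unfamiliar with the statistic, Cohen's kappa corrects observed agreement for the agreement expected by chance. A minimal sketch, using made-up assessment categories for two hypothetical readers (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS final assessment categories from two readers
reader_1 = [0, 2, 4, 4, 5, 0, 2, 4]
reader_2 = [0, 2, 4, 5, 5, 0, 0, 4]
print(round(cohens_kappa(reader_1, reader_2), 2))  # → 0.67
```

On the Landis and Koch scale, values between 0.41 and 0.60 are conventionally labelled "moderate" agreement, the range observed in this study.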
Results: The inter-observer agreement of all participating radiologists (n=55) with the expert panel increased from a pre-training κ-value of 0.44 to a post-training κ-value of 0.48 (p=0.14). The inter-observer agreement of the new screening radiologists (n=30) with the expert panel increased from κ=0.41 to κ=0.50 (p=0.01), whereas there was no significant change in agreement among the 25 experienced radiologists (from κ=0.48 to κ=0.46, p=0.60).
Conclusion: Our training programme in the BI-RADS lexicon resulted in a significant improvement in agreement among new screening radiologists. Overall, the agreement among radiologists was moderate (according to the guidelines of Landis and Koch). This is in line with results reported in the literature.
Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.