Background: Establishing the validity of classification schemes is a crucial preparatory step that should precede multicenter studies. No studies have investigated the reproducibility of arthroscopic classification of meniscal pathology among multiple surgeons at different institutions.
Hypothesis: Arthroscopic classification of meniscal pathology is reliable, reproducible, and suitable for multicenter studies involving multiple surgeons.
Study design: Multirater agreement study.
Methods: Seven surgeons reviewed a video of 18 meniscal tears and completed a meniscal classification questionnaire. Multirater agreement was calculated based on the proportion of agreement, the kappa coefficient, and the intraclass correlation coefficient.
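As background for the agreement statistics named above, a chance-corrected multirater coefficient such as Fleiss' kappa can be computed from a table of category counts per case. The sketch below is illustrative only, assuming seven raters and hypothetical category counts; it is not the study's analysis code.

```python
# Illustrative sketch: Fleiss' kappa and mean observed agreement for
# multiple raters assigning subjects to nominal categories.
# The example counts are hypothetical, not the study's data.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters.
    Returns (kappa, observed_agreement)."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Per-subject agreement: proportion of rater pairs that agree.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects  # mean observed agreement
    # Category prevalences pooled over all ratings.
    n_categories = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(n_categories)]
    p_e = sum(p * p for p in p_j)  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e), p_bar

# Hypothetical example: 3 tears, 7 raters, 2 categories.
kappa, agreement = fleiss_kappa([[7, 0], [0, 7], [6, 1]])
```

Kappa corrects the raw proportion of agreement for chance, which is why a high percentage agreement (e.g., 80%) can coexist with a moderate kappa (e.g., 0.46) when one category predominates.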
Results: There was a 46% agreement on the central/peripheral location of tears (kappa = 0.30), an 80% agreement on the depth of tears (kappa = 0.46), a 72% agreement on the presence of a degenerative component (kappa = 0.44), a 71% agreement on whether lateral tears were central to the popliteal hiatus (kappa = 0.42), a 73% agreement on the type of tear (kappa = 0.63), an 87% agreement on the location of the tear (kappa = 0.61), and an 84% agreement on the treatment of tears (kappa = 0.66). Agreement among surgeons on tear length was good (intraclass correlation coefficient, 0.78; 95% confidence interval, 0.57 to 0.92; P < .001).
Conclusions: Arthroscopic grading of meniscal pathology is reliable and reproducible.
Clinical relevance: Surgeons can reliably classify meniscal pathology and agree on treatment, which is important for multicenter trials.