Purpose: To assess the inter-observer reliability of Lens Opacities Classification System III (LOCS III) cataract grading among observers with different levels of ophthalmology experience.
Setting: Ophthalmology Department, National University Hospital, Singapore, Singapore.
Methods: In this comparative study, a non-ophthalmology trainee, a basic ophthalmology trainee, and an ophthalmology consultant graded cataracts in 28 patients preoperatively. The observers had a meeting to discuss their interpretations of the LOCS III manual to standardize the grading system and then graded 37 additional patients.
Results: There was a statistically significant increase in inter-observer agreement in all 3 LOCS III categories after standardization of the LOCS III system. The kappa values after standardization fell in the moderate (0.41 to 0.60) to substantial (0.61 to 0.80) range. There was no statistically significant relationship between the observers' experience and the kappa values.
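The agreement statistic reported above is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance given their marginal grading frequencies; the moderate/substantial bands cited correspond to the Landis and Koch interpretation scale. As a minimal sketch of the computation (the grade lists below are hypothetical, not study data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters graded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal grade frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers assigning integer LOCS III grades.
grades_a = [2, 3, 3, 4, 2, 5, 3, 4, 2, 3]
grades_b = [2, 3, 4, 4, 2, 5, 3, 3, 2, 3]
print(round(cohen_kappa(grades_a, grades_b), 2))  # → 0.71, substantial agreement
```

In practice LOCS III grades are decimal, so studies often use a weighted kappa that penalizes near-misses less than large disagreements; the unweighted version above illustrates only the basic chance-correction idea.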
Conclusions: Inter-observer agreement increased in all 3 LOCS III categories after the observers standardized their interpretation of the grading system.