Reliability of diagnoses coding with ICD-10

Int J Med Inform. 2008 Jan;77(1):50-7. doi: 10.1016/j.ijmedinf.2006.11.005. Epub 2006 Dec 20.

Abstract

Objective: Reliable coding of diagnoses is essential for the use of routine data in a national health care system. The present investigation compares the reliability of diagnosis coding with ICD-10 among three groups of coding subjects.

Method: One hundred and eighteen students coded 15 diagnosis lists, 27 medical managers from hospitals coded 34 discharge letters, and 13 coding specialists coded 12 discharge letters. Agreement on the principal diagnosis was assessed with Cohen's kappa and with the fraction of coinciding codes over the number of pairs; agreement for the full set of diagnoses was assessed with a previously developed measure, p(om).
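
To make the two agreement measures concrete, the following minimal sketch computes Cohen's kappa and the raw agreement fraction for two parallel series of principal-diagnosis codes. The rater names and code lists are hypothetical illustrations; the study's raw data are not reproduced here.

```python
# Minimal sketch: Cohen's kappa and raw agreement for two raters'
# principal-diagnosis codes. The code lists below are hypothetical.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Return (kappa, raw agreement) for two parallel code sequences."""
    n = len(codes_a)
    # Observed agreement: fraction of pairs with identical codes.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement expected from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e), p_o

# Hypothetical ICD-10 terminal codes assigned by two coding subjects.
rater_1 = ["I21.0", "E11.9", "J44.1", "I21.0", "K35.8"]
rater_2 = ["I21.0", "E11.8", "J44.9", "I21.0", "K35.8"]
kappa, p_o = cohens_kappa(rater_1, rater_2)
print(f"kappa = {kappa:.2f}, raw agreement = {p_o:.1%}")
```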

Results: Kappa values were fair for the managers and moderate for the coding specialists at the level of terminal codes, with 0.27 and 0.42 (agreement 29.2% versus 46.8%), and substantial at the chapter level, with 0.71 and 0.72 (agreement 78.3% versus 80.8%). p(om) was lower for the full set of diagnoses than for principal diagnoses; for the managers, for example, 0.21 versus 0.29 for terminal codes. The best results were achieved by the students coding diagnosis lists. Overall, the results are markedly lower than those reported in earlier publications.
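
The jump from terminal-code to chapter-level agreement can be reproduced mechanically: codes are projected onto a coarser level of the ICD-10 hierarchy before comparison. The sketch below assumes a hand-written excerpt of the official chapter ranges and hypothetical code pairs; a real analysis would use the complete chapter table.

```python
# Sketch of agreement at two ICD-10 hierarchy levels. Only a few
# chapter ranges are listed here; a full analysis needs all of them.
CHAPTER_RANGES = [
    ("A00", "B99", "I"),    # Certain infectious and parasitic diseases
    ("E00", "E90", "IV"),   # Endocrine, nutritional and metabolic diseases
    ("I00", "I99", "IX"),   # Diseases of the circulatory system
    ("J00", "J99", "X"),    # Diseases of the respiratory system
    ("K00", "K93", "XI"),   # Diseases of the digestive system
]

def chapter(code: str) -> str:
    """Map an ICD-10 code to its chapter via its three-character category."""
    category = code[:3]
    for lo, hi, chap in CHAPTER_RANGES:
        if lo <= category <= hi:
            return chap
    raise ValueError(f"no chapter range listed for {code}")

def agreement(codes_a, codes_b, level=lambda c: c):
    """Fraction of pairs that agree after projecting codes with `level`."""
    return sum(level(a) == level(b) for a, b in zip(codes_a, codes_b)) / len(codes_a)

# Hypothetical pairs that disagree at the terminal level
# but fall into the same chapter.
rater_1 = ["I21.0", "E11.9", "J44.1", "K35.8"]
rater_2 = ["I21.1", "E11.8", "J44.9", "K35.8"]
print("terminal:", agreement(rater_1, rater_2))                 # exact codes
print("chapter: ", agreement(rater_1, rater_2, level=chapter))  # coarser level
```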

Conclusion: The refinement of ICD-10, accompanied by innumerable coding rules, has created a complex environment that leads to considerable uncertainty even for experts. Using coded data for quality management, health care financing, and health care policy requires a substantial simplification of ICD-10 in order to obtain a valid picture of health care reality.

MeSH terms

  • Diagnosis-Related Groups / classification
  • Forms and Records Control / standards*
  • Germany
  • Health Personnel
  • Insurance Claim Reporting / standards
  • International Classification of Diseases*
  • Quality Control
  • Reproducibility of Results