Kappa coefficient: a popular measure of rater agreement

Shanghai Arch Psychiatry. 2015 Feb 25;27(1):62-7. doi: 10.11919/j.issn.1002-0829.215010.

Abstract

In mental health and psychosocial studies it is often necessary to report the interrater agreement of the measures used in the study. This paper discusses the concept of agreement, highlighting its fundamental difference from correlation. Several examples demonstrate how to compute the kappa coefficient, a popular statistic for measuring agreement, both by hand and with statistical software packages such as SAS and SPSS. Real study data illustrate how to use and interpret this coefficient in clinical research and practice. The article concludes with a discussion of the coefficient's limitations.
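As a minimal sketch of the computation the abstract refers to (not code from the article itself), Cohen's kappa for two raters is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal distribution. The ratings below are hypothetical and serve only to illustrate the formula.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(rater1)
    assert n == len(rater2) and n > 0, "ratings must be paired and non-empty"
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance-expected agreement from the raters' marginal category frequencies.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum(m1[c] * m2[c] for c in set(rater1) | set(rater2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no diagnoses from two raters on eight subjects.
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "no", "yes", "yes", "yes", "no"]
print(round(cohen_kappa(rater1, rater2), 3))  # p_o = 0.75, p_e = 0.50, kappa = 0.50
```

The same value can be obtained with existing library routines (for example, scikit-learn's cohen_kappa_score); the hand computation is shown here only to make p_o and p_e explicit.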

Full-text Chinese version: The full-text Chinese version of this article is available free of charge at http://dx.doi.org/10.11919/j.issn.1002-0829.215010 from April 8, 2015.

Keywords: correlation; interrater agreement; weighted kappa.

Grants and funding

None