Kappa is frequently used in epidemiology as an index of the quality of measurement for binary characteristics. The authors discuss the strong dependence of kappa on true prevalence, and they examine how the value of kappa relates to the degree of attenuation of the odds ratio that results from non-differential misclassification. It is concluded that under certain circumstances kappa can be interpreted as an indicator of validity, i.e., unbiasedness of the odds ratio, rather than simply as one of reliability. Cautions are stressed regarding (1) possible variation in the quality of measurement and (2) possible lack of independence of errors for the paired measurements from which kappa is calculated. An important implication for the design of reliability studies is that they should be conducted in populations where the distribution of the factor of interest is similar to that in the settings where the measurement technique will ultimately be applied.
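The dependence of kappa on true prevalence can be illustrated with a simple sketch (not the authors' derivation). Assume a binary trait with true prevalence p, measured twice with fixed sensitivity and specificity, and errors that are independent given true status; the function below computes the resulting population value of Cohen's kappa. With sensitivity = specificity = 0.9, observed agreement is the same at every prevalence, yet kappa falls sharply as the trait becomes rare:

```python
def kappa_from_prevalence(p, se, sp):
    """Population value of Cohen's kappa for two replicate binary
    measurements with true prevalence p, sensitivity se, and
    specificity sp, assuming errors independent given true status."""
    # Joint probabilities of the paired classifications
    both_pos = p * se ** 2 + (1 - p) * (1 - sp) ** 2
    both_neg = p * (1 - se) ** 2 + (1 - p) * sp ** 2
    po = both_pos + both_neg              # observed agreement
    q = p * se + (1 - p) * (1 - sp)       # marginal P(classified positive)
    pe = q ** 2 + (1 - q) ** 2            # chance-expected agreement
    return (po - pe) / (1 - pe)

# Identical measurement quality (se = sp = 0.9), varying prevalence
for p in (0.50, 0.20, 0.05):
    print(f"prevalence {p:.2f}: kappa = {kappa_from_prevalence(p, 0.9, 0.9):.3f}")
```

For example, kappa is 0.64 at 50% prevalence but falls to roughly 0.25 at 5% prevalence, even though sensitivity and specificity are unchanged, which is why a reliability study should be conducted in a population whose prevalence resembles that of the intended application.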