Nurse researchers typically provide evidence of content validity for instruments by computing a content validity index (CVI), based on experts' ratings of item relevance. We compared the CVI to alternative indexes and concluded that the widely used CVI has advantages with regard to ease of computation, understandability, focus on agreement about relevance rather than agreement per se, focus on consensus rather than consistency, and provision of both item-level and scale-level information. One weakness is its failure to adjust for chance agreement. We addressed this by translating item-level CVIs (I-CVIs) into values of a modified kappa statistic. Our translation suggests that items with an I-CVI of .78 or higher, rated by three or more experts, could be considered evidence of good content validity.
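A minimal sketch of how such a translation could be computed, assuming the I-CVI is the proportion of experts rating an item relevant and the chance-agreement probability is the binomial probability that exactly that many of the experts would rate the item relevant by chance (p = .5); the function names are illustrative, not part of the original index:

```python
from math import comb

def i_cvi(relevant: int, experts: int) -> float:
    """Item-level CVI: proportion of experts rating the item
    relevant (e.g., 3 or 4 on a 4-point relevance scale)."""
    return relevant / experts

def modified_kappa(relevant: int, experts: int) -> float:
    """I-CVI adjusted for chance agreement (assumed form):
    k* = (I-CVI - p_c) / (1 - p_c), where p_c is the binomial
    probability that `relevant` of `experts` judges endorse
    the item by chance with p = .5."""
    p_c = comb(experts, relevant) * 0.5 ** experts
    cvi = i_cvi(relevant, experts)
    return (cvi - p_c) / (1 - p_c)
```

Under these assumptions, an item rated relevant by 7 of 9 experts has an I-CVI of about .78, and the kappa adjustment yields a somewhat lower value because some of that agreement could arise by chance.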