Optimal choice of a cut point for a quantitative diagnostic test performed for research purposes

J Clin Epidemiol. 2003 Oct;56(10):956-62. doi: 10.1016/s0895-4356(03)00153-7.

Abstract

Often, in epidemiologic research, classification of study participants with respect to the presence of a dichotomous condition (e.g., infection) is based on whether a quantitative measurement exceeds a specified cut point. The choice of a cut point involves a tradeoff between sensitivity and specificity. When the classification is to be made for the purpose of estimating risk ratios (RRs) or odds ratios (ORs), it might be argued that the best choice of cut point is one that maximizes the precision of estimates of the RRs or ORs. In this article, two different approaches for estimating RRs and ORs are discussed. For each approach, formulae are derived that give the mean squared error of the RR and OR estimates, for any choice of cut point. Based on these formulae, a cut point can be chosen that minimizes the mean squared error of the estimate of interest.
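The idea can be illustrated with a small simulation. The article derives closed-form expressions for the mean squared error, which decomposes as squared bias plus variance; the sketch below instead estimates the MSE of the log-RR by Monte Carlo, which is a different but conceptually matching route. Everything here is a hypothetical setup, not taken from the article: the measurement is assumed N(1, 1) in truly positive subjects and N(0, 1) in truly negative ones, exposure is assumed to double the risk, and the sample sizes and candidate cut points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the article)
p_pos_unexposed = 0.10          # condition prevalence among unexposed
p_pos_exposed = 0.20            # condition prevalence among exposed
true_rr = p_pos_exposed / p_pos_unexposed   # true risk ratio = 2
n_per_group = 2000              # participants per exposure group
n_sims = 500                    # Monte Carlo replicates per cut point

def simulate_log_rr(cut):
    """Estimated log-RR when the condition is classified as measurement > cut."""
    ests = np.empty(n_sims)
    for i in range(n_sims):
        # true condition status in each exposure group
        d0 = rng.random(n_per_group) < p_pos_unexposed
        d1 = rng.random(n_per_group) < p_pos_exposed
        # quantitative measurement: mean 1 among true positives, 0 otherwise
        m0 = rng.normal(d0.astype(float), 1.0)
        m1 = rng.normal(d1.astype(float), 1.0)
        # observed (possibly misclassified) proportions; +0.5 avoids log(0)
        p0 = ((m0 > cut).sum() + 0.5) / (n_per_group + 0.5)
        p1 = ((m1 > cut).sum() + 0.5) / (n_per_group + 0.5)
        ests[i] = np.log(p1 / p0)
    return ests

# Scan candidate cut points and estimate MSE of the log-RR at each
cuts = np.linspace(-0.5, 2.5, 13)
mse = []
for cut in cuts:
    ests = simulate_log_rr(cut)
    mse.append(np.mean((ests - np.log(true_rr)) ** 2))  # bias^2 + variance
best = cuts[int(np.argmin(mse))]
print(f"cut point minimizing MSE of log-RR: {best:.2f}")
```

A low cut point gives high sensitivity but poor specificity (many false positives dilute the observed RR toward 1), while a high cut point does the reverse and inflates the variance as events become rare; the MSE-minimizing cut point balances these two sources of error, which is the tradeoff the article formalizes analytically.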

Publication types

  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Diagnostic Tests, Routine*
  • Epidemiologic Methods*
  • Humans
  • Odds Ratio
  • Research Design
  • Risk Factors
  • Sensitivity and Specificity