Often, in epidemiologic research, classification of study participants with respect to the presence of a dichotomous condition (e.g., infection) is based on whether a quantitative measurement exceeds a specified cut point. The choice of a cut point involves a tradeoff between sensitivity and specificity. When the classification is made for the purpose of estimating risk ratios (RRs) or odds ratios (ORs), it might be argued that the best cut point is the one that maximizes the precision of the RR or OR estimates. In this article, two different approaches for estimating RRs and ORs are discussed. For each approach, formulae are derived that give the mean squared error of the RR and OR estimates for any choice of cut point. Based on these formulae, a cut point can be chosen that minimizes the mean squared error of the estimate of interest.
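The idea of selecting a cut point to minimize mean squared error can be sketched numerically. The following is a minimal illustration, not the article's derived formulae: it assumes a hypothetical Gaussian measurement model (means `mu_pos` and `mu_neg` for truly positive and truly negative participants), nondifferential misclassification, and a delta-method approximation for the variance of the RR estimate. MSE is computed as squared bias plus variance, and a grid search locates the minimizing cut point.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mse_of_rr(cut, p1, p0, n1, n0, mu_pos=2.0, mu_neg=0.0, sd=1.0):
    """Approximate MSE of the risk-ratio estimate at a given cut point.

    Hypothetical model: the quantitative measurement is Gaussian with
    mean mu_pos in truly positive and mu_neg in truly negative
    participants (common sd). p1/p0 are the true risks and n1/n0 the
    sample sizes in the exposed and unexposed groups.
    """
    se = 1.0 - phi((cut - mu_pos) / sd)   # sensitivity at this cut point
    sp = phi((cut - mu_neg) / sd)         # specificity at this cut point
    # Apparent (misclassified) risks in the exposed and unexposed groups.
    q1 = se * p1 + (1.0 - sp) * (1.0 - p1)
    q0 = se * p0 + (1.0 - sp) * (1.0 - p0)
    rr_apparent = q1 / q0
    bias = rr_apparent - p1 / p0          # bias relative to the true RR
    # Delta-method approximation to the sampling variance of the RR estimate.
    var = rr_apparent**2 * ((1 - q1) / (n1 * q1) + (1 - q0) / (n0 * q0))
    return bias**2 + var

def best_cut(p1, p0, n1, n0, lo=-2.0, hi=4.0, steps=601):
    """Grid search for the cut point minimizing the MSE of the RR estimate."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda c: mse_of_rr(c, p1, p0, n1, n0))

cut = best_cut(p1=0.30, p0=0.10, n1=500, n0=500)
```

Under this toy model, the optimal cut point shifts with the true risks and sample sizes: larger samples shrink the variance term, so the bias term (driven by misclassification) dominates the choice.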