A simulation study of cross-validation for selecting an optimal cutpoint in univariate survival analysis

Stat Med. 1996 Oct 30;15(20):2203-13. doi: 10.1002/(SICI)1097-0258(19961030)15:20<2203::AID-SIM357>3.0.CO;2-G.


Continuous measurements are often dichotomized to classify subjects. This paper evaluates two procedures for determining a best cutpoint for a continuous prognostic factor with right-censored outcome data. One procedure selects the cutpoint that minimizes the significance level of a logrank test comparing the two groups defined by the cutpoint, and adjusts the significance level for this maximal selection. The other procedure uses a cross-validation approach, which extends easily to accommodate additional prognostic factors. We compare the methods in terms of statistical power and bias in estimating the true relative risk associated with the prognostic factor.

Both procedures produced approximately the correct type I error rate; using a maximally selected cutpoint without adjusting the significance level, however, substantially inflated the type I error rate. Under the null hypothesis, the cross-validation procedure estimated the relative risk without bias, while the maximally selected test produced an upward bias. When the relative risk for the two groups defined by the covariate and the true changepoint was small, the cross-validation procedure provided greater power than the maximally selected test, and its estimate of the relative risk remained unbiased while the maximally selected test's estimate was biased. As the true relative risk increased, the power of the maximally selected test was about 10 per cent greater than that obtained using cross-validation; the maximally selected test overestimated the relative risk by about 10 per cent, whereas the cross-validation procedure underestimated it by at most 5 per cent.

Finally, we report the effect of dichotomizing a continuous non-linear relationship between covariate and risk, comparing a linear proportional hazards model with models based on optimally selected cutpoints.
Our simulation study indicates that we can have a substantial loss of statistical power when we use cutpoint models in cases where there is a continuous relationship between covariate and risk.
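The two selection procedures can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: `logrank_p` computes a two-sample logrank p-value from scratch, `best_cutpoint` performs maximal selection (with no adjustment of the significance level), and `cv_groups` is one plausible cross-validation scheme in which each subject's group label comes from a cutpoint chosen on the remaining folds. All function names and the fold construction are illustrative assumptions.

```python
import math
import random

def logrank_stat(times, events, groups):
    """Two-sample logrank chi-square statistic (1 df) for right-censored data.

    times  : observed follow-up times
    events : 1 = event observed, 0 = censored
    groups : 0/1 labels (e.g. covariate below/above a cutpoint)
    """
    obs = exp = var = 0.0
    for t in sorted({ti for ti, ei, _ in zip(times, events, groups) if ei}):
        # risk set at time t: everyone still under observation
        at_risk = [(ti, ei, gi) for ti, ei, gi in zip(times, events, groups) if ti >= t]
        n = len(at_risk)
        n1 = sum(1 for _, _, gi in at_risk if gi == 1)
        d = sum(1 for ti, ei, _ in at_risk if ti == t and ei)
        d1 = sum(1 for ti, ei, gi in at_risk if ti == t and ei and gi == 1)
        obs += d1
        exp += d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (obs - exp) ** 2 / var if var > 0 else 0.0

def logrank_p(times, events, groups):
    """Upper-tail p-value of the chi-square(1) logrank statistic."""
    return math.erfc(math.sqrt(logrank_stat(times, events, groups) / 2.0))

def best_cutpoint(x, times, events):
    """Maximally selected cutpoint: minimize the (unadjusted) logrank p-value."""
    best_c, best_p = None, 1.0
    for c in sorted(set(x))[1:]:  # skip the minimum so both groups are non-empty
        groups = [1 if xi >= c else 0 for xi in x]
        p = logrank_p(times, events, groups)
        if best_c is None or p < best_p:
            best_c, best_p = c, p
    return best_c, best_p

def cv_groups(x, times, events, k=5, seed=0):
    """One plausible cross-validation scheme (an assumption, not the paper's
    exact algorithm): each subject is classified by a cutpoint selected on
    the other folds, so a single logrank test on the pooled labels does not
    suffer from maximal selection."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    groups = [0] * len(x)
    for f in range(k):
        fold = set(idx[f::k])
        train = [i for i in idx if i not in fold]
        c, _ = best_cutpoint([x[i] for i in train],
                             [times[i] for i in train],
                             [events[i] for i in train])
        for i in fold:
            groups[i] = 1 if x[i] >= c else 0
    return groups
```

A maximally selected analysis is then `best_cutpoint(x, obs_times, events)`, while the cross-validated analysis applies `logrank_p(obs_times, events, cv_groups(x, obs_times, events))` to the pooled held-out labels.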

Publication types

  • Comparative Study

MeSH terms

  • Humans
  • Linear Models
  • Models, Statistical*
  • Proportional Hazards Models
  • Risk*
  • Survival Analysis*