When investigating the effects of potential prognostic or risk factors that have been measured on a quantitative scale, values of these factors are often categorized into two groups. Sometimes an 'optimal' cutpoint is chosen that gives the best separation in terms of a two-sample test statistic. It is well known that this approach leads to a serious inflation of the type I error and to an overestimation of the effect of the prognostic or risk factor in absolute terms. In this paper, we illustrate that the resulting confidence intervals are similarly affected. We show that the application of a shrinkage procedure to correct for bias, together with bootstrap resampling for estimating the variance, yields confidence intervals for the effect of a potential prognostic or risk factor with the desired coverage.
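To make the mechanism concrete, the following is a minimal Python sketch of the general approach the abstract describes; it is not the paper's exact procedure. It assumes a continuous outcome compared by a two-sample t-test, a crude bootstrap calibration factor standing in for the shrinkage correction, and a normal-approximation interval based on the bootstrap standard error. The function names, the candidate quantile grid, and the calibration heuristic are all illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' exact procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optimal_cutpoint_effect(x, y, quantiles=np.linspace(0.1, 0.9, 17)):
    """Search candidate cutpoints; return (cutpoint, effect, min p-value).

    The 'effect' is the difference in mean outcome between the two groups
    created by dichotomizing x; the cutpoint minimizing the two-sample
    t-test p-value is the 'optimal' one criticized in the abstract."""
    best = (None, 0.0, 1.0)
    for c in np.quantile(x, quantiles):
        hi, lo = y[x > c], y[x <= c]
        if len(hi) < 5 or len(lo) < 5:
            continue
        t, p = stats.ttest_ind(hi, lo)
        if p < best[2]:
            best = (c, hi.mean() - lo.mean(), p)
    return best

def effect_at_cutpoint(x, y, c):
    hi, lo = y[x > c], y[x <= c]
    return hi.mean() - lo.mean()

# Simulated data: a weak linear prognostic effect of x on y.
n = 200
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)

c_opt, effect_raw, p_min = optimal_cutpoint_effect(x, y)

# Bootstrap: the whole cutpoint search is repeated inside every resample.
# Resampling around a fixed cutpoint would miss the very selection effect
# the abstract describes.
B = 500
boot_effects, orig_effects = [], []
for _ in range(B):
    idx = rng.integers(0, n, n)
    cb, eb, _ = optimal_cutpoint_effect(x[idx], y[idx])
    if cb is None:
        continue
    boot_effects.append(eb)
    # The bootstrap-selected cutpoint, evaluated on the original data:
    # its attenuation relative to eb estimates the selection optimism.
    orig_effects.append(effect_at_cutpoint(x, y, cb))

boot_effects = np.array(boot_effects)
orig_effects = np.array(orig_effects)

# Crude shrinkage factor (an assumed calibration-style heuristic): how much
# the selected effect shrinks when carried back to the original data.
shrink = orig_effects.mean() / boot_effects.mean()
effect_corrected = shrink * effect_raw

# Bootstrap standard error of the shrunken estimates, then a normal CI.
se = np.std(shrink * boot_effects, ddof=1)
ci = (effect_corrected - 1.96 * se, effect_corrected + 1.96 * se)

print(f"optimal cutpoint      : {c_opt:.3f} (min p = {p_min:.4f})")
print(f"raw effect (biased)   : {effect_raw:.3f}")
print(f"shrinkage factor      : {shrink:.3f}")
print(f"corrected effect      : {effect_corrected:.3f}")
print(f"95% CI (bootstrap SE) : ({ci[0]:.3f}, {ci[1]:.3f})")
```

The essential design point the sketch tries to capture is that the cutpoint selection must be redone within each bootstrap resample; only then does the bootstrap variance reflect the extra variability introduced by the search, which is what allows the corrected interval to approach nominal coverage.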