In randomized cancer screening trials, the ratio of the mortality rate for the screened group to that for the control group is typically not constant as a function of years from randomization. This is due to an initial lag effect, but also to a dilution effect that results from the accrual of comparable cases in both groups after the end of the screening period. In order to combat the potential loss of power when applying conventional analysis tools, specifically the logrank test, Aron and Prorok (International Journal of Epidemiology 15, 36-43) have advocated analyzing the mortality experience using only the subcohort of cases ascertained within a given time period. However, it is not clear how to select an appropriate case ascertainment point, since this will depend on aspects of the natural history of the disease process which are poorly identified. Aron and Prorok suggest choosing the case ascertainment point to be the point at which the cumulative number of cases in the control group first becomes equal to that in the intervention group, that is, the "catch-up time." In this paper, we undertake a thorough evaluation of the bias and power properties of the catch-up time method. We base our study on simulated data resembling the Health Insurance Plan of Greater New York study cohort. We consider several models for postdiagnosis survival under the null hypothesis of no screening effect on mortality, and under the alternative hypothesis of an effect of screening. We show that the catch-up time method can yield tests with sizeable bias. In the absence of detailed knowledge about the underlying disease process, we suggest some adaptive tests that maintain nominal size but have more attractive power properties than the standard logrank test.
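The catch-up time rule described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: it assumes we are given the case-ascertainment times (years from randomization) in each arm, and the function name and interface are hypothetical.

```python
import bisect


def catchup_time(control_times, screened_times):
    """Return the earliest follow-up time at which the cumulative number
    of ascertained cases in the control arm first equals or exceeds that
    in the screened (intervention) arm.

    Arguments are lists of case-ascertainment times in years from
    randomization. In a typical screening trial the screened arm
    accumulates cases faster early on (screen-detected cases), so the
    control arm "catches up" only later in follow-up.
    """
    control = sorted(control_times)
    screened = sorted(screened_times)
    # Each control ascertainment time is a candidate catch-up point,
    # since only there can the control count rise to meet the screened count.
    for t in control:
        n_control = bisect.bisect_right(control, t)   # control cases with time <= t
        n_screened = bisect.bisect_right(screened, t) # screened cases with time <= t
        if n_control >= n_screened:
            return t
    return None  # control arm never catches up within observed follow-up
```

For example, if the screened arm ascertains cases at years 0.5, 1.0, 1.5, and 2.0 while the control arm ascertains them at years 2.5, 3.0, 3.5, and 4.0, the cumulative counts first agree at year 4.0, which the function returns as the catch-up time.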