In biomarker-disease association studies, the long-term average level of a biomarker is often considered the optimal measure of exposure. Long-term average levels may not be accurately measured from a single sample, however, because of systematic temporal variation. For example, serum 25-hydroxyvitamin D (25(OH)D) concentrations may fluctuate because of seasonal variation in sun exposure. Association studies of 25(OH)D and cancer risk have used different strategies to minimize bias from such seasonal variation, including adjusting for date of sample collection (DOSC), often after matching on DOSC, and/or using season-specific cutpoints to assign subjects to exposure categories. To evaluate and understand the impact of such strategies on potential bias, the authors simulated a population in which 25(OH)D levels varied between individuals and by season, and disease risk was determined by long-term average 25(OH)D. Ignoring temporal variation resulted in bias toward the null. When cutpoints that did not account for DOSC were used, adjustment for DOSC sometimes resulted in bias away from the null. Using season- or month-specific cutpoints reduced bias toward the null and did not cause bias away from the null. To avoid potential bias away from the null, using season- or month-specific cutpoints may be preferable to adjusting for DOSC.
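The contrast between overall and season-specific cutpoints can be sketched in a minimal simulation. This is an illustration only, not the authors' actual model: the population mean, seasonal amplitude (a cosine peaking in late summer), noise level, and sample size are all assumed values. It categorizes a single measurement into tertiles either with overall cutpoints that ignore DOSC or with month-specific cutpoints, and compares each against tertiles of the true long-term average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# True long-term average 25(OH)D (nmol/L), varying between individuals.
long_term = rng.normal(60, 15, n)

# Each subject is sampled in a random month; the single observed value adds
# a sinusoidal seasonal component (assumed amplitude 15 nmol/L, peak near
# September) plus short-term measurement noise.
month = rng.integers(0, 12, n)
seasonal = 15 * np.cos(2 * np.pi * (month - 8) / 12)
observed = long_term + seasonal + rng.normal(0, 5, n)

def tertile(values, cuts):
    # 0, 1, or 2 depending on which side of the two cutpoints a value falls.
    return np.searchsorted(cuts, values)

# Strategy 1: overall cutpoints that ignore date of sample collection.
overall_cuts = np.quantile(observed, [1 / 3, 2 / 3])
cat_overall = tertile(observed, overall_cuts)

# Strategy 2: month-specific cutpoints (tertiles computed within each month).
cat_monthly = np.empty(n, dtype=int)
for m in range(12):
    idx = month == m
    cuts = np.quantile(observed[idx], [1 / 3, 2 / 3])
    cat_monthly[idx] = tertile(observed[idx], cuts)

# "Gold standard": tertile of the true long-term average.
true_cuts = np.quantile(long_term, [1 / 3, 2 / 3])
cat_true = tertile(long_term, true_cuts)

misclass_overall = np.mean(cat_overall != cat_true)
misclass_monthly = np.mean(cat_monthly != cat_true)
print(f"misclassified with overall cutpoints: {misclass_overall:.3f}")
print(f"misclassified with monthly cutpoints: {misclass_monthly:.3f}")
```

Under these assumptions, month-specific cutpoints misclassify fewer subjects relative to their true long-term tertile, because the seasonal shift is removed before categorization; this is the mechanism behind the reduced bias toward the null reported above.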