Double-blinded trials are often considered the gold standard for research, but significant bias may result from unblinding of participants and investigators. Although the CONSORT guidelines discuss the importance of reporting "evidence that blinding was successful", it is unclear what constitutes appropriate evidence. Among studies reporting methods to evaluate blinding effectiveness, many have compared groups with respect to the proportions correctly identifying their intervention at the end of the trial. Instead, we reasoned that participants' beliefs, and not their correctness, are more directly associated with potential bias, especially in relation to self-reported health outcomes. During the Water Evaluation Trial performed in northern California in 1999, we investigated blinding effectiveness by sequential interrogation of participants about their "blinded" intervention assignment (active or placebo). Irrespective of group, participants showed a strong tendency to believe they had been assigned to the active intervention; this translated into a statistically significant intergroup difference in the correctness of participants' beliefs, even at the start of the trial, before unblinding had a chance to occur. In addition, many participants (31%) changed their belief during the trial, suggesting that assessment of belief at a single time point does not capture unblinding. Sequential measures based on either two or all eight questionnaires identified significant group-related differences in belief patterns that were not detected by the single cross-sectional measure.
In view of the relative insensitivity of cross-sectional measures, the minimal additional information gained from more than two assessments of beliefs, and the risk of modifying participants' beliefs by repeated questioning, we conclude that the optimal means of assessing unblinding is an intergroup comparison of the change in beliefs (and not their correctness) between the start and end of a randomized controlled trial.
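The recommended comparison can be sketched in code. The following is a minimal illustration, not the trial's actual analysis: using entirely hypothetical counts, it tabulates whether each group's participants changed their belief between the baseline and final questionnaires, then compares the two groups with a Pearson chi-square test on the resulting 2x2 table.

```python
# Illustrative sketch with hypothetical data: intergroup comparison of the
# change in participants' beliefs between the start and end of a trial.
# Rows of the 2x2 table: group (active, placebo).
# Columns: belief changed between baseline and end (yes, no).

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts (for illustration only, not trial data):
active_changed, active_same = 30, 70
placebo_changed, placebo_same = 48, 52

stat = chi2_2x2(active_changed, active_same, placebo_changed, placebo_same)
# With 1 degree of freedom, the critical value at alpha = 0.05 is 3.84.
print(f"chi-square = {stat:.2f}; group difference at 0.05 level: {stat > 3.84}")
```

With these made-up counts the statistic is about 6.81, exceeding 3.84, so the two groups would differ significantly in how often beliefs changed; in a real analysis one would use a standard statistics library rather than this hand-rolled formula.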