Counterbalanced designs are ubiquitous in cognitive psychology. Researchers, however, rarely perform optimal analyses of these designs and, as a result, reduce the power of their experiments. In the context of a simple priming experiment, several idealized data sets are used to illustrate the possible costs of ignoring counterbalancing, and recommendations are made for more appropriate analyses. These recommendations apply to assessment of both reliability of effects over subjects and reliability of effects over stimulus items.
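To make the design concrete, here is a minimal sketch of the kind of counterbalancing the abstract refers to, assuming a two-condition priming experiment. The function name, word list, and condition labels are illustrative, not taken from the paper: each stimulus list shows every item exactly once, and across the full set of lists every item appears in every condition equally often (a Latin-square rotation).

```python
def counterbalanced_lists(items, conditions):
    """Rotate condition assignments across lists: each list contains
    every item exactly once, and across all lists each item appears
    in each condition equally often."""
    n = len(conditions)
    lists = []
    for shift in range(n):
        # Shift the condition cycle by one position for each new list.
        assignment = {
            item: conditions[(i + shift) % n]
            for i, item in enumerate(items)
        }
        lists.append(assignment)
    return lists

# Hypothetical priming experiment: 6 target words, 2 prime conditions.
items = ["doctor", "bread", "chair", "river", "piano", "tiger"]
lists = counterbalanced_lists(items, ["related", "unrelated"])
```

Subjects are then assigned to one list each. The analyses the abstract recommends turn on what happens next: the list (counterbalancing) factor can be carried into the subject and item analyses rather than collapsed over, since ignoring it inflates the error term and costs power.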