In neuropsychological assessment, as in many areas of research, it is common for the same test to be administered on more than one occasion to measure change. Measured changes are presumed to reflect true changes in the construct being measured by the test; for example, cognitive changes due to processes such as aging, advancing neurological disease, or treatment interventions. However, practice effects, defined as score increases due to factors such as memory for specific test items, learned strategies, or test sophistication, complicate the interpretation of change. This review presents meta-analyses of nearly 1600 individual effect sizes representing changes in mean-level performance on tests commonly used to assess core domains of neuropsychological function, with the goal of quantitatively summarizing the magnitude of practice effects on such tests. Use of alternate forms, participant age, clinical diagnosis, and length of the test-retest interval were associated with the magnitude of change in many cases. These findings have important implications for the practice of clinical neuropsychology, as well as for research applications, and highlight the need for practice effects to be taken into account when interpreting change across repeated measurements.