Assessments of clinicians' professional performance have become increasingly embedded in clinical practice worldwide. Systems and tools have been developed and implemented, and the factors that shape performance in response to assessment have been studied, as have the validity and reliability of the data yielded by assessment tools. However, important methodological and statistical issues that can affect the assessment of performance and of performance change are often omitted or ignored in research and practice. In this article, the authors address five of these issues and show how they can affect the validity of performance and change assessments, using empirical illustrations based on longitudinal data on clinicians' teaching performance. Specifically, the authors address the following: characteristics of a measurement scale that affect the performance data an assessment tool yields; different summary statistics of the same data that lead to opposing conclusions about performance and performance change; item-level performance that does not translate straightforwardly into overall performance; how estimating performance change from two time-indexed measurements and assessing change retrospectively yield different results; and how context affects performance and performance assessments. The authors explain how these issues affect the validity of performance assessments and offer suggestions for correcting them.
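One of the issues listed above, that different summary statistics of the same data can support opposing conclusions, can be sketched with a small hypothetical example. The ratings below are invented purely for illustration and are not drawn from the authors' dataset; they simply show how a mean and a threshold-based proportion can rank the same two clinicians in opposite orders:

```python
# Hypothetical teaching ratings on a 1-5 scale for two clinicians
# (invented data, for illustration only).
ratings_a = [5, 5, 3, 3, 3]
ratings_b = [4, 4, 4, 4, 1]

def mean(xs):
    """Arithmetic mean of the ratings."""
    return sum(xs) / len(xs)

def prop_at_least(xs, threshold):
    """Proportion of ratings at or above a threshold
    (e.g., the share of raters scoring 'good or better')."""
    return sum(1 for x in xs if x >= threshold) / len(xs)

# The mean favors clinician A ...
print(mean(ratings_a), mean(ratings_b))  # 3.8 vs 3.4
# ... while the proportion rated 4 or higher favors clinician B.
print(prop_at_least(ratings_a, 4), prop_at_least(ratings_b, 4))  # 0.4 vs 0.8
```

Which summary is "right" depends on the question being asked: the mean weights every rating, while the proportion reflects how consistently a clinician meets a quality bar, so the two can legitimately disagree.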