Purpose: The mini-Clinical Evaluation Exercise (mCEX) is increasingly being used to assess the clinical skills of medical trainees. Existing mCEX research has typically focused on isolated aspects of the instrument's reliability and validity. A more thorough validity analysis is necessary to inform use of the mCEX, particularly in light of increased interest in high-stakes applications of the methodology.
Method: Kane's (2006) validity framework, in which a structured argument is developed to support the intended interpretation(s) of assessment results, was used to evaluate mCEX research published from 1995 to 2009. In this framework, evidence to support the argument is divided into four components (scoring, generalization, extrapolation, and interpretation/decision), each of which relates to different features of the assessment or resulting scores. The strengths and limitations of the reviewed research were identified in relation to these components, and the findings were synthesized to highlight overall strengths and weaknesses of existing mCEX research.
Results: The scoring component raised the most concerns about the validity of mCEX score interpretations. More research is needed to determine whether scoring-related issues, such as leniency error and high interitem correlations, limit the utility of the mCEX for providing feedback to trainees. Evidence within the generalization and extrapolation components is generally supportive of the validity of mCEX score interpretations.
Conclusions: Careful evaluation of the circumstances under which mCEX assessments are conducted will help to improve the quality of the resulting information. Future research should address issues of rater selection, training, and monitoring, which can affect rating accuracy.