Purpose: Despite advances in the learning sciences highlighting the efficacy of elaborative interrogation, in which students explain and elaborate on concepts in their own words, assessment in medical education has commonly relied on multiple-choice questions (MCQs). Educators' reluctance to consider alternatives such as open-ended questions (OEQs) stems from the practical advantages of MCQs and a lack of empirical data on how well OEQs predict performance on other high-stakes assessments. In this study, the authors compared the predictive value of pre-clerkship OEQ assessments for performance on clerkship examinations and USMLE Step 1.
Method: Using multi-year performance data, the authors compared pre-clerkship MCQ and OEQ examinations as predictors of students' subsequent performance on six clerkship examinations and USMLE Step 1. They conducted regression analyses with clerkship examination scores and Step 1 scores as dependent variables and performance on MCQs and OEQs as predictors.
Results: Regression models using OEQs predicted clerkship examination (NBME shelf exam) scores better than models using MCQs for all but one clerkship. For Step 1, the MCQ model explained more of the variance (R² = 0.59) than the OEQ model (R² = 0.46), but the OEQ cohort scored significantly higher on Step 1.
Conclusions: OEQ examinations predict performance on subsequent high-stakes MCQ examinations. Given their predictive value and closer alignment with scientific principles of effective learning, OEQ examinations are an assessment format worthy of consideration in pre-clerkship medical education programs.