Introduction: Despite the ubiquity of single-best answer multiple-choice questions (MCQs) in assessments throughout medical education, question writers often receive little to no formal training, potentially decreasing the validity of assessments. While lengthy training opportunities in item writing exist, brief interventions remain scarce.
Methods: We developed and performed an initial validation of an item-quality assessment tool and measured the impact of a brief educational intervention on the quality of single-best answer MCQs.
Results: The item-quality assessment tool demonstrated moderate internal structure evidence when applied to the 20 practice questions (κ=0.671, p<0.001) and excellent internal structure when applied to the true dataset (κ=0.904, p<0.001). Quality scale scores for pre-intervention questions ranged from 2-6 with a mean ± standard deviation (SD) of 3.79 ± 1.23, while post-intervention scores ranged from 4-6 with a mean ± SD of 5.42 ± 0.69. The post-intervention scores were significantly higher than the pre-intervention scores, χ²(1)=38, p<0.001.
Conclusion: Our study demonstrated short-term improvement in single-best answer MCQ writing quality after a brief, open-access lecture, as measured by a simple, novel grading rubric with reasonable validity evidence.