Determining Grades in the Internal Medicine Clerkship: Results of a National Survey of Clerkship Directors

Acad Med. 2020 Nov 3. doi: 10.1097/ACM.0000000000003815. Online ahead of print.


Purpose: Trust in and comparability of assessments are essential in undergraduate medical education clerkships for many reasons, including ensuring competency in the clinical skills and application of knowledge that will be important for the transition to residency and throughout students' careers. The authors examined how assessments are used to determine internal medicine (IM) core clerkship grades across U.S. medical schools.

Methods: A multisection web-based survey of core IM clerkship directors at the 134 U.S. medical schools with membership in the Clerkship Directors in Internal Medicine was conducted from October through November 2018. The survey included a section on assessment practices to characterize the grading scales currently used, who determines students' final clerkship grades, the nature/type of summative assessments, and how assessments are weighted. Respondents were also asked about their perceptions of the influence of the National Board of Medical Examiners (NBME) Medicine Subject Examination (MSE) on students' priorities during the clerkship.

Results: The response rate was 82.1% (110/134). There was considerable variability in the summative assessments used and in their weighting in determining final grades. The NBME MSE (91.8%), clinical performance (90.9%), professionalism (70.9%), and written notes (60.0%) were the most commonly used assessments. Clinical performance assessments and the NBME MSE accounted for the largest proportions of the total grade (on average, 52.8% and 23.5%, respectively). Eighty-seven percent of respondents were concerned that students' focus on NBME MSE performance detracts from patient-care learning.

Conclusions: There was considerable variability in what IM clerkships assessed and in how those assessments were translated into grades. The NBME MSE was a major contributor to the final grade despite concerns about its impact on patient-care learning. These findings underscore the difficulty of comparing learners across institutions and serve to advance discussions about how to improve the accuracy and comparability of grading in the clinical environment.