Quality Control for Residency Applicant Scores

J Educ Perioper Med. 2019 Jan 1;21(1):E635. eCollection 2019 Jan-Mar.

Abstract

Background: The residency program selection process incorporates application review and candidate interviews to create an ordered rank list. Though this is the single most important process for determining the department's future trainees, the system lacks a quality control mechanism by which faculty ratings are scrutinized. This study used many-facet Rasch measurement (MFRM) to establish a quality control system for the candidate selection process.

Methods: This study took place from October 2017 to January 2018 at a large anesthesiology residency program with 25 available spots. Every candidate received scores from 3 faculty judges across 3 occasions: application review, interview, and interviewer group discussion. MFRM with 3 facets (faculty judges, candidates, and occasions) was used to identify sources of measurement error and produce fair averages for each candidate.
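The core idea behind an MFRM "fair average" can be illustrated with a simplified linear sketch (this is not the study's model, which would be fit with dedicated Rasch software; the data and function names here are hypothetical): estimate each judge's severity as the gap between that judge's mean and the grand mean, then average each candidate's scores after removing the severity of the judge who gave each score.

```python
# Illustrative linear approximation to an MFRM fair average.
# A true many-facet Rasch model works on a logit scale and estimates
# judge severity, candidate ability, and occasion difficulty jointly;
# this sketch only removes a simple additive judge effect.
from collections import defaultdict

def fair_averages(observations):
    """observations: list of (candidate, judge, occasion, score) tuples.

    Returns a dict mapping candidate -> severity-adjusted mean score.
    """
    grand = sum(score for *_, score in observations) / len(observations)

    # Judge severity ~ distance of each judge's mean from the grand mean.
    by_judge = defaultdict(list)
    for _, judge, _, score in observations:
        by_judge[judge].append(score)
    severity = {j: sum(v) / len(v) - grand for j, v in by_judge.items()}

    # Fair average: candidate mean after subtracting each judge's severity.
    by_cand = defaultdict(list)
    for cand, judge, _, score in observations:
        by_cand[cand].append(score - severity[judge])
    return {c: sum(v) / len(v) for c, v in by_cand.items()}

# Hypothetical toy data: J1 is lenient, J2 is stringent.
obs = [
    ("A", "J1", "application", 8), ("A", "J2", "interview", 6),
    ("B", "J1", "application", 9), ("B", "J2", "interview", 7),
]
print(fair_averages(obs))  # A and B keep their 1-point gap, judge effects removed
```

After adjustment, both candidates' scores no longer depend on which judge happened to rate them, which is the property a fair average is meant to provide for the rank list.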

Results: A total of 1378 observations from 158 candidates were used in the MFRM model, explaining 58.42% of the variance in the data. Fit indices indicated that 1 of the 5 judges inconsistently applied the rating scale. MFRM output also flagged some scores as unexpected based on standardized residual values. This helped identify specific instances where inconsistent observations occurred.
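The residual-based flagging described above can be sketched in miniature (again hypothetical, not the study's code): compute an expected score for each observation from a simple additive candidate-plus-judge model, standardize the residuals, and flag observations whose standardized residual exceeds a cutoff. A fitted Rasch model would supply the expected scores instead.

```python
# Illustrative sketch of flagging unexpected scores via standardized
# residuals. Expected scores here come from a simple additive model
# (grand mean + candidate effect + judge effect), standing in for the
# model-based expectations an MFRM analysis would produce.
from statistics import mean, pstdev

def flag_unexpected(observations, z_cutoff=2.0):
    """observations: list of (candidate, judge, score) tuples.

    Returns rows whose standardized residual meets or exceeds z_cutoff,
    as (candidate, judge, score, z) tuples.
    """
    grand = mean(s for _, _, s in observations)
    cand_eff = {c: mean(s for cc, _, s in observations if cc == c) - grand
                for c in {c for c, _, _ in observations}}
    judge_eff = {j: mean(s for _, jj, s in observations if jj == j) - grand
                 for j in {j for _, j, _ in observations}}

    residuals = [s - (grand + cand_eff[c] + judge_eff[j])
                 for c, j, s in observations]
    sd = pstdev(residuals) or 1.0
    return [(c, j, s, r / sd)
            for (c, j, s), r in zip(observations, residuals)
            if abs(r / sd) >= z_cutoff]

# Hypothetical toy data: J2's score for candidate C is surprisingly low.
obs = [
    ("A", "J1", 8), ("A", "J2", 8),
    ("B", "J1", 6), ("B", "J2", 6),
    ("C", "J1", 7), ("C", "J2", 2),
]
# With so few observations the outlier is smeared across two residuals,
# so a lower cutoff is used for this tiny example.
flagged = flag_unexpected(obs, z_cutoff=1.4)
```

In practice, a program director would review each flagged score rather than discard it automatically, since some unexpected ratings reflect genuine information about a candidate.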

Conclusions: MFRM is a relatively low-cost, efficient way to test the quality of the scores used to build a rank list and to investigate the noise introduced by outlier scores. When outlier scores arise from systematic factors, such as particularly stringent or lenient interviewers, they may unfairly influence the rank list, and program directors may choose to adjust for them.

Keywords: Many-facet Rasch measurement; Match List; Psychometrics; Residency Application; Residency Interviews.