Purpose: This study explored factors that contribute to objectivity in objective structured clinical examinations (OSCEs). The authors quantified examiners' effect on interrater reliability and separated it from the effect of station construction, determined the effect of objectification on station reliability and validity, and explored examiner factors that may contribute to interrater reliability.
Method: Data came from examiners' mark sheets from four annual OSCEs (1997-2000). The OSCEs were conducted identically and simultaneously at three sites within the University of Otago medical school in New Zealand, with two examiners at each station. A random-effects analysis of variance was used to partition the contribution of station construction and mark sheets to interrater correlations from the contribution of the examiners. For one OSCE, a multiple linear regression was used to determine the independent contributions to interrater reliability of the number of checklist items per mark sheet, examiner experience, and examiner involvement in station construction.
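The variance partition described above can be sketched with a one-way random-effects ANOVA: stations are treated as the grouping factor, replicate interrater correlations within each station supply the within-group term, and the between- and within-group variance components are expressed as percentages of the total. This is a minimal illustration with synthetic data, not the authors' actual analysis; the function name, array shapes, and simulated values are assumptions for demonstration only.

```python
import numpy as np

def variance_components(scores: np.ndarray) -> tuple[float, float]:
    """Partition variance with a one-way random-effects ANOVA.

    scores: shape (n_groups, k), e.g. rows are stations and columns
    are replicate interrater correlations observed for that station.
    Returns the percentage of total variance attributable to the
    grouping factor (station/mark sheet) and to the residual
    (examiner) component, respectively.
    """
    n_groups, k = scores.shape
    grand_mean = scores.mean()
    group_means = scores.mean(axis=1)
    # Standard one-way ANOVA mean squares
    msb = k * ((group_means - grand_mean) ** 2).sum() / (n_groups - 1)
    msw = ((scores - group_means[:, None]) ** 2).sum() / (n_groups * (k - 1))
    # Method-of-moments variance components (clamped at zero)
    var_between = max((msb - msw) / k, 0.0)   # station/mark-sheet component
    var_within = msw                          # examiner component
    total = var_between + var_within
    return 100 * var_between / total, 100 * var_within / total

# Synthetic example: 15 stations, 4 replicate correlations each
rng = np.random.default_rng(0)
simulated = rng.normal(loc=0.6, scale=0.1, size=(15, 4))
pct_station, pct_examiner = variance_components(simulated)
print(f"station/mark sheet: {pct_station:.1f}%, examiners: {pct_examiner:.1f}%")
```

Under this decomposition, a small between-station percentage alongside a large residual percentage would mirror the pattern the authors report (10.1% vs. 89.9%), though the simulated numbers here are arbitrary.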
Results: Station construction and mark sheets contributed 10.1%, and examiners contributed 89.9%, to the variation in interrater reliability. In the multivariate analysis, the number of items per mark sheet was negatively associated, and examiner involvement in station construction was positively associated, with interrater reliability. Examiner experience, whether in examining or in clinical medicine, was not associated with interrater reliability. There was a negative but nonsignificant correlation between the number of items on a station's mark sheet and that station's correlation with the aggregate OSCE mark.
Conclusions: The contribution of objective mark sheets to objectivity is relatively minor compared with examiners' contribution. Increasing the number of checklist items per mark sheet decreased both reliability and validity. Achieving objectivity requires diligent examiners who are involved in the whole assessment.