The effect of a Structured Question Grid on the validity and perceived fairness of a medical long case assessment

Med Educ. 2000 Jan;34(1):46-52. doi: 10.1046/j.1365-2923.2000.00465.x.

Abstract

Problem: A perception that the reliability of our oral assessments of clinical competence was undermined by a lack of consistency in questioning.

Design: Parallel group controlled trial of a Structured Question Grid for use in clinical assessments. The grid required assessors to see the patient personally in advance of the student and to write down, for each case, the points they wished to examine. Assessors were limited to two questions on each point: one designated a pass question and one pitched at a higher level. Three basic science and three clinical reasoning issues were required, so a total of 12 questions (six issues, two questions each) was allowed.
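
The grid's constraints can be pictured as a simple data structure. Below is a minimal sketch in Python; all names are hypothetical, since the paper specifies only the constraints of the instrument (six issues per case, two questions per issue), not any software:

```python
from dataclasses import dataclass, field

# Hypothetical model of the Structured Question Grid described above.
# The paper specifies only the constraints: three basic science and
# three clinical reasoning issues per case, each with exactly one
# pass-level and one higher-level question, i.e. 12 questions total.

CATEGORIES = ("basic science", "clinical reasoning")

@dataclass
class Issue:
    topic: str             # point the assessor wishes to examine
    category: str          # "basic science" or "clinical reasoning"
    pass_question: str     # question pitched at pass level
    higher_question: str   # question pitched at a higher level

@dataclass
class StructuredQuestionGrid:
    issues: list[Issue] = field(default_factory=list)

    def add_issue(self, issue: Issue) -> None:
        # Enforce the two-questions-per-point, three-issues-per-category limit.
        if issue.category not in CATEGORIES:
            raise ValueError(f"unknown category: {issue.category!r}")
        if sum(i.category == issue.category for i in self.issues) >= 3:
            raise ValueError(f"already three {issue.category} issues")
        self.issues.append(issue)

    def validate(self) -> None:
        # A complete grid has exactly three issues of each category,
        # giving the 12 questions the design allows.
        for category in CATEGORIES:
            if sum(i.category == category for i in self.issues) != 3:
                raise ValueError(f"exactly three {category} issues required")
```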

Setting: Small (70 students/year) undergraduate medical school with an integrated, problem-based curriculum.

Subjects: Sixty-seven students in the fourth year of a 5-year course were assessed, each seeing one patient and being examined by a pair of assessors. Assessor pairs were allocated to use the Structured Question Grid or to assess according to their usual practice.

Results: After the assessment, but before being informed of the result, the students completed a questionnaire on their experience and gave their own performance a score between 0 and 100. The questions were based on focus group discussions with a previous student cohort and concerned principally the perceived fairness and subjective validity of the assessment. The assessors independently completed a similar questionnaire, gave the student's performance a score between 0 and 100, and assigned an overall pass/fail grade.

Conclusions: No difference was detected in students' or assessors' views of the fairness of the assessment between assessor pairs who used the Structured Question Grid and those who did not. Students whose assessors used the grid considered the assessment less representative of their ability. No difference was detected in the chance of students being assessed as failing, or in the likelihood of a discrepancy between students' and assessors' ratings of a student as passing or failing.

Publication types

  • Clinical Trial
  • Controlled Clinical Trial

MeSH terms

  • Clinical Competence
  • Curriculum
  • Education, Medical, Undergraduate / standards*
  • Educational Measurement / standards*
  • Humans
  • Surveys and Questionnaires*