A new vocabulary and other innovations for improving descriptive in-training evaluations

Acad Med. 1999 Nov;74(11):1203-7. doi: 10.1097/00001888-199911000-00012.

Abstract

Progress in improving the credibility of teachers' descriptive evaluations of students and residents has not kept pace with the progress made in improving the credibility of more quantified methods, such as multiple-choice examinations and standardized patient examinations of clinical skills. This article addresses innovative approaches to making the ongoing in-training evaluation (ITEv) of trainees during their clinical experiences more reliable and valid. The innovations include the development of a standard vocabulary for describing the progress of trainees from "reporter" to "interpreter" to "manager" and "educator" (RIME), the use of formal evaluation sessions, and closer consideration of the unit of clinical evaluation (the case, the rotation, or the year). The author also discusses initial results of studies assessing the reliability and validity of descriptive methods, as well as the use of quantified methods to complement descriptive methods. Applying basic principles (the use of a taxonomy of professional development and statistical principles of reliability and validity) may foster research into more credible descriptive evaluation of clinical skills.

MeSH terms

  • Clinical Competence*
  • Education, Medical / standards*
  • Educational Measurement / methods*
  • Educational Measurement / standards
  • Humans
  • Terminology as Topic