Objective: The evaluation and training of raters who conduct efficacy evaluations in clinical trials are important methodological variables that are often overlooked. Few rater training programs focus on teaching and assessing applied clinical skills, and even fewer have been empirically examined for efficacy. The goal of this study was to develop a comprehensive, standardized, interactive rater training program using new technologies, and to compare the relative effectiveness of this approach with "traditional" rater training in a multi-center clinical trial.
Method: Twelve sites from a 22-site multi-center study were randomly selected to participate (6 traditional, 6 enriched). Traditional training consisted of an overview of scoring conventions, watching and scoring videotapes with discussion, and observation of interviews in small groups with feedback. Enriched training consisted of an interactive web tutorial and live, remote observation of trainees conducting interviews with real or standardized patients via video- or teleconference. Outcome measures included a didactic exam on conceptual knowledge and blinded ratings of trainees' audiotaped interviews.
Results: A significant difference was found between enriched and traditional training in pre-to-post training improvement in didactic knowledge, t(27)=4.2, p<0.0001. Enriched trainees' clinical skills also improved significantly more than those of traditional trainees, t(56)=2.1, p=0.035. All trainees found the applied training helpful and wanted similar web tutorials for other scales.
Conclusions: Results support the efficacy of enriched rater training in improving both conceptual knowledge and applied skills. Remote technologies enhance training efforts and make training more accessible and cost-effective. Future rater training efforts should be subjected to empirical evaluation and should include training in applied skills.