Assessing the Quality of Reports of Randomized Clinical Trials: Is Blinding Necessary?

Control Clin Trials. 1996 Feb;17(1):1-12. doi: 10.1016/0197-2456(95)00134-4.

Abstract

It has been suggested that the quality of clinical trials should be assessed by blinded raters to limit the risk of introducing bias into meta-analyses and systematic reviews, and into the peer-review process. There is very little evidence in the literature to substantiate this. This study describes the development of an instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research and its use to determine the effect of rater blinding on the assessments of quality. A multidisciplinary panel of six judges produced an initial version of the instrument. Fourteen raters from three different backgrounds assessed the quality of 36 research reports in pain research, selected from three different samples. Seven were allocated randomly to perform the assessments under blind conditions. The final version of the instrument included three items. These items were scored consistently by all the raters regardless of background and could discriminate between reports from the different samples. Blind assessments produced significantly lower and more consistent scores than open assessments. The implications of this finding for systematic reviews, meta-analytic research and the peer-review process are discussed.
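The abstract names a three-item instrument but does not spell the items out. In the instrument this paper is widely cited for (the Jadad scale), the three items are randomization (0-2 points), double blinding (0-2 points), and an account of withdrawals and dropouts (0-1 point), for a maximum of 5. A minimal sketch of that scoring, under the assumption that these are the three final items:

```python
def jadad_score(
    randomized: bool,
    randomization_method_appropriate: bool,
    double_blind: bool,
    blinding_method_appropriate: bool,
    withdrawals_accounted_for: bool,
) -> int:
    """Score a trial report on a three-item 0-5 scale.

    Assumed items (not stated in the abstract itself):
      randomization (0-2), double blinding (0-2),
      withdrawals/dropouts described (0-1).
    """
    score = 0
    if randomized:
        score += 1
        # Extra point if the method of randomization is described and appropriate.
        if randomization_method_appropriate:
            score += 1
    if double_blind:
        score += 1
        # Extra point if the method of blinding is described and appropriate.
        if blinding_method_appropriate:
            score += 1
    if withdrawals_accounted_for:
        score += 1
    return score
```

For example, a report described as randomized and double-blind with both methods detailed and dropouts accounted for would score the maximum of 5; a report merely stated to be randomized, with nothing else, would score 1.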

Publication types

  • Meta-Analysis

MeSH terms

  • Double-Blind Method
  • Female
  • Humans
  • Male
  • Pain / drug therapy
  • Patient Dropouts
  • Peer Review / standards
  • Randomized Controlled Trials as Topic / methods
  • Randomized Controlled Trials as Topic / standards*
  • Reproducibility of Results
  • Research Design / standards*
  • Technology Assessment, Biomedical