Empirical Evidence of Associations Between Trial Quality and Effect Size [Internet]

Review
Rockville (MD): Agency for Healthcare Research and Quality (US); 2011 Jun. Report No.: 11-EHC045-EF.

Excerpt

Objectives: To examine the empirical evidence for associations between a set of proposed quality criteria and estimates of effect sizes in randomized controlled trials across a variety of clinical fields and to explore variables potentially influencing the association.

Methods: We applied quality criteria to three large datasets of 216, 165, and 100 trials, drawn from a variety of meta-analyses covering a wide range of topics and clinical interventions. We assessed the relationship between quality and effect sizes for 11 individual criteria (randomization sequence, allocation concealment, similar baseline, assessor blinding, care provider blinding, patient blinding, acceptable dropout rate, intention-to-treat analysis, similar cointerventions, acceptable compliance, similar outcome assessment timing) as well as summary scores. Inter-item relationships were explored using psychometric techniques. We investigated moderators and confounders affecting the association between quality and effect sizes across datasets.
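The summary score described above can be sketched as a simple count of criteria met. This is an illustration only: the criterion names follow the list in the text, but the trial record format and scoring function are hypothetical, not the report's actual instrument.

```python
# Sketch: summary quality score as the count of criteria met (0-11).
# The 11 criterion names follow the list in the Methods paragraph;
# the dict-based trial record is a hypothetical representation.
CRITERIA = [
    "randomization_sequence", "allocation_concealment", "similar_baseline",
    "assessor_blinding", "care_provider_blinding", "patient_blinding",
    "acceptable_dropout_rate", "intention_to_treat", "similar_cointerventions",
    "acceptable_compliance", "similar_outcome_timing",
]

def summary_score(trial: dict) -> int:
    """Count how many of the 11 quality criteria a trial meets.

    Criteria not reported are treated as not met, mirroring the common
    handling of insufficiently reported items.
    """
    return sum(1 for criterion in CRITERIA if trial.get(criterion, False))

# Hypothetical trial meeting the first 6 criteria only.
example_trial = {criterion: True for criterion in CRITERIA[:6]}
print(summary_score(example_trial))  # -> 6
```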

Results: Quality levels varied across datasets. Many studies did not report sufficient information to judge methodological quality. Some individual quality features were substantially intercorrelated, but a total score did not show high overall internal consistency (Cronbach's α = 0.55 to 0.61). A factor analysis-based model suggested three distinct quality domains. Allocation concealment was consistently associated with slightly smaller treatment effect estimates across all three datasets; results for the other individual criteria varied. In dataset 1, the 11 individual criteria were consistently associated with lower estimated effect sizes. Dataset 2 showed some unexpected results; for several dimensions, studies meeting quality criteria reported larger effect sizes. Dataset 3 showed some variation across criteria. There was no statistically significant linear association of a summary scale or factor scores with effect sizes. Applying a cutoff of 5 or 6 criteria met (out of 11) differentiated high- and low-quality studies best. The effect size difference for a cutoff at 5 was -0.20 (95% confidence interval [CI]: -0.34, -0.06) in dataset 1, and the corresponding ratio of odds ratios in dataset 3 was 0.79 (95% CI: 0.63, 0.95). These associations indicated that low-quality trials tended to overestimate treatment effects. This observation could not be replicated with dataset 2, suggesting the influence of confounders and moderators. The size of the treatment effect, the condition being treated, the type of outcome, and the variance in effect sizes did not sufficiently explain the differential associations between quality and effect sizes but warrant further exploration in explaining variation between datasets.
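Two computations in the Results can be made concrete: the internal-consistency statistic (Cronbach's α) for the 11 binary quality items, and the dichotomization of trials at a cutoff of 5 criteria met. The sketch below uses the standard Cronbach's α formula on entirely hypothetical 0/1 ratings; it does not reproduce the report's data or its α of 0.55 to 0.61.

```python
# Sketch: Cronbach's alpha for binary quality-criterion ratings, and
# dichotomization at a cutoff of 5 criteria met. The 4x11 rating
# matrix is hypothetical illustration data, not the report's datasets.

def cronbach_alpha(item_matrix):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).

    item_matrix: list of trials, each a list of k 0/1 item scores.
    Variances use the population convention (divide by n).
    """
    n = len(item_matrix)
    k = len(item_matrix[0])

    def var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in item_matrix]) for j in range(k)]
    total_var = var([sum(row) for row in item_matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def is_high_quality(row, cutoff=5):
    """Dichotomize a trial's 0/1 ratings: high quality if >= cutoff criteria met."""
    return sum(row) >= cutoff

# Hypothetical 0/1 ratings for 4 trials on 11 criteria.
ratings = [
    [1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0],  # 7 criteria met -> high quality
    [1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0],  # 3 criteria met -> low quality
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1],  # 10 criteria met -> high quality
    [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0],  # 2 criteria met -> low quality
]

alpha = cronbach_alpha(ratings)
high_quality_flags = [is_high_quality(row) for row in ratings]
print(high_quality_flags)  # -> [True, False, True, False]
```

In an analysis like the report's, the effect sizes of the high- and low-quality groups would then be compared (e.g., as a difference in standardized mean differences or a ratio of odds ratios).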

Conclusions: Effect sizes of individual studies depend on many factors. The conditions where quality features lead to biased effect sizes warrant further exploration.

Publication types

  • Review

Grants and funding

Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, Contract No. HHSA 290-2007-10062-I. Prepared by: Southern California Evidence-based Practice Center, Santa Monica, CA.