Detection of Associations Between Trial Quality and Effect Sizes [Internet]

Review
Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Jan. Report No.: 12-EHC010-EF.

Excerpt

Objectives: To examine associations between a set of trial quality criteria and effect sizes and to explore factors influencing the detection of associations in meta-epidemiological datasets.

Data Sources: The analyses are based on four meta-epidemiological datasets. Each dataset comprises a number of meta-analyses and contains between 100 and 216 controlled trials. These datasets have “known” properties, as they were used in published research to investigate associations between quality and effect sizes. In addition, we created datasets using Monte Carlo simulation methods to examine their properties.

Review Methods: We identified treatment-effect meta-analyses, the trials they included, and the reported treatment effects to assemble four meta-epidemiological datasets. We assessed quality and risk-of-bias indicators using the 11 Cochrane Back Review Group (CBRG) criteria. In addition, we applied the Jadad criteria, the criteria proposed by Schulz (e.g., allocation concealment), and the Cochrane Risk of Bias tool. We investigated the effect of individual criteria and of quantitative summary scores on reported treatment effect sizes. We explored potential reasons for differences in associations across meta-epidemiological datasets, clinical fields, and individual meta-analyses. Using Monte Carlo simulations, we investigated factors that influence the power to detect associations between quality and effect sizes; a small illustrative simulation is sketched below.
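The power simulations are not described in detail in this excerpt. The sketch below is an illustration rather than the authors' code: trial-level log odds ratios are generated with residual between-trial heterogeneity (tau) plus a bias of log(ROR) in low-quality trials, and a regression of effect size on a quality indicator is tested for significance. All names and parameter values (n_trials, ror, tau, se_trial) are illustrative assumptions, not figures from the report.

```python
# Minimal sketch of a meta-epidemiological power simulation (illustrative only).
import numpy as np
from scipy import stats

def power_to_detect_ror(n_trials=200, ror=0.86, tau=0.2, se_trial=0.25,
                        n_sims=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    low_quality = (np.arange(n_trials) < n_trials // 2).astype(float)  # half flagged low quality
    X = np.column_stack([np.ones(n_trials), low_quality])              # intercept + quality flag
    hits = 0
    for _ in range(n_sims):
        # True trial effects: residual heterogeneity plus a bias term in low-quality trials.
        theta = rng.normal(0.0, tau, n_trials) + np.log(ror) * low_quality
        # Observed log odds ratios with within-trial sampling error.
        y = theta + rng.normal(0.0, se_trial, n_trials)
        # Ordinary least squares (equivalent to inverse-variance weighting here,
        # because se_trial is the same for every trial in this sketch).
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n_trials - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        z = beta[1] / np.sqrt(cov[1, 1])
        if 2 * stats.norm.sf(abs(z)) < alpha:  # two-sided test of the quality coefficient
            hits += 1
    return hits / n_sims

# Larger residual heterogeneity (tau) lowers the power to detect the same ROR.
print(power_to_detect_ror(tau=0.1), power_to_detect_ror(tau=0.5))
```

In this toy setup, raising tau while holding the ROR fixed sharply reduces the fraction of simulated datasets in which the quality effect reaches significance, which mirrors the report's point about residual heterogeneity.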

Results: Associations between quality and effect sizes were small, e.g., the ratio of odds ratios (ROR) for unconcealed (vs. concealed) trials was 0.89 (95% CI: 0.73, 1.09; n.s.), but consistent across the CBRG criteria. Based on a quantitative summary score, a cut-off of six or more criteria met (out of 11) best differentiated low- and high-quality trials, with lower-quality trials reporting larger treatment effects (ROR 0.86, 95% CI: 0.70, 1.06; n.s.). Results for evidence of bias varied across datasets, clinical fields, and individual meta-analyses. The simulations showed that the power to detect quality effects is, to a large extent, determined by the degree of residual heterogeneity present in the dataset.
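For context, the ratio of odds ratios reported above is conventionally defined as the pooled treatment odds ratio in trials not meeting a quality criterion divided by that in trials meeting it; the formulation below is this standard definition rather than an equation quoted from the report.

\[
\mathrm{ROR} \;=\; \frac{\mathrm{OR}_{\text{criterion not met}}}{\mathrm{OR}_{\text{criterion met}}},
\qquad
\mathrm{ROR} < 1 \;\Rightarrow\; \text{lower-quality trials report larger apparent treatment effects.}
\]

On this scale, an ROR of 0.89 for unconcealed trials corresponds to treatment odds ratios that are, on average, about 11 percent lower than in concealed trials, i.e., a larger apparent benefit when an odds ratio below 1 favors treatment.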

Conclusions: Although trial quality may explain some of the heterogeneity across trial results in meta-analyses, the amount of residual heterogeneity in effect sizes is a crucial factor in determining whether associations between quality and effect sizes can be detected. Detecting quality moderator effects requires more statistically powerful analyses than are employed in most investigations.

Publication types

  • Review

Grants and funding

Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, Contract No. HHSA 290-2007-10056-I. Prepared by: Southern California Evidence-based Practice Center.