Background: Meta-analysis handles randomized trials with no outcome events in either the treatment or the control arm (zero total event trials) inconsistently, including them when the risk difference (RD) is the effect measure but excluding them when the relative risk (RR) or odds ratio (OR) is used. This study examined the influence of such trials on pooled treatment effects.
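As a brief illustration of why the measures behave differently, consider a trial with zero events among $n_T$ treated and $n_C$ control participants (notation introduced here only for illustration); the RD remains defined while the RR and OR do not:

$$
\mathrm{RD} = \frac{0}{n_T} - \frac{0}{n_C} = 0, \qquad
\mathrm{RR} = \frac{0/n_T}{0/n_C} = \frac{0}{0}\ \text{(undefined)}, \qquad
\mathrm{OR} = \frac{0 \,(n_C - 0)}{(n_T - 0)\, 0} = \frac{0}{0}\ \text{(undefined)}.
$$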
Methods: Three illustrative published meta-analyses, chosen to span a range of proportions of zero total event trials, treatment effects, and heterogeneity, were analysed with and without their zero total event trials using inverse variance weighting under a random-effects model that incorporates between-study heterogeneity.
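A minimal sketch of this kind of analysis is given below, assuming a DerSimonian-Laird random-effects model, the RD as the effect measure, and a 0.5 continuity correction for trials with zero cells; the trial data, correction, and model choice are illustrative assumptions, not the specific procedure or data of this study.

```python
import numpy as np

def risk_difference(a, n_t, c, n_c, correction=0.5):
    """Risk difference and its variance for a single trial.

    A 0.5 continuity correction (an assumption for illustration) is added to
    every cell of a trial containing a zero cell so that the variance is
    defined and the trial can receive an inverse-variance weight.
    """
    if 0 in (a, n_t - a, c, n_c - c):
        a, c = a + correction, c + correction
        n_t, n_c = n_t + 2 * correction, n_c + 2 * correction
    p_t, p_c = a / n_t, c / n_c
    rd = p_t - p_c
    var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
    return rd, var

def dersimonian_laird(effects, variances):
    """Inverse-variance random-effects pooling (DerSimonian-Laird tau^2)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# Hypothetical trials: (events_treatment, n_treatment, events_control, n_control);
# the last trial has zero events in both arms.
trials = [(3, 100, 8, 100), (1, 50, 4, 50), (0, 200, 0, 200)]

for label, data in [("with zero total event trials", trials),
                    ("without zero total event trials",
                     [t for t in trials if t[0] + t[2] > 0])]:
    rd, var = zip(*(risk_difference(*t) for t in data))
    mu, se, tau2 = dersimonian_laird(rd, var)
    print(f"{label}: RD = {mu:.4f} "
          f"(95% CI {mu - 1.96 * se:.4f} to {mu + 1.96 * se:.4f}), tau^2 = {tau2:.5f}")
```

The same pooling routine can be applied to log RR or log OR estimates once the zero total event trials are either excluded or continuity-corrected.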
Results: Including zero total event trials in a meta-analysis moves the pooled estimate of treatment effect closer to the null, narrows its confidence interval, and decreases the estimated between-study heterogeneity. For RR and OR, inclusion of such trials produces only small changes, even when they constitute a large majority of the included trials. For RD, the changes are more substantial and, in extreme cases, can result in the loss of statistical significance of the pooled effect estimate.
Conclusion: To incorporate all relevant data regardless of the effect measure chosen, reviewers should also include zero total event trials when calculating pooled estimates using OR and RR.