For meta-analysis, substantial uncertainty remains about the most appropriate statistical methods for combining the results of separate trials. An important issue for meta-analysis is how to incorporate heterogeneity, defined as variation among the results of individual trials beyond that expected from chance, into summary estimates of treatment effect. Another consideration is which 'metric' to use to measure treatment effect; for trials with binary outcomes, there are several possible metrics, including the odds ratio (a relative measure) and the risk difference (an absolute measure). To examine empirically how assessments of treatment effect and heterogeneity may differ when different methods are used, we studied 125 meta-analyses representative of those performed by clinical investigators. There was no meta-analysis in which the summary risk difference and odds ratio were discrepant to the extent that one indicated significant benefit while the other indicated significant harm. Further, for most meta-analyses, summary odds ratios and risk differences agreed in statistical significance, leading to similar conclusions about whether treatments affected outcome. Heterogeneity was common regardless of whether treatment effects were measured by odds ratios or risk differences. However, risk differences usually displayed more heterogeneity than odds ratios. Random-effects estimates, which incorporate heterogeneity, tended to be less precise than fixed-effects estimates. We present two exceptions to these observations, which derive from the weights assigned to individual trial estimates. We discuss the implications of these findings for selection of a metric for meta-analysis and incorporation of heterogeneity into summary estimates. Published in 2000 by John Wiley & Sons, Ltd.
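To make the quantities discussed above concrete, the following sketch computes per-trial odds ratios and risk differences from 2x2 tables and pools them with standard inverse-variance fixed-effects weighting and DerSimonian-Laird random-effects weighting. The trial counts are hypothetical illustrations, not data from the 125 meta-analyses studied; this is a minimal sketch of the generic methods, not the authors' exact analysis.

```python
import math

# Hypothetical trial data: (events_treated, n_treated, events_control, n_control).
# Illustrative numbers only; not drawn from the meta-analyses in the study.
trials = [
    (15, 100, 25, 100),
    (8, 60, 14, 60),
    (30, 200, 38, 200),
]

def log_odds_ratio(a, n1, c, n2):
    """Log odds ratio for one trial, with its approximate (Woolf) variance."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def risk_difference(a, n1, c, n2):
    """Risk difference for one trial, with its approximate binomial variance."""
    p1, p2 = a / n1, c / n2
    return p1 - p2, p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2

def fixed_effect(estimates):
    """Inverse-variance fixed-effects pooled estimate and its variance."""
    w = [1 / v for _, v in estimates]
    pooled = sum(wi * e for wi, (e, _) in zip(w, estimates)) / sum(w)
    return pooled, 1 / sum(w)

def dersimonian_laird(estimates):
    """Random-effects pooling: widen trial variances by the DL tau^2 estimate."""
    w = [1 / v for _, v in estimates]
    fe, _ = fixed_effect(estimates)
    q = sum(wi * (e - fe) ** 2 for wi, (e, _) in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # between-trial variance
    w_star = [1 / (v + tau2) for _, v in estimates]
    pooled = sum(wi * e for wi, (e, _) in zip(w_star, estimates)) / sum(w_star)
    return pooled, 1 / sum(w_star)

or_stats = [log_odds_ratio(*t) for t in trials]
rd_stats = [risk_difference(*t) for t in trials]

or_fe, or_fe_var = fixed_effect(or_stats)
or_re, or_re_var = dersimonian_laird(or_stats)
rd_fe, rd_fe_var = fixed_effect(rd_stats)

print(f"Summary odds ratio, fixed effects:   {math.exp(or_fe):.2f}")
print(f"Summary odds ratio, random effects:  {math.exp(or_re):.2f}")
print(f"Summary risk difference, fixed:      {rd_fe:+.3f}")
```

Because the random-effects weights add the between-trial variance tau^2 to each trial's variance, the random-effects summary can never be more precise than the fixed-effects summary, matching the abstract's observation about precision.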