An evidence synthesis of a medical intervention should assess the balance of benefits and harms. Investigators performing systematic reviews of harms face challenges in finding data, rating the quality of harms reporting, and synthesizing and displaying data from different sources. Systematic reviews of harms often rely primarily on published clinical trials. Identifying important harms of treatment and quantifying their associated risks, however, often require a broader range of data sources, including unpublished trials, observational studies, and unpublished information on published trials submitted to the U.S. Food and Drug Administration. Each of these sources can yield important information, though each has its own strengths and limitations. Criteria for judging the quality of harms assessment and reporting are still at an early stage of development. Investigators conducting systematic reviews of harms should consider empirically validating the criteria they use to appraise studies reporting harms. Synthesizing harms data from different sources requires careful consideration of internal validity, applicability, and sources of heterogeneity. This article highlights examples of approaches to methodologic issues associated with performing systematic reviews of harms from 96 Evidence-based Practice Center evidence reports.