Challenges in systematic reviews that assess treatment harms

Ann Intern Med. 2005 Jun 21;142(12 Pt 2):1090-9. doi: 10.7326/0003-4819-142-12_part_2-200506211-00009.

Abstract

An evidence synthesis of a medical intervention should assess the balance of benefits and harms. Investigators performing systematic reviews of harms face challenges in finding data, rating the quality of harms reporting, and synthesizing and displaying data from different sources. Systematic reviews of harms often rely primarily on published clinical trials. Identifying important harms of treatment and quantifying the risk associated with them, however, often require a broader range of data sources, including unpublished trials, observational studies, and unpublished information on published trials submitted to the U.S. Food and Drug Administration. Each source of data has some potential for yielding important information. Criteria for judging the quality of harms assessment and reporting are still in their early stages of development. Investigators conducting systematic reviews of harms should consider empirically validating the criteria they use to judge the validity of studies reporting harms. Synthesizing harms data from different sources requires careful consideration of internal validity, applicability, and sources of heterogeneity. This article highlights examples of approaches to methodologic issues associated with performing systematic reviews of harms from 96 Evidence-based Practice Center evidence reports.
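To make the synthesis and heterogeneity points concrete, the sketch below pools adverse-event counts from three hypothetical trials with a DerSimonian-Laird random-effects model and reports the pooled odds ratio alongside the I-squared statistic. The study data, the choice of Python, and the choice of this particular estimator are illustrative assumptions for exposition and are not drawn from the article.

    # Illustrative sketch only: minimal DerSimonian-Laird random-effects pooling
    # of adverse-event log odds ratios across studies. All counts are hypothetical.
    import math

    # Each tuple: (events_treatment, n_treatment, events_control, n_control)
    studies = [
        (12, 200, 5, 198),
        (30, 450, 18, 455),
        (3, 120, 1, 118),
    ]

    def log_or_and_variance(a, n1, c, n2):
        """Log odds ratio and its variance, with a 0.5 continuity correction."""
        b, d = n1 - a, n2 - c
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d
        return log_or, var

    effects = [log_or_and_variance(*s) for s in studies]
    y = [est for est, _ in effects]
    v = [var for _, var in effects]

    # Fixed-effect weights and Cochran's Q, used to estimate between-study variance (tau^2)
    w = [1 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1
    c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_term)

    # Random-effects pooled estimate with a 95% confidence interval
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    print(f"Pooled OR = {math.exp(pooled):.2f}, 95% CI {low:.2f}-{high:.2f}, I^2 = {i2:.0f}%")

In practice, a review of harms would also have to decide whether trial and observational estimates can be combined at all, and would explore sources of heterogeneity (population, dose, follow-up, outcome definitions) rather than relying on a single pooled number.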

Publication types

  • Research Support, U.S. Gov't, P.H.S.
  • Review

MeSH terms

  • Evidence-Based Medicine / methods*
  • Humans
  • Research Design / standards
  • Review Literature as Topic*
  • Therapeutics / adverse effects*
  • Therapeutics / standards*