Assessing "best evidence": issues in grading the quality of studies for systematic reviews

Jt Comm J Qual Improv. 1999 Sep;25(9):470-9. doi: 10.1016/s1070-3241(16)30461-8.

Background: Evidence-based medicine, clinical practice guidelines, quality and value of health services, and science-based decision making are becoming mainstays of the health care sector. As part of the evidence-based movement, systematic reviews of the literature on clinical questions are becoming increasingly common. Part of the structured approach to evaluating the literature involves assessing the quality of individual studies included in systematic reviews.

Review questions: To clarify issues in this area, in 1998 the Agency for Health Care Policy and Research commissioned a small project to determine how its 12 Evidence-based Practice Centers were carrying out this part of their systematic reviews (called evidence reports). The number of potential checklists, scales, and similar tools for grading the methodology or the clinical relevance of individual reports is large; the reliability, validity, feasibility, and utility of these tools are either unmeasured or quite variable.

Conclusions: Numerous methodologic questions await definitive research and answers, but in the meantime teams developing authoritative systematic reviews can take certain steps to ensure that their approaches to grading the quality of articles meet applicable scientific standards. Clinicians, program administrators, and health policymakers can then be confident in the overall strength of the evidence and study conclusions.

Publication types

  • Research Support, U.S. Gov't, P.H.S.
  • Review

MeSH terms

  • Bias
  • Data Interpretation, Statistical*
  • Evidence-Based Medicine*
  • Humans
  • Meta-Analysis as Topic*
  • Randomized Controlled Trials as Topic / standards
  • Reproducibility of Results
  • Research Design / standards*
  • Sensitivity and Specificity
  • United States
  • United States Agency for Healthcare Research and Quality