Improving the utility of evidence synthesis for decision makers in the face of insufficient evidence

J Clin Epidemiol. 2021 Jul;135:170-175. doi: 10.1016/j.jclinepi.2021.02.028. Epub 2021 Mar 19.


Objective: To identify and suggest strategies to make insufficient evidence ratings in systematic reviews more actionable.

Study design and setting: A workgroup comprising members of the Evidence-based Practice Center (EPC) Program of the Agency for Healthcare Research and Quality convened throughout 2020. We conducted iterative discussions drawing on three data sources: a literature review for relevant publications and frameworks, a review of a convenience sample of past systematic reviews conducted by the EPCs, and an audit of methods used in past EPC technical briefs.

Results: We identified five strategies for supplementing systematic review findings when evidence on benefits or harms is expected to be, or found to be, insufficient: 1) reconsider eligible study designs, 2) summarize indirect evidence, 3) summarize contextual and implementation evidence, 4) consider modeling, and 5) incorporate unpublished health system data into the evidence synthesis. While these strategies may not increase the strength of evidence, they may improve the utility of reports for decision makers. Adopting them depends on feasibility, timeline, funding, and the expertise of the systematic reviewers.

Conclusion: Throughout the evidence synthesis process, from early scoping and protocol development through review conduct and presentation, authors can consider these five strategies to supplement evidence rated as insufficient and make it more actionable for end users.

Keywords: Decision-making; Health systems; Insufficient evidence; Systematic reviews; Translation.

Publication types

  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Decision Making*
  • Evidence-Based Practice / methods*
  • Humans
  • Research Design / statistics & numerical data*
  • Systematic Reviews as Topic / methods*