Quantifying and reporting uncertainty from systematic errors

Epidemiology. 2003 Jul;14(4):459-66. doi: 10.1097/01.ede.0000072106.65262.ae.

Abstract

Optimal use of epidemiologic findings in decision making requires more information than standard analyses provide. It requires calculating and reporting the total uncertainty in the results, which in turn requires methods for quantifying the uncertainty introduced by systematic error. Quantified uncertainty can improve policy and clinical decisions, better direct further research, and aid public understanding, thus enhancing the contributions of epidemiology. The error quantification approach proposed here estimates a probability distribution for a bias-corrected effect measure from externally derived distributions of bias levels. Using Monte Carlo simulation, corrections for multiple biases are combined by identifying the steps through which true causal effects become data and then, in reverse order, correcting for the errors introduced at each step. The bias-correction calculations are the same as those used in sensitivity analysis, but the resulting distribution of possible true values is more than a sensitivity analysis; it is a more complete report of the actual study results. The approach is illustrated with an application to a recent study that led to the drug phenylpropanolamine being removed from the market.
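The Monte Carlo procedure described above can be sketched in a few lines: sample bias parameters from assumed external distributions, then undo each source of error in reverse order to obtain a distribution of bias-corrected effect estimates. The numbers below (observed risk ratio, standard error, and bias-factor distributions) are hypothetical illustrations, not values from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical inputs (illustration only, not the study's actual data):
observed_rr = 3.0   # observed risk ratio
se_log_rr = 0.45    # standard error of log(RR), capturing random error

def simulate_corrected_rr():
    # Step 1 (last error introduced, first corrected): random error,
    # sampled around the observed estimate on the log scale.
    rr = math.exp(random.gauss(math.log(observed_rr), se_log_rr))
    # Step 2: correct for confounding using an assumed bias-factor
    # distribution (here, confounding inflates RR by a factor of 1.0-1.5).
    confounding_factor = random.uniform(1.0, 1.5)
    rr /= confounding_factor
    # Step 3: correct for selection/recall bias with another assumed factor.
    selection_factor = random.triangular(0.9, 1.4, 1.1)
    rr /= selection_factor
    return rr

# Repeat many times to build the distribution of possible true values.
draws = sorted(simulate_corrected_rr() for _ in range(20_000))
median = draws[len(draws) // 2]
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
print(f"bias-corrected RR: median {median:.2f}, "
      f"95% simulation interval ({lo:.2f}, {hi:.2f})")
```

The resulting interval reflects both random error and the assumed systematic errors, and is therefore wider (and typically shifted) relative to a conventional confidence interval; its honesty depends entirely on how well the bias-parameter distributions are justified.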

MeSH terms

  • Appetite Depressants / adverse effects
  • Bias*
  • Decision Making
  • Environment
  • Epidemiologic Studies*
  • Humans
  • Monte Carlo Method
  • Phenylpropanolamine / adverse effects
  • Reproducibility of Results
  • Sensitivity and Specificity

Substances

  • Appetite Depressants
  • Phenylpropanolamine