Editors can lead researchers to confidence intervals, but can't make them think: statistical reform lessons from medicine

Psychol Sci. 2004 Feb;15(2):119-26. doi: 10.1111/j.0963-7214.2004.01502008.x.


Abstract

Since the mid-1980s, confidence intervals (CIs) have been standard in medical journals. We sought lessons for psychology from medicine's experience with statistical reform by investigating two attempts by Kenneth Rothman to change statistical practices. We examined 594 American Journal of Public Health (AJPH) articles published between 1982 and 2000 and 110 Epidemiology articles published in 1990 and 2000. Rothman's editorial instruction to report CIs and not p values was largely effective: In AJPH, sole reliance on p values dropped from 63% to 5%, and CI reporting rose from 10% to 54%; Epidemiology showed even stronger compliance. However, compliance was superficial: Very few authors referred to CIs when discussing results. The results of our survey support what other research has indicated: Editorial policy alone is not a sufficient mechanism for statistical reform. Achieving substantial, desirable change will require further guidance regarding use and interpretation of CIs and appropriate effect size measures. Necessary steps will include studying researchers' understanding of CIs, improving education, and developing empirically justified recommendations for improved statistical practice.
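To make concrete what "reporting a CI rather than only a p value" looks like, here is a minimal illustrative sketch (not from the paper) that computes a Wald 95% confidence interval for a proportion using the standard normal approximation. The sample figures (54 of a hypothetical 100 articles reporting CIs) are assumed for illustration only and do not reproduce the study's actual per-year sample sizes.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Wald 95% confidence interval for a proportion (normal approximation).

    Returns (lower, upper) bounds around the observed proportion.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical example: 54 of 100 sampled articles report CIs.
lo, hi = proportion_ci(54, 100)
print(f"54% (95% CI: {lo:.1%} to {hi:.1%})")  # → 54% (95% CI: 44.2% to 63.8%)
```

Unlike a bare p value, the interval conveys both the estimate and its precision, which is the interpretive practice the authors found largely missing even among compliant articles.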

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Biomedical Research / education
  • Biomedical Research / statistics & numerical data*
  • Confidence Intervals*
  • Curriculum / trends
  • Editorial Policies*
  • Forecasting
  • Humans
  • Manuscripts, Medical as Topic*
  • Periodicals as Topic
  • Statistics as Topic / education
  • Thinking*