High impact = high statistical standards? Not necessarily so

PLoS One. 2013;8(2):e56180. doi: 10.1371/journal.pone.0056180. Epub 2013 Feb 13.

Abstract

What are the statistical practices of articles published in journals with a high impact factor? Do they differ from those of articles published in journals with somewhat lower impact factors that have adopted editorial policies to reduce the impact of the limitations of Null Hypothesis Significance Testing (NHST)? To investigate these questions, the current study analyzed all articles on psychological, neuropsychological, and medical issues published in 2011 in four journals with high impact factors (Science, Nature, The New England Journal of Medicine, and The Lancet) and three journals with relatively lower impact factors (Neuropsychology, Journal of Experimental Psychology: Applied, and the American Journal of Public Health). The results show that NHST without any use of confidence intervals, effect sizes, prospective power, or model estimation is the prevalent statistical practice in articles published in Nature (89%), followed by articles published in Science (42%). By contrast, in all the other journals, whether with high or lower impact factors, most articles report confidence intervals and/or effect size measures. We interpret these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means of improving statistical practice in journals with either high or low impact factors.
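As a rough illustration of the reporting practice the abstract contrasts with bare NHST, the sketch below (hypothetical data, Python standard library only; not taken from the study) reports a mean difference together with an effect size (Cohen's d) and an approximate 95% confidence interval, rather than a p-value alone:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical samples for two groups (illustrative only).
a = [5.1, 4.9, 5.6, 5.8, 5.3, 5.4, 5.0, 5.7]
b = [4.6, 4.8, 4.4, 5.0, 4.7, 4.5, 4.9, 4.3]

diff = mean(a) - mean(b)  # observed mean difference

# Pooled standard deviation across the two groups.
n_a, n_b = len(a), len(b)
sp = (((n_a - 1) * stdev(a) ** 2 + (n_b - 1) * stdev(b) ** 2)
      / (n_a + n_b - 2)) ** 0.5

# Effect size: Cohen's d (mean difference in pooled-SD units).
d = diff / sp

# Approximate 95% CI for the mean difference (z-based; a t-based
# interval would be slightly wider for samples this small).
se = sp * (1 / n_a + 1 / n_b) ** 0.5
z = NormalDist().inv_cdf(0.975)
ci = (diff - z * se, diff + z * se)

print(f"mean difference = {diff:.2f}")
print(f"Cohen's d       = {d:.2f}")
print(f"95% CI          = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Reporting the interval and the effect size conveys the magnitude and precision of the difference, which a dichotomous significant/non-significant verdict does not.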

MeSH terms

  • Bibliometrics*
  • Humans
  • Journal Impact Factor*
  • Periodicals as Topic / standards*
  • Periodicals as Topic / statistics & numerical data
  • Publishing / standards
  • Publishing / statistics & numerical data
  • Research Design / standards*
  • Research Design / statistics & numerical data

Grant support

This study received no funding.