Publication bias, statistical power and reporting practices in the Journal of Sports Sciences: potential barriers to replicability

J Sports Sci. 2023 Sep;41(16):1507-1517. doi: 10.1080/02640414.2023.2269357. Epub 2023 Dec 11.

Abstract

Two factors that decrease the replicability of studies in the scientific literature are publication bias and underpowered designs. One way to ensure that studies have adequate statistical power to detect the effect size of interest is to conduct a-priori power analyses. Yet, a previous editorial published in the Journal of Sports Sciences reported a median sample size of 19 and scarce use of a-priori power analyses. We meta-analysed 89 studies from the same journal to assess the presence and extent of publication bias, as well as the average statistical power, by conducting a z-curve analysis. In a larger sample of 174 studies, we also examined a) the usage, reporting practices and reproducibility of a-priori power analyses; and b) the prevalence of reporting t-statistics or F-ratios, degrees of freedom, exact p-values, effect sizes and confidence intervals. Our results showed some indication of publication bias, and the average observed power was low (53% across significant and non-significant findings; 61% for significant findings only). Finally, the usage and reporting of a-priori power analyses, as well as of statistical results including test statistics, effect sizes and confidence intervals, were suboptimal.
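
Illustrative sketch (not from the article): the abstract refers to two calculations, an a-priori power analysis used to plan a sample size, and observed power derived from reported test statistics, which z-curve aggregates across studies. The Python sketch below, using statsmodels and scipy, shows both ideas under assumed inputs (a hypothetical Cohen's d of 0.5 and a single reported z-value). It is not the authors' analysis code; z-curve itself fits a mixture model to many z-values rather than the single-study calculation shown here.

    from scipy.stats import norm
    from statsmodels.stats.power import TTestIndPower

    # A-priori power analysis: sample size per group needed to detect a
    # hypothetical effect of Cohen's d = 0.5 (an assumed value for
    # illustration) with 80% power at alpha = .05, two-sided t-test.
    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"Required n per group: {n_per_group:.1f}")  # ~63.8, i.e. 64 per group

    def observed_power(z: float, alpha: float = 0.05) -> float:
        """Power of a two-sided z-test if the true effect equals the observed z."""
        crit = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = .05
        return norm.sf(crit - z) + norm.cdf(-crit - z)

    # A single just-significant result (z ~= 2.0) implies only ~52% power.
    print(f"Observed power at z = 2.0: {observed_power(2.0):.2f}")

Note that a just-significant result implies observed power of roughly 50%, comparable in magnitude to the average observed power reported in the abstract, which is why a literature dominated by barely significant findings signals low power.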

Keywords: Replicability; publication bias; reporting practices; reproducibility; statistical power.

MeSH terms

  • Bias
  • Humans
  • Publication Bias
  • Reproducibility of Results
  • Research Design*
  • Sample Size