Deep Impact: Unintended Consequences of Journal Rank

Björn Brembs et al. Front Hum Neurosci. 7:291.

Abstract

Most researchers acknowledge an intrinsic hierarchy in the scholarly journals ("journal rank") to which they submit their work, and adjust not only their submission but also their reading strategies accordingly. At the same time, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate on the limitations of journal rank as a tool for assessing scientific impact have either lacked data or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings, or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently favored Impact Factor) would have this negative impact. We therefore suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system would use modern information technology to vastly improve the filtering, sorting, and discovery functions of the current journal system.

Keywords: impact factor; journal ranking; libraries; library services; open access; publishing; scholarly communication; statistics as topic.

Figures

Figure 1
Current trends in the reliability of science. (A) Exponential fit for PubMed retraction notices (data from pmretract.heroku.com). (B) Relationship between year of publication and individual study effect size. Data are taken from Munafò et al. (2007), and represent candidate gene studies of the association between DRD2 genotype and alcoholism. The effect size (y-axis) is the individual study effect size (odds ratio; OR), on a log scale, plotted against the year of publication of the study (x-axis). The size of each circle is proportional to the IF of the journal the individual study was published in. Effect size is significantly negatively correlated with year of publication. (C) Relationship between IF and the extent to which an individual study overestimates the likely true effect. Data are taken from Munafò et al. (2009), and represent candidate gene studies of a number of gene-phenotype associations for psychiatric phenotypes. The bias score (y-axis) is the effect size of the individual study divided by the pooled effect size estimate indicated by meta-analysis, on a log scale; a value greater than zero therefore indicates that the study provided an over-estimate of the likely true effect size. This is plotted against the IF of the journal the study was published in (x-axis), on a log scale. The size of each circle is proportional to the sample size of the individual study. Bias score is significantly positively correlated with IF, and significantly negatively correlated with sample size. (D) Linear regression, with confidence intervals, between IF and Fang and Casadevall's Retraction Index (data provided by Fang and Casadevall, 2011).
Figure 2
No association between statistical power and journal IF. The statistical power of 650 neuroscience studies (data from Button et al., 2013; excluding 19 with missing references, 3 with unclear reporting, 57 published in journals without a 2011 IF, and 1 book) plotted as a function of the 2011 IF of the publishing journal. The studies were selected from the 730 contributing to the meta-analyses included in Button et al. (2013), Table 1, and included where journal title and IF (2011 © Thomson Reuters Journal Citation Reports) were available.
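Statistical power, the quantity plotted here, can be approximated for a simple two-group design from the standardized effect size and the per-group sample size. A rough stdlib-only sketch using the normal approximation (an illustration of the concept, not the method used by Button et al.):

```python
import math
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for
    standardized effect size d, via the normal approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)        # critical z value
    ncp = d * math.sqrt(n_per_group / 2)      # non-centrality parameter
    # Probability of rejecting H0 in either tail under the alternative
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
```

For a medium effect (d = 0.5), about 64 subjects per group yield roughly 80% power, while 10 per group yield only about 20%, which illustrates how small samples produce the low-powered studies the figure describes.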
Figure 3
Trends in predicting citations from journal rank. The coefficient of determination (R²) between journal rank (as measured by IF) and the citations accruing over the 2 years after publication is plotted as a function of publication year, in a sample of almost 30 million publications. Lozano et al. (2012) make the case that the trends in the predictive value of journal rank can be explained by the publication of the IF in the 1960s (the increase in R² accelerates) and the widespread adoption of internet searches in the 1990s (R² drops). The data support the interpretation that reading habits, more than any inherent quality of the articles, drive the correlation between journal rank and citations. For years before the 1960s, when the IF did not yet exist, IFs were computed retroactively.
Figure A1
Impact Factor of the journal “Current Biology” in the years 2002 (above) and 2003 (below) showing a 40% increase in impact. The increase in the IF of the journal “Current Biology” from approx. 7 to almost 12 from one edition of Thomson Reuters' “Journal Citation Reports” to the next is due to a retrospective adjustment of the number of items published (marked), while the actual citations remained relatively constant.


References

    1. Adam D. (2002). The counting house. Nature 415, 726–729. doi: 10.1038/415726a
    2. Adler N. J., Harzing A.-W. (2009). When knowledge wins: transcending the sense and nonsense of academic rankings. Acad. Manag. Learn. Edu. 8, 72–95. doi: 10.5465/AMLE.2009.37012181
    3. Adler R., Ewing J., Taylor P. (2008). Joint Committee on Quantitative Assessment of Research: Citation Statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Available online at: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf
    4. Allen L., Jones C., Dolby K., Lynn D., Walport M. (2009). Looking for landmarks: the role of expert review and bibliometric analysis in evaluating scientific publication outputs. PLoS ONE 4:e5910. doi: 10.1371/journal.pone.0005910
    5. Anderson M. S., Martinson B. C., De Vries R. (2007). Normative dissonance in science: results from a national survey of U.S. scientists. JERHRE 2, 3–14. doi: 10.1525/jer.2007.2.4.3
