Proc Natl Acad Sci U S A. 2015 Jan 13;112(2):360-5. doi: 10.1073/pnas.1418218112. Epub 2014 Dec 22.

Measuring the effectiveness of scientific gatekeeping


Kyle Siler et al.

Abstract

Peer review is the main institution responsible for the evaluation and gestation of scientific research. Although peer review is widely seen as vital to scientific evaluation, anecdotal evidence abounds of gatekeeping mistakes in leading journals, such as rejecting seminal contributions or accepting mediocre submissions. Systematic evidence regarding the effectiveness, or lack thereof, of scientific gatekeeping is scant, largely because access to rejected manuscripts from journals is rarely available. Using a dataset of 1,008 manuscripts submitted to three elite medical journals, we show differences in citation outcomes for articles that received different appraisals from editors and peer reviewers. Among rejected articles, desk-rejected manuscripts, deemed unworthy of peer review by editors, received fewer citations than those sent for peer review. Among both rejected and accepted articles, manuscripts with lower scores from peer reviewers received relatively fewer citations when they were eventually published. However, hindsight reveals numerous questionable gatekeeping decisions. Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular (roughly the top 2 percent). Of those 14 articles, 12 were desk-rejected. This finding raises concerns that peer review is ill-suited to recognize and gestate the most impactful ideas and research. Despite this finding, results show that, on the whole, there was value added in peer review in our case studies. Editors and peer reviewers generally, but not always, made good decisions regarding the identification and promotion of quality in scientific manuscripts.
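A quick arithmetic sketch of the "top 2 percent" figure, assuming the 14 most-cited articles are counted against the 808 eventually published manuscripts described above:

\[
  \frac{14}{808} \approx 0.017, \qquad \text{i.e., roughly the top 2 percent of published articles.}
\]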

Keywords: creativity; decision making; innovation; peer review; publishing.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Fig. 1. Citation distribution of rejected articles (peer reviewed vs. desk-rejected).
Fig. 2. Citation distribution of accepted and rejected articles.
Fig. 3. Citation distribution of rejected articles by time to publication.

