Anchoring effects in the assessment of papers: The proposal for an empirical survey of citing authors

PLoS One. 2021 Sep 29;16(9):e0257307. doi: 10.1371/journal.pone.0257307. eCollection 2021.

Abstract

In our planned study, we will empirically examine the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in whether citation decisions are (mainly) driven by the quality of the cited references. The design of our study follows that of Teplitskiy, Duede [10]. We will survey corresponding authors whose email addresses are available in the Web of Science database, asking them to assess the quality of papers they cited in previous publications. Authors will be assigned to one of three treatment groups, each receiving a different piece of numerical information alongside the cited paper: citation information, information on the publishing journal (the journal impact factor), or a numerical access code for entering the survey. The control group will receive no additional numerical information. In the statistical analyses, we will estimate how (strongly) respondents adjust their quality assessments of the cited papers toward the anchor value (citation count, journal impact factor, or access code). We are thus interested in whether adjustments in the assessments can be produced not only by quality-related information (citations or the journal) but also by a number unrelated to quality, i.e. the access code. The results of the study may have important implications for researchers' quality assessments of papers and for the role of numbers, citations, and journal metrics in assessment processes.
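The planned analysis, estimating how strongly assessments are pulled toward an anchor in each treatment group, can be illustrated with a small simulation. The sketch below is purely hypothetical and not the authors' analysis plan: group sizes, noise levels, and "pull" strengths are invented assumptions, and the anchor slope is estimated with a simple OLS regression of ratings on anchor values.

```python
import random
import statistics as st

random.seed(7)
N = 1000  # hypothetical respondents per group

def slope(xs, ys):
    # OLS slope of ys on xs: cov(x, y) / var(x).
    # A slope near 0 means ratings are unrelated to the shown anchor.
    mx, my = st.fmean(xs), st.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / st.variance(xs)

# Latent paper quality and the numerical anchor shown to respondents,
# both on an arbitrary 1-10 scale (illustrative construct, not real data)
quality = [random.uniform(3, 8) for _ in range(N)]
anchors = [random.uniform(1, 10) for _ in range(N)]

def assessments(pull):
    # Each rating is adjusted toward the anchor by fraction `pull`, plus noise
    return [q + pull * (a - q) + random.gauss(0, 0.5)
            for q, a in zip(quality, anchors)]

control = [q + random.gauss(0, 0.5) for q in quality]  # no anchor shown
citation_grp = assessments(0.30)  # assumed pull for a quality-related anchor
code_grp = assessments(0.10)      # assumed pull for the irrelevant access code
```

Comparing `slope(anchors, ...)` across groups then mimics the study's question: the control slope should be near zero, while a positive slope in the access-code group would indicate anchoring on a number unrelated to quality.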

MeSH terms

  • Bibliometrics*
  • Data Management
  • Databases, Factual
  • Humans
  • Internet
  • Journal Impact Factor*
  • Publications*
  • Publishing / statistics & numerical data*
  • Research Personnel*
  • Surveys and Questionnaires

Grants and funding

The authors received no specific funding for this work.