On the evaluation of research software: the CDUR procedure

F1000Res. 2019 Aug 5;8:1353. doi: 10.12688/f1000research.19994.2. eCollection 2019.

Abstract

Background: Evaluating the quality of research software is a challenging and relevant issue that is still not sufficiently addressed by the scientific community.

Methods: Our contribution begins by defining, precisely but broadly enough, the notions of research software (RS) and of its authors, followed by a study of the evaluation issues, as the basis for proposing a sound assessment protocol: the CDUR procedure.

Results: CDUR comprises four steps: Citation, to deal with correct RS identification; Dissemination, to measure good dissemination practices; Use, devoted to the evaluation of usability aspects; and Research, to assess the impact of the scientific work.

Conclusions: We close with conclusions and recommendations. The evaluation of research is the keystone for advancing Open Science policies and practices. We also believe that research software evaluation is a fundamental step toward better research software practices and, thus, toward more efficient science.

Keywords: Open Science; Research Software; Research Software Citation; Research Software Evaluation; Research evaluation; Scientific Software.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Research
  • Software*

Associated data

  • figshare/10.6084/m9.figshare.7887059

Grants and funding

Publication of this article is supported by the Gaspard-Monge computer science laboratory (LIGM) at the University of Paris-Est Marne-la-Vallée.