Document type:
Journal article: Editorial
Permanent URL:
Title:
Is there a new performance indicator for reviewers on the rise?
Author(s):
Ingrin, J.
Journal title:
Elements
Number:
17
Pagination:
422
Publication date:
2022-03
Abstract (English):
Most of us are experiencing a new trend when submitting a review report to the editors of several journals. We are asked to allow our contribution as reviewers (name, e-mail, title of the reviewed manuscript, name of the journal and date of review submission) to be recorded in external databases (e.g. Publons, ORCID). This is called “reviewer recognition”. For the moment, one can refuse to contribute to such data collection, but whether this will become automatic or mandatory in the future remains to be seen.

This data collection has the flavor and the color of a new performance indicator. The owners of these databases are well-known providers of scientific performance indicators. The first and foremost beneficiaries of such databases are publishers, who will gain access to a wider pool of reviewers together with, as with most indicators, quantitative data that eases administrative (semi-automatic) procedures for recruiting reviewers. Data on contributions as a reviewer can easily be merged with other indicators, for instance journal impact factors (IF), to create more elaborate indicators measuring the ‘visibility’ of a scientist as a reviewer (e.g. weighting each review in proportion to the IF of the journal). Nevertheless, like many other indicators, such a performance indicator provides limited (to no) information on the quality of the reviews performed. We all know that the time and work dedicated by reviewers vary greatly from paper to paper. Reviewers’ commitment to a review is also very variable. Most of us have been surprised when reading the light review provided by a second reviewer. I am probably not far from reality if I say that the amount of time spent on a review can vary by up to a factor of 5, independently of the quality of both the journal and the paper.

Finally, because data are a valuable ‘good’ and potentially a source of profit, we should be mindful when agreeing to provide data to build such datasets. By accepting, we are contributing to building a product that is likely to be sold one day to customers (e.g. publishers, institutions). We are all free to provide data about ourselves. However, my plea to you all is that we, as a community, do so consciously and intentionally, if we believe that this indeed represents a benefit to our work. Personally, I am not convinced that it does, and we should take a joint stand to prevent poor-quality indicators from disrupting our work and careers. Will this new indicator, of limited added value and being built right now before our eyes, favor the quality of reviews over their quantity, or the opposite? I leave you to judge for yourselves. At the European Journal of Mineralogy (EJM) we have resisted this trend. We hope we will not be forced to change in the future due to peer pressure. However, it is up to reviewers to decide which indicators will guide our community in the future. If the majority of reviewers follow the trend, in the end we will have no choice. I myself have so far refused to fill out such ‘reviewer recognition’ databases, and I hope I am not alone.

J. Ingrin, Managing Editor
Language:
English
Peer-reviewed:
No
Audience:
Not specified
Institution(s):
Université de Lille
CNRS
INRA
ENSCL
Collections:
Research team(s):
Matériaux Terrestres et Planétaires
Deposit date:
2022-03-02T16:03:56Z
Files
- editorial Elements 17 issue 6.pdf
- Not specified
- Restricted access