
Transparency comes at a serious cost: An agent-based model of open vs. confidential peer review in science / F. Bianchi, F. Squazzoni. (Paper presented at the INAS conference held in Utrecht in 2016.)

Transparency comes at a serious cost: An agent-based model of open vs. confidential peer review in science

F. Bianchi; F. Squazzoni
2016

Abstract

This paper looks at peer review as a cooperation dilemma between scientists who may follow different behavioural strategies, sensitive to contextual conditions more than to social predispositions. While Robert Merton suggested that scientists are actors socialised to certain functional social norms, which depend on the historical institutionalisation of the scientific community, Pierre Bourdieu viewed scientists as mere "rational" actors competing individually or in groups for power, recognition and influence. The growing competition in science at all levels, the increasing demand for transparency and accountability of the process by various stakeholders, the disruptive technological innovation applied to the management of the publishing process, and the increasing complexity of today's scientific endeavour all suggest that these two competing theories cannot be straightforwardly falsified and could even be equally true, depending on specific conditions. By extending a previous agent-based model of peer review [Squazzoni & Gandelli 2012, Journal of Informetrics], built to examine the implications of scientists' behavioural strategies for the quality and efficiency of peer review, we tested different conditions that make scientists' strategies intelligible and their consequences measurable, at least in an artificial model. We first assumed that reviewers could behave randomly ("random" baseline condition), providing random evaluations of submission quality, and measured: (i) system evaluation bias, in terms of misallocation of publications; (ii) resources lost by productive authors who deserved publication but were not published; and (iii) reviewing expenses, i.e., the resources invested in reviewing relative to those invested in publishing at the system level. We then built a second scenario in which reviewer reliability depended on previous success or failure as an author ("indirect reciprocity" scenario).
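The baseline condition and metric (i) can be sketched in a few lines (a minimal illustration in Python; the function names and this particular definition of evaluation bias are our assumptions, not the authors' implementation):

```python
import random

def random_review(submission_quality: float) -> float:
    """Baseline "random" reviewer: the rating ignores submission quality."""
    return random.random()

def evaluation_bias(published: set, deserving: set) -> float:
    """One way to read metric (i): the share of publication slots that
    went to papers outside the set of most deserving submissions."""
    if not published:
        return 0.0
    return len(published - deserving) / len(published)
```

For example, if papers p1 and p2 are published but the most deserving set is {p2, p3}, half of the slots are misallocated and the bias is 0.5.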
In this case, reviewers follow an "indirect reciprocity" strategy: if previously published, they reciprocate by providing reliable reviews to other authors when next cast as reviewers, whereas, if previously rejected, they reciprocate by providing unreliable reviews in turn. In a third scenario, we assumed that peer review was not confidential but open and transparent, so that reviewers' names were visible to authors (as recently advocated by many analysts and supporters of the open peer review model and implemented in journals such as F1000 and Economics E-Journal). This implies that scientists could play direct reciprocity strategies, rewarding reviewers who had helped them get published and punishing those who had contributed to their rejection. Finally, we tested situations in which scientists reciprocated previous experiences not on the basis of publication success or failure but by estimating the proximity between the "objective" value of their submission and the rating expressed by the reviewer. Here, scientists looked at the pertinence of the reviewer's opinion rather than at their own success, and so were more critical of the quality of their own work. Results showed that, contrary to common sense, random reviewing is not the worst-case scenario in peer review. Indeed, the quality of peer review decreases dramatically when reviewers follow selfish strategies. Furthermore, we found that open and transparent peer review is the worst-case scenario when reviewers respond only selfishly to their previous publication outcome rather than to the pertinence of the reviews they received as authors.
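The three reciprocity strategies described above can be sketched as simple decision rules (an illustrative Python sketch; function names and the tolerance threshold are our assumptions, not the model's actual parameters):

```python
def indirect_reciprocity(previously_published: bool) -> bool:
    """Reviewer is reliable iff their own last submission was accepted."""
    return previously_published

def direct_reciprocity(reviewer_helped_me: bool) -> bool:
    """Open review: be reliable toward reviewers who helped you as an
    author; punish those who contributed to your rejection."""
    return reviewer_helped_me

def fairness_reciprocity(own_quality: float, rating_received: float,
                         tolerance: float = 0.1) -> bool:
    """Reciprocate pertinence, not outcome: be reliable if the rating
    received as an author was close to the submission's "objective" value."""
    return abs(own_quality - rating_received) <= tolerance
```

Under the fairness rule, a rejected author whose low-quality paper received an accurately low rating would still review reliably, which is what distinguishes this scenario from purely outcome-driven reciprocity.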
Although abstract and not directly linked to empirical data, our findings help to discuss the implications of Bourdieusian competitive spirits in the scientific community and indicate that Mertonian social norms must not be taken for granted but reinforced by exploring new reward and sanctioning systems.
4 June 2016
peer review; evaluation; sociology of science; open peer review; agent-based model; social simulation
Settore SPS/07 - Sociologia Generale
Settore SPS/09 - Sociologia dei Processi economici e del Lavoro
Conference Object
Files attached to this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/478453