Are disruption index indicators convergently valid? The comparison of several indicator variants with assessments by peers

2020 ◽  
Vol 1 (3) ◽  
pp. 1242-1259
Author(s):  
Lutz Bornmann ◽  
Sitaram Devarakonda ◽  
Alexander Tekles ◽  
George Chacko

Recently, Wu, Wang, and Evans (2019) proposed a new family of indicators that measure whether a scientific publication is disruptive to a field or tradition of research. Such disruptive influence is characterized by papers that cite the focal paper but not its cited references. In this study, we are interested in the question of convergent validity, which we examine using external criteria of newness: in the post-publication peer review system of F1000Prime, experts assess whether the research reported in a paper fulfills such criteria (e.g., reports new findings). The study is based on 120,179 papers from F1000Prime published between 2000 and 2016. In the first part of the study we discuss the indicators and, based on the insights from that discussion, propose alternative variants of disruption indicators. In the second part, we investigate the convergent validity of the indicators and of the (possibly) improved variants. Although the results of a factor analysis show that the different variants measure similar dimensions, the results of regression analyses reveal that one variant (DI5) performs slightly better than the others.
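The abstract's core idea — counting citing papers that cite the focal paper but not its references — can be made concrete with a minimal sketch of the basic disruption index as commonly defined in this literature. The function name and set-based inputs are illustrative assumptions, not the authors' implementation, and the sketch does not cover the DI5 or other variants discussed in the paper.

```python
def disruption_index(citers_of_focal, citers_of_refs):
    """Sketch of the basic disruption index (Wu, Wang & Evans, 2019).

    citers_of_focal: set of papers citing the focal paper.
    citers_of_refs:  set of papers citing at least one of the focal
                     paper's cited references.
    """
    n_f = len(citers_of_focal - citers_of_refs)  # cite focal only (disruptive)
    n_b = len(citers_of_focal & citers_of_refs)  # cite focal and its refs
    n_r = len(citers_of_refs - citers_of_focal)  # cite only the refs
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0

# Illustrative usage: papers a, b cite only the focal paper, c cites
# both, d cites only a reference -> (2 - 1) / 4 = 0.25
disruption_index({"a", "b", "c"}, {"c", "d"})
```

The index ranges from -1 (all citing papers also cite the references, i.e., the focal paper consolidates prior work) to +1 (no citing paper cites the references, i.e., it disrupts).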

Author(s):  
Patricia Pedri ◽  
Ronaldo Ferreira Araújo

This article presents a systematic literature review intended to characterize the state of Portuguese-language studies on open peer review. The search combined qualitative and quantitative approaches to provide an overview of the studies published on the subject. Most of the articles surveyed were published or presented in the 2017-2018 biennium, in journals or events in the area of Information Science. The articles state that open peer review enables greater transparency in the scientific publication process, among other advantages; however, some also report disadvantages and contradictions in editors' and reviewers' positions on open review. New studies presenting evidence on the practice of peer review are needed to support a better understanding of the open peer review system and to point out new perspectives.


Publications ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 4
Author(s):  
Vincent Raoult

The current peer review system is under stress from ever-increasing numbers of publications, the proliferation of open-access journals, and an apparent difficulty in obtaining high-quality reviews in due time. At its core, this issue may be caused by scientists insufficiently prioritising reviewing. Perhaps this low prioritisation reflects a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, covering over 300,000 reviews and over 100,000 publications, to estimate the number of reviews required per publication in each field. I then related this value to the mean number of authors per publication per field to derive a ‘review ratio’: the expected minimum number of publications an author in their field should review to balance their input (publications) into the peer review process. On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
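The review-ratio arithmetic described above reduces to a single division: reviews required per publication divided by mean authors per publication. A minimal sketch, in which the authors-per-paper figure of 4.7 is a hypothetical value chosen only so the result matches the reported cross-field mean; the abstract does not state the actual per-field author counts.

```python
def review_ratio(reviews_per_publication, authors_per_publication):
    """Minimum number of reviews each author should contribute per
    paper they publish, so that reviewing balances publishing."""
    return reviews_per_publication / authors_per_publication

# Illustrative: 3.49 reviews per paper (reported mean) and a
# hypothetical 4.7 authors per paper give a ratio near the reported
# cross-field mean of 0.74.
ratio = review_ratio(3.49, 4.7)
```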


2000 ◽  
Vol 176 (1) ◽  
pp. 47-51 ◽  
Author(s):  
Elizabeth Walsh ◽  
Maeve Rooney ◽  
Louis Appleby ◽  
Greg Wilkinson

Background: Most scientific journals practise anonymous peer review. There is no evidence, however, that this is any better than an open system.
Aims: To evaluate the feasibility of an open peer review system.
Method: Reviewers for the British Journal of Psychiatry were asked whether they would agree to have their name revealed to the authors whose papers they review; 408 manuscripts assigned to reviewers who agreed were randomised to signed or unsigned groups. We measured review quality, tone, recommendation for publication and time taken to complete each review.
Results: A total of 245 reviewers (76%) agreed to sign. Signed reviews were of higher quality, were more courteous and took longer to complete than unsigned reviews. Reviewers who signed were more likely to recommend publication.
Conclusions: This study supports the feasibility of an open peer review system and identifies such a system's potential drawbacks.

