How Should the Performance of Peer Reviewers be Assessed?
Serving Two Masters
Which elements should count as metrics of performance for a good peer reviewer? From the perspective of the manuscript, a good peer review is thorough, accurate, and prompt — but how do we translate those values into measures of the person performing the review?
In addition, how do we balance the expectations of the two key stakeholders observing the process? Journal editors want “thorough, accurate and prompt,” but anxious authors may be more concerned with fairness, and would probably have a different interpretation of “prompt.”
The Need for Performance Assessment
Peer reviewers work for free (unless you count author discounts and free subscriptions as sufficient compensation), and most do it for the opportunity to contribute to their field with the added bonus of association with a prestigious academic journal.
In that context, should there be a need for performance assessment, or does the system work well enough on “no news is good news”?
With the prerequisites of research experience in the field and prior peer review experience, what other metrics should be added, and how often should they be measured?
Time for Change
This is a period of transition for peer reviewers. On the positive side, open access journals are starting to examine practical solutions for paying reviewers for services rendered. On the negative side, academics under pressure to “publish or perish” are resorting to colluding with colleagues to review each other’s papers or, even worse, to creating false email accounts and reviewer profiles to review their own work. Payment will require performance metrics of some sort, and the need to remove any suggestion of fraudulent tactics may also lead journals toward more frequent interactions with reviewers than simply sending them manuscripts or issuing access codes to download them from central servers.
Those Who are About to Be Assessed
If we speculate on how current and future peer reviewers might react to some form of performance assessment policy, we can probably anticipate a very mixed response. Those who bristle at being unpaid labor might appreciate the chance to be perceived as formal members of the journal staff, whereas those who have managed perfectly well in the current system, and are proud of their association with a prestigious journal, may see it as totally unnecessary.
For journal editors and editorial boards, the writing on the wall is clear. Open access journals, whether you’re an advocate or a critic, are changing the academic publishing landscape, and with change should come re-evaluation of every element of your business model. The best way to begin the discussion of peer reviewer evaluation would be to involve those reviewers directly in the dialog, so that the new model is designed with all stakeholders in mind. In fact, simply recognizing peer reviewers as stakeholders would be a tremendous step forward, and would likely open a positive dialog about what level of performance should be required to retain that position.