
Is Open Peer Review Good Enough?

Peer review is an essential part of scientific publishing, and although it has often been criticized, it remains the most widely accepted way to assess the quality of research papers. Its main goal is to filter out incorrect, fraudulent, or poorly written articles.

The limitations of the peer review process are well known, so researchers, editors, reviewers, and publishers are continuously searching for ways to improve it. Several approaches have been tested. These include double-blind peer review, in which both the reviewer and the author remain anonymous, and several forms of open or transparent peer review: public peer review, in which the contents of the reports are published alongside the final versions of the articles; post-publication peer review, in which anyone interested can comment on a published paper; and open identity peer review, in which reviewers are asked to sign their reports.

Open Peer Review

While some researchers see open review as a good way to prevent offensive comments and deter plagiarism, others believe that courtesy or fear of retribution may cause referees to avoid or soften their criticism.

Usually, when we speak about open peer review, we actually mean the open identity approach. This method was recently tested by a research group at Lund University in Sweden. The team, led by Martin Almquist, found that open peer review attracts fewer and lower-quality reports than the conventional reviewing process. After asking authors and reviewers for their opinions on peer review, the scientists piloted an open online forum at the British Journal of Surgery (BJS) with the goal of improving the peer review system. They published their results in the open access journal PLOS ONE.

The Trial

Between April and June 2015, authors submitting their papers to BJS were invited to allow their manuscripts to undergo open peer review in addition to the standard, single-blinded process. Support was quite strong: only 10% of the authors declined open online peer review, mainly because of intellectual property concerns. In total, 110 manuscripts were included in the trial, and e-mail invitations were sent to 7,000 reviewers with BJS accounts in ScholarOne. The papers were available online for three weeks and could be accessed by the referees through e-mail links. The manuscripts were also sent to reviewers in the conventional way. The quality of the reports was evaluated by editors and editorial assistants using a validated system.

Only 44 of the 110 manuscripts (40%) received at least one open peer review, and only 59 of the 7,000 invited reviewers participated. The researchers also found that the quality of the open forum reports was lower than that of the conventional ones.

A Randomized Study

Almquist and colleagues pointed out that one of the limitations of their study was that it lacked randomization and a proper control group. A randomized investigation carried out in 1999 found that reviewers randomized to be identified were more likely to decline the reviewing request than referees randomized to remain anonymous. However, it did not find any significant effect on the quality of the reports, the recommendation regarding publication, or the time taken to review.

Most researchers agree that the principle of peer review is sound, but the system has considerable room for improvement. Therefore, many publishers are increasingly opening up their practices and finding ways to bring more transparency into their peer review processes.
