Will Crowd-based Peer Review Replace Traditional Peer Review?

In simple terms, if you want your research paper published in a reputable journal, it must first be read and assessed by experts in the field through the peer review process. As hard as it is to accept criticism from peers, especially after spending months or years researching a subject and analyzing the data, peer review is a necessary step toward being accepted by a publisher and getting your research results out there for all to see. Without it, you are unlikely to be published in any reputable journal.

There are different forms of peer review, and a new method is now gaining interest. These are discussed below.

Open Versus Blind Peer Review

Different review methods are being tested to speed up the review process, which can sometimes take months. Reviewers, for the most part, are volunteers with other jobs that demand their attention. Thus, although they strive to review papers in a timely manner, many are under a lot of pressure and must sometimes postpone reviews. According to a recent blog, reviewers “invested roughly 13–20 billion person hours in 2015.” Even then, this work does not receive the recognition it should. Nevertheless, we need to have our papers peer reviewed.

Blind peer review is the traditional process in which reviewers and authors do not know each other, either personally or by professional accomplishments. This type of review helps avoid conflicts caused by bias. Because the author does not know where the review originated, the reviewer can rest assured that there will be no professional repercussions directed at him or her. But then, where is the reviewer’s accountability? Many authors are shying away from this model for exactly that reason.

Open peer review is another option tested by a few journals, with mixed results. One of the successful tests was done by the folks at Atmospheric Chemistry and Physics (ACP), who tried an open system in which authors posted their papers on the Internet and simultaneously submitted them for formal review. The open review published the critiques for everyone to see, along with the authors’ responses. For this particular journal, the option was well received. Both authors and reviewers believe that transparency is one way to identify whether a negative review is the result of personal bias rather than faulty research. In addition, it might prompt more recognition for the hard work put in by reviewers, which would, in turn, encourage better, unbiased reviews.

The New Crowdsourced Reviewing Method

“Crowdsourcing” (also called “intelligent crowd reviewing”) describes a new peer review method in which several reviewers (a “crowd”) are chosen for their expertise in a specific discipline and asked to review manuscripts in that discipline in a shared forum. This differs from conventional peer review, in which reviewers are part of a “stable” maintained by a journal or university and receive multiple manuscripts. In an article published in Nature entitled “Crowd-based peer review can be good and fast,” the author describes the results of using this new method. The author and his colleague chose 100 reviewers and created a closed online forum. All manuscript authors provided written permission for their papers to be used in this study. Concurrently, the papers were also evaluated using the normal peer review process. Both groups were given 72 hours to finish their reviews.

The results were positive for the new crowdsourcing method. The authors found little difference in the quality of the reviews between the crowd and the traditional reviewers. They also found that the crowd tended to complete the reviews faster than the traditional reviewers did and that its comments to authors tended to be more detailed, which the manuscript authors found very helpful. The article suggests this option is expected to take hold, and the journal plans to expand on the idea: it will use keywords to match manuscripts with reviewers who specialize in a particular discipline, the crowd of reviewers will remain a functioning unit for the journal, and the journal will find ways to acknowledge those reviewers who stay with the new protocols.

Future of Peer Review

At the 2016 SpotOn conference in London, the main subject was the future of peer review. Those who regularly participate in the publishing process (e.g., publishers and other stakeholders) were asked for their vision of peer review by 2030. The discussions centered on the fact that the process is slow, inefficient, and, as noted above, often subject to bias.

Some of the suggestions for change were as follows:

  • Expand the role of technology, for example by using artificial intelligence to match reviewers with the subject matter.
  • Increase reviewer diversity in, for example, age and geographic area.
  • Increase transparency.
  • Use new artificial intelligence tools to detect inconsistencies in a manuscript, such as discrepancies across figures.
  • Use newly created programs to better detect plagiarism.
  • Increase reviewer recognition to increase their incentives.


It is clear that the current process is lacking and needs to be revamped. Many of the suggested changes, especially the use of artificial intelligence, will help streamline the process, make it faster, and reduce the workload and stress of reviewers. A more efficient review process will allow faster dissemination of new research and information. After all, that is what we hope for when publishing our papers.
