You have probably heard about fake papers that made it into peer-reviewed publications over the past few years, but now even fake peer reviews are being generated. A group of Italian scientists recently developed software that can produce referee reports very similar to those written by human reviewers. In 30% of cases, readers could not distinguish the generated reports from genuine ones.
How Automation is Affecting Science
Some researchers argue that automated reviews could be a threat to scientific integrity. However, automation could prove to be a helpful tool to address some of the problems and limitations of the peer review process. In fact, researchers believe that, if used appropriately, algorithms and text mining could help reduce human errors.
Eric Medvet, one of the researchers involved in the original study, says that such programs could be misused either by scholars who want to boost their reviewing record without actually investing time in evaluating manuscripts, or by predatory journals trying to gain credibility by sending genuine-looking (but fake) reports to authors.
Semi-automated Peer Review
The open access publisher BioMed Central has started a pilot project to determine whether using text mining to automate some aspects of the peer review process helps referees and editors make more accurate decisions. The intention is not to replace the existing system but to support it, much as other tools already do, such as the duplicate submission check (Aries) or the plagiarism detection software CrossCheck (iThenticate).
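To make the idea concrete, here is a minimal sketch of how text-overlap screening can flag a possible duplicate submission. This is not the actual CrossCheck/iThenticate algorithm, just an illustrative comparison of shared word 3-grams (Jaccard similarity):

```python
def ngrams(text, n=3):
    """Return the set of lowercased word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=3):
    """Jaccard similarity of two documents' n-gram sets (0.0 to 1.0)."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

submitted = "We conducted a randomized trial of drug X in 200 patients"
earlier = "We conducted a randomized trial of drug X in 200 patients over two years"

# A high score does not prove misconduct; it simply flags the pair
# for a human editor to inspect, which is exactly the supporting
# (not replacing) role described above.
print(round(overlap_score(submitted, earlier), 2))
```

Commercial systems compare against vast indexed corpora and handle paraphrasing, but the principle of surfacing suspicious overlap for a human decision is the same.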
Four journals published by BioMed Central are taking part in the pilot evaluation: Trials, Critical Care, BMC Medicine, and Arthritis Research & Therapy. Currently, all papers submitted to these journals undergo the usual submission checks and editorial evaluation. Some articles are then assessed using a new program called StatReviewer in addition to the journal's normal peer review process.
Peerless Support for Referees and Editors?
This software automatically reviews the statistical and reporting integrity of the manuscripts, checking them against common reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, the Standards for the Reporting of Diagnostic Accuracy Studies (STARD), and others. These guidelines were introduced to facilitate the complete and transparent reporting of scientific data, but unfortunately many authors still do not follow them properly.
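As a toy illustration of guideline-based screening (not StatReviewer itself, whose methods are far more sophisticated), a program can scan a manuscript for phrases that CONSORT-style checklist items expect and report which items appear to be missing. The checklist below is a simplified assumption for demonstration:

```python
import re

# Hypothetical, heavily simplified checklist inspired by CONSORT items.
CHECKLIST = {
    "randomization method": r"\brandomi[sz](ed|ation)\b",
    "sample size justification": r"\bsample size\b|\bpower (calculation|analysis)\b",
    "blinding": r"\bblind(ed|ing)\b|\bmasked\b",
    "trial registration": r"\bregistered\b|\bNCT\d+\b",
}

def screen(manuscript):
    """Return the checklist items with no matching phrase in the text."""
    return [item for item, pattern in CHECKLIST.items()
            if not re.search(pattern, manuscript, re.IGNORECASE)]

text = ("Patients were randomized 1:1 and outcome assessors were blinded. "
        "A power calculation determined the sample size of 120.")
print(screen(text))  # -> ['trial registration']
```

The output is a prompt for the human reviewer: a flagged item may still be reported in the paper under different wording, so the tool supplements rather than replaces expert judgment.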
Although programs such as StatReviewer will not be able to replace the knowledge, creativity, and expertise of referees and editors, they could certainly make academic publishing more transparent. At present, if the data in a manuscript are not reported in sufficient detail, reviewers cannot properly judge the validity and reliability of the results, so automated pre-review tools might be helpful.
A System with Potential for Improvement
Despite criticism, peer review remains a crucial part of scholarly communication—and the most accepted method to evaluate the quality of a manuscript—but it is a human process, so it cannot be completely error-free. Researchers, publishers, and funders are working hard to make the peer review process more transparent and efficient, and the use of pre-scanning software has turned out to be a great support in this respect.
Plagiarism detection programs have already become a common tool for evaluating scholarly publications. It remains to be seen how the community will respond to the other (new) types of software being developed to improve the peer review process.