Upskilling the Next Generation of Peer Reviewers for AI Integration

A wide range of AI tools already assist the research process, from transcription and literature discovery to data analysis, figure creation, and reference citation. Within academic publishing, AI-based tools range from those built to spot integrity issues to those that collect data for indexing and marketing. As AI becomes more embedded in research and publishing, however, reviewers will need new competencies to evaluate AI-assisted manuscripts responsibly and ethically.

Why Now?

AI-enabled tools are swiftly replacing traditional reference management software. Tools that integrate grammar checks and paraphrasing suggestions now also generate in-text citations and reference lists, so the final step of verifying the accuracy of those lists matters more than ever. Hallucinated references have already caused damage and shaped vaccine-denialist policies, so checking for retractions, and for the context in which references are cited, still needs to be handled by a human.
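To make the reference-checking step concrete, existence checks can be partially automated while the judgment stays with the reviewer. Below is a minimal Python sketch using the public Crossref REST API; the reference list is a single illustrative entry, and retraction status would still need a dedicated source such as the Retraction Watch database, which this sketch does not cover.

```python
# Minimal sketch: spot-check that cited DOIs resolve to real Crossref records.
# Assumes the third-party `requests` library; the reference list is illustrative.
import requests

CROSSREF_API = "https://api.crossref.org/works/"

def check_reference(doi: str, cited_title: str) -> str:
    """Return a short verdict for one cited reference."""
    resp = requests.get(CROSSREF_API + doi, timeout=10)
    if resp.status_code == 404:
        return "NOT FOUND in Crossref - possibly hallucinated"
    resp.raise_for_status()
    record = resp.json()["message"]
    real_title = (record.get("title") or [""])[0]
    if cited_title.lower() not in real_title.lower():
        return f"TITLE MISMATCH - Crossref has: {real_title!r}"
    return "ok"

# Example reference list extracted from a manuscript (one real entry).
references = [
    ("10.1038/nature12373", "Nanometre-scale thermometry in a living cell"),
]

for doi, title in references:
    print(doi, "->", check_reference(doi, title))
```

A pass like this only flags candidates; whether a mismatch is a typo, a version difference, or a fabricated citation is exactly the judgment call that stays with the human reviewer.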

People working in and with higher education and research institutions already use AI-based tools for some of the tasks we traditionally associate with tedious research work. The role of institutions is to democratize access to these tools and to enable collaboration based on knowledge and shared research interests, not on access to equipment and tools. To build an AI-ready peer review ecosystem, publishers must be very clear about what they allow within their own workflows, how the data they collect are used, and where copyright still plays a role.

What Skills Are Essential?

We don’t need to know how AI works internally to understand its advantages and pitfalls, but we do need to be aware that we cannot blindly trust the output of any tool. As the number and variety of these tools increase, learning what works best for specific purposes and audiences becomes the next logical step.

Integrity checks are a good example of a task that AI-enabled tools can augment but in which human judgment cannot be replaced. However standardized ethical statements and declarations become, no checklist can cover every nuance of the ways in which data collected from humans can be detrimental to the people participating in a study.

Moreover, tools that improve the experience of researchers (as authors, reviewers, and editors) in editorial processes deserve far more attention than they currently receive. This is all the more relevant given the pressing need to make software interfaces accessible to all.

How Can Reviewers Upskill?

We all know when data should be presented as a histogram versus a line graph; understanding the best way to analyze the output of whatever tools are available is the same kind of skill. If an editor or a journal permits AI-based solutions for peer reviewers, researchers will need to learn how to write prompts, just as at some stage we all learnt how to use a computer.
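As a purely hypothetical illustration of what that prompt-writing skill might involve, the sketch below assembles a structured review prompt in Python. The function name, the sections, and the wording are my own invention, not any journal’s standard, and the resulting prompt would be sent to whatever AI assistant a journal has approved.

```python
# Hypothetical sketch of a structured reviewer prompt; the structure and
# wording are illustrative, not a journal standard.
def build_review_prompt(manuscript_text: str, scope: str) -> str:
    """Assemble a prompt that constrains the assistant to a defined task."""
    return (
        "You are assisting a peer reviewer. Do not invent citations or facts.\n"
        f"Task: review ONLY the {scope} of the manuscript below.\n"
        "For each issue, quote the relevant passage, explain the problem, "
        "and rate its severity as minor or major.\n"
        "If you are uncertain about anything, say so explicitly.\n\n"
        f"Manuscript:\n{manuscript_text}"
    )

print(build_review_prompt("<manuscript text here>", "statistical methods"))
```

The useful habit is the same one reviewers already apply to manuscripts: state the scope, demand evidence for every claim, and leave room for “I don’t know.”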

It’s not just reviewers but all researchers who need to upskill. I envision that some of these tools will become available as institutional subscriptions, accompanied by 45- or 60-minute modules teaching users the basics of AI.

Short-term, self-paced programs are my favorite, as they can be completed around predictable pauses in lab work.

Nobody expects that we all pause what we are already doing to move our workflows to the latest shiny tool, especially if validation and testing are still ongoing.

However, I hope this means that reviewer training will become increasingly more available to early career researchers, making access to editorial board memberships more equitable.

What’s the Bigger Picture?

AI-based tools are only as effective as the way in which they have been trained and the quality of the data used to train them. As reviewers, we therefore need to become increasingly stringent about what is deemed publishable.

Scientific research is done by humans and for humans, so human insight is key to its development. Too many data-heavy studies that hunt for correlations across massive datasets have been published without any true critical analysis. Placing findings in context is not just stating what is similar or dissimilar to what has already been published; it is also an evaluation of how the new knowledge can and should be used, of avenues for improving adjacent fields, and of the emerging questions it opens up.
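To see why such studies demand critical human review, consider that with enough variables, “significant” correlations emerge from pure noise. The short Python sketch below (using NumPy and SciPy, on a dataset of nothing but random numbers) makes the point; the sample and variable counts are arbitrary choices for illustration.

```python
# Minimal sketch: with many variables, "significant" correlations appear in
# pure noise - one reason correlation-heavy studies need critical review.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_samples, n_variables = 100, 200
data = rng.normal(size=(n_samples, n_variables))  # random noise, no signal

spurious = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        _, p_value = stats.pearsonr(data[:, i], data[:, j])
        if p_value < 0.05:
            spurious += 1

n_pairs = n_variables * (n_variables - 1) // 2
print(f"{spurious} of {n_pairs} variable pairs are 'significant' at p<0.05 "
      f"({spurious / n_pairs:.1%}), despite there being no real relationship.")
```

Roughly five percent of pairs will clear the conventional threshold by chance alone, which is precisely the kind of result a reviewer should expect a correlation-mining study to account for.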

Curiosity and the need to solve problems are what drive scientific research, but, ultimately, imagination is still intrinsically human.

