AI-assisted Peer Review: Early findings and why your voice matters

Executive Summary:
Artificial intelligence is rapidly entering the peer review conversation, offering tools for everything from tone editing to triage, and stirring intense debate about its rightful place. We present data gathered from the first 100 respondents to an ongoing, comprehensive, and anonymous online survey: researchers, editors, publishers, and reviewers from diverse disciplines around the world.
The survey aims to understand AI usage, perceived benefits, ethical concerns, and training needs. Our early findings reveal not consensus but a crossroads: optimism, skepticism, curiosity, and caution all converge at a pivotal moment that may shape research practices for decades. While most respondents see potential in AI-assisted peer review, many lack exposure to the tools or remain skeptical.
Key Takeaways:
- For the small but growing number of early adopters, AI use is highly focused on editing and communication improvements.
- Confidentiality emerges as the primary concern among non-users (20%), followed by apprehensions about the accuracy limitations of AI tools (19%) and their perceived inability to ensure fair judgment (16%).
- Reviewer views on AI remain split. Support for AI-assisted peer review hinges on transparency, oversight, and institutional approval, underscoring the need for broad stakeholder input to guide ethical adoption.
The Big Question: Where does AI fit in peer review?
Most reviewers see promise in AI. Many remain wary. What will tip the scales? That’s where you come in. Researchers, editors, publishers, and ethicists everywhere are debating: Will artificial intelligence transform peer review for the better, or pose new risks to the gatekeeping process? Our field stands at a crossroads, and the answer depends on what our community does next.
As global research becomes faster, broader, and more complex, AI tools are being offered as solutions to reviewer overload and bottlenecks. But are these tools ready? Moreover, are we ready to integrate AI into the process?
Peer Review Week gives our community a perfect opportunity to weigh in. If you’ve used AI, your real-world experience is vital. If not, are you anxious about data security? Disappointed by bias or “black box” algorithms? Or do you believe transparent guidelines could make AI a valuable support tool? Your perspective will directly shape the conversation around AI’s role in research and help chart the course for peer review’s next chapter.
How do Reviewers Feel About AI-assisted Peer Review?
The split is as much about opportunity as opinion. Preliminary findings paint a picture of ambivalence: nearly 63% of reviewers believe AI could be useful in peer review but have not yet tried it, about 24% feel AI simply doesn’t belong in the process, and over 12% are not even aware of relevant tools. In other words, most reviewers, while curious, tread cautiously, watching and waiting as the peer review landscape shifts around them. This measured approach reflects a broader hesitation to fully embrace AI without clearer evidence of its benefits and safeguards.
How are Reviewers Actually Using AI?
Beneath the overarching debate, real adoption is fragmented but evolving in instructive stages. For those experimenting, AI is playing a supportive, not a substitutive, role, and the top actual use cases are strikingly practical:
- 27% have used AI to polish the tone or clarity of review comments, making feedback more constructive.
- 21% use it to assist with literature discovery, navigating vast, complex content more efficiently.
- 18% turn to AI to summarize research papers, making sense of submissions faster.
- Only around 10–11% have tested AI for specialized evaluation tasks such as spotting missing references, verifying statistics, or assessing novelty.
What emerges is a pattern: AI is a tool for assistance, not for automating judgment. Reviewers are letting AI lighten the load, using algorithms to clarify language or navigate dense literature, while high-stakes technical checks and quality decisions remain firmly human terrain. This reveals a practical caution that deserves broader recognition in policy discussions. Yet this cautious optimism coexists with clear reservations.
What Holds Reviewers Back?
Preliminary findings indicate that respondents who have not yet adopted AI often cite concerns extending beyond technical barriers, highlighting issues that warrant careful consideration.
- 20% worry about confidentiality breaches: “What if uploading a manuscript leaks confidential ideas?” Authors need assurance that their unpublished ideas and data will be protected by strict publisher policies during peer review and editorial processes.
- 19% are unconvinced about accuracy: “Can AI really judge scholarly nuance?”
- 16% question whether AI can make fair, unbiased judgments.
- 11% lack access to approved or reliable tools.
- 9% report that their universities and research institutes actively discourage AI use.
Concerns about trust, privacy, and control are not just footnotes; they are the heart of the debate. While 13% of reviewers simply don’t feel the need to use AI, the top concerns (confidentiality breaches, bias that reinforces systemic inequities, and the need for transparent human oversight) reflect the professional standards that uphold peer review integrity. These issues go beyond surface-level ethics and underscore why adoption remains cautious and contested. Without clear, transparent policies to address these barriers, AI integration will remain partial and fraught with mistrust. This tension gives rise to ethical gray areas and competing priorities that spark both fascination and friction.
Ethical Gray Areas and Competing Priorities
When reviewers are presented with specific use cases, their answers highlight just how layered the debate has become and how divided the community remains on issues of disclosure, oversight, and legitimacy:
- Respondents support AI involvement in reviewer selection (44%) or manuscript screening (13%) only if the use is transparent and the tools are approved by the journal.
- AI-generated content (11%) and AI-facilitated language translation (20%) are particularly thorny; most support their use only when disclosure, quality control, and policy approval are guaranteed.
- Concerns about over-reliance and loss of human judgment temper even positive opinions about efficiency gains.
- The “black box” nature of commercial AI systems is worrisome: reviewers may trust AI outputs without accountability, potentially allowing errors, bias, or flawed recommendations to go unchecked and undermining peer review integrity.
These are emerging patterns, not absolute conclusions. Policymakers, editors, and technology developers need input from skeptics, early adopters, and those who haven’t even considered using AI. Without broader participation, the picture will be incomplete and future policies may tilt toward the views of the most vocal few. To shape ethical and effective AI use in peer review, we need perspectives from every corner.
Why Your Perspective and Broad Participation Matter Right Now
This is not a field trying to resist change; stakeholders are demanding that change unfold on fair, open, and well-understood terms. These insights animate the ethical gray zones: the community isn’t simply “pro” or “anti” AI. The real debate sits in the terms of use: transparency, safeguards, and policy endorsement. It’s tempting to see these patterns as a divide between optimists and skeptics, but the more meaningful split is between those experimenting and a silent majority not yet represented in policy debates. If you use AI, your creativity and caution both matter. If you don’t, your concerns and hesitations show where more work must be done to build trust and understanding.
This survey aims to gather insights into the current use and perception of AI in the peer review process. It seeks to understand how AI tools are integrated into workflows, identify barriers to adoption such as ethical concerns and biases, and inform the development of transparent policies and guidelines. Additionally, the survey explores gaps in AI literacy and training needs to build trust and support among reviewers and editors, ultimately guiding responsible AI implementation in scholarly publishing. We also plan to delve deeper into research areas, geographical influences, and other factors for a better understanding of trends and direction.
Take the survey today and help chart the course for peer review’s next chapter.
By adding your voice to the survey, you help ensure that the future of peer review is guided by collective wisdom, not just technological momentum.