Can AI Make Peer Review More Equitable?

Just like good research, good peer review must be rigorous. I take my responsibilities as a researcher seriously, and whenever I have the opportunity to conduct a peer review, I approach it with equal seriousness. I have a responsibility to the author, the journal, and the discipline of research itself, and if I do not provide the best feedback I am capable of to the author, I am failing in that duty. Are there established best practices for writing a paper and conducting a review? Yes, there are, and I feel I am well-versed in both.
Peer review is the cornerstone of academic publishing, intended to ensure quality, rigor, and fairness in the dissemination of knowledge. However, the system has long been criticized for bias, inconsistency, a lack of transparency, and inequities based on factors such as gender, geography, institutional prestige, or disciplinary traditions. As artificial intelligence (AI) increasingly enters the academic ecosystem, an important question arises: Can AI make peer review more equitable?
Before we delve into the topic, let’s examine what AI has accomplished in the couple of years since its groundbreaking entry into the world, particularly in the realm of research. There is no doubt that overall productivity for most of us who use AI has gone up substantially, but there are some sobering facts. MIT’s Project NANDA analyzed more than 300 generative AI initiatives and concluded that 95% failed to produce tangible ROI. The failures were due not to poor technology but to poor execution. The successful 5% relied on human–AI partnership and iterated on their solutions. On the other side, we have a different problem:
The Split in Workforce Attitudes Toward AI
According to the 2025 McKinsey report and the KPMG–University of Melbourne global study, the workforce is sharply divided in how it perceives AI adoption:
The Embracers (≈50%)
- “Bloomers”: As McKinsey describes them, these are AI optimists who actively collaborate with their organizations to build responsible AI solutions.
- 66% of global respondents in the KPMG study said they use AI regularly, and 83% believe it will bring wide-ranging benefits.
- Many employees report that AI tools like ChatGPT, GitHub Copilot, and Notion AI have made them more productive, creative, and confident.
The Skeptics (≈50%)
- “Gloomers” and “Doomers”: These archetypes represent those who are either disillusioned or deeply distrustful of AI’s impact on work and society.
- Some view AI users as “cheating” or “lazy,” especially in traditional industries or roles where manual effort is culturally valued.
- Only 46% globally say they trust AI systems, and 56% admit to making mistakes due to overreliance on AI.
I would say that if this survey were conducted on the research community, the share of skeptics would be much higher. The number of people in the research community who would not like to see any AI at all, and who actually look down on those who use it, is still fairly large. I admit this is anecdotal, but it comes from the researchers I meet regularly.
I could show them proof after proof and paper after paper on how AI could make them more productive, but it would make no difference, which is ironic, considering that they are researchers.
Current Challenges in Peer Review
For the “purists,” the challenges facing peer review are already well known:
- Bias and Discrimination: Reviewers may unconsciously favor well-known authors, institutions, or Western-centric research while undervaluing early-career scholars or work from underrepresented regions.
- Lack of Transparency: Many journals use single-blind review, where reviewers know the author’s identity but not vice versa, creating potential for bias.
- Inconsistent Standards: Different reviewers often provide conflicting evaluations of the same manuscript.
- Gatekeeping and Delays: Review processes can take months, disadvantaging scholars from fast-moving fields or those needing timely publications.
I have no doubt that most would agree these are real problems that need fixing. However, even those of us who favor using AI in peer review would agree that the potential for AI to make peer review more equitable is a complex topic, presenting significant opportunities, particularly in promoting consistency and efficiency, alongside serious risks, mainly concerning algorithmic bias and unequal access (Hartel, 2025).
While AI tools, such as Large Language Models (LLMs) like ChatGPT, can streamline and enhance the process, rigorous human oversight and the implementation of strong ethical guidelines are essential to ensure fairness (Ebadi et al., 2025).
Potential Benefits for Equitability and Consistency
A major challenge to equitable peer review is the inconsistency observed among human reviewers and editors, where a manuscript’s fate can depend heavily on the individual assigned. AI offers several potential advantages in addressing these inconsistencies:
- Enhanced Consistency and Standard Application: LLMs can apply uniform standards when reviewing manuscripts, which helps to minimize the biases and discrepancies that arise from variable human judgment. Reviewers have noted that LLMs can automate tasks and thereby enhance consistency in applying review standards.
- Higher Quality Reviews: Studies suggest that AI can generate peer reviews of higher quality than those produced by human reviewers, provided the AI receives precise instructions (Marrella et al., 2025). In one study comparing human reviews (from accepting and rejecting journals) with ChatGPT-generated reviews, the AI reviews (ChatGPT-4o and o1) received statistically significantly higher ARCADIA quality scores than the human-generated reviews. Producing more detailed and thorough reviews, which comment on items such as methodological quality and statistical methods, contributes to a more rigorous and potentially fairer assessment. LLMs themselves are now being peer reviewed (Editorial, 2025).
- Reduced Workload and Accelerated Process: AI can significantly expedite the review process by automating preliminary screening, plagiarism checks, formatting, and language verification (a minimal sketch of such a screening step follows this list). This efficiency can help mitigate the current crisis in peer review caused by the increasing volume of submissions and prolonged review times. A faster, more efficient system could be considered more equitable for authors awaiting publication decisions. In one study, more than 76% of researchers were open to using AI-assisted peer review under human supervision (Daoudi, 2025). Indeed, collaboration seems to be the way to go.
- Language Support for Non-Native Speakers: AI tools can assist in editing manuscripts for clarity and grammar, a function that is particularly beneficial for authors who are not native English speakers, thereby fostering inclusion.
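To make the screening idea concrete, here is a minimal sketch of what an LLM-assisted preliminary screening step might look like. It assumes the OpenAI Python SDK; the checklist, prompt, and model name are illustrative assumptions of mine, not a production pipeline, and any output would still need a human editor’s review.

```python
# Minimal sketch of AI-assisted preliminary screening (illustrative only).
# Assumes the OpenAI Python SDK (`pip install openai`) with an API key in
# the OPENAI_API_KEY environment variable. The checklist and model name
# are assumptions for this example, not a recommended configuration.
from openai import OpenAI

client = OpenAI()

CHECKLIST = """\
1. Is the research question clearly stated?
2. Are the methods described in enough detail to be reproducible?
3. Are statistical methods named and appropriate for the design?
4. Are limitations and ethical approvals mentioned?
"""

def preliminary_screen(manuscript_text: str) -> str:
    """Ask an LLM to run a desk check against a fixed checklist.

    Returns the model's item-by-item notes; a human editor still decides.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever your journal licenses
        messages=[
            {"role": "system",
             "content": "You are a journal screening assistant. "
                        "Answer each checklist item with PASS/FLAG and one sentence."},
            {"role": "user",
             "content": f"Checklist:\n{CHECKLIST}\nManuscript:\n{manuscript_text}"},
        ],
        temperature=0,  # keep output stable so every submission is screened alike
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(preliminary_screen(open("submission.txt").read()))
```

Applying the same fixed checklist to every submission is what gives this step its consistency benefit; the editor, not the model, remains the decision-maker.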
Significant Threats to Equitability
Despite these potential gains, the integration of AI introduces severe ethical and practical challenges that threaten to undermine fairness and introduce new forms of bias and injustice:
- Algorithmic Bias: The most critical concern is that AI models, trained on data that reflects human biases, may reproduce or amplify those biases. This could lead to AI judgments influenced by factors such as the authors’ or institutions’ origins, compromising objectivity and fairness in the peer review process.
- Opacity and Lack of Transparency: Critics argue that the opacity of AI systems, that is, the lack of knowledge about how the AI reaches its conclusions, can exacerbate existing biases and compromise accountability and transparency. Researchers demand transparency in how LLMs are programmed and how they make decisions.
- Unequal Access and Injustice: LLMs can create a form of injustice due to unequal access. Not all scholars can afford to subscribe to proprietary AI tools, which may place them at a disadvantage. Limited access to AI tools, platforms, and specialized training among researchers is cited as a significant barrier to fully integrating AI equitably across the scientific community.
Ensuring Responsible and Equitable AI Integration
To successfully leverage AI while maintaining or enhancing fairness, the sources stress that AI must function as a complementary tool under human supervision, guided by robust ethical frameworks:
- Human Oversight is Crucial: LLMs should complement human expertise and not replace human judgment. Reviewers must independently assess manuscripts to ensure that AI-generated feedback is appropriate and pertinent, as AI lacks the necessary depth of human creativity, intuition, and critical reasoning (Overcash, 2025).
- Need for Clear Guidelines and Transparency: Uniform guidelines are necessary to ensure fairness, transparency, and responsibility in AI use in publishing. Institutions must establish clear ethical guidelines and legal frameworks.
- Mitigating Bias: To mitigate bias, academic institutions should promote transparency regarding the use of LLMs. Additionally, researchers must ensure that datasets used to train or validate AI models are representative and inclusive, and they should apply fairness-aware data preprocessing techniques to minimize algorithmic bias (see the sketch after this list).
- Addressing the Access Gap: Enhancing access to AI tools, platforms, and training is essential to democratize their use and reflect the deontological commitment to fairness. Furthermore, ethics training should be integrated into curricula, covering data bias and algorithmic transparency.
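As one concrete example of fairness-aware preprocessing, here is a minimal sketch of the classic reweighing idea (in the style of Kamiran and Calders): give each training example a weight so that group membership and outcome look statistically independent. The data frame and column names are hypothetical, and reweighing is only one of several established techniques, not a complete debiasing recipe.

```python
# Minimal sketch of fairness-aware reweighing (Kamiran & Calders-style).
# Column names ("region", "accepted") are hypothetical placeholders.
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return one weight per row: P(group) * P(label) / P(group, label).

    Rows from (group, label) combinations that are over-represented
    relative to independence get weights below 1; under-represented
    combinations get weights above 1.
    """
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: (p_group[row[group_col]] * p_label[row[label_col]])
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# Hypothetical usage: weight historical review decisions before training
# a screening model, so no region's acceptance pattern dominates.
df = pd.DataFrame({
    "region": ["EU", "EU", "EU", "Africa", "Africa", "Asia"],
    "accepted": [1, 1, 0, 0, 0, 1],
})
df["weight"] = reweigh(df, "region", "accepted")
print(df)
```

After reweighing, every (group, label) cell contributes to training as if group and outcome were independent; alternatives include resampling or fairness constraints applied at training time.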
There is no doubt this will be a work in progress for some time to come. However, you do not need to wait for perfection (which may never come) before you start using these tools.
AI Tools for Research Paper Review: A Comparison Table
Here’s a comparison of popular AI-powered tools designed to assist with reviewing research papers, whether you’re a peer reviewer, researcher, or academic editor.
| Tool Name | Key Features | Strengths | Limitations |
|---|---|---|---|
| Enago Read | Manuscript screening, gap identification, ethical issue detection | Strong editorial support and ethical compliance checks | May require manual oversight for nuanced content |
| Paper Wizard | AI-assisted review, grammar check, citation validation | Streamlines academic writing and review | Limited customization for journal-specific formats |
| Three | Semantic analysis, argument mapping, peer review simulation | Deep reasoning and critique modeling | Still evolving; limited access in some regions |
| JAPRA | Journal Article Peer Review Assistant: structure, clarity, novelty checks | Tailored for journal editors and reviewers | Limited support for interdisciplinary papers |
| Perplexity AI | Real-time Q&A, citation-backed summaries | Fast and accurate literature insights | Not a full peer review tool; better for research prep |
| Penelope AI | Formatting, reference checks, compliance with journal guidelines | Excellent for final submission prep | Doesn’t offer deep content critique |
How to Choose the Right Tool?
- For Peer Reviewers: Enago Read and JAPRA offer structured critique and ethical checks.
- For Researchers: Paper Wizard and Three help simulate peer review and improve argumentation.
- For Editors: Penelope AI streamlines formatting and collaborative review.
In conclusion, these AI reviewer tools are great resources, and they will make you a better researcher, reviewer, or editor, as long as you use them as tools.
Sources
- Daoudi, M. (2025). Ethical limits and suggestions for improving the use of AI in scientific research, academic publishing, and the peer review process, based on deontological and consequentialist viewpoints. Discover Education, 4(1). https://doi.org/10.1007/s44217-025-00696-z
- Ebadi, S., Nejadghanbar, H., Salman, A. R., & Khosravi, H. (2025). Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers. Journal of Academic Ethics. https://doi.org/10.1007/s10805-025-09604-4
- Editorial. (2025). Bring us your LLMs. Nature.
- Hartel, R. (2025). Peer Review—Can AI Help? In Journal of Food Science (Vol. 90, Issue 9). John Wiley and Sons Inc. https://doi.org/10.1111/1750-3841.70537
- Marrella, D., Jiang, S., Ipaktchi, K., & Liverneaux, P. (2025). Comparing AI-generated and human peer reviews: A study on 11 articles. Hand Surgery and Rehabilitation. https://doi.org/10.1016/j.hansur.2025.102225
- Overcash, J. (2025). Human Expertise in an AI-Collaborative Peer-Review Process. In Oncology Nursing Forum (Vol. 52, Issue 5, pp. 316–317). Oncology Nursing Society. https://doi.org/10.1188/25.ONF.316-317