Our Mission
The Responsible AI Movement is dedicated to promoting transparent and ethical AI usage in the fast-evolving research publishing landscape. Working in partnership with publishers and universities, we aim to establish standardized AI usage guidelines and to educate researchers on best practices.
Clarity in AI Guidelines
We advocate for comprehensive AI guidelines in research publishing, including mandatory human verification disclosure and clear classification of AI use.
AI-Literate Scholars
We empower researchers worldwide with clear guidelines and best practices for the ethical and transparent use of AI tools in manuscript preparation.
Why The Responsible AI Movement Is the Need of the Hour
Rise in Retractions: A Growing Crisis in Publishing
AI-related retractions are rising fast, putting scientific integrity at risk. The advent of LLMs like DeepSeek will only increase AI-generated submissions, pressuring editorial systems and blurring authorship. This growing challenge demands urgent action from publishers.
*Based on analysis of Retraction Watch data, last updated on 6 June 2025
Honest Research Turned into Tainted Work
Without proper human oversight and review, AI-generated content can compromise genuine research and lead to retractions. Human evaluation is essential to prevent this risk, yet requirements for it are often unclear or missing altogether, exposing publishers and authors to ethical risks.
Critical Gaps in AI Guidelines:
What Publishers Must Address Now
Disclosure Gap
- While many publishers ask authors to disclose AI usage, they often don't require verification that AI-generated or AI-edited content has undergone thorough human review.
- This lack of transparency leaves room for uncertainty about whether manuscripts have been properly evaluated.
- Without clear disclosure of human verification, the integrity of research remains at risk.
Classification Gap
- AI use in manuscript preparation can range from simple language editing to full content generation; however, there is no standardized system for classifying these uses.
- Without clear classification, authors are unsure which rules apply to their specific use of AI.
- As a result, publishers lack clarity on how AI is used in submitted manuscripts, making oversight challenging.

Authors Demand Answers
Researchers Are Confused: Uncertainty Around AI Guidelines
- % of researchers want publishers to provide guidelines on acceptable AI use.
- % want publishers to help researchers avoid potential pitfalls, errors, biases, etc.
- % want publishers to share best practices and tips for using AI.
*Source: An AI study by Wiley
The Responsible AI Movement Provides
Responsible AI hinges on clear publisher guidelines and informed authors. Publishers and universities play a crucial role in providing researchers with: