A Call to Publishers:
Join the Responsible AI Movement

Protecting research integrity through collaboration and education

Our Mission

The Responsible AI Movement supports publishers in navigating the evolving AI landscape by establishing clear standards and promoting responsible, transparent AI use in research publishing.

Clarity in AI Guidelines

We aim for comprehensive AI guidelines in research publishing, including mandatory human verification disclosure and clear classification of AI use.

AI-Literate Scholars

Empower researchers worldwide with clear guidelines and best practices for the ethical and transparent use of AI tools in manuscript preparation.

Why The Responsible AI Movement
Is the Need of the Hour

Rise in Retractions: A Growing Crisis in Publishing

AI-related retractions are rising fast, putting scientific integrity at risk. The advent of LLMs like DeepSeek will only increase AI-generated submissions, pressuring editorial systems and blurring authorship. This growing challenge demands urgent action from publishers.

Honest Research Turned into Tainted Work

Without proper human oversight and review, AI-generated content can compromise genuine research and lead to retractions. Human evaluation is crucial to prevent this risk, yet its necessity is often unclear or absent, exposing publishers and authors to ethical risks.

Critical Gaps in AI Guidelines:
What Publishers Must Address Now

Current AI guidelines fail to keep pace with rapidly evolving technologies, resulting in key transparency and classification gaps. Publishers must act to provide researchers with clear guidelines to uphold trust in scientific publications.

Disclosure Gap

  • Many publishers require authors to disclose AI use but do not mandate verification that AI-generated or AI-edited content has undergone thorough human review.
  • This lack of transparency leaves publishers uncertain whether manuscripts have been properly evaluated.
  • Without clear disclosure of human verification, the integrity of research remains at risk.

Classification Gap

  • AI use in manuscript preparation can range from simple language editing to full content generation; however, there is no standardized system for classification.
  • Without clear classification, authors are unsure which rules apply to their specific AI use scenario.
  • As a result, publishers lack clarity on how AI is used in submitted manuscripts, making oversight challenging.

Authors Demand Answers

Q. If guidelines only mandate the disclosure of AI tool use but do not require human evaluation, does that mean unverified AI content is acceptable for publication?

Q. If I use AI tools to help with grammar and sentence structure, should I still provide detailed disclosure about every instance of AI use, or is a general statement enough?

Q. If I use AI-generated insights to inform the analysis of my data, but not the writing itself, do I need to disclose the AI tool use in my manuscript?

Q. If I use AI for paraphrasing and summarizing existing research, but all ideas and references are my own, is this considered acceptable, or should AI use be disclosed in full detail?

Q. In a multi-author manuscript, if one author uses AI tools to assist with writing while others do not, how should this contribution be disclosed to ensure clarity and transparency for the publisher and reviewers?

Researchers Are Confused: Uncertainty Around AI Guidelines

Many researchers remain unsure about how to use AI tools responsibly due to unclear or inconsistent publishing guidelines. This confusion leads to inconsistent disclosure and unintentional misuse of AI, increasing risks for publishers and complicating enforcement of ethical standards.

  • % of researchers want publishers to provide guidelines on acceptable AI use.
  • % want publishers to help researchers avoid potential pitfalls, errors, biases, etc.
  • % want publishers to share best practices and tips for using AI.

*Source: A study on AI by Wiley

The Responsible AI Movement Provides

Responsible AI hinges on clear publisher guidelines and informed authors. Publishers play a crucial role in providing researchers with:

Framework

A practical framework to help publishers standardize AI guidelines on disclosure, human verification, and classification. We also review and refine your existing guidelines to ensure alignment with industry standards.

Whitepapers & Toolkits

Access actionable whitepapers and toolkits to help editorial teams and authors effectively identify, disclose, and manage AI-generated content, ensuring responsible use and proper disclosure.

Partnerships with KOLs

Collaborate with AI ethics experts through roundtables and exclusive events to stay ahead of the field and shape best practices for ethical AI use in the research publication industry.

Education & Training

Customized educational resources, including webinars, podcasts, and tailored sessions, to empower your community with knowledge on AI in publishing and ensure thorough human review of AI-generated content.