A Call to Action:
Join the Responsible AI Movement

Protecting research integrity through collaboration and education

Our Mission

The Responsible AI Movement is dedicated to promoting transparent and ethical AI usage in the fast-evolving research publishing landscape. In partnership with publishers and universities, our goal is to establish standardized AI usage guidelines and educate researchers on best practices.

Clarity in AI Guidelines

We advocate for comprehensive AI guidelines in research publishing, including mandatory human verification disclosure and clear classification of AI use.

AI-Literate Scholars

We empower researchers worldwide with clear guidelines and best practices for the ethical and transparent use of AI tools in manuscript preparation.

Why The Responsible AI Movement
Is the Need of the Hour

Rise in Retractions: A Growing Crisis in Publishing

AI-related retractions are rising fast, putting scientific integrity at risk. The advent of LLMs like DeepSeek will only increase AI-generated submissions, pressuring editorial systems and blurring authorship. This growing challenge demands urgent action from publishers.

Honest Research Turned into Tainted Work

Without proper human oversight, AI-generated content can compromise genuine research and lead to retractions. Human evaluation is crucial to prevent this risk, yet requirements for it are often unclear or absent, exposing publishers and authors alike to ethical hazards.

Critical Gaps in AI Guidelines:
What Publishers Must Address Now

Current AI guidelines fail to keep pace with rapidly evolving technologies. It's time for universities, researchers, and publishers to come together to close these gaps and uphold trust in scientific publications.

Disclosure Gap

  • While many publishers ask authors to disclose AI usage, they often don't require verification that AI-generated or AI-edited content has undergone thorough human review.
  • This lack of transparency leaves room for uncertainty about whether manuscripts have been properly evaluated.
  • Without clear disclosure of human verification, the integrity of research remains at risk.

Classification Gap

  • AI use in manuscript preparation can range from simple language editing to full content generation. However, there is no standardized system for classifying these uses.
  • Without clear classification, authors are unsure which rules apply to their specific use of AI.
  • As a result, publishers lack clarity on how AI is used in submitted manuscripts, making oversight challenging.

Authors Demand Answers

Q.
If guidelines only mandate the disclosure of AI tool use but do not require human evaluation, does that mean unverified AI content is acceptable for publication?
Q.
If I use AI tools to help with grammar and sentence structure, should I still provide detailed disclosure about every instance of AI use, or is a general statement enough?
Q.
If I use AI-generated insights to inform the analysis of my data, but not the writing itself, do I need to disclose the AI tool use in my manuscript?
Q.
If I use AI for paraphrasing and summarizing existing research, but all ideas and references are my own, is this considered acceptable, or should AI use be disclosed in full detail?
Q.
In a multi-author manuscript, if one author uses AI tools to assist with writing while others do not, how should this contribution be disclosed to ensure clarity and transparency for the publisher and reviewers?

Researchers Are Confused: Uncertainty Around AI Guidelines

Researchers remain unsure how to use AI tools responsibly due to unclear or inconsistent publishing guidelines. This confusion leads to improper disclosure and unintentional misuse of AI, increasing risks for publishers and complicating enforcement of ethical standards. Universities must also advocate for greater clarity, help educate researchers, and act as stewards of their institutions' scholarly integrity.

  • % of researchers want publishers to provide guidelines on acceptable AI use.
  • % want publishers to help researchers avoid potential pitfalls, errors, biases, etc.
  • % want publishers to share best practices and tips for using AI.

*Source: An AI study by Wiley

The Responsible AI Movement Provides

Responsible AI hinges on clear publisher guidelines and informed authors. Publishers and universities play a crucial role in providing researchers with:

Framework

A practical framework to help publishers standardize AI guidelines on disclosure, human verification, and classification. We can also review and refine your existing guidelines to ensure alignment with industry standards.

Whitepapers & Toolkits

Access actionable whitepapers and toolkits to help editorial teams and authors effectively identify, disclose, and manage AI-generated content, ensuring responsible use and proper disclosure.

Partnerships with KOLs

Collaborate with AI ethics experts through roundtables and exclusive events to stay ahead of the field and shape best practices for ethical AI use in the research publication industry.

Education & Training

Customized educational resources, including webinars, podcasts, and tailored sessions, to empower your community with knowledge on AI in publishing and ensure thorough human review of AI-generated content.