By: Enago
Why 65% of Publishers Still Have No Clear AI Policy and What It Means for Researchers – 6 Important Points to Consider
AI may be rewriting research at record speed, but publishers are struggling to keep up. In an audit of 162 academic publishers (all members of STM, the International Association of Scientific, Technical and Medical Publishers), only 56 had publicly available AI-use policies, meaning that 65.4% of publishers (106 of 162) currently offer authors no clear guidance.
Researchers are adopting generative AI faster than the publishing ecosystem can regulate it, creating a widening gap in expectations, transparency, and integrity. As generative AI becomes central to how researchers draft, refine, and communicate their work, this policy vacuum is creating uncertainty and raising critical questions about responsible use, disclosure, and accountability. Below, we answer a few key questions that will help researchers navigate the evolving landscape of scholarly publishing.
1. What Do Publisher AI Policies Typically Allow or Restrict?
Among publishers that do have AI policies, several norms are emerging:
- AI tools cannot be listed as authors. This is consistent across major publishers, as AI cannot take responsibility for the scientific accuracy of a manuscript.
- Most publishers expect full disclosure of AI use. If a researcher uses AI for drafting, editing, or language support, they are usually required to declare it in the manuscript (often in the Methods or Acknowledgements section).
Sample AI-Use Disclosure Statement
“During the preparation of this work, the authors used Chat Generative Pre-Trained Transformer (ChatGPT; OpenAI, San Francisco, CA, USA) to enhance readability and language, aiding in formulating and structuring content. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.”
- A small number of publishers impose strict bans on generative AI. A minority prohibit the use of generative AI tools entirely, especially for drafting or rewriting scientific content.
- Permitted use is typically limited to language support. Many policies allow AI for grammar, clarity, or formatting but discourage using AI-generated ideas, interpretations, figures, or citations.
- Policies vary widely in detail and enforcement. Some are clear and comprehensive; others are vague, especially about whether AI-generated text, images, or summaries count as acceptable. This inconsistency leaves many researchers unsure where the line is drawn.
The emerging consensus is that you may use AI to polish your writing but not to create or interpret scientific content, and that you must be transparent about any AI use. As part of our Responsible AI initiative, we have developed an updated, easy-to-navigate list of publisher AI policies to help researchers ensure transparent and compliant submissions.
2. Why Is There So Much Variation, or No Policy at All, Across Publishers?
There are several systemic challenges that explain this inconsistency:
- AI adoption is outpacing policy development. Researchers have embraced AI tools at extraordinary speed, while publishers are still debating or piloting guidelines. Many policies remain in early stages or unpublished.
- The definition of “AI use” is broad, ambiguous, and evolving. AI can be used for tasks as simple as grammar correction or as complex as drafting paragraphs, summarizing literature, checking references, or generating figures. Because these use-cases vary so widely, publishers struggle to draw clear, enforceable boundaries. As The Publication Plan notes, this diversity makes standardization difficult.
- Many publishers defer AI-related decisions to journal editors. Instead of adopting a uniform policy across all titles, some publishers leave decisions about AI use to individual journals. This creates significant variations (even within the same publishing house) and can make compliance unpredictable for authors. For instance, Springer has acknowledged that journal-level discretion can lead to inconsistent expectations. Therefore, checking target journal guidelines has become essential, because high-level publisher rules rarely capture the nuances that editors apply in practice.
3. How Are Different Regions Approaching AI Use?
The absence of global harmonization is widening the compliance gap. China has implemented some of the strictest generative AI and data governance rules, including mandatory security assessments for AI models. Many Chinese journals require explicit AI-use declarations as part of submission. Chinese universities and research bodies have issued clear, usage-focused guidelines emphasizing disclosure, human accountability, and restrictions on AI-generated data or interpretations.
Europe’s EU AI Act directly influences research workflows by emphasizing transparency, traceability, and risk classification. Funding programmes and agencies, such as Horizon Europe and national bodies across Germany, the Netherlands, and Scandinavia, emphasize ethical AI use, documentation, and auditability. Publishers with a strong European presence (e.g., Elsevier, Wiley) tend to have stricter, more detailed AI guidelines, aligned with broader EU digital governance priorities.
The United States has no central AI-use mandate for research or publishing; policies are fragmented across institutions and publishers. Most US publishers provide principle-based, flexible policies, but clarity is highly variable. Universities offer diverse interpretations, from encouragement of AI literacy to outright bans in certain departments.
4. Is There a Demand for Clear Policies? What Risks Do Researchers Face, and Why Does It Matter Now?
Yes. According to the preliminary findings of a major survey conducted for Wiley’s AI research study, ExplanAItions, there is a community-wide demand for clarity, consistency, and training to use AI responsibly in research communication.
- 70% of researchers want publishers to provide clear AI-use guidelines
- 62% want concrete best-practice examples
This growing expectation signals that the academic community is ready for clearer, more structured norms around AI use. Without clarity, well-intentioned authors risk being accused of misconduct simply for using AI tools differently than an editor expected. Good-faith disclosure becomes a liability without consistent rules: authors may over-disclose and trigger unnecessary scrutiny, or under-disclose and be flagged for nondisclosure. Further, editorial decisions become subjective and unpredictable; two editors at the same publisher may respond differently to identical AI use because publishers lack standardized criteria to assess whether AI was used responsibly. This leaves the scholarly record vulnerable precisely when integrity risks are rising.
5. Given the Ambiguity in Policies, How Should Researchers Approach Using AI Tools Responsibly to Draft Manuscripts?
Here are some good-practice questions to ask yourself (based on the literature and evolving norms):
- Did I restrict AI to acceptable use-cases (e.g., language polishing)? Avoid letting AI generate large chunks of text, literature reviews, analyses, or interpretations without careful human oversight and validation.
- Did I verify all facts, references, and claims myself? As human authors, retain full responsibility for facts, citations, interpretations, and overall integrity.
- Did I add a disclosure statement? Even among publishers with policies, disclosure (e.g., in Methods or Acknowledgments) is the most common requirement.
- Are my manuscript drafts saved to demonstrate human authorship? Document your draft history. Keep track of drafts, edits, and your own contributions versus AI contributions. This is especially important if questions about integrity arise later.
- Did I avoid listing AI as a co-author? Most journals are now very clear that AI should not be credited as an author.
- Did I read the journal’s AI policy? If there is no public policy, follow all the above-recommended practices. Additionally, consider contacting the editorial office directly for more details.
6. What Does This Mean for the Future of AI-assisted Research and Publishing?
The publishing landscape is evolving dramatically as AI tools become more powerful and accessible. This has nudged all stakeholders to take notice and seriously reconsider their research evaluation, editorial, and publication strategies. Several shifts are imminent:
- Cross-publisher standardization is likely. More publishers will be compelled to adopt formal, clear, public policies on AI use, moving the community toward a standardized, global process for ethical AI use and stronger compliance over time.
- Tools like automated pre-submission integrity checks will become common. Authors will have to treat AI as an assistant, not a co-author, maintain rigorous oversight, and be transparent about AI’s role in their workflows.
- AI literacy will become a core research skill. Researchers and institutions will increasingly need AI literacy to use advanced AI tools responsibly and transparently.
As AI accelerates research writing, responsible use has become a core professional responsibility. Editors are already flagging submissions unpredictably, while genuine misuse becomes harder to detect. As global regulations diverge, the absence of publisher clarity creates inequity and uncertainty. This is the moment for decisive, transparent standards—before trust erodes further.
The longer publishers take to define boundaries, the more the gap widens between what AI can do and what journals explicitly permit. This gap is already being exploited, and honest authors are caught in the crossfire. Until publishers finalize clearer guidelines, the best approach for researchers is simple: be transparent, verify every AI-supported output, and retain full human accountability.
Another safe bet would be to get professional editorial help with your manuscript and research workflows to avoid being flagged and to optimize publication timelines. This will also help avoid any integrity issues that could be raised in the future. Following these best practices will not only protect your submission but also strengthen the integrity of the scholarly record.