
AI in Academic Writing: Ethics, Accountability, and the Importance of Disclosure
“Paper Retracted When Authors Caught Using ChatGPT to Write It” –Futurism
“As Springer Nature journal clears AI papers, one university’s retractions rise drastically” –Retraction Watch
Such headlines are becoming uncomfortably common.
From international journals to university corridors, the fallout from improper or undisclosed use of AI in academic publishing is echoing across the scholarly world. Researchers face retractions and loss of credibility, institutions risk reputational damage, and journals struggle with maintaining trust and editorial integrity. The consequences are not just limited to individual careers but ripple out to affect funding bodies, institutional rankings, and public trust in science.
These incidents are not isolated. A Springer Nature journal has retracted over 200 papers since September, and among the primary reasons for this mass retraction is the blind use of AI or machine-translation software in the writing and peer-review process. A recent report revealed that over 700 published articles may have used AI tools without proper disclosure, raising serious concerns about the integrity of academic research.
As AI becomes more widely integrated into the research process, its misuse (particularly without transparency) poses serious threats to academic integrity. While AI can be a valuable productivity tool for language enhancement, grammar checks, and citation management, its role must be clearly defined and responsibly managed. The question is no longer if AI should be used in research writing, but how it should be used responsibly.
To navigate this evolving landscape, it's essential to understand where the ethical lines are drawn.
Decoding the Role of AI in Academic Writing
The use of AI becomes problematic when tools are applied beyond their intended scope as language models. This includes:
- Generating substantial content for drafting sections of a paper
- Interpreting results without proper human oversight
- Paraphrasing in ways that change the intended meaning
AI is also known to produce inaccuracies, writing errors, and even falsified references. For example, AI-generated journalism has been reported to commit plagiarism and contain significant writing errors. Bibliographies generated by AI can be inaccurate, listing authors or publication titles that match no real work, a flaw that would likely lead to rejection by diligent peer reviewers. Moreover, hallucinations (fabricated citations or facts) remain a persistent issue. In such cases, the use of AI veers into ethically grey territory, raising serious concerns about accountability, data validity, and transparency.
Role of Publishers in Addressing AI Misuse
Given these complexities, publishers play a critical role in addressing the risks associated with AI use. Researchers are actively looking to publishers for guidance, with a large majority wanting clear guidelines on acceptable AI use, help avoiding potential pitfalls, and tips on best practices.
Many publishers now require authors to disclose the use of AI in manuscript preparation; the table below* gives a snapshot.

*Check the “Responsible Use of AI Checklist” by Enago for more details and comprehensive insights.
These examples show that while many publishers permit AI use, they stress human oversight and author responsibility. To uphold transparency and trust, disclosure must be multi-fold and comprehensive, including:
- What tool or service was used?
- For what purpose was it used (e.g., grammar correction, paraphrasing, formatting citations)?
- To what extent did the AI tool contribute to the manuscript (e.g., specific sections or tasks, such as methods or literature review)?
- Was AI used during the peer review or editing stages (for example, to summarize reviewer feedback or revise responses)?
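To make the checklist above concrete, a disclosure could be captured as structured data and then rendered into the statement journals ask for. The sketch below is purely illustrative: the field names and wording are hypothetical, not any publisher's required format.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """One record per AI tool used in preparing a manuscript.

    Field names mirror the four disclosure questions above; they are
    illustrative, not a publisher-mandated schema."""
    tool: str     # e.g. "ChatGPT (GPT-4)"
    purpose: str  # e.g. "grammar correction"
    extent: str   # e.g. "Methods section only"
    stage: str    # e.g. "drafting", "revision", "peer-review response"

def disclosure_statement(records: list) -> str:
    """Render the records into a single disclosure paragraph."""
    if not records:
        return "No AI tools were used in the preparation of this manuscript."
    parts = [
        f"{r.tool} was used for {r.purpose} ({r.extent}) during the {r.stage} stage"
        for r in records
    ]
    return ("The authors disclose the following AI use: "
            + "; ".join(parts)
            + ". The authors take full responsibility for all content.")
```

Keeping the record structured makes it easy to adapt one set of facts to the differing disclosure formats of multiple journals.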
While disclosure is essential for transparency, it does not clear authors of accountability. Authors remain solely responsible for all content, including content assisted by AI. Although current authorship guidelines implicitly exclude AI text generation, some publishers, such as Springer Nature, have introduced explicit bans on AI content-generation tools, potentially leading to retractions for papers generated this way.
Even with these policies in place, the situation becomes complicated when authors and peer reviewers do not explicitly disclose AI use.
Consequences of Non-Disclosure
When authors do not disclose AI use and perform silent corrections or claim to have reviewed AI-generated content without doing so thoroughly, it creates a verification problem. Editors and reviewers are left uncertain about which parts of a manuscript were generated, checked, and/or manipulated by AI tools. A 2025 Nature report revealed that over 700 published articles may have used AI tools without proper disclosure. Some red flags included:
- Tell-tale phrases of AI use like "as an AI language model," "regenerate response," or "certainly, here are"
- Subtler indicators, known as “tortured phrases,” which occur when paraphrasing tools or less sophisticated AI models replace established scientific terms with nonsensical synonyms, for example, “straight relapse” for “linear regression” or “blunder rate” for “error rate”
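A crude automated screen for these red flags could simply search a manuscript for known phrases. The sketch below is a minimal illustration using only the examples mentioned above; real integrity-screening tools are far more sophisticated, and these phrase lists are not exhaustive.

```python
# Phrases that commonly leak from chatbot output into manuscripts.
TELL_TALE_PHRASES = [
    "as an ai language model",
    "regenerate response",
    "certainly, here are",
]

# Tortured phrases: nonsensical synonym swaps for established terms,
# mapped to the standard term they likely replaced.
TORTURED_PHRASES = {
    "straight relapse": "linear regression",
    "blunder rate": "error rate",
}

def scan_for_red_flags(text: str) -> list:
    """Return a human-readable warning for each red flag found in `text`."""
    lowered = text.lower()
    warnings = []
    for phrase in TELL_TALE_PHRASES:
        if phrase in lowered:
            warnings.append(f"tell-tale phrase: '{phrase}'")
    for tortured, standard in TORTURED_PHRASES.items():
        if tortured in lowered:
            warnings.append(f"tortured phrase: '{tortured}' (likely '{standard}')")
    return warnings
```

Simple phrase matching like this catches only the most careless cases; it is the subtler, well-edited AI text that makes disclosure, rather than detection, the more reliable safeguard.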
These papers are now under review, and several have already been retracted. For example, a recent Retraction Watch post details how tortured phrasing tipped off editors to undisclosed AI use in a preprint. Retraction Watch also maintains a list of suspicious papers and peer reviews potentially written by ChatGPT. Such reports and retractions can cause significant damage to the reputation of researchers, their institutions, and funders.
Constructive Way Forward
The future of academic publishing hinges on responsible AI integration. Authors must lead the charge in ensuring transparent AI use by:
- Actively disclosing all AI tool usage, specifying the tool, purpose, and extent
- Taking full responsibility for the integrity of their work, regardless of AI assistance
- Staying informed about evolving publisher policies and best practices
Initiatives by Wiley and STM, such as the recommendations for classifying AI use in manuscript preparation, are steps in the right direction, providing a reliable framework for publishers to develop policies and for authors to declare their use. There is a growing need for publishers to collaborate across the industry to establish unified, detailed disclosure rules that go beyond simple acknowledgments. Such unified policies will not only enhance transparency but also reduce confusion for authors working across multiple journals with differing expectations.
In an era where algorithms co-author reality, the burden of integrity remains firmly human. Responsible use of AI isn’t just a publisher requirement; it is a collective responsibility we all share as researchers, editors, and institutions. Let’s use AI tools responsibly without compromising the values that make research credible, transparent, and trustworthy.
References:
- https://www.nature.com/articles/d41586-025-00343-5 | 700+ article stats
- https://www.nature.com/articles/d41586-025-01180-2
- https://www.wired.com/story/use-of-ai-is-seeping-into-academic-journals-and-its-proving-difficult-to-detect/
- https://pubpeer.com/publications/CC7BD83B8979D54C5C11F9E3CC61B9
- https://futurism.com/the-byte/paper-retracted-authors-used-chatgpt
- https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
- https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
- https://www.nature.com/articles/d41586-023-00107-z | GPT as author
- https://www.sciencedirect.com/science/article/pii/S2468023024002402 | retraction due to non-disclosure of AI use
- https://www.elsevier.com/about/policies-and-standards/article-withdrawal
- https://arxiv.org/abs/2107.06751 | tortured phrasing
- https://onlinelibrary.wiley.com/doi/10.1002/ijc.34995 | misspelling leading to unidentifiable cancer cell lines
- https://retractionwatch.com/2025/04/24/google-ai-engineer-withdraws-arxiv-preprint-tortured-phrases-genai/ | tortured phrases and undisclosed AI
- https://pmc.ncbi.nlm.nih.gov/articles/PMC9936276/ | concerns and recommendations
- https://www.the-scientist.com/detection-or-deception-the-double-edged-sword-of-ai-in-research-misconduct-72354
- https://www.infodocket.com/2025/02/04/report-how-are-researchers-using-ai-survey-reveals-pros-and-cons-for-science/
- https://www.wiley.com/en-us/ai-study/publishers-role-ai
- https://www.sciencedirect.com/science/article/pii/S2666990024000120 | limitations
- https://stm-assoc.org/document/recommendations-for-a-classification-of-ai-use-in-academic-manuscript-preparation/ | STM
- https://retractionwatch.com/2025/02/10/as-springer-nature-journal-clears-ai-papers-one-universitys-retractions-rise-drastically/ | retraction due to non-disclosure