AI Misuse Behind a Third of Retractions – 7 Important Perspectives!
By: Enago
The rapid integration of generative AI into research workflows has sparked both optimism and growing alarm. A recent analysis in the Journal of Korean Medical Science (JKMS) found that nearly one-third of retractions in AI-related publications were attributed to AI misuse, signaling that integrity risks are rising rather than stabilizing.
This finding echoes a broader concern highlighted by our global Responsible Use of AI (RUAI) Movement, which urges researchers and publishers to balance efficiency gains with human judgment, transparency, and accountability. The integrity landscape is shifting faster than current policies can accommodate. As researchers rely more heavily on AI to accelerate writing, data handling, or literature synthesis, journals have begun tightening scrutiny. This can lead to rejections or, worse, retractions. The latter not only undermine public trust but also compromise the scholarly record. Once faulty work enters the literature, it can continue to influence citations long after it is withdrawn, compounding the damage.
What Types of AI Misuse are Actually Leading to Retractions?
The JKMS study highlighted several recurring patterns of misuse. The most common include:
- Undisclosed AI-generated text, where authors rely on AI to produce sections of the manuscript without transparency. For example, literature reviews written largely with generative AI and submitted without disclosure may contain factual inaccuracies, oversimplified interpretations, or missing nuance. Editors often detect inconsistencies in tone or sudden shifts in technical depth.
- Fabricated or manipulated data/images produced using AI tools. Some papers have been flagged because their clinical images were enhanced or altered with AI-based editing tools. In one case, contrast-enhancement algorithms unintentionally removed crucial artifacts, making the data appear more conclusive than it actually was.
- Hallucinated references, where AI generates plausible-looking but non-existent citations. This is especially common in early-career submissions, where students and researchers relying on AI to generate citations have submitted references to non-existent trials or incorrectly attributed studies.
Retractions were also triggered by methodological flaws introduced or amplified by AI-generated content. In essence, the problem arises when AI use is opaque, unverified, or misrepresented as genuine human scholarship.
Are AI Detectors Creating New Integrity Risks Through False Positives?
Surprisingly, yes. Multiple independent studies have now shown that AI-text detectors produce both false positives and false negatives at concerning rates. A 2024 analysis in Research Integrity and Peer Review found that several leading AI-detection tools misclassified human-written academic text as AI-generated, sometimes with high confidence, and no tool achieved consistently reliable accuracy across disciplines.
Additionally, ESL and international scholars are disproportionately flagged because detectors often conflate simpler writing styles with AI-generated text. This has led to documented cases of false accusations, unnecessary investigations, and strained editor–author interactions.
One researcher recently shared that they had received reviewer reports alleging AI misuse based solely on detector output; the allegation was later proven false.
False positives are creating a secondary crisis: misplaced suspicion, strained editor–author communication, and delayed decision times. This represents a new dimension of the AI-integrity crisis, where researchers are also at risk of being wrongly accused of misusing it. As part of our RUAI Initiative, we specifically warn against “automated gatekeeping,” encouraging publishers to treat detectors only as supplementary tools—not evidence.
Is AI Affecting Data Integrity Beyond Text, Especially in Images and Figures?
Visual-data manipulation has become one of the most rapidly evolving concerns in academic integrity. A 2025 article in Ethics and Information Technology noted increasing concerns about AI-generated scientific figures, warning that AI tools can create images that look “too perfect,” complicating detection efforts and increasing the risk of unintentional manipulation.
Platforms like PubPeer have also flagged a growing number of altered biomedical images, showing that enhanced contrast, noise removal, or smoothing can unintentionally distort scientific meaning, even when authors intended only cosmetic adjustments.
Even well-meaning authors who use AI for “clean-up” or “clarity” may inadvertently violate journal policies or data-integrity norms. As a result, publishers and editors are increasingly calling for transparency statements for figure preparation, similar to transparency expectations for text.
How do we Distinguish Responsible use of AI from Misuse?
When integrated responsibly, AI can improve clarity, accelerate workflows, and help researchers overcome language barriers. But the key difference lies in transparency and human oversight.
Best Practices while using AI for Research Publication:
- Grammar support or language editing (with disclosure). For instance, using AI to refine grammar when preparing a multi-author manuscript written by non-native speakers.
- Brainstorming ideas or summarizing literature (verified by the researcher). This would look like asking AI for a structured outline of existing literature, then cross-checking each point manually.
- Analytic assistance where models are validated and outputs are cross-checked. Here, an AI-based statistical explanation tool can be used to understand a method, not to generate results.
- Assisting with non-interpretive tasks such as formatting or translation.
Misuse Occurs When:
- AI text output is copied verbatim without disclosure.
- AI-generated findings, data, or figures are used to compensate for lack of domain knowledge without disclosure.
- Researchers skip critical thinking and rely on AI as a surrogate for expertise.
Responsible AI use means treating AI as a tool, not a co-author or a shortcut past scientific rigor.
What are the Consequences of Irresponsible AI use for Researchers?
The most visible consequence is rejection or retraction, often occurring months or even years after publication. But the impact runs deeper:
- Damage to Credibility: A retraction can follow a researcher throughout their career, eroding trust with journals and collaborators as retractions or corrections become attached to their profile in databases like PubMed and ORCID.
- Institutional Repercussions: Misconduct investigations, lost funding, or damaged departmental reputation.
- Harm to the Scientific Record: Flawed findings may mislead future research before they’re corrected.
Perhaps the biggest cost is a cultural one: each case of AI misuse erodes trust in digital scholarship, making publishers more cautious and slowing scientific communication overall.
What can Researchers, Editors, and Institutions do to Ensure Responsible AI Use?
This is where frameworks like our RUAI Initiative provide much-needed structure to promote transparency, disclosure, and human oversight across the research cycle. Here are a few steps researchers, editors, publishers, and institutions can take to support responsible, transparent, and integrity-driven AI use in research:
For Researchers
- Always declare AI use in the manuscript or cover letter.
- Treat AI content as preliminary and verify every fact, citation, and inference.
- Use AI selectively for tasks where errors are unlikely to distort scientific meaning and can be rectified.
- Avoid AI in data generation or image manipulation.
For Journals and Publishers
- Implement clear, visible AI-use policies, including acceptable and prohibited practices.
- Train reviewers and editors to recognize AI-related red flags.
- Avoid over-relying on AI detectors.
- Adopt and strengthen post-publication monitoring systems.
For Research Institutions
- Integrate AI literacy and ethics into researcher training.
- Encourage open conversations about AI in research groups.
- Encourage responsible and transparent AI use by supporting researchers facing false-positive accusations.
- Build internal guidelines that align with international integrity frameworks.
Responsible use must become part of the research culture. Outright bans risk driving AI use underground, while unchecked adoption leads to integrity failures. A balanced position ensures that AI enhances scholarship while remaining guided by ethical judgment.
Academic integrity in the AI era is now about keeping humans in the loop and ensuring that expertise, critical thinking, and accountability remain at the heart of research.