By: Enago

AI-Generated Content and Plagiarism: How Publishers Can Ensure Research Integrity in the Age of AI


The integration of AI into research writing is accelerating rapidly, bringing both benefits and concerns. Recent data highlights how widespread AI use has become in scholarly outputs. A comprehensive analysis of over 15,000 oncology scientific abstracts presented at the ASCO Annual Meetings revealed a significant increase in detectable AI content in 2023 compared to preceding years: the odds of an abstract containing detectable AI content in 2023 were nearly 2 to 2.5 times higher than in 2021. This surge aligns with a recent survey by Oxford University Press (OUP), which indicated that 76% of researchers already use some form of AI tool in their work. This widespread adoption poses a new challenge for upholding research integrity.

 

Complexities of AI Plagiarism

 

Plagiarism is traditionally the act of taking someone else's work or ideas and presenting them as your own without proper credit. AI-generated content, by contrast, is created algorithmically by analyzing and synthesizing vast amounts of existing data, and it introduces unique forms of "AI plagiarism".

Unlike simple copy-paste plagiarism, AI-generated content exists on a spectrum, from AI-assisted editing to completely AI-generated sections. Bodies such as COPE and ICMJE, along with major publishers, have established that Large Language Models (LLMs) cannot be listed as authors of a work, nor can they take responsibility for the text they generate. However, clear guidance on what level of AI involvement is appropriate has yet to emerge, given the nuances of this constantly evolving space. In this regard, the scholarly community faces two major challenges:

 

The Detection Challenge

Identifying AI-generated content remains a complex task. Detection tools use machine learning and NLP to flag AI-generated patterns, but their effectiveness varies. AI content detectors are also widely reported to be inconsistent, to produce false positives, and to be less accurate overall. Additionally, AI has been shown to mimic human writing styles and introduce minor, natural-sounding changes, making it increasingly difficult for editors and reviewers to verify the authenticity of submitted manuscripts.

Humans also struggle to identify AI text: a preprint showed that reviewers correctly flagged only 68% of ChatGPT-generated content, with a 14% false-positive rate. Compounding the issue, AI can fabricate citations and misrepresent facts, posing serious risks to research integrity. Given these limitations, AI detectors should be treated as preliminary screening tools, with human oversight remaining crucial.

 

Policy Fragmentation

The second major challenge is inconsistent policy across publishers. While most agree that AI cannot be credited as an author and must be disclosed, specific guidelines are often vague. There is little distinction between AI tools used for grammar correction and those used for content generation or data analysis. This lack of clarity leads to contradictory practices, and disclosure is often reduced to generic statements lacking details about tool types or usage context.

Policies also diverge on AI use in peer review. Some publishers use AI for tasks like statistical checks or reviewer suggestions, others prohibit AI use entirely due to confidentiality concerns, and a few allow limited use with disclosure. This inconsistency confuses researchers: one survey showed that 72% of researchers were either unaware of any institutional policies or reported that none existed. This underscores the need for clearer, standardized AI governance in scholarly publishing.


Check Enago's Responsible Use of AI Checklist for more details and comprehensive insights.

 

The Publisher's Opportunity: Leading Through Standards

Publishers have a unique opportunity to shape responsible AI adoption by establishing clear policies that balance innovation with integrity. This requires moving beyond reactive measures to proactive leadership.

Comprehensive Disclosure Frameworks

Leading publishers now require authors to disclose and describe the use of GenAI tools in writing the manuscript text. However, effective disclosure goes beyond simple declarations.

Best Practice Framework:

  • Set clear and transparent guidelines on the responsible use of AI
  • Mandate verification of AI-generated content by subject matter and language experts
  • Ensure that AI use is disclosed transparently, including the specific tasks AI assisted with (editing, ideation, analysis, writing); a structured disclosure sketch follows this list
  • Require authors to confirm human oversight and verification of all AI outputs, a crucial step for ensuring accountability
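
To make such a framework concrete, the minimal sketch below shows how a submission system could capture AI-use disclosures as structured data rather than a single free-text sentence. The field names, task categories, and validation checks are illustrative assumptions, not any publisher's actual schema.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical task categories a publisher might ask authors to choose from.
AI_TASKS = {"editing", "ideation", "analysis", "writing"}

@dataclass
class AIDisclosure:
    """A minimal, illustrative record of AI use for one manuscript."""
    tool_name: str                # name and version of the AI tool used
    tasks: List[str]              # which of AI_TASKS the tool assisted with
    sections_affected: List[str]  # manuscript sections where AI was involved
    human_verified: bool          # authors confirm expert review of all AI output
    statement: str = ""           # free-text context for editors and reviewers

    def validate(self) -> List[str]:
        """Return a list of problems an editorial system could flag."""
        problems = []
        unknown = [t for t in self.tasks if t not in AI_TASKS]
        if unknown:
            problems.append(f"Unrecognized task categories: {unknown}")
        if not self.human_verified:
            problems.append("Human verification of AI output not confirmed.")
        if not self.statement.strip():
            problems.append("Disclosure statement is empty.")
        return problems

# Example: a disclosure for AI-assisted language editing of two sections.
disclosure = AIDisclosure(
    tool_name="Example LLM v1",
    tasks=["editing"],
    sections_affected=["Introduction", "Discussion"],
    human_verified=True,
    statement="The tool was used to improve grammar; all edits were reviewed by the authors.",
)
print(disclosure.validate())  # [] -> nothing to flag
```

Structured disclosures of this kind are easier to audit, compare across journals, and feed into editorial checks than generic declarations.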

 

Quality Assurance Through Technology and Human Expertise

A hybrid model that combines AI detection tools with trained editorial staff offers the best safeguards against the misuse of AI in scholarly publishing. 

What Publishers Can Do:

  • Deploy multiple AI detection tools with different algorithmic approaches (see the triage sketch after this list)
  • Train editorial staff to recognize patterns in AI-generated content
  • Establish review protocols that specifically address AI-related concerns
  • Create feedback loops between detection tools and editorial decisions
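
As a rough illustration of how such a hybrid workflow might route manuscripts, here is a minimal sketch that combines scores from several detectors and always sends borderline or flagged cases to a human editor. The detector names, scores, and thresholds are hypothetical and would need calibration against each tool's known false-positive rates.

```python
from statistics import mean

# Hypothetical scores (0.0-1.0) from independent detectors for one manuscript.
detector_scores = {
    "detector_a": 0.82,
    "detector_b": 0.55,
    "detector_c": 0.71,
}

# Illustrative thresholds; real values would be calibrated per tool.
SINGLE_FLAG = 0.75   # any one detector above this warrants a closer look
AVERAGE_FLAG = 0.65  # consistently elevated scores across tools

def triage(scores: dict) -> str:
    """Return a routing decision; scores alone never reject a manuscript."""
    if max(scores.values()) >= SINGLE_FLAG or mean(scores.values()) >= AVERAGE_FLAG:
        # Detection is a screening signal, not a verdict: route to trained
        # editorial staff who check citations, style shifts, and disclosures.
        return "human_review"
    return "standard_workflow"

print(triage(detector_scores))  # -> "human_review" for the scores above
```

Outcomes from these human reviews can then be used to adjust the thresholds over time, which is the kind of feedback loop between detection tools and editorial decisions described above.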

Author Education and Support

Rather than simply prohibiting AI use, publishers should position themselves as guides for responsible AI adoption. This involves creating resources that help authors understand both the potential and the pitfalls of AI assistance.

What Publishers Can Do:

  • Publish guidelines for effective AI prompting that maintains scholarly rigor
  • Share examples of appropriate vs. inappropriate AI assistance
  • Offer training on fact-checking and verifying AI-generated citations
  • Run workshops on integrating AI tools into research workflows responsibly

 

Addressing the Integrity Challenge: The Human Factor is Essential
 

Generative AI is reshaping the scholarly publishing industry by integrating into research and publishing workflows, fundamentally changing how knowledge is created, refined, and disseminated. Recent data shows that 68% of educators now rely on AI detection tools, reflecting the rapid adoption of AI technologies across academic institutions. Publishers are also reporting a significant increase in submissions that involve AI assistance, from literature reviews to data analysis and manuscript preparation. For publishers, this represents both an unprecedented opportunity and a profound responsibility.

The impact of AI depends entirely on how it is integrated into publishing workflows. Without clear guardrails and active human oversight, AI risks weakening the very principles that uphold academic integrity. It’s not AI itself that threatens trust in research but the lack of consistent, transparent, and responsible human involvement. 

The decisions publishers, institutions, and researchers make now will shape how future generations view credibility, trust, and innovation in scholarly publishing.

Key Areas for Collective Action:

  • Developing shared taxonomies for AI disclosure requirements
  • Creating interoperable detection and verification systems
  • Establishing common ethical frameworks for AI use in research
  • Building peer networks for sharing best practices

By establishing clear standards, investing in appropriate technologies, and fostering a culture of responsibility, publishers can lead with foresight and build a system where AI strengthens the integrity of scholarly publishing. 

 

References

https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
https://www.pemavor.com/ai-generated-content-vs-plagiarism-where-do-we-draw-the-line/
https://www.thesify.ai/blog/when-does-ai-use-become-plagiarism-what-students-need-to-know
https://pmc.ncbi.nlm.nih.gov/articles/PMC10844801/
https://thepublicationplan.com/2024/10/10/publisher-policies-on-ai-use-is-it-time-for-change/
https://corp.oup.com/news/how-are-researchers-responding-to-ai/
https://pmc.ncbi.nlm.nih.gov/articles/PMC12001429/
https://www.theblogsmith.com/blog/is-using-ai-plagiarism/
https://pmc.ncbi.nlm.nih.gov/articles/PMC11371107/
https://www.sciencedirect.com/science/article/pii/S1877056823002049
https://www.nature.com/articles/d41586-025-00894-7
https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00140-5

 

References (fragmented policies)

https://info.library.okstate.edu/AI/publisher-policies
https://www.elsevier.com/about/policies-and-standards/publishing-ethics
https://www.emeraldgrouppublishing.com/publish-with-us/ethics-integrity/research-publishing-ethics
https://www.nature.com/nature-portfolio/editorial-policies/ai
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
https://www.nsf.gov/news/notice-to-the-research-community-on-ai
https://us.sagepub.com/en-us/nam/artificial-intelligence-policy
https://www.springer.com/us/editorial-policies/artificial-intelligence--ai-/25428500
https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/
https://authorservices.wiley.com/ethics-guidelines/index.html#5
https://pmc.ncbi.nlm.nih.gov/articles/PMC10844801/
https://pubrica.com/academy/citation-and-formatting/ai-authorship-policy-in-academic-publishing/
https://www.sciencedirect.com/science/article/abs/pii/S1472811723000605