By: Enago

What Counts as “Generative AI Use” in Research Writing? The Definition Problem No One Has Solved!

Transparency around artificial intelligence use in research writing is now a near-universal expectation across scholarly publishing. Major publishers, journals, and ethics bodies have introduced Generative AI disclosure requirements, often framed as a cornerstone of responsible research practice. Yet, despite this apparent consensus, the ecosystem lacks agreement on a more fundamental question: what exactly counts as “AI use” in research writing?

While much of the latest debate focuses narrowly on manuscript writing and editing, this framing overlooks a broader reality. AI tools now influence foundational research decisions, including literature discovery, study design, data analysis, visualization, interpretation, and decision-making—often well before a manuscript is drafted. Treating AI disclosure as a writing-stage concern alone risks obscuring its deeper epistemic impact on research itself.

Existing Gap Between AI Use and Disclosure

A 2025 survey of 5,000 researchers found that more than 50% use Generative AI (GenAI) assistance for research writing tasks. Detection studies at AI conferences have likewise identified AI-generated modifications in 6.5-16.9% of peer reviews, based on spikes in the frequency of characteristic phrases, indicating that AI assistance is already embedded in workflows without acknowledgment.

However, a broad review of the available literature indicates a significant gap between AI use and its disclosure. A bibliometric analysis of 1,998 radiology manuscripts in Elsevier journals showed that only 1.7% disclosed large language model use, mostly for minor edits and predominantly from non-English-speaking institutions.

Collectively, these findings highlight a stark gap between AI adoption and disclosure, a gap shaped by multiple forces, including uncertainty over what types of AI assistance meaningfully count as “use” at different stages of the research lifecycle.

A Patchwork of Definitions Across the Research Lifecycle

Across policy reviews and comparative analyses, a recurring theme emerges: AI disclosure requirements are fragmented and inconsistently defined. Most guidelines emphasize transparency in principle, but offer limited guidance on where in the research process AI involvement becomes ethically or scientifically material. Policies from major publishers show different interpretations of what qualifies as AI use requiring disclosure. For example:

  • SAGE distinguishes assistive AI (no disclosure required) from generative AI (disclosure required), with separate criteria for text, images, and analysis.
  • Nature Portfolio journals, by contrast, require authors to disclose the use of large language models or generative AI tools (apart from their use in copy-editing), but do not provide a formal classification of AI use by function (e.g., assistive vs. generative, surface-level vs. substantive).

Some publishers have begun to draw explicit functional distinctions between different types of AI use, while others retain broad, principle-based language that leaves substantial room for interpretation. Terms such as “substantial contribution,” “meaningful assistance,” or “content generation” appear frequently in publisher policies without thresholds, examples, or decision criteria. As a result, identical AI uses may be disclosed in one journal and omitted in another. This inconsistency extends beyond writing into upstream research activities, where AI support is often invisible to editors and readers alike.

The Narrow Focus on Writing Misses the Larger Risk

Focusing disclosure policies primarily on manuscript preparation creates a misleading boundary and misrepresents where AI influence is most consequential. AI systems increasingly assist with:

  • Identifying relevant literature and synthesizing prior work
  • Generating or refining research questions
  • Supporting statistical analysis and modeling
  • Producing data visualizations and figures
  • Identifying patterns that influence interpretation

Each of these activities can shape how knowledge is produced, framed, and interpreted. Yet many fall outside the scope of current disclosure norms, not because they are ethically neutral, but because definitions of AI use remain underdeveloped.

This gap creates a paradox: visible AI use in writing is scrutinized, while less visible but potentially more influential AI use earlier in the research process often goes undisclosed.

Editing, Automation, and Intellectual Influence: Where Confusion Begins

The definitional challenge is compounded by long-standing norms around automation and support tools in research. Statistical software, reference managers, plagiarism detectors, and language editors have long been integrated into scholarly workflows without formal disclosure requirements.

AI blurs these categories. When automation shifts from executing predefined instructions to generating suggestions, interpretations, or summaries, the line between support and intellectual influence becomes harder to draw. Studies consistently show that researchers struggle to determine when AI assistance crosses this line, particularly when the output is reviewed and approved by a human.

Without guidance that distinguishes between process efficiency and epistemic contribution, disclosure decisions are left to individual judgment, resulting in uneven reporting.

Why Precision Matters for Responsible AI

The absence of clear, lifecycle-wide definitions has implications that extend beyond compliance:

  • Trust: Readers cannot reliably interpret disclosure statements if similar AI uses are reported differently. A recent analysis of more than 5.2 million papers revealed a dramatic transparency gap: only 0.1% of 75,000 papers published since 2023 explicitly disclosed AI use, despite widespread policy adoption and evidence of actual use, undermining trust when the true extent of AI involvement is unknown.
  • Comparability: Inconsistent and incomplete AI disclosure undermines the reliability of meta-research that seeks to assess AI’s prevalence, impact, and risks in scholarly communication. A comparative analysis of AI policies in bioethics and humanities journals found that 30% of journals had no publicly available AI policy. This fragmentation makes it difficult to compare AI use across journals, weakening evidence-based policy development and obscuring the true role of AI in research practices.
  • Governance: Editors and publishers lack shared, enforceable standards, and confusion about how policies apply risks uneven editorial decisions. The BMC comparative study notes that the absence of clear AI policies can leave authors unsure whether to disclose AI use and can lead different editors within the same journal to apply different interpretations, weakening governance.
  • Equity: Definitional uncertainty and policy inconsistency create asymmetries among researchers: those who disclose AI use may face perceived editorial risk, while others benefit from AI support without transparency. These dynamics disadvantage authors who seek to comply ethically, reinforcing inequities in scholarly recognition and trust.

As a result, responsible AI governance in research depends not simply on disclosure mandates, but on definitions that align with how research is actually conducted. Rather than asking whether AI was used, the more meaningful question is: Did AI meaningfully influence research decisions, interpretations, or scholarly outputs?

This shift enables proportional transparency, recognizing that not all AI use carries the same ethical or scientific weight.

Toward a Holistic, Tiered Approach

A lifecycle-aware disclosure framework distinguishes between four categories of AI use:

Routine Automation
  • Definition: AI-enabled functions that support efficiency for mechanical tasks without shaping research content, reasoning, or credibility in any manner.
  • Typical examples to consider: Spell-checking, citation formatting, transcription without summarization, plagiarism screening, file or figure formatting.
  • Proposed disclosure expectation: Not required.
  • Expected impact on editorial handling: Treated as standard digital infrastructure; no impact on peer review or authorship.

Assistive Use
  • Definition: AI support that improves workflow or presentation by reshaping the expression, but does not directly or greatly alter scholarly claims or conclusions.
  • Typical examples to consider: Language editing, paraphrasing for clarity, translation verified by authors, structural suggestions.
  • Proposed disclosure expectation: Recommended (light-touch).
  • Expected impact on editorial handling: Disclosure recorded for transparency; no editorial escalation.

Substantive Contributions
  • Definition: AI use that contributes directly or indirectly to research outputs, while humans retain decision-making control.
  • Typical examples to consider: Drafting background sections, generating tables or figures, data classification, literature synthesis support.
  • Proposed disclosure expectation: Required.
  • Expected impact on editorial handling: Editors assess scope and appropriateness; reviewers may be informed.

Interpretive Influence
  • Definition: AI use that affects reasoning, interpretation, or conclusions.
  • Typical examples to consider: AI-supported interpretation of results, predictive modeling, inference generation, scenario analysis.
  • Proposed disclosure expectation: Mandatory, with a detailed description.
  • Expected impact on editorial handling: Heightened editorial scrutiny; possible methodological justification or limitations required.

Such an approach aligns disclosure requirements with function, influence, and accountability, rather than with specific technologies or brand names. It also preserves a foundational principle consistently emphasized across ethics guidelines: human researchers remain fully responsible for the integrity of their work, regardless of AI involvement.
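
To make the tiers more concrete, here is a minimal sketch of how a submission system might encode them and resolve an author’s declared AI-assisted activities to the highest-impact category. The category names follow the table above, while the activity keywords and the classify_ai_use() helper are illustrative assumptions, not taken from any existing publisher policy or tool.

```python
# Illustrative sketch only: category names follow the tiers proposed above,
# while the activity keywords and classify_ai_use() are hypothetical and not
# drawn from any publisher's actual policy or submission system.

TIERS = {
    "routine_automation": {
        "disclosure": "not required",
        "examples": {"spell-checking", "citation formatting", "plagiarism screening"},
    },
    "assistive_use": {
        "disclosure": "recommended (light-touch)",
        "examples": {"language editing", "paraphrasing for clarity", "author-verified translation"},
    },
    "substantive_contribution": {
        "disclosure": "required",
        "examples": {"drafting background sections", "generating figures", "literature synthesis support"},
    },
    "interpretive_influence": {
        "disclosure": "mandatory with detailed description",
        "examples": {"interpretation of results", "predictive modeling", "inference generation"},
    },
}

# Evaluate tiers from highest to lowest impact so that mixed declarations
# resolve to the most demanding disclosure expectation.
TIER_ORDER = [
    "interpretive_influence",
    "substantive_contribution",
    "assistive_use",
    "routine_automation",
]


def classify_ai_use(declared_activities):
    """Return (tier, disclosure expectation) for the highest-impact match."""
    declared = set(declared_activities)
    for tier in TIER_ORDER:
        if TIERS[tier]["examples"] & declared:
            return tier, TIERS[tier]["disclosure"]
    return None, "no recognized AI use declared"


if __name__ == "__main__":
    tier, expectation = classify_ai_use(["language editing", "generating figures"])
    print(tier, "->", expectation)
    # Expected output: substantive_contribution -> required
```

The one design choice worth noting is that mixed uses resolve to the highest-impact tier, which is a simple way to operationalize proportional transparency in practice.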

Defining AI Use Is a Governance Imperative

The scholarly community has moved quickly to demand transparency around AI. What it has not yet resolved is the definitional foundation that makes transparency meaningful.

As AI continues to shape research across the lifecycle, limiting disclosure discussions to writing alone risks misrepresenting its role in knowledge production. A holistic, impact-based understanding of AI use is essential—not only for consistent disclosure, but for sustaining trust in the scholarly record.

For Enago’s Responsible AI Movement, this challenge carries a central message: responsible AI adoption begins with clear definitions that reflect real research practice, not just policy intent.