Only 5.7% Disclosed AI Use! Why Researchers Need Unified Guidelines Urgently
By: Enago
The recent preprint titled “Authors self-disclosed use of artificial intelligence in research submissions to 49 biomedical journals” reports a striking figure: only 5.7% of the 25,114 empirical biomedical manuscripts submitted between April and November 2024 openly disclosed any use of AI. This sharply contrasts with prior surveys, where between 28% and 76% of researchers admitted to using AI tools in preparing their manuscripts. This gap between estimated AI use and actual disclosure highlights a growing tension: AI is rapidly reshaping biomedical research and writing, but transparency is dangerously lagging behind.
The 5.7% Disclosure Rate — and Why It Matters
The study analyzed 25,114 empirical manuscripts submitted across 49 biomedical journals published by BMJ between April 8 and November 6, 2024. Of these, only 1,431 (5.7%) included a self-report of AI use. The most commonly disclosed AI tools were generative chatbots (e.g., LLM-based writing assistants), accounting for 56.7% of disclosures; writing assistants more broadly accounted for 12.7%. The main declared purpose was improving writing quality (87% of disclosing authors). Other uses, such as translation, data generation, literature search, data analysis, image processing, code writing, and reference management, were reported far less frequently. This means most disclosures reveal only low-risk uses, while higher-risk AI tasks remain unreported. Furthermore, a separate bibliometric analysis of 1,998 radiology papers in Elsevier journals reported that only 1.7% disclosed the use of LLMs (large language models).
Why does this low rate of disclosure matter?
- Scientific Integrity and Reproducibility:
The use of AI for writing, analysis, data generation, or translation can alter meaning and nuance, and even compromise data integrity. Without transparency, peer reviewers and readers cannot properly evaluate the provenance of text, analyses, or interpretations.
- Erosion of Trust and Accountability:
When AI use remains hidden, it fuels what some describe as a “hidden ecosystem” of AI-assisted writing and research. This undermines accountability, authorship norms, and the trustworthiness of the scientific record.
Is This Figure Just the Tip of the Iceberg?
The evidence points to yes. Several independent studies suggest AI involvement in a much higher proportion of biomedical research, often without disclosure. A large-scale linguistic analysis of over 15 million biomedical abstracts (2010–2024) found a marked increase in so-called “style words,” adjectives and verbs typically found in LLM-generated text. The authors argue that these stylistic markers point to at least 13.5% of 2024 abstracts having been processed by large language models.
These findings collectively suggest that the true prevalence of AI assistance in biomedical research is likely much higher than what disclosure statistics indicate. In other words: many papers may depend on AI for writing, editing, data handling, or even drafting, without any formal acknowledgment or structured disclosure.
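To make the approach concrete, here is a minimal, self-contained sketch of how a style-word frequency comparison might look. The marker words, toy corpora, and metric below are hypothetical illustrations, not the word list or methodology of the cited study.

```python
# Minimal sketch: compare the rate of hypothetical "style words" in a
# recent corpus against a pre-LLM baseline. All data here are toy examples,
# not the cited study's actual word list or corpora.
from collections import Counter

# Hypothetical marker words often associated with LLM-edited prose.
STYLE_WORDS = {"delve", "pivotal", "notably", "underscore", "intricate"}

def style_word_rate(abstracts):
    """Return the fraction of tokens that are marker words."""
    tokens = [w.strip(".,;:()").lower() for a in abstracts for w in a.split()]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in STYLE_WORDS)
    return hits / max(len(tokens), 1)

# Toy corpora standing in for pre-2023 (baseline) and 2024 abstracts.
baseline = ["We measured enzyme activity in rat liver samples."]
recent = ["We delve into pivotal mechanisms that notably underscore disease."]

excess = style_word_rate(recent) - style_word_rate(baseline)
print(f"Excess style-word rate in the recent corpus: {excess:.3f}")
```

At scale, the same basic idea, measuring excess marker-word frequency relative to a pre-LLM baseline, is what allows such studies to estimate how much of the literature has been touched by LLMs.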
What are the Repercussions?
1. Accountability
When authors use AI but do not disclose it, responsibility for accuracy, integrity, and originality becomes dangerously murky. If AI-generated content contains errors or biased phrasing, who is responsible? Undisclosed use blurs that line.
2. Accuracy and Validity
Generative AI tools, however recently updated or extensively trained, are not flawless. They can hallucinate facts, misinterpret data, fabricate citations, overstate significance, or introduce subtle biases. Without a clear record of what was AI-assisted (e.g., text generation, data summarization, image processing), peers and readers cannot reliably assess methodological rigor, verify reproducibility, or interpret results in context.
3. Peer Review and Publishing Systems
Systematic non-disclosure can create a creeping “shadow literature,” in which parts of the academic output are AI-augmented without the knowledge of reviewers, editors, or readers. This threatens long-term trust in published evidence, especially in highly sensitive fields like medicine, where accuracy matters for patient safety and policy decisions.
Where Do the Existing Guidelines Fall Short?
Many publishers and journals have issued guidance on AI use. Major academic publishers like Elsevier, Wiley, Taylor & Francis, and SAGE have issued policies prohibiting AI authorship while allowing limited, disclosed use for tasks like language editing, with human oversight required. However, existing practices are fragmented, inconsistent, and often insufficient to ensure real transparency. These guidelines often vary not only between publishers but also between journals under the same publisher, leaving authors confused and disclosure optional, inconsistent, or omitted. In addition, the lack of unified language and structure adds to the opacity: where to disclose (methods section, acknowledgments, or cover letter), how detailed to be (just the tool name or a full description of its use), and which types of AI use require disclosure (writing, data processing, analysis, translation, etc.). This confusion and heterogeneity can dissuade authors from disclosing, even when they rely heavily on AI.
What a Comprehensive Disclosure Guideline Could and Should Offer
Given the stakes, the scholarly community needs a unified, standardized framework for AI disclosure. Here’s what such a guideline should include:
1. Clear Definition of What Constitutes Reportable AI Use
The guideline should define AI use broadly to include generative writing assistants (e.g., large language models), translation tools, code-generation tools, data analysis/processing tools, image-processing AI (e.g., for radiology), reference management AI, and any other AI-driven assistance. This avoids narrow definitions that treat only “writing” as AI use.
2. Mandatory Disclosure With Structured Format
Instead of leaving disclosure optional or hidden in cover letters, journals should require a dedicated manuscript section (e.g., “AI-Assistance Declaration”) in which authors name the AI tools used (including version), describe what they were used for (writing, translation, data analysis, image processing, reference management, etc.), and clarify whether AI was used to generate new content (text/data) or only to assist (editing, translation, polishing). An illustrative declaration follows.
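For illustration only (the tool name, version, and wording here are hypothetical, not a prescribed template), such a declaration might read: “ChatGPT (GPT-4, OpenAI) was used to improve the language and readability of the Introduction and Discussion sections. It was not used to generate, analyze, or interpret data. The authors reviewed all AI-assisted text and take full responsibility for the content of this manuscript.”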
3. Alignment With Broader Open Science and Reproducibility Practices
AI disclosure should be integrated into larger frameworks of transparency: data sharing, code availability, methodology reporting. Especially for studies using AI for data analysis or image processing, journals should encourage (or require) sharing of AI-generated code, models, and data, where permissible.
Towards a Transparent, Trustworthy AI-Augmented Future
The fact that only 5.7% of biomedical research manuscripts disclosed AI use, despite far higher estimates of actual AI involvement, is a wake-up call for the entire publishing ecosystem. AI tools are already reshaping how scientific research is conducted, written, and reviewed. But this transformation comes with risks: loss of transparency, dilution of accountability, compromised reproducibility, and potential erosion of trust in the scientific record.
A unified, community-wide guideline for AI disclosure that is mandatory, structured, and transparent is no longer an optional nicety. It is essential. Supported by frameworks like AI Usage Cards and aligned with open-science and robust reporting practices, such guidelines can safeguard the integrity of biomedical research even as AI becomes deeply integrated.
If journals, publishers, researchers, and funders adopt clear, consistent disclosure standards now, we can reap the productivity and accessibility benefits of AI without compromising the core ethical values in publishing. If you believe the scientific community deserves clarity and not guesswork, then join Enago’s Responsible AI movement to help set the standard and define what responsible AI looks like in global research!