Researcher Alert! AI Writing Surges in Research: 1 in 5 Computer Science Papers and Over 1,000 Journals at Risk
By: Enago
In a stark wake-up call for scholarly communication, a recent Science report revealed that about one in five computer science papers may contain content generated by artificial intelligence (AI). What makes this even more alarming is the speed of escalation. In early 2022, detectable AI-written text in such papers was negligible; by mid-2023 it had begun to surface sporadically; and by late 2024 it had surged past 22% of abstracts in some subfields. This steep rise has sent shockwaves through the research ecosystem because it spotlights a broader, accelerating trend of AI use in research writing, raising urgent concerns about authorship, accountability, and trust.
What Did the Science Study Find?
According to the Science article, researchers observed a sharp surge in AI-generated language across scientific manuscripts, particularly since November 2022, when OpenAI released ChatGPT. By analyzing patterns in phrasing, especially in abstracts and introductions, across more than 1.1 million scientific articles and preprints (2020–2024), they found that up to 22.5% of computer science abstracts showed signs of large language model (LLM) involvement.
Other fields also saw increasing AI influence: electrical engineering (~18%), statistics (~12.9%), physics (~9.8%), and even mathematics (~7.8%). Notably, these increases accelerated sharply after the release of ChatGPT, indicating a structural shift rather than a temporary trend.
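The report does not detail the exact detection method, but one widely used approach compares how often certain "LLM-favored" marker words appear in abstracts written before and after ChatGPT's release. The minimal Python sketch below is purely illustrative: the marker-word list, the toy corpora, and the marker_rate helper are assumptions made for demonstration, not the Science study's actual methodology or data.

```python
import re

# Illustrative list of words that LLMs are often reported to overuse;
# real analyses derive such lists statistically from large corpora.
MARKER_WORDS = {"delve", "intricate", "pivotal", "showcasing", "underscores"}

def marker_rate(abstracts):
    """Return marker-word occurrences per 1,000 tokens across a corpus."""
    tokens = 0
    hits = 0
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        tokens += len(words)
        hits += sum(1 for w in words if w in MARKER_WORDS)
    return 1000 * hits / max(tokens, 1)

# Toy usage; real studies compare hundreds of thousands of abstracts.
pre_chatgpt = ["We study graph algorithms for sparse networks."]
post_chatgpt = ["We delve into intricate graph algorithms, weighing pivotal trade-offs."]
print(marker_rate(pre_chatgpt), marker_rate(post_chatgpt))
```

A jump in this rate after November 2022, far beyond normal year-to-year variation, is the kind of signal such analyses look for.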
These numbers suggest that generative AI is no longer just a writing aid; it is becoming woven into how manuscripts are produced. What makes this trend especially concerning is the human detection gap: reviewers, editors, and even experienced researchers are increasingly unable to differentiate between human-written and AI-written text. As a result, AI-modified writing is silently entering the literature, often without disclosure and often without being noticed at all.
The Other Side of the Coin: Predatory Journals Exposed by AI
Meanwhile, a recent University of Colorado Boulder study revealed that a different AI tool has been working to identify potential threats to scientific integrity. The system screened roughly 15,200 open-access journals and initially flagged more than 1,400 as suspicious. After expert human review, over 1,000 were confirmed to exhibit hallmarks of predatory publishing. That means roughly 1 in every 14 open-access journals may be operating without credible scholarly standards, a striking figure that remains dramatically under-communicated in the broader academic community.
Some of the strongest red flags included missing or fake editorial boards, unusually high levels of self-citation, and authors with many institutional affiliations.
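For illustration only, the kind of rule-based screening these red flags suggest could be sketched as below. This is a hypothetical simplification: the metadata fields, thresholds, and the red_flags function are assumptions made for demonstration, not the actual University of Colorado Boulder system, which combines far more signals and, as noted above, still relies on expert human review.

```python
def red_flags(journal):
    """Return the red flags raised by a journal's metadata.

    Expected (illustrative) keys: 'editorial_board_verified' (bool),
    'self_citation_rate' (0-1), 'avg_affiliations_per_author' (float).
    """
    flags = []
    if not journal.get("editorial_board_verified", False):
        flags.append("missing or unverifiable editorial board")
    if journal.get("self_citation_rate", 0.0) > 0.30:        # threshold is illustrative
        flags.append("unusually high self-citation")
    if journal.get("avg_affiliations_per_author", 0.0) > 4:  # threshold is illustrative
        flags.append("authors with implausibly many affiliations")
    return flags

# A journal raising several flags would be routed to expert reviewers, not auto-labeled.
suspect = {"editorial_board_verified": False,
           "self_citation_rate": 0.45,
           "avg_affiliations_per_author": 5.2}
print(red_flags(suspect))
```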
Why These Two Threads Matter, and How They Connect
Putting both trends together, with AI quietly appearing in thousands of manuscripts and a different AI system exposing more than 1,000 suspect journals, reveals a landscape of both opportunity and peril for academic publishing.
1. Scaling Risk & Reward
AI helps researchers write faster, polish text, and generate ideas. However, the same tools can produce authoritative-sounding yet misleading content ("hallucinations") and may be used to inflate publication counts with low-quality papers.
2. Broken Transparency
Many authors now generate substantial portions of their manuscripts using AI but do not disclose it. For example, the Academ-AI dataset shows hundreds of suspected instances of undeclared AI usage in peer-reviewed literature. Without clear acknowledgement, AI-assisted writing can affect trust: readers, reviewers, and institutions may not realize what “authorship” actually means in these cases.
3. Predatory Publishing + AI = Double Threat
Predatory journals exploit the academic pressure to "publish or perish." With AI writing, it becomes easier and cheaper to generate manuscripts that appear credible but lack scientific rigor. This creates a dangerous "supply chain" between AI-generated text and journals willing to publish anything for a fee.
Why This Matters: Risks and Implications
1. Transparency & Disclosure
One of the most significant problems is that AI use is rarely disclosed. Without clear acknowledgment, readers and reviewers may misinterpret AI-assisted sections as entirely human-written, and this opacity undermines trust. Because many journals still lack concrete guidelines, authors may not even realize the ethical dimensions of non-disclosure.
2. Quality vs. Quantity Trade-off
While AI can accelerate the writing process, there’s a risk that parts of a manuscript, especially pivotal sections like introductions, become formulaic or hollow. If generative tools are overused, we might see a loss of the subtle creativity, rigor, or critical insight that characterizes strong scientific writing.
3. Peer-Review Challenges
If peer reviewers are unaware of AI involvement, they may fail to catch errors, hallucinations, or misleading phrasing. Given how LLMs can hallucinate plausible but false claims, this is not a hypothetical risk. The more LLM-generated content slips through unchecked, the more credibility the academic record could lose.
4. Erosion of Genuine Authorship and Credit
When AI writes portions of a paper (or entire drafts), how do we assess contribution? Traditional norms of authorship, such as who did the experiments, who wrote the draft, and who edited it, may blur. This could challenge how credit, responsibility, and accountability are allocated in research teams.
What Can Be Done?
Given the scale and speed of AI’s infiltration into scientific writing, the academic community must act quickly. Here are some practical steps forward:
- Standardized Disclosure Statements
Journals should require authors to clearly disclose where and how they used AI. The disclosure should include:
- Which tool was used (e.g., ChatGPT, Claude)
- The version or model (e.g., GPT-4)
- When and how it was used (e.g., drafting introduction, polishing grammar)
- The extent of use (percentage of text, sections)
- AI-Usage Checklists for Submissions
Just as authors fill out conflict-of-interest or data-sharing checklists, they should also complete an AI-usage checklist. This could help editors and reviewers understand potential AI contributions and decide where to scrutinize more deeply (a sketch of what such a structured disclosure might look like follows this list).
- Training for Reviewers and Editors
Peer reviewers and journal editors need to be educated on how to detect LLM-like writing. By learning common AI tropes, such as unusual phrasing and repetitive structures, they can better assess manuscript authenticity.
- Open, Transparent Detection Tools
Develop and support open-source tools for detecting AI-generated text. Transparency in the detection process will build trust; black-box detectors may raise fairness issues (e.g., bias against non-native English writers).
- Institutional & Funder Policies
Universities, funding bodies, and research institutions should develop policies around acceptable AI usage. They should balance encouraging productivity with ensuring scientific integrity.
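To make the disclosure statement and AI-usage checklist above concrete, here is one way such a declaration could be captured in a structured, machine-readable form. This is a hypothetical sketch: the AIUsageDisclosure class, its field names, and the example values are illustrative assumptions, not an existing journal standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageDisclosure:
    """Illustrative record of how AI was used in preparing a manuscript."""
    tool: str                                          # e.g., "ChatGPT", "Claude"
    model_version: str                                 # e.g., "GPT-4"
    tasks: list = field(default_factory=list)          # e.g., ["drafting introduction"]
    sections_affected: list = field(default_factory=list)
    approximate_extent: str = ""                       # e.g., "~10% of the text"

disclosure = AIUsageDisclosure(
    tool="ChatGPT",
    model_version="GPT-4",
    tasks=["polishing grammar", "drafting introduction"],
    sections_affected=["Introduction"],
    approximate_extent="~10% of the text",
)
print(disclosure)
```

A record like this could accompany a submission alongside conflict-of-interest and data-availability statements, giving editors and reviewers a consistent place to look.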
Why This Moment Matters
The convergence of AI-assisted writing and AI-enabled journal policing represents a pivotal moment in the history of science. On one hand, generative tools can democratize writing, make knowledge creation more accessible, and speed up idea exchange. On the other hand, without guardrails, generative AI may erode the integrity of the research record, blur authorship, and bolster shady publishing practices. Yet there is hope. What we urgently need are community norms, shared policies, and transparency. With well-defined AI-disclosure guidelines, we can ensure that generative AI adds value to research, rather than becoming a reason for its degradation.
Organizations across the ecosystem are beginning to define what “responsible AI” truly means in research, and one of the most proactive voices in this space is Enago’s Responsible AI Movement. By offering frameworks, training resources, and guidance for researchers worldwide, Enago’s movement reinforces the principle that AI should augment and not replace human scholarship. Ultimately, the future of science depends not only on technological tools but on the values that guide their use.