Insights on Responsible AI for Manuscript Preparation
Explore the latest articles, podcasts, webinars, and lectures on ethical AI use in manuscript preparation to stay informed in this fast-evolving field.

Can You Tell if AI Wrote Something? What Researchers Should Look For
As generative AI becomes embedded in research workflows, distinguishing between human and AI-generated writing is increasingly complex. This article explores the common linguistic patterns of LLM-generated text and offers practical strategies for researchers to maintain originality, critical thinking, and academic integrity while using AI tools.

50% More Papers, Less Science? What Does This Mean For You as a Researcher?
Generative AI tools are helping researchers publish 50% more papers—but is quantity driving down quality? This article explores the productivity paradox in academia, examining how AI adoption creates both opportunities and risks for research integrity, equity, and peer-review systems.

Guardrails First: Why STM’s Vision Must Anchor Responsible AI in Research
As generative AI reshapes scholarly publishing, the STM Association's governance framework offers a vital foundation — arguing that trust in research depends on systems, not just outputs. This opinion piece examines why structural guardrails must come first.

Adapting to the AI Revolution in Research: What You Need to Know and Do
Generative AI has rapidly shifted from a novelty to an essential tool in academic research, fundamentally changing how scholars write, code, and review literature. However, this rapid adoption has outpaced institutional guidelines. Explore the evolution of AI in academia and discover actionable steps for researchers, publishers, and universities to bridge the gap and ensure responsible, ethical use.

AI-Assisted Writing Makes Papers Sound Better, But Does It Compromise Scientific Rigor?
AI now assists over half of researchers with literature reviews, summarization, and manuscript drafting. But when fluent language obscures flawed science, the cost to research integrity can be significant. Here's what the evidence shows.

Wiley Did a Study on GenAI Use. 3 Things Stood Out.
Wiley surveyed 2,400+ researchers worldwide and found GenAI is now embedded in research workflows. But beyond rising adoption, three trends stand out: expectations are becoming more realistic, institutional policies are lagging behind usage, and heavy reliance on general-purpose tools is creating new risks for research integrity, privacy, and disclosure.

What Counts as “Generative AI Use” in Research Writing? The Definition Problem No One Has Solved!
Most journals now require AI disclosure, but what actually counts as "AI use"? With over 50% of researchers using GenAI tools yet fewer than 2% disclosing it, the problem isn't transparency policies; it's the lack of clear, lifecycle-wide definitions of AI involvement. This article examines why current disclosure norms fall short and proposes a tiered, impact-based framework for responsible AI governance in research.

Journals want transparency. Prompt disclosure could be the key.
AI prompts shape research outcomes as much as the models themselves, yet remain invisible in published studies. This article explores why prompt disclosure is essential for research transparency, reproducibility, and integrity, and why journal policies must evolve to require it. Discover how hidden prompts undermine scientific rigor and what researchers, editors, and journals can do to close this critical transparency gap.

Why 65% of Publishers Still Have No Clear AI Policy, and What It Means for Researchers – 6 Important Points to Consider
A clear breakdown of why most publishers still lack formal AI-use policies, the risks this creates for researchers, and the practical steps scholars must take to protect their work and reputation.

Researcher Alert! AI Writing Surges in Research: 1 in 5 Computer Science Papers and Over 1,000 Journals at Risk
Discover how AI is reshaping academic publishing: from 22% of computer science papers containing AI-generated content to over 1,000 predatory journals exposed by detection tools. Learn about the transparency crisis and what the research community must do now.

Retractions are at an all-time high. How can you avoid one?
A 10-fold surge in AI-linked retractions is threatening researcher careers worldwide. In 2025, over 200 papers have been retracted or investigated for undisclosed AI use, including 129 papers from a single journal. With 52% of AI-generated references being fabricated or distorted, the question isn't whether AI will cause retractions—it's whether your paper will be next. Learn the real dangers of blind trust in LLMs, why detection tools can't save you, and the 3 critical steps to protect your research credibility.

700 Research Papers Flagged for Undisclosed AI — Is Science Losing Its Human Voice?
The scientific community faces an unprecedented integrity crisis: 700+ papers suspected of containing undeclared AI-authored content, with Springer Nature alone retracting 3,000 articles in 2024. High-profile journals have lost their impact factors, and researchers face career-ending consequences for undisclosed AI use. Learn what triggers retractions, where the ethical line lies, and how to disclose AI use properly to protect your research credibility.

Fabricated Foundations of Modern Research — 55% of AI References Are Fake!
The rise of AI in research writing has created an unprecedented crisis: 55% of AI-generated citations are completely fabricated. From the "vegetative electron microscopy" incident to thousands of retracted papers, AI hallucinations are undermining scientific credibility. Learn why AI's polished fluency masks dangerous errors and how researchers can protect their work from this silent threat.

Integrity Alarms Sound as AI Text Creeps Into One in Every Three Academic Papers!
Artificial intelligence is reshaping academic writing at an unprecedented pace, raising serious concerns about plagiarism, detection accuracy, and the future of research integrity. This article explores how AI-generated text is infiltrating scholarly work, why detecting it is so challenging, and what publishers must do to ensure responsible, transparent use.

Avoid Desk Rejection: Insider Tips to Meet Submission Requirements and Ethical Standards
Join Enago's FREE webinar to learn essential strategies to avoid desk rejection. Gain valuable insights into manuscript submission, AI use, and ensuring compliance with journal guidelines to improve your chances of success.

Authenticating AI-Generated Scholarly Outputs: Practical Approaches to Take
Learn how to responsibly use AI in academic writing with Enago’s guidelines. Explore ethical practices, transparency in AI use, and how to maintain research integrity while incorporating AI tools into your manuscripts.

Unified AI Guidelines Crucial as Academic Writing Embraces Generative Tools
This article examines why unified AI guidelines are crucial as academic writing embraces generative tools, and what consistent, cross-publisher standards would mean for transparency and research integrity.

Responsible AI in Research: Establishing Trust with Clear Guidelines
Randy Townsend from Origin Editorial joins the discussion on the ethical concerns surrounding AI in scholarly publishing. He highlights the need for transparency, accountability, and clear guidelines to foster responsible AI use.

Retractions, AI, and the Future of Research Integrity
We dive into the significant impact of retractions on researchers' careers, especially for early-career academics. We also explore the critical need for transparency in authorship and AI usage, emphasizing how AI tools can enhance editorial quality without replacing human judgment. Balancing technology and human expertise is key to preserving the integrity of scientific publishing.
