
Anagha Nair
Anagha Nair is a content expert and publishing professional specializing in responsible AI, scholarly communications, and research integrity. Her work explores the growing tension between rapid AI adoption and the need for clear ethical boundaries, particularly in peer review, authorship, and research oversight. At Enago, she contributes to thought leadership and outreach initiatives that engage publishers, researchers, and policymakers on issues such as AI disclosure, peer review integrity, and ethical governance.
She is passionate about shaping conversations that move beyond AI enthusiasm toward more grounded, transparent, and responsible practices in scholarly communication.
Articles Published:

1.3 Million Scientists Ask ChatGPT 8 Million Questions a Week: 6 Strategies for Researchers to Stay in Control
With over 1.3 million scientists generating millions of weekly queries on ChatGPT, AI is rapidly becoming embedded in research workflows. While it accelerates discovery and simplifies complex tasks, it also introduces hidden risks like citation errors, bias, and reduced critical evaluation. This article explores how AI is evolving into research infrastructure and outlines six practical strategies to help researchers maintain control, accuracy, and intellectual independence.

50% More Papers, Less Science? What Does This Mean For You as a Researcher?
Generative AI tools are helping researchers publish 50% more papers—but is quantity driving down quality? This article explores the productivity paradox in academia, examining how AI adoption creates both opportunities and risks for research integrity, equity, and peer-review systems.

Wiley Did a Study on GenAI Use. 3 Things Stood Out.
Wiley surveyed 2,400+ researchers worldwide and found GenAI is now embedded in research workflows. But beyond rising adoption, three trends stand out: expectations are becoming more realistic, institutional policies are lagging behind usage, and heavy reliance on general-purpose tools is creating new risks for research integrity, privacy, and disclosure.

What Counts as “Generative AI Use” in Research Writing? The Definition Problem No One Has Solved!
Most journals now require AI disclosure, but what actually counts as "AI use"? With over 50% of researchers using GenAI tools yet fewer than 2% disclosing it, the problem isn't transparency policies; it's the lack of clear, lifecycle-wide definitions of AI involvement. This article examines why current disclosure norms fall short and proposes a tiered, impact-based framework for responsible AI governance in research.

Researcher Alert! AI Writing Surges in Research: 1 in 5 Computer Science Papers and Over 1,000 Journals at Risk
Discover how AI is reshaping academic publishing: from 22% of computer science papers containing AI-generated content to over 1,000 predatory journals exposed by detection tools. Learn about the transparency crisis and what the research community must do now.
