{"id":56914,"date":"2025-11-21T20:09:08","date_gmt":"2025-11-21T14:09:08","guid":{"rendered":"https:\/\/www.enago.com\/academy\/?p=56914"},"modified":"2026-03-31T15:00:28","modified_gmt":"2026-03-31T09:00:28","slug":"ai-hallucinations-research-citations","status":"publish","type":"post","link":"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/","title":{"rendered":"Understanding Citation Ethics: Why You Should Never Rely Solely on AI for Literature Discovery"},"content":{"rendered":"<p>Recent evaluations of generative AI show a worrying pattern: many AI systems produce plausible-looking but incorrect or entirely fabricated bibliographic references. In <a href=\"https:\/\/arxiv.org\/abs\/2505.18059\">one multi-model study<\/a> of academic bibliographic retrieval, only 26.5% of generated references were entirely correct, while nearly 40% were erroneous or fabricated.<\/p>\n<p>For researchers, students, and institutional authors, this matters because literature discovery and accurate citation underpin reproducibility, <a href=\"https:\/\/www.enago.com\/publication-support-services\/peer-review-process\" data-internallinksmanager029f6b8e52c=\"115\" title=\"Peer Review\" target=\"_blank\" rel=\"noopener\">peer review<\/a>, and scholarly trust. 
This article explains what goes wrong when you rely solely on AI for literature discovery, why those failures occur, and, most importantly, the practical, implementable workflows and checks you can use to preserve research integrity.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_74 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#Benefits_of_using_AI_in_literature_discovery\" >Benefits of using AI in literature discovery<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a 
class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#Risks_of_relying_solely_on_AI\" >Risks of relying solely on AI<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#How_AI_hallucinations_happen\" >How AI hallucinations happen<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#Practical_step-by-step_workflow\" >Practical, step-by-step workflow<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#Common_mistakes_to_avoid\" >Common mistakes to avoid<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.enago.com\/academy\/ai-hallucinations-research-citations\/#Next_steps\" >Next steps<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Benefits_of_using_AI_in_literature_discovery\"><\/span><strong>Benefits of using AI in literature discovery <\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li><strong>Rapid ideation and scope definition<\/strong>: AI can suggest search terms, identify related topics, and help outline a search strategy.<\/li>\n<li><strong>Time savings on routine tasks<\/strong>: <a href=\"https:\/\/read.enago.com\/features\/\" data-internallinksmanager029f6b8e52c=\"80\" title=\"Summarization\" target=\"_blank\" rel=\"noopener\">Summarization<\/a> and screening of abstracts can reduce workload when used as an assistive tool. 
However, speed is not the same as validated accuracy.<\/li>\n<\/ul>\n<p>These strengths make AI a useful assistant but not a substitute for rigorous literature discovery.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Risks_of_relying_solely_on_AI\"><\/span><strong>Risks of relying solely on AI <\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li><strong>Hallucinated or fabricated citations<\/strong>: Multiple domain-specific evaluations have documented substantial rates of fabricated or incorrect references from large language models. For example, a nephrology-focused <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC10488525\">evaluation<\/a> found that only 62% of ChatGPT\u2019s suggested references existed and that about 31% were fabricated or incomplete.<\/li>\n<li><strong>Variable accuracy by topic and recency<\/strong>: Hallucination rates tend to rise for newer or niche topics where the model\u2019s training data is sparse; one <a href=\"https:\/\/arxiv.org\/abs\/2411.07031\">evaluation<\/a> of chatbots found hallucination rates increased for more recent topic areas.<\/li>\n<\/ul>\n<h2><span class=\"ez-toc-section\" id=\"How_AI_hallucinations_happen\"><\/span><strong>How AI hallucinations happen <\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>AI language models are pattern predictors: they generate plausible text given a prompt, but they do not \u201cretrieve\u201d verified bibliographic records in the way a database does. When asked for citations, models may invent titles, DOIs, or journal names that fit learned patterns. 
Retrieval-augmented approaches (RAG) can reduce this risk but do not eliminate it.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Practical_step-by-step_workflow\"><\/span><strong>Practical, step-by-step workflow <\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ol>\n<li><strong>Use AI for brainstorming\u2014not for sourcing<\/strong>\n<ul>\n<li>Ask AI to suggest keywords, synonyms, and broader search terms to inform database queries. Verify every specific reference yourself.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Search primary bibliographic databases first<\/strong>\n<ul>\n<li>Perform structured searches in discipline-appropriate databases (PubMed\/Medline, Scopus, Web of Science, IEEE Xplore, Google Scholar) and record your search strings and date ranges. Avoid treating AI output as a primary search result.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Treat AI-recommended references as leads, not authorities<\/strong>\n<ul>\n<li>If AI provides a citation (title, DOI, authors), independently verify the DOI, publisher, and full text via the relevant database or the publisher site before citing.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Use a verification checklist for every new reference:<\/strong>\n<ul>\n<li>Confirm DOI resolves to the correct article.<\/li>\n<li>Verify author names, journal, volume, pages, and year in CrossRef\/Google Scholar.<\/li>\n<li>Access the abstract or full text to ensure the article supports your claim.<\/li>\n<li>Flag any mismatch and remove fabricated or unverifiable items.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Combine AI with structured, reproducible review methods<\/strong>\n<ul>\n<li>For systematic reviews, document your protocol and follow PRISMA guidelines for search, selection, and reporting. 
This preserves transparency and mitigates propagation of AI errors.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Use retrieval-augmented tools cautiously<\/strong>\n<ul>\n<li>Tools built to combine LLMs with database retrieval can reduce hallucinations but are not foolproof; continue human validation.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h2><span class=\"ez-toc-section\" id=\"Common_mistakes_to_avoid\"><\/span><strong>Common mistakes to avoid<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li>Copy-pasting AI-provided references into your bibliography without verification.<\/li>\n<li>Assuming an AI\u2019s confidence equals correctness. LLMs can express falsehoods convincingly.<\/li>\n<li>Skipping full-text reads and relying on AI abstracts or summaries alone. This can produce misinterpretations of methods or findings.<\/li>\n<\/ul>\n<h2><span class=\"ez-toc-section\" id=\"Next_steps\"><\/span><strong>Next steps <\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>As you conduct your next literature search, be sure to implement a verification checklist. If you&#8217;re preparing a systematic review, remember to register your protocol (e.g., PROSPERO, where applicable), follow PRISMA guidelines, and collaborate with a librarian or information specialist. If you need editorial or bibliographic support, check out our <a href=\"https:\/\/www.enago.com\/publication-support-services\/Literature-search-and-citation-service\">Literature Search and Citation Service<\/a> and <a href=\"https:\/\/www.read.enago.com\/\">our AI assistant for literature discovery<\/a>.<\/p>\n<p>Enago\u2019s <a href=\"https:\/\/www.enago.com\/editing-services\">manuscript services<\/a> help researchers ensure clarity, proper citation formatting, and adherence to reporting guidelines, including those for systematic reviews. 
Our expert editors can review your bibliography for consistency, check citation formats, and provide guidance on best practices for reporting, ensuring your submission meets journal standards.<\/p>\n<div style=\"display:flex; gap:10px;justify-content:\" class=\"wps-pgfw-pdf-generate-icon__wrapper-frontend\">\n\t\t<a  href=\"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/56914?action=genpdf&amp;id=56914\" class=\"pgfw-single-pdf-download-button\" ><img data-src=\"https:\/\/www.enago.com\/academy\/wp-content\/plugins\/pdf-generator-for-wp\/admin\/src\/images\/PDF_Tray.svg\" title=\"Generate PDF\" style=\"width:auto; height:45px;\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><\/a>\n\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>Recent evaluations of generative AI show a worrying pattern: many AI systems produce plausible-looking but&hellip;<\/p>\n","protected":false},"author":4,"featured_media":56918,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"om_disable_all_campaigns":false,"footnotes":""},"categories":[1988,2],"tags":[],"ppma_author":[1895],"class_list":["post-56914","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles","category-academic-writing"],"better_featured_image":{"id":56918,"alt_text":"AI Hallucinations in Research: Why 40% of AI Citations Are Wrong","caption":"","description":"Discover why AI generates fake citations and how to verify references safely. 
Learn practical workflows to prevent AI hallucinations from compromising your research integrity and literature reviews.","media_type":"image","media_details":{"width":910,"height":340,"file":"2025\/11\/Sami-EA-Blogs-Banner-910-x-340-px-2.jpg","filesize":145412,"sizes":{},"image_meta":{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"1","keywords":[]}},"post":56914,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2025\/11\/Sami-EA-Blogs-Banner-910-x-340-px-2.jpg"},"acf":{"faq_main_heading":"Frequently Asked Questions","faq_heading_one":"What are AI hallucinations in academic research and citation generation?","faq_heading_two":"How accurate are AI-generated citations and bibliographic references?","faq_heading_three":"How can researchers verify AI-generated citations before using them?","faq_heading_four":"Why do AI language models fabricate citations and bibliographic references?","faq_heading_five":"What is the safest workflow for using AI in literature discovery and systematic reviews?","faq_heading_six":"Should researchers use AI tools for systematic literature reviews and meta-analyses?","faq_description_one":"AI hallucinations occur when generative AI systems produce plausible-sounding but fabricated or incorrect information, including fake citations, non-existent DOIs, and invented journal articles. In academic research, these hallucinations undermine reproducibility and scholarly trust. Multi-model studies show that nearly 40% of AI-generated references contain errors or complete fabrications, with only 26.5% being entirely correct, making verification essential for maintaining research integrity.","faq_description_two":"AI citation accuracy varies significantly by topic and recency. 
A comprehensive multi-model study found only 26.5% of generated references were entirely correct, while approximately 40% were erroneous or fabricated. Domain-specific evaluations reveal further concerns: a nephrology-focused study discovered only 62% of ChatGPT's suggested references actually existed, with 31% being fabricated or incomplete. Hallucination rates increase substantially for newer or niche topics where training data is limited.","faq_description_three":"Researchers should implement a systematic verification checklist for every AI-suggested reference: confirm the DOI resolves to the correct article through CrossRef or publisher websites, verify all metadata including author names, journal title, volume, pages, and publication year in primary databases like PubMed or Web of Science, access and review the abstract or full text to ensure content supports your claim, and remove any unverifiable items immediately from your bibliography.","faq_description_four":"AI language models are pattern predictors, not bibliographic databases. They generate text that appears plausible based on learned patterns from training data, but they don't retrieve verified records. When prompted for citations, models may invent titles, DOIs, authors, or journal names that fit statistically likely patterns without confirming actual existence. Retrieval-augmented generation (RAG) approaches can reduce this risk by connecting models to real databases, but they don't eliminate hallucination entirely.","faq_description_five":"The safest approach uses AI for brainstorming keywords and search terms only, not for sourcing citations. Conduct structured searches in discipline-specific databases like PubMed, Scopus, Web of Science, or IEEE Xplore first, documenting search strings and date ranges. Treat any AI-recommended references as unverified leads requiring independent confirmation through primary databases. 
For systematic reviews, register your protocol with PROSPERO, follow PRISMA reporting guidelines, and collaborate with information specialists to ensure transparency and reproducibility.","faq_description_six":"AI can serve as a supplementary brainstorming tool for systematic reviews but should never replace structured, reproducible methodology. Researchers must follow established protocols like PRISMA guidelines, register review protocols in appropriate registries such as PROSPERO, conduct searches in primary bibliographic databases, and maintain detailed documentation of search strategies. AI-assisted screening may reduce workload, but human validation of every citation, inclusion decision, and data extraction step remains essential for maintaining systematic review quality and research integrity."},"views":1425,"single_webinar_page_date":null,"single_webinar_page_time":null,"session_agenda":null,"who_should_attend_this_session":null,"about_the_speaker_field":null,"co-webinar-sec":null,"co_webinar_sec_one":null,"speaker-name":null,"webinar-date":null,"webinar-time":null,"webinar-s-image":null,"custum_webinar_category":null,"authors":[{"term_id":1895,"user_id":4,"is_guest":0,"slug":"editor","display_name":"Enago Academy","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/2ef4bc47f3ceaa56f5eb3b26f9520fad298ba36ede4f86315997ffb45db37a1f?s=96&d=identicon&r=g","author_category":"","user_url":"","last_name":"Academy","first_name":"Editor","job_title":"","description":"Enago Academy, the knowledge arm of Enago, offers comprehensive and up-to-date resources on academic research and scholarly publishing to all levels of scholarly professionals: students, researchers, editors, publishers, and academic societies. It is also a popular platform for networking, allowing researchers to learn, share, and discuss their experiences within their network and community. 
The team, which comprises subject matter experts, academicians, trainers, and technical project managers, are passionate about helping researchers at all levels establish a successful career, both within and outside academia."}],"_links":{"self":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/56914","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/comments?post=56914"}],"version-history":[{"count":2,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/56914\/revisions"}],"predecessor-version":[{"id":56917,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/56914\/revisions\/56917"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/media\/56918"}],"wp:attachment":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/media?parent=56914"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/categories?post=56914"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/tags?post=56914"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/ppma_author?post=56914"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}