{"id":44054,"date":"2023-09-13T13:27:16","date_gmt":"2023-09-13T07:27:16","guid":{"rendered":"https:\/\/www.enago.com\/academy\/?p=44054"},"modified":"2023-09-19T01:30:36","modified_gmt":"2023-09-18T19:30:36","slug":"disclosing-ai-usage","status":"publish","type":"post","link":"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/","title":{"rendered":"Disclosing the Use of Generative AI: Best practices for authors in manuscript preparation"},"content":{"rendered":"<p>The rapid proliferation of generative and other AI-based tools in research writing has ignited an urgent need for transparency and accountability. Esteemed scientific journals such as <em>Nature<\/em> and reputable organizations like the Committee on Publication Ethics (COPE) have unequivocally emphasized the paramount significance of meticulously documenting <a href=\"https:\/\/www.enago.com\/academy\/guestposts\/harikrishna12\/best-ai-tools-to-empower-your-academic-research\/\" target=\"_blank\" rel=\"noopener\">AI tool usage in research<\/a>. It has become imperative for authors and publishers to adopt best practices for disclosing the use of these tools in manuscript preparation. Such practices not only enhance the transparency and reproducibility of research but also ensures ethical considerations are adequately addressed.<\/p>\n<p>The transparency of methods, data sources, and limitations is not just an academic exercise but a moral and scientific obligation. It ensures the integrity of research findings, facilitates reproducibility, and safeguards against unintended consequences. The responsible development and deployment of AI technologies hinge on the willingness of authors to share their insights, methodologies, and ethical considerations. In this article, we delve into the importance of disclosing the use of generative and other Al tools in manuscript preparation. 
We will explore essential best practices for authors, offering guidance on how to navigate the intricate landscape of <a href=\"https:\/\/www.enago.com\/ai-disclosure-statement-generator\" data-internallinksmanager029f6b8e52c=\"149\" title=\"AI Disclosure\" target=\"_blank\" rel=\"noopener\">AI disclosure<\/a>.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_74 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Why_Disclosing_the_Use_of_Generative_and_Other_AI_Tools_Matters\" >Why Disclosing the Use of 
Generative and Other AI Tools Matters<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Disclosing_AI_Tools_in_Research_Articles\" >Disclosing AI Tools in Research Articles<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Why_Bots_Cannot_Be_Authors\" >Why Bots Cannot Be Authors<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Crediting_AI_Tools_in_the_Acknowledgments_Section\" >Crediting AI Tools in the Acknowledgments Section<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Disclosing_the_Use_of_Generative_and_Other_AI_Tools_in_the_Body_of_the_Article\" >Disclosing the Use of Generative and Other AI Tools in the Body of the Article<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.enago.com\/academy\/disclosing-ai-usage\/#Collaborative_Efforts_to_Enforce_AI_Tool_Disclosure\" >Collaborative Efforts to Enforce AI Tool Disclosure<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Why_Disclosing_the_Use_of_Generative_and_Other_AI_Tools_Matters\"><\/span>Why Disclosing the Use of Generative and Other AI Tools Matters<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Disclosing AI tools used for manuscript preparation is of paramount importance for several critical reasons:<\/p>\n<p><img decoding=\"async\" class=\"wp-image-44074 aligncenter lazyload\" data-src=\"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2-184x230.png\" alt=\"\" width=\"451\" 
height=\"564\" data-srcset=\"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2-184x230.png 184w, https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2-384x480.png 384w, https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2-768x960.png 768w, https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2-150x188.png 150w, https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/Disclosure-of-generative-AI-tools-2.png 800w\" data-sizes=\"(max-width: 451px) 100vw, 451px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 451px; --smush-placeholder-aspect-ratio: 451\/564;\" \/><\/p>\n<p><strong>1. Transparency and Reproducibility:<\/strong> Transparent disclosure of AI tools is crucial for scientific research, enabling replication and verification. It allows for building upon prior work, refining methodologies, and potentially uncovering errors or biases.<\/p>\n<p><strong>2. <a href=\"https:\/\/www.enago.com\/publication-support-services\/peer-review-process\" data-internallinksmanager029f6b8e52c=\"115\" title=\"Peer Review\" target=\"_blank\" rel=\"noopener\">Peer Review<\/a> and Evaluation:<\/strong> Open AI tool disclosure assists reviewers in assessing research validity, including AI model suitability, data sources, and methodologies, ensuring <a href=\"https:\/\/www.enago.com\/academy\/why-is-quality-control-in-research-so-important\/\" target=\"_blank\" rel=\"noopener\">research quality<\/a>.<\/p>\n<p><strong>3. Ethical Considerations:<\/strong> Manuscript disclosure addresses AI\u2019s ethical implications, like privacy, fairness, bias, and societal impacts, promoting responsible AI development.<\/p>\n<p><strong>4. 
Community Building:<\/strong> Research is a collaborative effort, and the sharing of knowledge and resources is crucial for the growth of any scientific discipline. Transparent disclosure fosters a sense of research community, encouraging collaboration and speeding up innovation.<\/p>\n<p><strong>5. Trust and Credibility:<\/strong> Transparent disclosure of generative and other AI tool usage enhances research and researcher credibility, instilling trust among peers, the public, and stakeholders.<\/p>\n<p><strong>6. Preventing Misuse:<\/strong> AI technologies can be powerful tools, but they can also be misused. Mandatory disclosure deters unethical AI applications, making it harder for malicious users to exploit AI technology.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Disclosing_AI_Tools_in_Research_Articles\"><\/span>Disclosing AI Tools in Research Articles<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Disclosing the <a href=\"https:\/\/www.enago.com\/academy\/manuscript-preparation-with-ai\/\" target=\"_blank\" rel=\"noopener\">use of AI tools in manuscript preparation<\/a> is undoubtedly crucial to ensuring transparency, replicability, and responsible research in the field; however, the question of how and where to disclose this information in research articles has been a subject of debate among publishers and researchers. This debate stems from the need to strike a balance between providing comprehensive information for transparency and fair assignment of credit.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Why_Bots_Cannot_Be_Authors\"><\/span>Why Bots Cannot Be Authors<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>The ethical stance against designating LLMs and related AI tools as authors in research manuscripts is grounded in the principles of responsibility, accountability, transparency, and the understanding of AI\u2019s role as a tool in the research process. 
Authorship carries with it a responsibility to stand behind the research, take accountability for its content, and address any issues or concerns raised by readers, reviewers, or the wider research community. AI tools, being non-legal entities, cannot fulfill this responsibility as they lack the capacity for moral judgment and accountability.<\/p>\n<blockquote><p>\u201cAn attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs\u201d.<br \/>\n(<a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00107-z\" target=\"_blank\" rel=\"noopener\">Magdalena Skipper<\/a>, editor-in-chief of Nature)<\/p><\/blockquote>\n<p>This view aligns with the broader ethical framework of research integrity and is supported by organizations like COPE, which emphasize the importance of upholding these principles in scholarly publishing.<\/p>\n<blockquote><p>\u201cAI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements\u201d<br \/>\n(<a href=\"https:\/\/publicationethics.org\/cope-position-statements\/ai-author\" target=\"_blank\" rel=\"noopener\" class=\"broken_link\">COPE Position Statement<\/a>, 2023: para. 2).<\/p><\/blockquote>\n<h3><span class=\"ez-toc-section\" id=\"Crediting_AI_Tools_in_the_Acknowledgments_Section\"><\/span>Crediting AI Tools in the Acknowledgments Section<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Recognizing LLMs or other AI tools in the acknowledgments section of a research manuscript is a practical way to credit the contributions of these tools without conferring authorship status. 
This practice aligns with widely accepted guidelines, including those provided by the International Committee of Medical Journal Editors (ICMJE), which state that contributors whose roles do not meet authorship criteria may be acknowledged individually or collectively. This approach has garnered support from several reputable publishers. For example, Magdalena Skipper, the editor-in-chief of Nature, has stated that researchers using AI tools while preparing their article \u201cshould document their use in the methods or acknowledgments sections\u201d. Sabina Alam, the director of publishing ethics and integrity at Taylor &amp; Francis, also supports this approach.<\/p>\n<blockquote><p>\u201cAuthors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgments section.\u201d<br \/>\n(<a href=\"https:\/\/guides.lib.usf.edu\/c.php?g=1315087&amp;p=9689256\" target=\"_blank\" rel=\"noopener\">Sabina Alam<\/a>)<\/p><\/blockquote>\n<p>However, acknowledging AI tools in the acknowledgments section of a manuscript raises concerns similar to the reasons these tools should not be credited as authors. This is primarily due to the absence of free will in AI tools, rendering them incapable of providing consent for acknowledgment. While being mentioned in the acknowledgments section may not carry the same level of accountability as being listed as an author, it nonetheless carries ethical and legal implications that warrant the need for consent. Additionally, individuals may decline acknowledgment if they disagree with the study\u2019s conclusions and wish to disassociate themselves from it, which is not applicable in the case of AI tools. 
In short, these tools cannot be considered accountable or responsible in the way human beings can be.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Disclosing_the_Use_of_Generative_and_Other_AI_Tools_in_the_Body_of_the_Article\"><\/span>Disclosing the Use of Generative and Other AI Tools in the Body of the Article<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Revealing the use of LLMs and other AI tools in research articles typically involves disclosing this information within the body of the text, akin to how other research tools are acknowledged. In the context of software applications, proper citation practices, including in-text citations and references, are followed. However, articulating the use of AI tools and elucidating their role in research requires careful consideration due to their intricate capabilities.<\/p>\n<p>Nevertheless, the approach of solely mentioning the use of AI tools within the text raises certain challenges. These issues are particularly noticeable concerning the discoverability of articles that have employed these tools. Challenges encompass factors such as the absence of indexing for non-English content and limited access to full-text articles, especially in cases of paywalled content. Moreover, inconsistencies in how researchers disclose the use of AI tools can impact the openness and transparency of research. For instance, variations in reporting practices may occur when LLMs are engaged in tasks that defy quantification, such as the conceptualization of ideas. Significantly, even with this level of disclosure, readers may still find it challenging to discern which portions of the text were generated by AI-based tools.<\/p>\n<p>Adopting general norms of software citation, i.e., in-text citations and references, can effectively address these challenges associated with the use of LLMs in research articles. 
<a href=\"https:\/\/apastyle.apa.org\/blog\/how-to-cite-chatgpt\" target=\"_blank\" rel=\"noopener\">APA style<\/a> has already offered a structured format for describing the use of LLMs and other AI tools, incorporating in-text citations, and providing proper references. As per this template, disclosure practices can vary depending on the type of article. For instance, in research articles, disclosure is advised within the methods section, while in literature reviews, essays, or response papers, it is suggested in the introduction. Here\u2019s the format recommended by APA for describing the use of ChatGPT, along with in-text citation and referencing:<\/p>\n<div class=\"form-template-container\">\n<p><strong>In-text Citation:<\/strong><\/p>\n<p>\u201cWhen prompted with \u201cIs the left brain right brain divide real or a metaphor?\u201d the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, \u201cthe notation that people can be characterized as \u2018left-brained\u2019 or \u2018right-brained\u2019 is considered to be an oversimplification and a popular myth\u201d (OpenAI, 2023).<\/p>\n<p><strong>Reference:<\/strong><\/p>\n<p>OpenAI (2023). ChatGPT (Mar 14 version) [Large language model]. <a href=\"https:\/\/chat.openai.com\/chat\" target=\"_blank\" rel=\"noopener\" class=\"broken_link\">https:\/\/chat.openai.com\/chat<\/a>&#8221;<\/p>\n<p style=\"text-align: center\">Source: Ayubi, E. (2023, April 7). <a href=\"https:\/\/apastyle.apa.org\/blog\/how-to-cite-chatgpt\" target=\"_blank\" rel=\"noopener\">How to cite ChatGPT<\/a>. <em>APA Style<\/em><\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<p>However, incorporating details \u2014 such as the specific version, model, date of use, and user\u2019s name \u2014 provides a more robust picture of the conditions under which the AI tools contributed to the research. 
This approach allows for better tracking, accountability, and transparency, acknowledging the dynamic nature of LLMs and AI tools, and their responses to different inputs and contexts.<\/p>\n<p>For the purpose of verification, it is advisable to document and reveal interactions with AI-based text generation tools, which should encompass particular prompts and the dates of queries. This information can be provided as supplementary material or within appendices for transparency and validation purposes. Authors can also include complex AI models, extensive code, or detailed data preprocessing steps in supplementary materials. Authors should also acknowledge any limitations and potential biases of the AI technologies in the discussion section and discuss how these may impact the interpretation and generalizability of the results.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Collaborative_Efforts_to_Enforce_AI_Tool_Disclosure\"><\/span>Collaborative Efforts to Enforce AI Tool Disclosure<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Certainly, considering the diverse applications of LLMs and AI tools across various research domains, it may be beneficial to establish more comprehensive guidelines or specific criteria governing their utilization. Professional associations or editorial boards of journals need to take the lead in formulating more consistent and uniform guidelines. A notable example of this proactive approach was demonstrated by the organizers of the 40<sup>th<\/sup> International Conference on Machine Learning (ICML). 
They highlighted in their conference policies that \u201cPapers containing text generated from a large-scale language model (LLM) like ChatGPT are not permitted, unless this generated text is integrated as a component of the paper&#8217;s experimental analysis\u201d.<\/p>\n<p>Thus, the roles of various stakeholders, including journals, funding agencies, and the scientific community, are pivotal in enforcing rules mandating the disclosure of AI tool usage in research. Funding agencies can also explicitly request grantees to disclose their use of generative AI tools and technologies in their research proposals. Furthermore, they can conduct compliance checks during the grant review process to ensure researchers\u2019 adherence to these disclosure guidelines.<\/p>\n<p>By raising awareness of the significance of disclosure, the scientific community can foster a culture of transparency within the research ecosystem. Researchers can actively advocate for responsible research practices and encourage their peers to adhere to disclosure guidelines. Additionally, the scientific community can exert pressure on journals and funding agencies, urging them to rigorously enforce rules related to AI tool disclosure. 
By working collectively, the scientific community can play a pivotal role in maintaining the integrity and credibility of scientific research.<\/p>\n<div style=\"display:flex; gap:10px;justify-content:\" class=\"wps-pgfw-pdf-generate-icon__wrapper-frontend\">\n\t\t<a  href=\"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/44054?action=genpdf&amp;id=44054\" class=\"pgfw-single-pdf-download-button\" ><img data-src=\"https:\/\/www.enago.com\/academy\/wp-content\/plugins\/pdf-generator-for-wp\/admin\/src\/images\/PDF_Tray.svg\" title=\"Generate PDF\" style=\"width:auto; height:45px;\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><\/a>\n\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>The rapid proliferation of generative and other AI-based tools in research writing has ignited an&hellip;<\/p>\n","protected":false},"author":8292,"featured_media":44056,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"om_disable_all_campaigns":false,"footnotes":""},"categories":[1893],"tags":[1631,1492,1495,1618],"ppma_author":[1905],"class_list":["post-44054","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-in-academia","tag-ai-in-academic-writing","tag-ethics-misconduct","tag-scientific-transparency","tag-tips-for-phd-students-and-postdocs"],"better_featured_image":{"id":44056,"alt_text":"","caption":"","description":"","media_type":"image","media_details":{"width":910,"height":340,"file":"2023\/09\/910x340-Exploring-Responsible-AI.png","filesize":479583,"sizes":{"medium":{"file":"910x340-Exploring-Responsible-AI-470x176.png","width":470,"height":176,"mime-type":"image\/png","filesize":137736,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-470x176.png"},"large":{"file":"910x340-Explo
ring-Responsible-AI-750x280.png","width":750,"height":280,"mime-type":"image\/png","filesize":310572,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-750x280.png"},"thumbnail":{"file":"910x340-Exploring-Responsible-AI-170x150.png","width":170,"height":150,"mime-type":"image\/png","filesize":43993,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-170x150.png"},"medium_large":{"file":"910x340-Exploring-Responsible-AI-768x287.png","width":768,"height":287,"mime-type":"image\/png","filesize":323884,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-768x287.png"},"tf-client-image-size":{"file":"910x340-Exploring-Responsible-AI-120x120.png","width":120,"height":120,"mime-type":"image\/png","filesize":26503,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-120x120.png"},"publisher-tb1":{"file":"910x340-Exploring-Responsible-AI-86x64.png","width":86,"height":64,"mime-type":"image\/png","filesize":12050,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-86x64.png"},"publisher-sm":{"file":"910x340-Exploring-Responsible-AI-210x136.png","width":210,"height":136,"mime-type":"image\/png","filesize":51592,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-210x136.png"},"publisher-mg2":{"file":"910x340-Exploring-Responsible-AI-279x220.png","width":279,"height":220,"mime-type":"image\/png","filesize":95928,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-279x220.png"},"publisher-md":{"file":"910x340-Exploring-Responsible-AI-357x210.png","width":357,"height":210,"mime-type":"image\/png","filesize":120704,"source_url":"https:\/\/www.enago.com\/academy\/wp-cont
ent\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-357x210.png"},"publisher-lg":{"file":"910x340-Exploring-Responsible-AI-750x340.png","width":750,"height":340,"mime-type":"image\/png","filesize":396171,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-750x340.png"},"publisher-tall-sm":{"file":"910x340-Exploring-Responsible-AI-180x217.png","width":180,"height":217,"mime-type":"image\/png","filesize":61735,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-180x217.png"},"publisher-tall-lg":{"file":"910x340-Exploring-Responsible-AI-267x322.png","width":267,"height":322,"mime-type":"image\/png","filesize":119662,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-267x322.png"},"publisher-tall-big":{"file":"910x340-Exploring-Responsible-AI-368x340.png","width":368,"height":340,"mime-type":"image\/png","filesize":185362,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-368x340.png"},"rpwe-thumbnail":{"file":"910x340-Exploring-Responsible-AI-45x45.png","width":45,"height":45,"mime-type":"image\/png","filesize":4924,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-45x45.png"},"web-stories-poster-portrait":{"file":"910x340-Exploring-Responsible-AI-640x340.png","width":640,"height":340,"mime-type":"image\/png","filesize":337823,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-640x340.png"},"web-stories-publisher-logo":{"file":"910x340-Exploring-Responsible-AI-96x96.png","width":96,"height":96,"mime-type":"image\/png","filesize":18204,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-96x96.png"},"web-stories-thumbnail":{"file":"910x340-Exploring
-Responsible-AI-150x56.png","width":150,"height":56,"mime-type":"image\/png","filesize":18714,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI-150x56.png"}},"image_meta":{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0","keywords":[]}},"post":44054,"source_url":"https:\/\/www.enago.com\/academy\/wp-content\/uploads\/2023\/09\/910x340-Exploring-Responsible-AI.png"},"acf":{"faq_main_heading":"Frequently Asked Questions","faq_heading_one":"How to disclose the use of AI?","faq_heading_two":"How to write a declaration of generative AI in scientific writing?","faq_heading_three":"Can AI be listed as an author?","faq_heading_four":"How can we ensure that AI systems are transparent?","faq_heading_five":"","faq_heading_six":"","faq_description_one":"To disclose the use of AI, specify the AI tools, models, and versions used in your research in the methods section of your manuscript. You may also acknowledge AI tool usage in the acknowledgments section, providing details like the model, version, date of use, and user\u2019s name for thorough transparency. Following the guidelines provided by the publishers of your target journal is an essential step in disclosing the use of AI. These guidelines will outline the specific requirements and preferred format for disclosing AI tool usage in your manuscript.","faq_description_two":"Check the guidelines provided by your target journal or publisher and ensure that your declaration aligns with it. These guidelines may vary from journal to journal. Depending on the article type, consider disclosing AI tool usage in the methods section for research articles or in the introduction for literature reviews, essays, or response papers. You may follow general norms of software citation by including in-text citations and references. 
Additionally, for verification, document interactions with AI-based tools, including specific prompts and query dates, and provide this information as supplementary material or in appendices to enhance transparency and validation.","faq_description_three":"AI cannot be listed as an author in scientific publications. While AI, like large language models (LLMs), can assist in research and writing, authorship implies responsibility and accountability for the content, which AI lacks. Ethical and professional standards in scientific writing reserve authorship for human individuals who can take ownership of their work, make ethical judgments, and fulfill responsibilities associated with research.","faq_description_four":"Ensuring transparency in AI systems is of paramount importance in today\u2019s technology-driven world. To achieve this, comprehensive disclosure is essential, encompassing the AI system\u2019s configuration, algorithms, parameters, and data sources. Additionally, favor AI models that offer explainability, enabling users to understand the rationale behind AI decisions. 
External audits and adherence to publishers\u2019 guidelines and ethical practices further solidify the commitment to transparency, fostering trust and accountability in AI applications.","faq_description_five":"","faq_description_six":""},"views":4681,"single_webinar_page_date":null,"single_webinar_page_time":null,"session_agenda":null,"who_should_attend_this_session":null,"about_the_speaker_field":null,"co-webinar-sec":null,"co_webinar_sec_one":null,"speaker-name":null,"webinar-date":null,"webinar-time":null,"webinar-s-image":null,"custum_webinar_category":null,"authors":[{"term_id":1905,"user_id":8292,"is_guest":0,"slug":"riyat","display_name":"Riya Thomas","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/3e47a7988ba5ac727894387d32bd9add72890f9bf31e57bd1c73f27f7b0ec7a2?s=96&d=identicon&r=g","author_category":"","user_url":"","last_name":"Thomas","first_name":"Riya","job_title":"","description":"Riya Thomas is a scientific content expert with a passion for communicating complex scientific concepts to diverse audiences. She earned her PhD in Physics from the Manipal Institute of Technology, Manipal, India, where she conducted research on thermoelectric materials. She has 10+ publications in Scopus indexed peer reviewed journals and has also presented at international and national conferences. She also has experience in reviewing scientific manuscripts and writing various project proposals. 
Riya has a genuine passion for scientific excellence and strongly encourages the development and spread of scientific knowledge."}],"_links":{"self":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/44054","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/users\/8292"}],"replies":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/comments?post=44054"}],"version-history":[{"count":15,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/44054\/revisions"}],"predecessor-version":[{"id":44140,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/posts\/44054\/revisions\/44140"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/media\/44056"}],"wp:attachment":[{"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/media?parent=44054"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/categories?post=44054"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/tags?post=44054"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.enago.com\/academy\/wp-json\/wp\/v2\/ppma_author?post=44054"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}