Karger AI Guidelines

Last updated: April 1, 2026

Check the AI policies for the classifications below

01

Refinement, correction, editing or formatting the manuscript to improve clarity of language

Human Review Mandatory

Policy Summary

Karger permits authors to use generative AI/LLM tools when preparing a manuscript, including for language refinement. If such a tool has been used as part of a study or manuscript, the use must be clearly declared in the manuscript (in the Methods section, or in the Acknowledgements section when the article type does not include a Methods section) and cited in line with Karger’s software citation policy. Authors must guarantee the accuracy and originality of the manuscript and must include details on how the accuracy of any generative AI-based output was verified. Beyond this verification obligation, the policy does not spell out separate human review requirements for this classification.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section, if the article type does not include a Methods section. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. The manuscript must include detail on how the accuracy of any generative AI-based output was verified. Failure to comply with the above will be considered a violation of our Editorial Policies and may result in the rejection of a manuscript or post-publication notice. Use of a Large Language Model (LLM), or other generative AI-based tool, must be declared in the manuscript and cited in line with our software citation policy.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

02

Writing or drafting manuscript content

Human Review Mandatory

Policy Summary

Karger permits authors to use large language models or other generative AI tools when preparing a study or manuscript, but this use must be declared in the manuscript. The declaration must appear in the Methods section, or in the Acknowledgements section if the article type does not include a Methods section, and the tool must be cited in line with Karger’s software citation policy. Authors must guarantee the accuracy and originality of the manuscript’s content and must describe how the accuracy of any generative AI output was verified. Beyond this verification obligation, the policy does not spell out separate human review requirements for this classification.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section, if the article type does not include a Methods section. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. The manuscript must include detail on how the accuracy of any generative AI-based output was verified. Failure to comply with the above will be considered a violation of our Editorial Policies and may result in the rejection of a manuscript or post-publication notice. Use of a Large Language Model (LLM), or other generative AI-based tools must be declared in the manuscript and cited in line with our software citation policy.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

03

Translation of manuscript text for the purpose of publishing

Human Review Mandatory

Policy Summary

When a large language model or other generative AI-based tool is used as part of a study or manuscript, this use must be clearly declared in the manuscript, and the manuscript must include details on how the accuracy of the AI output was verified. Translated works may be considered for publication at the Editor's discretion, and the translation should be declared in the cover letter and manuscript.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section. The manuscript must include detail on how the accuracy of any generative AI-based output was verified. The consideration of translated works for publication is at the discretion of the Editor and should be declared in the cover letter and Manuscript.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

04

Refining or formatting of data reported in the manuscript

Human Review Mandatory

Policy Summary

Karger permits authors to use generative AI/LLM tools as part of a study or manuscript, but such use must be clearly declared in the manuscript. Authors must provide details on how the accuracy of any generative AI-based output was verified and remain responsible for the accuracy and originality of the manuscript content. The policy does not otherwise provide separate, classification-specific rules addressing the refining or formatting of data reported in the manuscript, and beyond the verification obligation it does not spell out separate human review requirements for this classification.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section, if the article type does not include a Methods section. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. For all submitted manuscripts, the manuscript must include detail on how the accuracy of any generative AI-based output was verified. Use of a Large Language Model (LLM), or other generative AI-based tools must be declared in the manuscript and cited in line with our software citation policy.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

05

Generation, refinement, correction, editing or formatting of images, diagrams or other figures for illustrative purposes only

Human Review Mandatory

Policy Summary

If authors use a Large Language Model or other generative AI-based tool, including image creators, as part of a study or manuscript, they must clearly declare that use in the manuscript's Methods or Acknowledgements section. Authors must include details on how they verified the accuracy of any generative AI-based output. Authors remain responsible for guaranteeing the accuracy and originality of their manuscript content.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or a manuscript, the use must be clearly declared in the manuscript's 'Methods or Acknowledgements' section. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. The manuscript must include detail on how the accuracy of any generative AI-based output was verified.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

06

Generation, refinement, correction, editing or formatting of visualisations of research data or results

Human Review Mandatory

Policy Summary

Karger permits the use of LLMs or other generative AI-based tools in connection with a study or manuscript, including for figures and images. If such tools are used, this use must be clearly declared in the manuscript’s “Methods or Acknowledgements” section, including details on how the accuracy of any generative AI-based output was verified. Authors must guarantee the accuracy and originality of their manuscript content. Beyond this verification obligation, the policy does not spell out separate human review requirements for this classification.

Evidence

  • The use of generative AI and AI-assisted technologies in scientific writing for journals and books (including figures and images): If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or a manuscript, the use must be clearly declared in the manuscript’s “Methods or Acknowledgements” section, including details on how the accuracy of any generative AI-based output was verified. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

07

Refinement or formatting of code reported in the submitted manuscript

Human Review Mandatory

Policy Summary

Karger requires that any use of a Large Language Model or other generative AI-based tool be declared in the manuscript and cited in line with its software citation policy. Any software used must be cited in the References. Authors must ensure the manuscript's accuracy and originality, and the manuscript must include details describing how the accuracy of any generative AI-based output was verified.

Evidence

  • Any software used must be cited in the References, in line with our software citation policy. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. The manuscript must include detail on how the accuracy of any generative AI-based output was verified. Use of a Large Language Model (LLM), or other generative AI-based tools must be declared in the manuscript and cited in line with our software citation policy.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

08

Assisting with gathering references

Human Review Mandatory

Policy Summary

Karger permits the use of LLMs or other generative AI-based tools in connection with a study or manuscript, but such use must be clearly declared in the manuscript (in Methods, or in Acknowledgements when the article type does not include a Methods section). Any software used must be cited in the References, and LLM/generative AI tool use must be cited in line with Karger’s software citation policy. The policy does not specify additional requirements specific to using these tools for gathering references beyond these declaration and citation obligations, and beyond those obligations it does not spell out separate human review requirements for this classification.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section, if the article type does not include a Methods section. Any software used must be cited in the References, in line with our software citation policy. Use of a Large Language Model (LLM), or other generative AI-based tools must be declared in the manuscript and cited in line with our software citation policy.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

09

Presentation of any kind of content generated by AI tools as though it were original research data/results from non-machine sources

Human Review Mandatory

Policy Summary

If a large language model or other generative AI-based tool is used as part of a study or manuscript, Karger requires that this use be clearly declared in the manuscript. Authors remain fully responsible for the accuracy and originality of their manuscript content, and the manuscript must include details describing how the accuracy of any generative AI-based output was verified.

Evidence

  • If a Large Language Model (LLM), or other generative AI-based tool (e.g. chatbots or image creators), has been used as part of a study or manuscript, the use must be clearly declared in the manuscript Methods, or Acknowledgements section. Authors are responsible for guaranteeing the accuracy and originality of the content of their manuscript. The manuscript must include detail on how the accuracy of any generative AI-based output was verified.

AI Status

Allowed

Human Review Mandatory

Disclosure Required

The Responsible Use of AI Initiative

Enago has been working with authors from Japan, China, South Korea, the Middle East, and South America to edit and polish their manuscripts, supporting them in getting their voices heard and their research published.

The advent of GenAI brought incredible new opportunities, along with a generous side of confusing policies, unclear guardrails, and a heavy load of responsibility placed on researchers.

Our own frustration, coupled with what we were hearing from our customers, led to this initiative. We think there should be zero compromise on research integrity, and this is our way of supporting the research publishing ecosystem and its stakeholders.

We love feedback. So if you have something to tell us, please send us a message. 

WRITE TO US