The growing popularity of generative artificial intelligence and AI-enabled technologies, which are expected to be increasingly used by authors, has led the journal to develop an AI policy. These guidelines are designed to provide greater transparency and improve the quality of publications for authors, reviewers, editors, and readers. The editorial board will monitor the development of AI technologies and adjust or improve these guidelines.
For Authors
Use of Generative Artificial Intelligence and AI-Enabled Technologies in Scientific Writing
PLEASE NOTE THAT THESE RULES APPLY ONLY TO THE WRITING PROCESS, NOT TO THE USE OF AI TOOLS TO ANALYZE AND OBTAIN INFORMATION FROM DATA AS PART OF THE RESEARCH PROCESS.
If authors use generative artificial intelligence and AI-enabled technologies in the writing process, these technologies should be used only to improve readability and correct grammatical errors in the work. The use of AI technology should be carried out under human supervision and control, and authors should carefully review and edit the output, as AI can produce authoritative-sounding text that may be incorrect, incomplete, or biased. Authors are ultimately responsible for the content of the work.
Authors should disclose the use of AI and AI-enabled technologies in their manuscripts, and this use should be acknowledged in the published work. A statement about the use of these technologies supports transparency and trust between authors, readers, reviewers, and editors, and promotes compliance with the terms of use of the relevant tool or technology.
Authors should not list AI and AI-enabled technologies as authors or co-authors, nor cite AI as an author. Authorship involves duties and tasks that can only be assigned to and performed by humans. Each (co)author is responsible for ensuring that questions related to the accuracy or integrity of any part of the work are properly investigated and resolved, and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original and does not infringe the rights of third parties. All authors should read our Publication Ethics Policy before submitting.
Use of Generative Artificial Intelligence and AI Tools in Figures, Images, and Illustrations
We do not allow the use of generative artificial intelligence or AI tools to create or modify images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or adding a feature to an image or drawing. Adjustments to brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original.
The only exception is when the use of AI or AI tools is part of the research design or research methods (e.g., in AI-assisted visualization approaches to generate or interpret key research data, such as in the field of biomedical imaging). In such cases, the use should be acknowledged and described appropriately in the text of the manuscript. This description should include an explanation of how AI or AI tools were used in the process of generating or modifying the image, as well as the model or tool name, version and extension numbers, and manufacturer. Authors should follow the specific rules for using AI-based software and ensure proper attribution of content. Where applicable, authors may be asked to provide the pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial review.
The use of generative AI or AI-powered tools in the creation of artwork, such as graphical abstracts, is prohibited. The use of generative AI in the creation of cover art may be permitted in some cases, provided the author obtains prior permission from the editor and publisher of the journal, can demonstrate that all necessary rights to use the material have been obtained, and ensures proper attribution of the content.
For Reviewers
When a researcher is invited to review another researcher's article, the manuscript should be treated as a confidential document. Reviewers should not upload the submitted manuscript or any part of it into a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors and, if the article contains personal information, may violate data privacy rights.
This confidentiality requirement extends to the reviewer's report (review), as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their review into an AI tool, even if only for the purpose of correcting grammatical errors or improving readability.
Reviewing is the foundation of the scientific ecosystem, and the Editorial Board upholds the highest standards of integrity in this process.
Reviewing a scientific manuscript involves a responsibility that can only be assigned to humans. Reviewers should not use generative artificial intelligence or AI-enabled technologies to assist in the scientific review of a paper, as the critical thinking and original evaluation required for review are beyond the scope of this technology, and there is a risk that this technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The reviewer is responsible for the content of the review. The Editorial Board's AI policy for authors permits the use of generative artificial intelligence and AI-enabled technologies in the writing process prior to submission, but only to improve readability and correct grammatical errors in the paper, and only with appropriate disclosure.
For Editors
The submitted manuscript should be treated as a confidential document. Editors should not upload the submitted manuscript or any part of it to a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors and, if the article contains personal information, may violate data privacy rights.
This confidentiality requirement applies to all communications about the manuscript, including any decision letters or correspondence, as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their correspondence to an AI tool, even if only for the purpose of improving the language and readability.
Managing the editorial evaluation of a scientific manuscript involves a responsibility that can only be assigned to humans. Editors should not use generative AI or AI-enabled technologies to assist in evaluating or making decisions about a manuscript, as the critical thinking and original evaluation required for this work are beyond the scope of this technology, and there is a risk that the technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible for the editorial process, the final decision, and its communication to the authors. The Editorial Board's author policy permits the use of generative AI and AI-enabled technologies in the writing process prior to submission, but only to improve readability and correct grammatical errors in the article, and only with appropriate disclosure. If an editor suspects that an author or reviewer has violated the AI Policy, he or she should notify the editorial board.