Given the rapid growth of generative artificial intelligence and other AI-based technologies, which are increasingly used by authors of scientific publications, the editorial board of the journal has developed a policy on the use of artificial intelligence. The purpose of these rules is to ensure transparency, integrity, and high quality of publications for all participants in the scientific process: authors, reviewers, editors, and readers. The editorial board will continue to monitor the development of AI technologies and will update or refine this policy as necessary.
For Authors
Use of generative AI in writing the text of a publication (these rules apply only to the writing process, not to the use of AI tools for data analysis or processing within the research itself).
Authors may use generative artificial intelligence or AI-assisted tools exclusively to improve style and readability and to correct grammatical errors. Such use must remain under human supervision, and authors are required to carefully review and edit the output, as AI can produce inaccurate, incomplete, or biased content. Authors are ultimately responsible for the content of their work. Any use of AI must be openly disclosed in the manuscript. This disclosure increases transparency and trust among authors, editors, reviewers, and readers, and demonstrates compliance with these guidelines. AI cannot be listed as an author or co-author and cannot be cited as an author. Authorship entails responsibilities that only humans can fulfill, including responsibility for the accuracy, integrity, and originality of the work and for consent to its submission.
All authors should review the Publication Ethics Policy before submitting a manuscript.
Use of AI in Images, Figures, and Illustrations
The use of generative AI or AI tools to create or edit images in submitted manuscripts is prohibited. This applies to any manipulation, including enhancing, obscuring, removing, moving, or adding elements. Only minor adjustments to brightness, contrast, or color are permissible, provided they do not distort the original information. An exception is possible only when the use of AI is part of the scientific method (for example, in biomedical imaging). In such cases, this must be clearly stated in the text, with an explanation of the technology or model used, its version, the vendor, and the purpose of its use. Authors must comply with the terms of the software license and provide the original, unedited images at the editors' request. The creation of artistic illustrations, graphical abstracts, or covers using generative AI is not allowed, except in specific cases where the editor and publisher have given prior permission. In such situations, the author must confirm that they hold all necessary rights to the material and indicate the source of the content.
For Reviewers
Manuscripts submitted for review are confidential documents. Reviewers must not upload manuscripts, or any parts of them, to generative AI tools, as this may violate confidentiality, copyright, or personal data protections. This rule also applies to the text of the review itself, as it may contain confidential information. Reviewers should not use AI even to improve the language or grammar of the review. Scientific peer review relies on human critical thinking and analytical skills, which cannot be replaced by technology. The use of AI during review may lead to biased or erroneous conclusions, and the reviewer bears full responsibility for the content of the review. The editorial board reminds reviewers that authors are permitted to use AI only for linguistic improvement of the text before submission, provided that this is disclosed in the manuscript.
For Editors
Submitted manuscripts are treated as confidential documents. Editors must not upload them, or any related materials (including letters to authors or decisions), to generative AI tools, as this may violate confidentiality, proprietary rights, or personal data protections. The editorial review process rests on human responsibility and cannot be delegated to artificial intelligence. Editors should not use AI to analyze, evaluate, or make decisions about manuscripts, as this may lead to erroneous or biased conclusions. The editor is personally responsible for the editorial process, the final decision, and the notification of authors. If an editor has reason to believe that an author or reviewer has violated this AI policy, they are obliged to refer the matter to the editorial board for consideration.