AI Usage Policy

Authors submitting manuscripts to the journal are required to declare any use of generative artificial intelligence (AI) or AI-supported technologies in scientific writing in accordance with the following principles:

Chatbots cannot be authors. Chatbots do not meet the criteria for authorship, particularly the requirements to give final approval of the version to be published and to take responsibility for all aspects of the work, including ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. No artificial intelligence tool can “understand” a conflict of interest statement, nor does it have the legal standing to sign one. Chatbots have no affiliation with any organization, regardless of who developed them. Since authors submitting a manuscript must ensure that all listed authors meet the criteria for authorship, chatbots cannot be included as authors.

Authors should be transparent when using chatbots and provide clear information about how they were used. The extent and nature of chatbot use should be indicated in the publication. This is consistent with the recommendation to acknowledge assistance in writing the article and to provide detailed information in the article about how the research was conducted and how the results were obtained.

Authors submitting an article in which a chatbot/AI was used to draft new text should declare such use in the acknowledgements section, and all prompts used to generate new text, or to convert text or text prompts into tables or illustrations, should be specified.

If an artificial intelligence tool, such as a chatbot, was used to perform or generate analytical work, to assist in reporting results (e.g., producing tables or figures), or to write computer code, this should be stated in both the abstract and the body of the article. In the interest of enabling scientific scrutiny, including replication and the detection of falsification, authors should provide the full prompt (query) used to generate the research results, the time and date of the query, and the AI tool used, along with its version.

Authors are responsible for the material provided by a chatbot in their article (in particular, for the accuracy of the material presented and the absence of plagiarism) and for the appropriate citation of all sources (including original material produced by the chatbot). Authors are responsible for ensuring that the content of the article accurately reflects their data and ideas and that it is not plagiarized, fabricated, or falsified; submitting such material for publication, regardless of how it was written, is a violation of scientific norms. Similarly, authors must ensure that all cited material is properly referenced, with full citations, and that the cited sources actually support the text generated by the chatbot. Since a chatbot may be designed to omit sources that contradict the views expressed in its output, authors are responsible for identifying, reviewing, and incorporating such opposing views into their articles. Authors should state what they have done to reduce the risk of plagiarism, to present a balanced view, and to ensure the accuracy of all their references.

Editors and reviewers should inform authors and each other of any use of chatbots in evaluating manuscripts, generating reviews, or drafting correspondence, and they remain responsible for any content and citations generated by a chatbot. They should also be aware that chatbots retain the prompts sent to them, including manuscript content, and that providing an author's manuscript to a chatbot violates the confidentiality of a manuscript submitted for publication.

Editors should, where possible, use appropriate tools to help them identify content created or modified by AI. Such tools should be used for the benefit of science and the public.