Authors submitting manuscripts to our journal are required to declare any use of generative artificial intelligence (AI) or AI-supported technologies in scientific writing in accordance with the following principles:
Chatbots cannot be authors. Chatbots do not meet the criteria for authorship, particularly the ability to give final approval of the version to be published and to take responsibility for all aspects of the work, including ensuring that questions related to the accuracy or integrity of any part of the work are properly investigated and resolved. No artificial intelligence tool can “understand” a conflict-of-interest statement, nor does it have the legal standing to sign one. Chatbots have no affiliation with any organization, regardless of who developed them. Since authors submitting a manuscript must ensure that all listed authors meet the criteria for authorship, chatbots cannot be included as authors.
Authors should be transparent about their use of chatbots and disclose how they were used. The extent and type of chatbot use in scientific publications should be indicated. This is consistent with the recommendation to acknowledge assistance in writing the article and to provide detailed information in the article about how the research was conducted and how the results were obtained.
Authors submitting an article in which a chatbot or other AI tool was used to draft new text should declare such use in the acknowledgements section and specify all prompts used to generate new text or to convert text or prompts into tables or illustrations.
If an artificial intelligence tool such as a chatbot was used to perform or assist with analytical work, to help report results (e.g., by creating tables or figures), or to write computer code, this should be stated in both the abstract and the main body of the article. To enable scientific scrutiny, including replication and the detection of falsification, authors should provide the full prompt (query operator) used to generate the research results, the date and time of the query, and the name and version of the AI tool used.
Authors are responsible for the material a chatbot contributes to their article, in particular for its accuracy and freedom from plagiarism, and for the appropriate citation of all sources, including the original sources of material created by the chatbot. Authors must ensure that the content of the article reflects their own data and ideas and is not plagiarized, fabricated, or falsified; submitting such material for publication, regardless of how it was written, violates scientific norms. Likewise, authors must ensure that all cited material is properly referenced, with full citations, and that the cited sources actually support the text generated by the chatbot. Because a chatbot may be designed to omit sources that contradict the views expressed in its output, authors are responsible for finding, reviewing, and including such opposing views in their articles. Authors should describe what they have done to reduce the risk of plagiarism, to provide a balanced view, and to ensure the accuracy of all their references.
Editors and reviewers should inform authors and each other of any use of chatbots to evaluate manuscripts or to generate reviews and correspondence, and they remain responsible for any content and citations generated by a chatbot. They should be aware that chatbots retain the prompts submitted to them, including manuscript content, and that supplying an author's manuscript to a chatbot violates the confidentiality of a manuscript submitted for publication.
Editors should, where possible, use appropriate tools to help them identify content created or modified by AI, and should apply such tools for the benefit of science and the public.