The following recommendations on the use of generative artificial intelligence (AI) technologies in the publication process are based on the World Association of Medical Editors (WAME) recommendations on chatbots and generative AI in relation to scholarly publications.
Recommendation 1: Chatbots cannot be authors. Chatbots do not meet the authorship criteria, particularly that of being able to give final approval of the version to be published and to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. No AI tool can “understand” a conflict-of-interest statement, nor does it have the legal standing to sign one. Chatbots have no affiliation independent of their developers. Since authors submitting a manuscript must ensure that all those named as authors meet the authorship criteria, chatbots cannot be included as authors.
Recommendation 2: Authors should be transparent when chatbots are used and provide information about how they were used. The extent and type of chatbot use in scientific publications should be indicated. This is consistent with the recommendation to acknowledge writing assistance and to provide detailed information in the paper about how the study was conducted and the results generated.
Recommendation 2.1: Authors submitting a paper in which a chatbot/AI was used to draft new text should disclose such use in the acknowledgments section; all prompts used to generate new text, or to convert text or text prompts into tables or illustrations, should be specified.
Recommendation 2.2: When an AI tool such as a chatbot was used to carry out or generate analytical work, help report results (e.g., generating tables or figures), or write computer code, this should be stated in both the abstract and the body of the paper. In the interests of enabling scientific scrutiny, including replication and the identification of falsification, the full prompt (query statement) used to generate the research results, the time and date of the query, and the AI tool used and its version should be provided.
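Recommendation 2.2 specifies the information to capture but not a format for it. As one illustration only, the following minimal Python sketch shows the kind of structured record an author might keep for each query; all names here (AIUsageRecord, ExampleChatbot, the field names) are hypothetical and are not part of the WAME recommendations.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One record of AI-tool use to be disclosed with a manuscript.

    The fields mirror the items named in Recommendation 2.2: the tool
    and its version, the full prompt, and the time and date of the query.
    """
    tool: str              # name of the chatbot/AI tool used
    version: str           # exact version of the tool or model
    prompt: str            # the full prompt (query statement)
    queried_at: datetime   # time and date of the query
    purpose: str           # what the output was used for (analysis, table, code, ...)

    def disclosure(self) -> str:
        """Render the record as a sentence suitable for the paper's
        methods or acknowledgments section."""
        return (
            f"{self.tool} (version {self.version}) was queried on "
            f"{self.queried_at.isoformat()} to {self.purpose}. "
            f'Full prompt: "{self.prompt}"'
        )

# Example: a hypothetical record for a figure generated from tabulated data.
record = AIUsageRecord(
    tool="ExampleChatbot",  # hypothetical tool name
    version="4.0",
    prompt="Convert the attached CSV of response rates into a bar chart.",
    queried_at=datetime(2024, 5, 1, 14, 30, tzinfo=timezone.utc),
    purpose="generate Figure 2 from the study's tabulated response rates",
)
print(record.disclosure())
```

Keeping one such record per query, from the moment the tool is first used, makes it straightforward to assemble the disclosure required in the abstract and the body of the paper.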
Recommendation 3: Authors are responsible for material provided by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot). It is the authors’ responsibility to ensure that the content reflects their own data and ideas and does not constitute plagiarism, fabrication, or falsification. Otherwise, it is scientific misconduct to offer such material for publication, irrespective of how it was written. Similarly, authors must ensure that all quoted material is appropriately attributed, including full citations, and that the cited sources support the chatbot’s statements. Since a chatbot may be designed to omit sources that oppose viewpoints expressed in its output, it is the authors’ responsibility to find, review, and include such counterviews in their papers. Authors should specify what they have done to mitigate the risk of plagiarism, provide a balanced view, and ensure the accuracy of all their references.
Recommendation 4: Editors and peer reviewers should specify, to authors and to each other, any use of chatbots in the evaluation of the manuscript and in the generation of reviews and correspondence. Editors and reviewers are responsible for any content and citations generated by a chatbot. They should be aware that chatbots retain the prompts fed to them, including manuscript content, and that supplying an author's manuscript to a chatbot breaches the confidentiality of the submitted manuscript.
Recommendation 5: Editors should, where possible, use appropriate tools to help them detect content generated or altered by AI, for the good of science and the public.