The journal Transport Technologies acknowledges that artificial intelligence (AI) and the tools that use it can be useful aids in the process of preparing scientific publications. At the same time, the use of such technologies must comply with the principles of academic integrity, scientific responsibility, and transparency.
Large language models (LLMs), such as ChatGPT, cannot be credited as co-authors of a scientific article: authorship entails responsibility for the work, and such responsibility cannot meaningfully be assigned to an LLM.
The journal's guidance on the use of generative AI technologies in scientific publishing is based on the World Association of Medical Editors (WAME) recommendations on chatbots and generative AI in scholarly publications.
Acceptable areas of AI application:
- literature review: searching and systematizing publications; analyzing market trends/patent landscape;
- methodology development: assisting in developing research protocols and selecting methods;
- software development and automation: code generation; code optimization; process automation; creating data analysis algorithms;
- data management: cleaning; curation and systematization;
- data analysis and visualization;
- writing and editing: proofreading and grammar checking; preparation of press releases/information materials;
- ethical and social analysis: analysis of bias/discrimination; assessment of ethical risks; monitoring of compliance with ethical standards;
- supervision and management: quality assessment; identification of trends and limitations.
Unacceptable areas of AI application:
The use of AI is strictly prohibited where it could call into question the originality, authenticity, or scientific integrity of the work, in particular:
- generation of scientific content (formulating research problems, describing methods, analyzing results) that has not been verified and confirmed by the authors;
- automatic creation of references or bibliographies, which may lead to fictitious or incorrect sources;
- masking plagiarism by paraphrasing or automatically rewriting other people's texts;
- crediting AI as a co-author of an article; artificial intelligence systems cannot be recognized as authors or held co-responsible for scientific results.
Information disclosure requirements:
- authors must clearly state, in a dedicated note, which AI tools were used and for what purpose;
- the disclosure should be as transparent as possible (for example: “MATLAB with AI-based analytics was used to generate the graphs”);
- concealing the use of AI is considered a violation of the principles of academic integrity.
Authors' responsibility:
- the authors bear full responsibility for the content of the article, the reliability of the results, and the correctness of the interpretations.
- the use of AI does not exempt authors from their obligation to ensure scientific novelty, accuracy, and compliance with ethical standards.
- in the event of violations of this AI policy, the editorial board reserves the right to request corrections, reject the submission, or retract the article.
The appropriateness of any use of AI tools will also be assessed by the editor and reviewers during the double-blind review stage.
Reviewers should not upload manuscripts to AI tools or use other AI technologies in ways that could breach confidentiality, and should not use AI tools to write their review reports.
As we expect this area to evolve rapidly, we will review this policy regularly and update it as necessary.