Artificial Intelligence (AI) Ethics Principles

Given the rapid development of artificial intelligence (AI) and its growing use in scientific publishing, clear guidelines are needed to uphold academic integrity and ethical standards for all participants in the publication process. The ITLT's position on AI use is based on the Artificial Intelligence Act passed by the European Parliament, the principles published by COPE (the Committee on Publication Ethics), and established ethical standards for the scientific publication process. Recommendations from well-known international publishers, such as Elsevier, Thomson Reuters, and Emerald Publishing, have also been taken into account.

The journal's policy sets out ethical guidelines for the use of AI by authors, reviewers, and editors during article submission, review, and publication, with the aim of ensuring transparency and compliance with ethical research standards.

For Authors

Authors bear full responsibility for the content of their article, including any parts created with AI tools, and are accountable for any inaccuracies or breaches of publication ethics. AI tools (e.g., ChatGPT) cannot be credited as authors of submitted articles, and authors must not cite AI as an author: authorship is limited to individuals who have made significant intellectual contributions to the work. Authors are responsible for any plagiarism, including in AI-generated text and images, and must ensure that all cited materials are genuine, appropriate, and accompanied by full citations.

The intentional use of generative AI to produce manuscripts with fabricated citations and references is considered text plagiarism.

Authors are responsible for verifying the accuracy of information generated or analyzed by AI tools. Content created using AI must be carefully checked for errors or biases.

Authors should consider the potential negative consequences of using AI in writing a scientific article.

AI tools such as ChatGPT may be used as supplementary sources of information but not to generate scientific texts. Each response generated by ChatGPT should be referenced with an appropriate URL. AI use is permitted in certain cases, such as literature reviews, scientific essays, and review articles; in these cases, the author must describe the method and purpose of such use in the "Methods" section. The article may include the prompt used by the author followed by the relevant AI-generated text. Alternatively, full responses from AI tools such as ChatGPT can be included in an appendix or in supplementary online materials.

Authors may use spell-checkers, grammar checkers, and reference managers such as Mendeley, EndNote, or Zotero during research and article preparation; these tools can be used without disclosure. By contrast, this policy does apply to AI and AI-supported tools, such as large language models that can generate content used in scientific work.

The use of generative AI or related tools without citation is prohibited. An exception is made when AI or AI-supported tools are used as part of the research design or methods (e.g., AI-supported visualization approaches for acquiring or interpreting core research data, such as in biomedical imaging); in that case, a detailed description of their use must be provided in the corresponding section of the manuscript.

For Reviewers

Reviewing a scientific manuscript is a responsibility that can be entrusted only to humans. Reviewers must not use generative AI or AI-supported technologies to assist in manuscript review: the critical thinking and original assessment that peer review requires are beyond the scope of this technology, and its use risks incorrect, incomplete, or biased conclusions. Reviewers are responsible for their expert evaluations and comments.

Reviewers must maintain confidentiality during manuscript review. They should not upload the submitted manuscript, or any part of it, to a generative AI tool, as this could violate the authors' confidentiality and intellectual property rights; if the manuscript contains personal data, it may also infringe data privacy rights.

This confidentiality requirement also applies to the review report, which may contain confidential information about the manuscript and/or authors. For this reason, reviewers should not upload their review reports to AI tools, even if done solely to improve language and readability.

AI tools can assist in the review process, for example in plagiarism detection or data consistency analysis, but they must not replace the reviewer's critical judgment. In such cases, reviewers should disclose that AI tools were used during the review and provide detailed information about the nature and extent of their use.

Reviewers must ensure that any AI tools used in the review process comply with confidentiality agreements and do not misuse manuscript data.

In their expert evaluations, reviewers should insist that authors either consider potential negative consequences or clearly argue that the paper's contributions will have a net positive impact. For example, a "Broader Impacts" or "Societal Impacts" section could be added to the manuscript, similar to sections such as "Future Work" or "Limitations".

For Editors

The submitted manuscript must be treated as a confidential document. Editors should not upload the submitted manuscript, or any part of it, to a generative AI tool, as this could violate the authors' confidentiality and intellectual property rights; if the manuscript contains personal data, it may also infringe data privacy rights.

This confidentiality requirement applies to all communications related to the manuscript, including any notifications or decision letters, as they may contain confidential information about the manuscript and/or its authors. For this reason, editors should not upload their letters to AI tools, even if done solely to improve language and readability.

Editors may use AI tools to detect plagiarism or identify potential reviewers. However, the final decision remains with human editors.

If editors use AI tools during the editing process, they must disclose this use to authors and reviewers to ensure transparency in the publication process.

The final decision on whether the use of AI-generated content is appropriate or acceptable in a submitted manuscript or published article rests with the journal editor or another person responsible for the editorial policy.