Artificial Intelligence (AI) Guidelines


Artificial Intelligence (AI) is a supporting tool; it cannot replace human judgment, critical thinking, or scientific integrity. We recognize how rapidly tools such as Large Language Models (LLMs) and generative AI are transforming the way we work in research and publishing.

We are committed to promoting the ethical and responsible use of AI in all aspects of research publication.


For Authors

• AI tools (such as Large Language Models [LLMs], chatbots, or image creators) can be helpful for basic tasks, such as grammar correction or language improvement. However, using AI to generate original research, medical opinions, or visual content is not allowed.

• AI tools cannot be listed as authors or co-authors, nor should they be cited as sources of authorship. Authors remain fully responsible for the accuracy, originality, and integrity of their work.

• If AI tools are used for analysis, data collection, writing assistance, or figure generation, this must be disclosed clearly in the cover letter and the Acknowledgements section (or in the Methods section). Include details such as the tool’s name, version/model, and source.

• Authors must confirm that their work is free from plagiarism, including any text or images generated by AI. It is the authors’ responsibility to properly attribute all quoted content and provide complete and accurate citations.


For Reviewers and Editors

Peer reviewers play a crucial role in maintaining the quality and credibility of research. That is why it is essential to approach this responsibility with transparency and care.

AI tools must not be used to process or review submitted manuscripts. Uploading any part of a manuscript to an AI tool can:

    • Violate confidentiality.

    • Breach the author’s rights.

    • Potentially compromise personal or sensitive data.


Manuscript review demands human expertise, thoughtful analysis, and subject-specific judgment. While AI can mimic patterns, it cannot offer the depth or accountability that human reviewers provide.


Cautious Use and Disclosure of AI Tools

If an AI tool is used in any way to support the evaluation of a manuscript’s claims, reviewers must obtain prior approval from the journal, and the use of such tools must be clearly and transparently disclosed in the peer review report. While AI-generated content may appear accurate or authoritative, it can be misleading, biased, or incomplete; reviewers are therefore strongly advised to use such tools with caution.


In conclusion, reviewers and editors must exercise great care in preparing their review reports and in upholding the standards of the peer-review process in publication.