INTRODUCTION
This policy is based on and refers to the following guidelines on generative AI for journals:
- STM: Recommendations for classifying AI use in academic manuscript preparation
- Elsevier: The application of generative AI and AI-assisted technologies within the review process
- WAME: Chatbots, generative AI, and scholarly manuscripts
JIKO (Journal of Informatics and Computer) recognizes the vital role of artificial intelligence (AI) and its potential to assist authors in their research and writing processes. JIKO welcomes the emerging opportunities brought by generative AI tools, particularly in idea generation, research acceleration, data analysis, writing enhancement, manuscript organization, supporting authors writing in a second language, and expediting the dissemination of research findings. Accordingly, JIKO provides guidance for authors, editors, and reviewers on the appropriate use of such tools, acknowledging that this guidance may evolve in line with the rapid advancement of AI technologies.
Generative artificial intelligence, including large language models (LLMs) and multimodal models, continues to progress rapidly, especially in its applications across business and consumer domains. While generative AI holds great potential to enhance creativity among authors, it is essential to recognize the associated risks of its current use. These tools can produce diverse forms of content, including text, images, audio, and synthetic data. Prominent examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway.
However, the use of generative AI tools today also presents several risks, such as:
- Inaccuracy and bias: Because these tools operate on statistical models rather than factual reasoning, they may generate inaccurate, misleading, or biased information that can be difficult to verify and correct.
- Lack of attribution: Generative AI tools often fail to follow established academic practices regarding proper referencing, citation, and attribution of ideas or sources.
- Confidentiality and intellectual property risks: Many AI platforms managed by third parties may not provide adequate data security, confidentiality, or copyright protection.
- Unintended data use: AI developers may reuse user inputs or generated outputs for further training, potentially infringing upon the rights of authors, publishers, and other stakeholders.
AUTHORS
Authors may utilize generative AI tools (e.g., ChatGPT, GPT models) to assist with specific tasks such as improving grammar, clarity, and readability in their manuscripts. Nevertheless, authors retain full responsibility for the originality, validity, and integrity of their work. Any use of AI tools must be conducted responsibly and in accordance with the journal’s policies on authorship and publication ethics. This responsibility includes thoroughly reviewing all AI-generated outputs to ensure content accuracy and reliability.
JIKO supports the ethical and responsible use of generative AI tools, provided that proper standards of data security, confidentiality, and copyright protection are maintained. Acceptable uses include:
- Idea generation and conceptual exploration.
- Language editing and improvement.
- Conducting online searches using LLM-based search engines.
- Classifying literature.
- Receiving coding assistance.
Authors are expected to ensure that all content submitted for publication meets rigorous scientific and scholarly standards and is fundamentally the result of human research and validation.
Generative AI tools must not be listed as authors, as they cannot assume responsibility for the work, participate in authorship consent, or manage copyright and licensing agreements. Authorship demands accountability, willingness to enter a publishing agreement, and responsibility for the integrity of the research—all of which require human judgment and ethical understanding beyond the capacity of AI systems.
Authors must explicitly disclose any use of generative AI tools within their manuscript by including a clear statement specifying the tool’s full name (and version, where applicable), its application, and the purpose of its use. This disclosure should appear in the Methods or Acknowledgements section. Such transparency enables editors to evaluate whether AI tools have been used responsibly. JIKO reserves the right to make editorial decisions concerning publication to uphold ethical and scholarly standards.
When using AI tools, authors must ensure that such tools are appropriate and reliable for their intended function. They should also carefully review the related terms of use to confirm adequate protection of intellectual property, privacy, and data security.
Authors must refrain from employing generative AI in ways that undermine research integrity or authorial accountability, including:
- Generating text or code without thorough human revision.
- Creating synthetic data to replace missing data without a robust methodological framework.
- Producing inaccurate or misleading content, including abstracts or supplementary materials.
Manuscripts found to violate these principles may be subject to editorial review and investigation.
JIKO currently prohibits the use of generative AI to produce or alter images, figures, or original research data submitted for publication. Here, "images and figures" includes photographs, charts, tables, medical imagery, image fragments, computer code, and mathematical formulas; manipulation includes altering, concealing, adding, or deleting elements within such materials.
Throughout every stage of the research and publication process, human oversight and transparency remain essential when using generative AI or AI-assisted technologies. JIKO acknowledges that research ethics evolve alongside technological progress and will continue to refine its editorial policies to reflect ongoing developments in generative AI and ethical research standards.
EDITORS AND PEER REVIEWERS
JIKO upholds the highest standards of editorial integrity, confidentiality, and transparency. The use of unpublished manuscripts in generative AI systems can risk breaching confidentiality, ownership rights, and personal privacy. Therefore, editors and peer reviewers are strictly prohibited from uploading any unpublished materials—including files, images, or information—into generative AI tools. Failure to comply with this policy may result in the infringement of intellectual property rights.
Editors
Editors play an essential role in safeguarding the quality and integrity of the journal’s published research. Accordingly, they must maintain strict confidentiality throughout the submission and peer review processes.
The use of generative AI systems for handling manuscripts poses serious risks to data privacy, proprietary rights, and information security. For this reason, editors are not permitted to upload unpublished manuscripts or any related materials, including supplementary files and images, into AI-driven platforms or tools.
Peer Reviewers
Peer reviewers, as experts in their respective fields, must avoid relying on generative AI for evaluating or summarizing submitted manuscripts or their components. Reviewers are likewise prohibited from uploading any unpublished manuscripts, project proposals, or related materials into AI systems.
Generative AI may only be used to improve the language or clarity of the review text itself. However, peer reviewers remain fully responsible for the accuracy, fairness, and integrity of their evaluations.