Generative AI and AI-Assisted Tools Policy
The rapid advancement of generative artificial intelligence (AI) has brought significant changes to academic writing and research practices. Tools such as ChatGPT, Gemini, and other AI-driven platforms are increasingly used to assist authors, editors, and reviewers in the scholarly communication process. While these technologies offer new possibilities for efficiency and creativity, they also raise important questions about research integrity, authorship, originality, and accountability.
Recognizing these opportunities and challenges, Nabawi Journal of Hadith Studies considers it essential to provide clear guidance on the responsible and ethical use of generative AI within the research and publication process. The journal’s policy has therefore been developed with reference to Elsevier’s Generative AI Policies for Journals and the APA Journals Policy on Generative AI, ensuring that human scholarly judgment remains central at all stages of academic work.
Article 1: Purpose
This policy provides clear guidance on the responsible use of generative AI and AI-assisted tools by authors, reviewers, editors, and the publisher in all stages of academic publishing.
Article 2: Definitions
For the purpose of this policy:
- Generative AI refers to artificial intelligence systems capable of producing new text, images, or other content in response to user prompts.
- Text-based generative AI is divided into two categories:
  - Generative AI with verifiable references, which provides responses with links to verifiable academic sources (e.g., journals or books). Examples include Usul.ai, Perplexity, Semantic Scholar AI Assistant, Scite Assistant, Elicit, and Consensus.
  - Generative AI without verifiable references, which produces outputs or explanations without direct reference links to academic materials. Examples include ChatGPT, Gemini, Claude, and Copilot.
- AI-Assisted Tools refer to non-generative AI technologies that support technical or linguistic aspects of writing—such as grammar correction, translation, or formatting—without generating substantive scholarly content. Examples include Grammarly, DeepL, Trinka, and LanguageTool.
- AI Use in this policy includes the use of any system above for writing, editing, translation, data analysis, image generation, or reference searching.
Article 3: Use of AI by Authors
- AI may assist writing but must never replace scholarly reasoning or authorship.
- Authors may use generative AI tools under direct human supervision to improve language, readability, or structure.
- Authors remain fully responsible for the accuracy, originality, and ethical integrity of the entire manuscript.
- Any use of AI must be clearly declared in the manuscript—preferably in the Methods or Author’s Note section—stating the name and purpose of the AI tool used.
- AI use must respect privacy, intellectual property rights, and third-party data protection.
Reason: To maintain author accountability and prevent AI misuse that may compromise the authenticity of research.
Article 4: Use of AI in Images and Illustrations
AI-generated visuals may distort or misrepresent scientific information.
- The use of AI to generate or modify images within manuscripts is prohibited, except as provided below.
- Minor image adjustments (brightness, contrast, color) are allowed if they do not alter original data or meaning.
- Exceptions are permitted only if AI is part of the research methodology and fully explained in the Methods section.
- AI-generated illustrations or cover art require explicit editorial approval.
Reason: To preserve visual integrity and prevent data manipulation.
Article 5: Disclosure and Transparency
Transparency supports trust and allows accurate evaluation of AI’s role.
- Authors must disclose the AI tools used, their specific purpose, and the extent of human oversight.
- AI may not be listed or cited as an author.
- Disclosure should appear in the Methods or Author’s Note section of the manuscript.
Reason: To ensure openness and accurate attribution of scholarly work.
Article 6: Authorship Responsibility
- Human authors are solely responsible for verifying all AI-generated or AI-assisted content.
- AI tools cannot hold authorship status, nor can they be credited as co-authors.
- Authors must verify all quotations, data, and interpretations produced through AI before publication.
Reason: To affirm human accountability for the scholarly accuracy and ethical conduct of all publications.
Article 7: Ethics, Privacy, and Confidentiality
- Authors must not upload confidential data, participant information, or unpublished manuscripts into open AI systems.
- Reviewers and editors must preserve manuscript confidentiality and must not upload any part of the manuscript into AI systems.
- All parties are responsible for ensuring that AI use aligns with ethical standards and data protection regulations.
Reason: To safeguard privacy, confidentiality, and research integrity.
Article 8: Use of AI by Reviewers and Editors
- AI cannot replace human judgment in editorial or peer review decisions.
- Reviewers and editors must not upload manuscripts or editorial communications to generative AI tools, as this may violate confidentiality.
- They must not rely on generative AI to evaluate, summarize, or recommend editorial decisions.
- However, reviewers and editors may use AI-based academic search tools—such as Usul.ai, Perplexity, Semantic Scholar AI Assistant, Scite Assistant, Elicit, or Consensus—to verify references or locate relevant literature, provided that all sources are verified manually.
- Reviewers and editors retain full responsibility for their assessments and decisions.
- This policy may be updated when AI tools demonstrably meet ethical, security, and accuracy standards set by the journal.
Reason: To preserve the independence, confidentiality, and objectivity of the review and editorial process.
Article 9: Use of AI by the Publisher (LP2M Ma'had Aly Hasyim Asy'ari)
- AI may support technical operations but not editorial judgment.
- The publisher may use AI tools to assist with technical checks, copyediting, and proof preparation.
- All publication decisions remain under direct human oversight.
Reason: To improve efficiency while ensuring human control and accountability.
Article 10: Review and Revision of Policy
This policy will be periodically reviewed in response to developments in AI technology and evolving standards of research ethics.