Use of Artificial Intelligence Tools by Reviewers

Reviewers play a confidential role in the editorial process of the Journal of Computers, Mechanical and Management. The use of artificial intelligence tools by reviewers raises specific confidentiality concerns, which this policy addresses. The journal's position aligns with guidance from COPE and with current best practice across major scholarly publishers.

1. Manuscripts must not be uploaded to public AI tools

Reviewers must not upload manuscripts under review, in whole or in part, to public or third-party generative AI tools, including but not limited to ChatGPT, Gemini, Claude, Copilot, and comparable services. Manuscripts under review are confidential documents, and uploading them to such tools constitutes a breach of confidentiality. It also exposes the manuscript to potential incorporation into model training data and to unknown retention by the service provider.

2. Manuscripts must not be processed by AI tools that retain or train on inputs

Manuscript content must not be processed by any AI tool that retains inputs or uses them for model training. Even where a tool offers an enterprise option that claims not to retain or train on inputs, reviewers must verify the data-handling terms before using the tool with manuscript content. Where verification is not possible, the tool must not be used.

3. Permitted, limited uses

Reviewers may use AI tools for purposes that do not involve the manuscript content, including:

  • General methodological reference, for example asking a tool to explain a statistical concept in general terms, without sharing any manuscript content;
  • Translating short, generic terminology where the reviewer's working language differs from the manuscript language;
  • Editing the language of the reviewer's own report before submission, provided the reviewer is satisfied that no manuscript content is included in the prompt.

4. The review must be the reviewer's own work

The substantive content of a peer-review report (the assessment of the manuscript's originality, methodology, results, and contribution) must be the reviewer's own intellectual work. Reports generated wholly or substantially by AI are not acceptable.

5. Disclosure

If a reviewer has used an AI tool in any way that involves the manuscript content while preparing a review, the reviewer must disclose this to the handling editor in the comments-to-editor field of the review form, providing sufficient detail for the editor to assess any confidentiality implications.

6. Consequences of breach

A reviewer found to have breached this policy may be removed from the journal's reviewer pool. Where the breach affects a specific manuscript, the journal will inform the authors and may require additional reviews to ensure that the editorial decision is sound.

7. The journal's own use of AI

The editorial team and the publisher do not use generative AI to make editorial decisions or to write decision letters. The editorial team may use AI tools for administrative tasks, for example, similarity checking through licensed services; any such use is subject to the journal's broader confidentiality and data-handling commitments.