
Nowadays, when starting a new writing project, many of us turn to our favorite virtual assistant for some inspiration and support. By that, I mean ChatGPT, or insert your other Large Language Model (LLM) Artificial Intelligence (AI) tool of choice.
LLMs can be game changers for content idea generation, outline development, translation, and even the legwork of actual writing. However, they can also be prone to error and bias, produce near-verbatim duplications of their training materials that can lead to unintended plagiarism, and pose data privacy risks, among other concerns. All of this can be problematic in any use case, but especially in academic writing and peer review.
So, what policies should journals have in place to ensure ethical AI use? And how can editors monitor compliance? In this blog post, I cover current industry recommendations and tips for baking AI disclosures into your journal forms so authors and reviewers don’t miss them.
But first, thanks for clicking through to read this from whatever AI response Google generated for your initial search query. :)
Current industry guidelines for AI use in scholarly publishing
Before we get into what to include in journal AI policies, here’s an overview of the current industry consensus on AI use case dos and don’ts.
For authors: Industry guidelines released by COPE, STM, ICMJE, WAME, and EASE regard AI technologies as resources researchers may use to support the preparation of a manuscript but agree that AI tools cannot be listed as authors because they cannot assume the full responsibilities of authorship. Those responsibilities include signing forms, asserting the presence or absence of conflicts of interest, and attesting to the accuracy and integrity of a work (e.g., that the work is original to the author and not plagiarized in any way, and that all quoted materials are verified and attributed). COPE states:
“Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.”
Given the rapid adoption of AI tools among academics, the policies of COPE, STM, ICMJE, WAME, and EASE all state that journals should require authors to disclose whether they used AI technologies in preparing their manuscript and, if so, in what capacity, in line with journal requirements.
Currently, there isn’t a standard place for AI disclosures in articles, so it’s up to your editors to decide where to put them. ICMJE and WAME suggest directing authors to state AI use for writing support (e.g., drafting or editing text) in the acknowledgments section, and AI use for data collection, analysis, coding, or image generation in the Methods section. Alternatively, journals may add a separate AI declaration section to the body of articles, as suggested by EASE.
Regardless of where you choose to place AI declarations, WAME recommends requiring the following details for AI use in data collection and analysis:
“In the interests of enabling scientific scrutiny, including replication and identifying falsification, the full prompt used to generate the research results, the time and date of query, and the AI tool used and its version, should be provided.”
For reviewers: Industry guidelines around the use of AI tools in preparing review reports are somewhat less explicit than those for authorship. In recommendations released by STM, ICMJE, and WAME, there is consensus that reviewers should not supply an author’s manuscript to an AI tool unless sanctioned by the journal publisher, as doing so would breach confidentiality: publicly available AI tools retain data inputs with no controls around their future use. However, guidelines around the use of AI tools to support review writing vary. WAME leaves setting parameters up to individual journals, and ICMJE recommends that journals consider possible reviewer use of AI tools on a by-request basis, stating “reviewers must request permission from the journal prior to using AI technology to facilitate their review. Reviewers should be aware that AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.”
STM’s latest guidelines, meanwhile, state that “reviewers should not use publicly available GenAI services as basic author tools (e.g., to refine, correct, edit, and format text and documents)” due to confidentiality concerns.
So, for now, it’s up to journals to draw their own lines in the sand (though I imagine additional recommendations are to come).
For editors: Current guidelines agree that journals should have clear editorial policies around using only secure, publisher-approved AI tools (e.g., plagiarism detection software) and not uploading manuscripts into public AI tools. Further, EASE recommends that any journals that use AI tools in editorial processes disclose that on their website and in communications with authors and reviewers.
What to cover in journal AI policies and disclosures
Do all AI use cases carry the same risk level? No, absolutely not.
There’s a big difference between using an AI tool to check the grammar of a sentence versus using an AI tool to write a substantial portion of a paper or generate images for it. For this reason, it’s essential to consider AI use cases in context to determine what you will (and won’t) permit and clearly communicate those nuances to authors and reviewers in your journal AI policies.
STM recently released recommendations for the classification of AI use in manuscript preparation with a chart of scenarios that journal editors can look to for support in devising their policies for authors. The scenarios range from using AI to “refine, correct, edit, or format the manuscript to improve clarity of language,” which STM’s guidelines consider broadly permissible with no disclosure necessary, to presenting “any kind of content generated by AI as though it were original research data/results from non-machine sources,” which STM’s guidelines prohibit. We at Scholastica highly recommend reviewing these classifications!
Once you’ve determined your team’s stance on possible AI use cases, you’ll be ready to outline explicit policies. For authors, be sure to include:
- The scope of permitted AI use cases
- Prohibited AI use cases
- Where to place AI disclosures within the body of submitted manuscripts (and in any accompanying files)
- What to include in AI disclosure statements, specifying any required details by use case (e.g., for data analysis) like the AI tool(s) employed, prompt(s) used to generate results, and/or the time and date of AI queries (see the illustrative sketch after this list)
- Whether you require authors to upload any/all AI outputs they used as supplementary materials (a practice APA recently implemented)
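To make those disclosure requirements concrete, here’s a minimal sketch of how a journal might capture the details WAME recommends (tool, version, full prompt, and time/date of query) as a structured record. It’s written in Python purely for illustration; the class and field names are hypothetical, not part of any standard or existing system.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical illustration: one possible structure for an AI-use disclosure,
# based on the details WAME recommends for AI use in data collection/analysis.
@dataclass
class AIDisclosure:
    tool: str             # name of the AI tool used
    version: str          # version of the tool
    prompt: str           # full prompt used to generate the results
    queried_at: datetime  # time and date of the query
    use_case: str         # e.g., "data analysis" or "language editing"

    def missing_fields(self) -> list[str]:
        """Return the names of any empty text fields so editors can flag incomplete disclosures."""
        return [name for name, value in vars(self).items()
                if isinstance(value, str) and not value.strip()]

# Example with placeholder values
disclosure = AIDisclosure(
    tool="ExampleLLM",
    version="1.0",
    prompt="Summarize the statistical results reported in Table 2.",
    queried_at=datetime(2025, 1, 15, 14, 30),
    use_case="data analysis",
)
print(disclosure.missing_fields())  # an empty list means all text fields were provided
```

However your journal collects this information, whether through a structured form or a free-text statement, the point is that the same core details show up for every AI-assisted step.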
Purdue University Libraries has a list of publisher policies and requirements for AI use among authors that you can look to for examples.
Scholastica user tip: Journals subscribed to the Scholastica Peer Review System can customize manuscript submission and reviewer feedback forms to include AI disclosure requirements so authors and reviewers don’t miss them. For example:
For your submission form: You can add an affirmation section that requires authors to confirm they followed your journal AI policies with a link to the policy page for quick reference. You can also add a required open-ended AI declaration field so all authors have to address AI use in their submission to prevent errors of omission. Journals that require authors to upload AI research outputs as supplementary materials, like the APA’s, can also add a dedicated field for AI outputs to the file upload section of their submission form and include special instructions for authors at the start of that section explaining when such uploads are necessary. Learn how to customize your Scholastica submission form in this help document.
For your reviewer feedback form: You can add a required open response question around AI use to monitor reviewer interpretations of and compliance with your journal policies. If a reviewer inadvertently misuses AI tools, this will allow you (and them) to catch that and nip the issue in the bud (e.g., clarifying policy language to remove ambiguities). Learn how to customize your reviewer feedback form in this help document.
Monitoring for potential AI policy breaches
All the above AI compliance monitoring approaches depend on authors and reviewers being forthright. What about bad actors? How can editors stave off nefarious AI use? That’s the million-dollar question for the foreseeable future as editors race to keep up with rapidly advancing AI tools.
The first line of defense against research misconduct is robust peer review. EASE notes that journal AI policies should include information on “how parties may raise concerns about possible policy infringement, consequences that any party might face in case of infringement, and possible means to appeal to journal decisions regarding these policies,” so authors and reviewers are equipped to raise alarm bells in cases of suspected malfeasance. Those details should also be included in your journal’s update policies for published articles (so readers know how they can support research integrity monitoring post-publication).
Journal editors and peer reviewers will likely catch obvious AI tells, like repetitive language, a lack of coherence, or uncanny-valley images, but as discussed by Avi Staiman in a 2024 Scholarly Kitchen article, the still-lurking concern is “the non-explicit, subtle extrapolations often made by AI that appear as accurate science even to the well-trained author or reviewer. This could rear its ugly head in the form of a fictitious reference, mistaken data analysis, faulty information, or image manipulation.”
Tools are emerging to help spot potential plagiarism or AI image and text generation. For example, iThenticate’s plagiarism detection software, which powers the Crossref Similarity Check service (which Scholastica integrates with), now includes AI writing detection support.
Such AI-based research integrity tools suggest the problem may very well be part of the solution, though as discussed by Scholastica CEO Brian Cody in our scholarly publishing predictions for 2025 blog post, nascent research integrity tools come with cost/benefit tradeoffs for publishers to consider, like weighing the risk-reduction advantages they offer against the time demands of managing false positives. Ultimately, all AI-based research integrity checkers still require human oversight, as STM emphasizes in its Gen AI guidelines.
At Scholastica, we’re continually working to build out research integrity support for our software and services. We welcome input from Scholastica users and folks in the broader scholarly publishing community around the promises and perils you see for different research integrity automation areas and specific tools you’re excited about. So feel free to drop us a line! We’ve also launched a survey with Maverick Publishing specialists on the “Technology Needs of Small and Medium Journal Publishers” that we encourage small-to-mid-sized publishers to check out.
Looking to the future
As demonstrated in Oxford University Press’ 2024 study on AI use in scholarly publishing (and many others like it), researchers are, by and large, already experimenting with gen AI, so now’s the time to think about AI policy development.
Given the myriad variables in potential AI use and associated risk profiles by discipline, there is no one-size-fits-all AI policy (and likely never will be), so it’s up to individual publishers and journals to parse out parameters for themselves. And with the warp-speed evolution of AI tools, they’ll have to reevaluate their policies frequently.
It can all be unquestionably overwhelming. However, frameworks like STM’s recommendations for the classification of AI use are emerging to help publishers and journals wade through the nuances of AI policy development. Ithaka S+R also released a Generative AI Product Tracker that you can periodically review to stay on top of new AI tools. Knowledge is power!
At this point, we’re merely on the cusp of the AI future, and while there are real risks to navigate, there are also boundless opportunities to explore. Developing comprehensive AI policies is one way journals can help safeguard research integrity and facilitate ethical AI use as scholars navigate this new frontier.
AI disclosure: This post was human-written with AI-assisted grammar checks courtesy of Grammarly.