What will the next iteration of scholarly journal peer review look like?
That’s the question team Scholastica is pondering this Peer Review Week (PRW), themed “Peer Review and the Future of Publishing.” Coming off the tail end of conference season, following events like the Society for Scholarly Publishing (SSP) and Association of Learned and Professional Society Publishers (ALPSP) annual meetings, PRW brings a new wave of discussions surrounding the latest developments in journal peer review policies and processes, the key questions stakeholders are grappling with as they look to the future, and the rippling effects throughout the research ecosystem.
Below are the primary issues and innovation areas we’re tracking with high-level breakdowns of the latest updates.
Close encounters with AI
Let’s jump right in and talk about the elephant (robot elephant?) in the room — artificial intelligence (AI).
First, a fun fact: did you know AI has been around for 60+ years? The first proof of concept was Logic Theorist, a computer program designed to mimic human problem-solving skills, written by Allen Newell, Cliff Shaw, and Herbert Simon, with funding from the RAND Corporation.
Scholars have also been dipping their toes into the waters of AI research aids for quite some time now, including tools like Paperpal, Elicit, Semantic Scholar, and newcomer Scite.
And then it happened. The launch of ChatGPT, a chatbot developed by OpenAI and based on a large language model (LLM), in November 2022 catapulted AI into the mainstream, introducing a host of pressing ethical quandaries and exciting opportunities for those in scholarly publishing to parse out.
Suddenly, there’s a free tool anyone can use to answer questions and generate convincingly original content, one that’s known to “hallucinate” on occasion (i.e., make things up) and has ushered in a wave of copyright infringement allegations from creators. At the same time, ChatGPT and other tools like it are proving to be promising forces for good, helping scholars gather and synthesize information more quickly (e.g., for literature reviews and data analysis), manage citations, and draft research papers, particularly for non-native English speakers, as discussed in this Scholarly Kitchen article by Avi Staiman, CEO of Academic Language Experts. (A word to the wise: Staiman also said publishers should be wary of using AI detection tools, which frequently produce unreliable results.)
At Scholastica, we’ve been getting a lot of questions from our editor users about how to handle the use of AI in research writing, so we’ll start there. What are the latest recommendations?
Here’s a breakdown of developing guidelines from standard-setting bodies:
- AI Ethics in Scholarly Communication: STM Best Practice Principles for Ethical, Trustworthy and Human-centric AI
- COPE position statement on “Authorship and AI tools”
- COPE position statement on “Artificial intelligence (AI) in decision making”
- CSE Guidance on Machine Learning and Artificial Intelligence Tools
- Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations
- ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals
Some key takeaways from the above guidelines: The use of AI tools in research development and writing is widely considered acceptable. Most agree that AI tools can’t be listed as authors, but authors should disclose and credit their use. Authors are also ultimately accountable for ensuring the accuracy of AI outputs and that AI-assisted content isn’t plagiarized.
ISMTE’s EON also recently published a comprehensive overview of the current state of LLMs and helpful recommendations for journals by Caitlyn Trautwein, Chhavi Chauhan, PhD, and Chirag Jay Patel.
Beyond research writing, publishers are navigating opportunities and challenges around using AI to support peer review workflows, including manuscript screening, plagiarism detection, and reviewer searches. As Rachel Burley, Chief Publication Officer at APS, discussed, it’s critical for publishers and vendors to tread carefully here. “AI algorithms can inherit biases from the data they’re trained on. It could lead to even more bias, like biased reviewer recommendations. We have to ensure we’re making efforts to eliminate that and reduce unintended bias,” said Burley. “There are also ethical considerations around privacy and data security and transparency. Authors and reviewers need to be aware of how their data is being used and who has access to it.”
Stakeholders are sure to be traversing AI gray areas for quite some time. But there’s no question that AI offers immense time-saving potential from peer review all the way downstream, and we’re already seeing it in action. For example, at Scholastica, we’ve been leveraging machine learning, an application of AI, in our digital-first production service for faster and more accurate manuscript formatting, citation normalization, and citation enhancement, eliminating manual steps for editors and authors.
Renewed focus on research integrity
All these discussions surrounding the use of AI in scholarly publishing, combined with ongoing concerns about research reproducibility, replicability, and plagiarism, have resulted in a renewed focus on research integrity, fittingly the theme of the last PRW.
In our always-on information age, all scholarly communication stakeholders are grappling with tensions between increasing research speed and output and ensuring rigor and accountability.
The first line of defense for scholarly journals is developing and regularly reassessing research integrity policies and processes, including a publication ethics statement, statements of originality and disclosures, and plagiarism guidelines for editors and authors. Scholastica teamed up with Research Square to create a publication integrity toolkit last PRW with resources to help. There are also many software tools journals can use to support research integrity checks, including plagiarism detection services like Crossref Similarity Check (if you use Scholastica’s peer review system, we offer an integration option!) and scientific image checking tools — another area where AI is serving as a force for good — such as Proofig and ImageTwin.
Wondering what research integrity developments to watch from there? Among the latest are:
- Publication Integrity Week (October 2-6, 2023): this COPE event will consist of a series of webinars covering case studies for handling publication misconduct and fraud, as well as emerging topics like the use of AI in peer review and the role of editors in assessing content for inclusivity
- Paper Mills Research report from COPE & STM: an overview of the current state of paper mills and what scholarly publishing stakeholders can collectively do to address the problem. COPE also recently held a webinar on “Practical steps for managing paper mills,” available to watch on-demand here
- STM Research Integrity Hub: A developing collaborative platform of research integrity evaluation tools and resources for publishers, including new integrations with Clear Skies Papermill Alarm and the PubPeer database
- Crossref joins forces with Retraction Watch: Crossref has acquired the Retraction Watch database and will make it open and freely available, creating new opportunities for publishers to catch retracted papers in citation lists and remove them before the citing paper is published (AI could no doubt play a supporting role here)
Of course, at the root of addressing research integrity issues is promoting responsible research evaluation. For years, scholarly publishing stakeholders have warned of the risks of relying too heavily on the Journal Impact Factor and other bibliometric research impact indicators. Overreliance on these metrics contributes to research inequities and has fueled the current “publish or perish” research culture, which emphasizes research quantity, potentially to the detriment of quality (feeding pressing issues like paper mills and ghost/gift authorship), and rewards positive outcomes over negative or inconclusive results.
“Future Trends in Responsible Research Evaluation” was the topic of the 2023 ALPSP Conference keynote, featuring Elizabeth Gadd, Research Policy Manager at Loughborough University; Nicola Nugent, Publishing Manager, Quality & Ethics at the Royal Society of Chemistry; and Sarah Faulder, Chief Executive of Publishers’ Licensing Services. The speakers discussed current challenges in research evaluation and promising corrective initiatives, including:
- The Declaration on Research Assessment (DORA): A community-led initiative to divorce the Journal Impact Factor from research assessment, launched in 2012
- The Leiden Manifesto: 10 Principles to guide research evaluation to support researchers and managers, led by Diana Hicks, professor in the School of Public Policy at Georgia Institute of Technology, and Paul Wouters, director of CWTS at Leiden University
- The SCOPE Framework: a five-stage process for evaluating research responsibly, developed by the Research Evaluation Group (REG)
- Humane Metrics Initiative: A community-led initiative to create and support frameworks for more well-rounded research evaluation
- Resources to support researchers in writing narrative CVs, like The Royal Society’s Resume for Researchers and UKRI’s “Resume for Research and Innovation (R4RI)” CV template
- The Coalition for Advancing Research Assessment (CoARA): Guiding principles for research assessment reform from the European Commission, the European University Association, and Science Europe initiated in 2022 (learn more about the latest developments in this article from Physics Today)
As more scholarly institutions embrace alternative research assessment approaches, journal publishers and editorial teams can support the shift by helping authors track alternative impact metrics. In this vein, Scholastica users should know that we recently added new public-facing readership metrics and the option to integrate with Altmetric Badges to our OA publishing platform.
The rise of PIDs and new peer review taxonomies
Now, on to one of the Scholastica team’s favorite topics — metadata! And we’re not the only ones. This year’s SSP conference even featured a metadata-themed musical!
Why do we love metadata so much? When publishers collect clean, correct, and complete metadata at the point of manuscript submission and translate it into standardized, machine-readable JATS XML, that metadata can accompany articles downstream from production to publication to dissemination via content registration, abstracting, and indexing services. Doing so opens up a whole new world of opportunities to improve research discoverability and accessibility and strengthens scholarly communication infrastructure by promoting interoperability between tools and systems. All of this can help us achieve the “research nexus,” defined by Crossref as “a rich and reusable open network of relationships connecting research organizations, people, things, and actions.”
At Scholastica, we’ve been talking a lot about the importance of journals including Persistent Identifiers (PIDs) in their article-level metadata to prevent link rot, streamline information flow, reduce administrative burdens, and promote research transparency and discoverability. You can read our latest guide, “A PID’s Life: What journals and scholars need to know,” to learn more.
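To make that concrete, below is a minimal sketch of what PID-rich, article-level metadata can look like in JATS XML. Every identifier and name in it is a hypothetical placeholder, and exact tagging varies by JATS version and publisher conventions:

```xml
<!-- Illustrative JATS <article-meta> fragment; all identifiers are hypothetical placeholders -->
<article-meta>
  <!-- Article-level PID: the DOI registered for the article -->
  <article-id pub-id-type="doi">10.12345/example.2023.0042</article-id>
  <title-group>
    <article-title>An Example Article Title</article-title>
  </title-group>
  <contrib-group>
    <contrib contrib-type="author">
      <!-- Person-level PID: the author’s ORCID iD -->
      <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0000-0000-0000</contrib-id>
      <name><surname>Doe</surname><given-names>Jane</given-names></name>
      <aff>
        <institution-wrap>
          <institution>Example University</institution>
          <!-- Organization-level PID: a ROR ID for the author’s affiliation -->
          <institution-id institution-id-type="ror">https://ror.org/000000000</institution-id>
        </institution-wrap>
      </aff>
    </contrib>
  </contrib-group>
  <pub-date date-type="pub" publication-format="electronic">
    <day>25</day><month>09</month><year>2023</year>
  </pub-date>
</article-meta>
```

Because each PID is a resolvable link rather than a free-text string, downstream systems can unambiguously connect the article to its authors and institutions wherever the metadata travels.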
Another area all publishers and journals should be following is the rise of peer review taxonomies, most notably these emerging ANSI/NISO standards:
- CRediT (Contributor Roles Taxonomy): 14 research contributor roles that journals can include in the body and machine-readable metadata of their articles to increase recognition of and transparency around the various possible forms of research contribution
- The Peer Review Terminology Standard: standard definitions and best practice recommendations for communicating peer review processes, now available in version 2.0
In addition to improving transparency around research development, writing, and peer review processes and recognition of contributors, peer review taxonomies can support research discovery and even archiving when added to article-level metadata. CRediT already offers conventions for including contributor role details in article-level metadata, and the Peer Review Terminology Standard is working towards the same aim. Lois Jones, a member of the STM working group and Peer Review Manager at the American Psychological Association (APA), discussed the benefits of including the Terminology in article-level metadata in a past Scholastica interview.
“Having Taxonomy information travel with metadata means it can also be available within the articles themselves. This introduces the ability to someday potentially have mechanisms to search for articles based on their peer review details,” said Jones. “It also means if a publisher were to migrate its content to a different database, or maybe move from PDF into a reflowable, mobile-friendly format, for example, that review information can stay with articles instead of being lost. The idea is that information should be able to travel along with the content itself to make it truly transparent and open.”
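As an illustration, here’s a hedged sketch of how CRediT roles can be tagged in JATS XML using the vocabulary attributes on the <role> element introduced in JATS 1.3; the contributor details below are hypothetical:

```xml
<!-- Illustrative sketch: CRediT roles on a JATS <contrib>; author details are hypothetical -->
<contrib contrib-type="author">
  <name><surname>Doe</surname><given-names>Jane</given-names></name>
  <!-- Each <role> points to a term in the CRediT taxonomy via the JATS 1.3 vocab attributes -->
  <role vocab="credit"
        vocab-identifier="https://credit.niso.org/"
        vocab-term="Conceptualization"
        vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
  <role vocab="credit"
        vocab-identifier="https://credit.niso.org/"
        vocab-term="Writing – original draft"
        vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing – original draft</role>
</contrib>
```

Tagged this way, contributor roles remain machine-readable wherever the XML goes, supporting exactly the kind of portability Jones describes.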
You can learn more about the benefits of CRediT implementation and how to get started during Scholastica’s upcoming webinar, “Giving CRediT Where it’s Due: What journals need to know about the Contributor Roles Taxonomy,” on October 10th, 2023 at 11 AM EDT (all registrants also get the recording). And to learn more about The Peer Review Terminology Standard, you can join STM’s webinar “Launch of the Peer Review Terminology Standard” this week on September 26th at 10 AM EDT.
Finally, if you’re looking for a high-level overview of the metadata elements all journals should have, Scholastica’s blog post “In pursuit of rich article-level metadata: 5 elements journal publishers should prioritize” has you covered.
Other notable developments and closing thoughts
Before we wrap up this blog post, we wanted to highlight a few more notable developments:
- Increased focus on the author experience: As the publishing world moves toward an open access future, publishers and journals are undergoing another major shift, from marketing to the research institutions that purchase subscription deals to marketing to the authors who choose which titles to submit to, as discussed during this year’s ALPSP conference session, “Marketing, data and analytics for an open research future.” Clarke & Esposito even coined the term “Author Experience (AX)” to denote this new focus area. To learn more about the latest AX developments and best practices, we recommend checking out their blog post “Author Experience (AX): An Essential Framework for Publishers.”
- DEIA statements in action: In recent years, scholarly publishers and individual journals have been putting more resources toward promoting diversity, equity, inclusion, and accessibility in the publishing industry and academia more broadly. As we enter the next iteration of peer review, it’s promising to see more initiatives to help organizations not only develop DEIA statements but put them into action, including The Coalition for Diversity and Inclusion in Scholarly Communications (C4DISC)’s Toolkits for Equity and case study discussions like the 2023 Council of Science Editors conference session “Incorporating Demographic Data.” COPE’s Publication Integrity Week 2023 will also feature hands-on DEIA webinars.
- Initiatives to train the next generation of peer reviewers: Publishers and individual journals also appear to be investing more resources in initiatives to help train up-and-coming peer reviewers, likely in response to ongoing reviewer shortages. We cover tips for working with early-career peer reviewers in this Scholastica blog post. Be sure to also check out the many great resources on the topic this PRW, including Origin Editorial and Canadian Science & Medical Editors’ webinar, “Training Peer Reviewers as a Form of Engagement: A Solution to the Peer Review Crisis?” and the American Society for Microbiology’s webinar “Peering into Peer Review.”
As we progress toward the next iteration of peer review, there are SO many changes and exciting innovations happening in the world of scholarly publishing that one blog post can’t possibly cover them all.
What would you add that’s missing here? We invite you to join the conversation by sharing questions and additional examples in the blog comments below and on social media. You can find Scholastica on LinkedIn, X (formerly Twitter), and Facebook.