We’re continuing our Research Integrity Toolkit blog series with Research Square in honor of this year’s Peer Review Week theme, “Research Integrity: Creating and supporting trust in research.” In this post, freelance scholarly publishing writer Victoria Kitchener discusses steps journals can take to promote research reproducibility and replicability. Be sure to check out the corresponding tools for authors posted on the Research Square blog!

Over the past two decades, research reproducibility and replicability have become hot-button issues. High-profile failures to duplicate previously published research conclusions have sparked increased scrutiny of the data and methods underpinning scholarly reports, both inside and outside academia. This scrutiny has spotlighted an uncomfortable truth: we are in the midst of an ongoing reproducibility and replication crisis that spans research disciplines and affects all stakeholders.

There are myriad reasons why it’s critical for research to be reproducible and replicable, particularly for publishers, editors, authors, and readers of scholarly journals. The most fundamental is to avoid disseminating and perpetuating misleading results, whether produced through unintentional error or, in some cases, intentional dishonesty. Others include increasing confidence in research, encouraging collaboration, optimizing time and resources, facilitating diversity and inclusion, and accelerating discovery. You can read a collection of articles on these topics in the Harvard Data Science Review (HDSR) Reproducibility and Replicability special issue.

While reproducibility and replicability are closely linked, it’s worth acknowledging the nuances in their definitions. Reproducibility generally refers to the ability to produce the same findings as a published report using the same data and methods, whereas replicability refers to the ability to reach consistent findings in a new study using new data and, often, different methods.

The jury is out on how well the community has tackled these issues up to this point. But it’s clear there’s still work ahead to ensure the integrity of research data and methods and foster greater trust in scholarship.

This blog post covers practical steps scholarly journals can take to help promote reproducible and replicable research findings.

Prioritizing data sharing and transparency

Non-transparent reporting is one of the primary reasons published research findings can be so difficult to reproduce and replicate. Possibly the most essential way for journals to increase the reproducibility and replicability of the material they publish is to take proactive measures to maximize data availability and transparency. This can have tangible and far-reaching benefits, including the potential for increased citability when all community members can freely assess the quality of an article’s data and methodology. Moreover, it can boost data reuse, expanding the scope for generating new insights. For more in-depth discussion of transparency in the peer-review process, check out our blog posts “4 ways to increase peer review transparency to foster greater trust in the process” and “3 pillars of quality peer review at academic journals.”

Practical approaches to prioritizing data sharing and transparency include:

  • Establishing solid data-sharing policies in line with the needs of journal contributors and readers. These can range from asking researchers to agree to openly share their source data and methods if and when requested, to requiring them to submit those resources as a standard part of your publishing agreement.
  • Many publishers are also beginning to implement the FAIR data principles. FAIR stands for making data Findable, Accessible, Interoperable, and Reusable (see the metadata sketch after this list). For further discussion of strategies to achieve this, check out our blog post “3 Ways scholarly journals can promote FAIR data.”
  • Journals can also champion open and comprehensive sharing not just of the raw data underlying research studies but also the details of the procedures and equipment used to generate them, the data-collection methods employed, the statistical techniques applied for analysis, and the peer-review process. PLOS is a helpful case study for incorporating open methods into research findings. They provide scholars with four submission options, including Registered Reports, study protocols, and lab protocols, as well as traditional research articles.
  • The Center for Open Science (COS) also offers an Open Science Badges scheme journals can adopt to encourage authors to share their research data and methods. There are badges to acknowledge Registered Reports and reports with open data and/or materials. According to COS, “Implementing badges is associated with [an] increasing rate of data sharing (Kidwell et al., 2016), as seeing colleagues practice open science signals that new community norms have arrived.”
  • Another initiative from COS to improve the robustness of research reporting is the Transparency and Openness Promotion (TOP) guidelines, which various publishers now endorse, including the American Psychological Association (APA). The TOP guidelines comprise eight standards for transparent reporting (e.g., citation standards and data transparency), each with three compliance levels of increasing stringency. Accompanying the guidelines is the TOP Factor, a metric that reports the steps a journal is taking to implement open science practices. COS aims for the TOP Factor to be considered alongside other publication reputation and impact indicators, such as the Journal Impact Factor (JIF).
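To make the FAIR principles more concrete, here is a minimal, hypothetical sketch in Python of what FAIR-aligned dataset metadata might look like. The field names loosely echo DataCite-style conventions but are illustrative assumptions, not a formal schema, and the DOI and URL are placeholders.

```python
# A hypothetical sketch of dataset metadata aligned with the FAIR principles.
# Field names are illustrative, not a formal schema; identifiers are placeholders.

dataset_metadata = {
    # Findable: a persistent identifier plus rich descriptive metadata
    "identifier": "https://doi.org/10.xxxx/example",  # placeholder DOI
    "title": "Raw measurements underlying Figure 2",
    "keywords": ["reproducibility", "open data"],

    # Accessible: retrievable via a standard, open protocol
    "access_url": "https://repository.example.org/datasets/12345",
    "access_protocol": "HTTPS",

    # Interoperable: open formats and community-standard vocabularies
    "format": "text/csv",
    "vocabulary": "schema.org/Dataset",

    # Reusable: a clear license and provenance information
    "license": "CC-BY-4.0",
    "provenance": "Collected 2021-03 using protocol v2; see methods section",
}

# A journal checklist might simply verify that the FAIR-critical fields exist.
REQUIRED_FIELDS = ["identifier", "access_url", "format", "license"]

missing = [f for f in REQUIRED_FIELDS if not dataset_metadata.get(f)]
print("FAIR checklist passed" if not missing else f"Missing fields: {missing}")
```

Even a lightweight checklist like this can help editors spot submissions whose supporting data lack a persistent identifier or license before publication.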

Embracing the negatives

Putting policies in place that actively encourage the dissemination of robust, high-quality research with inconclusive, negative, or statistically nonsignificant results is another valuable step journals can take toward improving research reproducibility and replicability. There’s growing recognition that overlooking such studies, or actively rejecting them as candidates for submission and publication in peer-reviewed journals, is slowing progress in combating the reproducibility and replication crisis. Challenging these biases will contribute to more objective and credible literature across disciplines.

Some journals are starting to produce special issues, supplements, or collections dedicated to spotlighting articles with negative or inconclusive results and attempts at reproducing or replicating past studies. A recent example is the “negative results collection” produced by Scientific Reports in February 2022, which highlighted scientifically valid papers the journal published in the natural and clinical sciences reporting null or negative findings.

Other publications are also starting to include specific submission categories for research that reports negative or null results among the standard list of manuscript types they accept for peer review. The “negative result” category in the journal IEEE Access is a good example.

To facilitate the publication of all legitimate and reliable research papers, regardless of their outcome, some journals are offering options for “results-blind” peer review. For example, journals can give authors a choice to obscure the results of their study in the first round of peer review to protect against the possibility of the outcome biasing reviewer input.

Similarly, some journals are inviting authors to submit manuscripts for peer review detailing proposed studies before conducting the research. COS champions this peer-review model, known as Registered Reports. The goal of Registered Reports is to evaluate the validity of research questions, methodologies, and analytical techniques, determining whether they’re sound and likely to contribute to the literature before any results are gathered. If the results of an accepted Registered Report are null or negative, the report should still be published as long as the researchers adhere to the agreed methodology.

An overview of journals that are currently accepting Registered Reports, either as part of their regular submission process or through special issues, along with links to specific guidelines and practical steps for implementing this type of framework, can be found at COS Registered Reports.

Building best practices

To advance research reproducibility and replicability, stakeholders must collaboratively develop and promote best reporting practices. This includes setting standards for post-publication debate and discussion; normalizing the correction, revision, and retraction of articles; and fostering a culture that does not stigmatize these processes but openly accepts their role in furthering academic discourse.

The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network is a great example of a subject-specific initiative that pulls together and builds upon the work individual groups have done to establish and support good research-reporting practices. EQUATOR is a coordinated effort to improve the reliability and value of published health literature by promoting transparent and accurate reporting and the wider international use of robust reporting guidelines.

Inevitably, enforcing best-practice standards is not proving straightforward, as evidenced by a May 2022 study in the Journal of Clinical Epidemiology (JCE). The study found that researchers who agreed to comply with data-availability statement (DAS) requirements for manuscripts published in open-access journals were, in fact, no more likely to eventually share their data than authors who did not commit to a DAS. Continued innovation and follow-through on all transparency initiatives are essential. For a wealth of practical tips on these issues, plus links to tools, online resources, and up-to-date information on current efforts to create a culture of publication integrity and best reporting practices, check out the Committee on Publication Ethics (COPE) website.
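As a small illustration of what follow-through could look like in practice, here is a hypothetical Python sketch of a check a journal workflow might run to confirm that the link in an author’s DAS still resolves. The function name and URL are illustrative assumptions, not part of any existing system, and a HEAD request is only a simple heuristic (some repositories reject HEAD requests or sit behind access controls).

```python
# A hypothetical sketch of automated follow-through on data-availability
# statements (DAS): verify that the dataset link an author supplied actually
# resolves. The URL below is a placeholder, not a real dataset.

import urllib.request

def dataset_link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the dataset URL (e.g., a DOI) resolves successfully."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            # Redirects (e.g., doi.org handoffs) are followed automatically.
            return 200 <= response.status < 400
    except (OSError, ValueError):
        # Covers network errors, timeouts, HTTP errors, and malformed URLs.
        return False

# Example: flag submissions whose DAS points to a dead link for editor follow-up.
das_url = "https://doi.org/10.xxxx/example"  # hypothetical DOI from a DAS
if not dataset_link_resolves(das_url):
    print(f"DAS link did not resolve; follow up with the authors: {das_url}")
```

Periodically re-running a check like this on published articles, not just at submission, is one way journals could detect data that quietly disappears after acceptance.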

Advocating integrity

Pandora’s box has been opened on the challenges of reproducing and replicating published scholarly research. Far too many of the findings published in peer-reviewed journals still cannot be recreated, and this has proven to be an issue that can affect any journal, regardless of topic or ranking.

It’s clear that publishers and funders need to reassess the extent to which research with positive results is overshadowing equally rigorous and high-quality studies with negative or inconclusive findings, or studies that attempt to replicate or reproduce previously published work. And recent efforts to factor reproducibility and replicability into research development and peer review signal a path forward.

The time seems ripe for scholarly communication stakeholders to come together to break down the barriers to producing more reproducible and replicable research.

As noted, this blog is part of a Research Integrity Toolkit series in partnership with Research Square in honor of Peer Review Week 2022. You can read Part 3 on plagiarism detection best practices for journals here and corresponding tools for authors here.

This post was written by Victoria Kitchener, freelance scholarly publishing writer.