Image: Screenshot of infographic on reproducibility by The Winnower

“How do we ensure that research is reproducible?” In April 2016 The Winnower and the Laura and John Arnold Foundation launched a contest with this question as its title, putting out an open call for short essays on how to address growing concern among the scientific community over published research that cannot be reproduced.

There’s no question that scholars should be able to recreate each other’s work, both to build upon it in future research and to preserve the integrity of scientific findings, but that doesn’t mean things always turn out that way. Problems inherent in the current research landscape, including pressure on researchers to publish more quickly and to find flashier results, along with a lack of data transparency, at times lead to both accidental errors and unethical practices in the reporting of research findings. When these mistakes go unnoticed and flawed research is published, the result can be an impasse in advancing scholarly work or, worse yet, subsequent research built on a faulty foundation.

In their recent contest, The Winnower and the Laura and John Arnold Foundation turned to the academic community for suggestions on how to help scholars spot reproducibility problems sooner and monitor each other’s behavior to ensure ethical research reporting. They asked respondents to consider many angles of the reproducibility problem, including how to incentivize robust work, whether the peer review process should be changed, and whether data sharing needs to be further incentivized or mandated. The two contest winners were:

We recently caught up with Josh Nicholson, CEO and Co-Founder of The Winnower, to learn more about the goals and outcomes of the contest.

Q&A with Josh Nicholson

What was the goal of the “how do we ensure that research is reproducible” contest and how were submissions evaluated?

JN: Independent verification is the defining characteristic of whether a result is an actual result or just a story. Currently, the incentive structure in academic research favors splashy yet superficial results with limited or no access to the raw data. These misplaced incentives have resulted in a crisis, with many published results failing to be reproduced. It’s obvious what can be done, and yet we are still in this situation. With this competition we hoped to find non-obvious answers to the reproducibility problem. How do we change the incentive structure? What other practices could be employed to improve reproducibility?

A panel of researchers, librarians, students, and members of non-profit organizations evaluated the submissions. We tried to make the judging panel as diverse as possible in order to evaluate proposals from different perspectives.

Can you share some highlights from the essays you received? What aspects of the reproducibility crisis did scholars comment on and what were some new ideas generated?

JN: A few ideas stood out to me, particularly because I had never considered them or heard them advocated before. The first was using training, at either the undergraduate or graduate level, to more rigorously test research. Other ideas included using cryptocurrency as an incentive toward better vigilance and practice in research. Overall there were many great ideas generated that can be read here.

What do you think are the primary factors that have contributed to the reproducibility crisis?

JN: Publish or perish. Get a patent or perish. Get a grant or get out. In short, there is a “small” pool of money that an increasing number of researchers are competing over. How they are judged is not aligned with the best research practices, and as a result the robustness of our work is suffering. There is no silver bullet beyond mandating better practices. I think some of the essays in the contest offer potential solutions, but even then, who will implement them, and how, is not easy to answer.

I think we are seeing more irreproducible research because we are looking for it. But I also think the incentive structure and hyper-competition in academia have worsened research over the past few decades.

Do you think the need to make research more accessible is part of the reproducibility crisis?

JN: I think we need more transparency from start to finish in the research cycle. Open access is one part of that, but just as important are data sharing, open peer review, and the publication of so-called “negative results.”

How do you think efforts such as this contest are helping the academic community recognize and overcome this crisis?

JN: It’s great that many people are thinking about how to make research stronger. With this contest, The Winnower sought to open up the podium to researchers and students who may not typically have a voice in the conversation. My hope is that the contest will act as a nucleus out of which new ideas may arise and ultimately come to fruition.

This post was written by Danielle Padula, Community Development