Next in Scholastica’s “How We Open Knowledge” series, we welcome to the blog Dr. Simine Vazire, Professor and Co-director of the MetaMelb Lab Ethics and Wellbeing Hub at the University of Melbourne School of Psychological Sciences. Dr. Vazire is Editor-in-Chief of Collabra: Psychology, a mission-driven OA journal from the Society for the Improvement of Psychological Science published by the University of California Press. Collabra aims to promote not only research accessibility but also the value created by the psychology community during peer review. The journal uses Scholastica’s peer review software and article production service.

Thanks to Dr. Vazire for taking the time to be a part of this series! We invite you to join the conversation around approaches to more equitable and sustainable OA publishing by sharing your thoughts and examples of other fully OA journal models in the comments section below and on Twitter using the hashtag #HowWeOpenKnowledge.

Q&A with Dr. Simine Vazire

Can you share a brief overview of Collabra’s OA publishing model and how the founding editors and now your team have approached journal development? How have you factored structural equity into planning, and how are you continuing to do so?

SV: Collabra: Psychology is an OA journal published by UC Press in collaboration with the Society for the Improvement of Psychological Science. It’s a mission-driven journal, where the priorities are to promote scientific integrity and transparency in the work we publish and in our own operations. Our business model is APC-based. But we try to keep the APC low, and we provide waivers. I hear from UC Press that, so far, we’ve been able to grant waivers whenever authors needed them. Of course, APCs are not the most equitable way to fund a journal, even with relatively low APCs and plenty of waivers. We’d love to move away from that business model and are exploring options.

Besides the business model, we try to combat structural inequity in a few ways. We have a self-nomination system for associate editors, and we don’t have requirements regarding the career stage of the potential AE. We try to recruit AEs from a broad range of backgrounds, including career stage, subdiscipline, research methods, geographical region, gender identity, race and ethnicity, and other demographics. We haven’t been very successful yet on some dimensions — that’s something we’re working on.

In addition, we have a policy that the Collabra: Psychology peer review history is published alongside every accepted paper. And, for rejected papers, we give authors full control over their peer review history (they are free to share it, post it, etc., if they want). That’s another way to hold ourselves accountable: readers can see the peer review process we used for every published paper, and authors can share their experiences if they believe we did something wrong or unfair. It’s always frustrated me that authors often feel (for good reason) that they can’t share decision letters or reviews publicly. It makes it very hard to hold journals and editors accountable for individual errors or bad decisions, and also for patterns of bias or errors that can only be detected across a range of decisions. Ultimately, I’d like for all peer review history to be public (without necessarily identifying reviewers, unless they choose to share their identity), but we haven’t implemented that policy so far.

Finally, our policy to emphasize rigor rather than novelty or impact is another way to try to level the playing field. Very often, judgments of novelty or expected impact reflect idiosyncratic preferences or even biases (e.g., that research with human participants from countries not typically represented in the literature is somehow less valuable or appropriate for a broad journal). In addition, these judgments create incentives for authors to submit papers with positive, exciting findings, which leads to an ecosystem in science where the evidence base is distorted (this can happen even if no individual paper distorts its findings, simply through the suppression of negative results). By judging papers based on rigor, we can keep our standards high while contributing to a more accurate and solid knowledge base. We want to reward authors for designing good studies, analyzing them appropriately, and interpreting the results reasonably. The rest is mostly out of authors’ hands (or can only be controlled by researchers if they engage in questionable research practices), and shouldn’t be a major reason for accepting or rejecting papers.

Collabra has taken steps to promote transparency in publishing, including publishing referee reports with articles. Can you share a bit about the initiatives you’ve focused on and how you think they contribute to your OA model?

SV: Another initiative we’ve participated in is the uptake of Registered Reports, where authors submit their manuscripts before they’ve collected (usually) or analyzed (always) their data. This Stage 1 report is evaluated only on the basis of the quality of the study design and analysis plan. If it’s accepted, the journal commits to publishing the paper regardless of how the results come out (as long as authors follow the plan and the data meet some predetermined quality checks). There’s already some early evidence suggesting the Registered Report format produces a much less biased evidence base.

We’re also taking part in the Peer Community In Registered Reports (PCI-RR), where reviews of Stage 1 Registered Reports happen on a journal-agnostic platform. Journals can then offer acceptance on the basis of this communal peer review. We just offered our first acceptance to a PCI-RR-reviewed article, and I’m hoping we’ll see more uptake of this innovative mechanism.

These new developments are opportunities to potentially drastically improve how we do peer review. By separating evaluation from some of the more perverse influences (e.g., journal prestige, and reviewers’ or editors’ idiosyncratic demands), we can create an evaluation system that prioritizes quality and integrity. I strongly suspect that many authors would like to write papers that are more accurate, calibrated, and cautious, but that sometimes the journals’ and editors’ practices get in the way (i.e., authors suspect, probably correctly, that they could be punished if they are as cautious as they’d like to be). Things like journal-agnostic peer review, Registered Reports, and fully transparent peer review (where the entire peer review history is published) are promising ways to help curb some of those perverse incentives. Of course, we’ll need more metaresearch to see how effective they are, but there are reasons to be optimistic.

What advice do you have for scholarly organizations that want to develop and promote more equitable OA journal models?

SV: I would say one important thing is this: don’t underestimate authors and reviewers. In my experience, many journals and editors are reluctant to try new models of peer review because they don’t want to risk driving away authors and reviewers. But assuming that authors and reviewers want things to stay the same is itself a bold assumption, and it’s unfair to them to make that assumption without finding out. Continuing with the status quo is also a risk, and the status quo has a lot of problems. So, in a way, the burden of evidence should perhaps be on those who want to keep things the same to justify continuing to impose old-fashioned methods (e.g., judging papers based on their assumed future impact), especially when there are pretty clear demonstrations of how this is bad for science (e.g., see Smaldino and McElreath’s paper on ‘The natural selection of bad science’).

Collabra: Psychology has done pretty well, despite doing a lot of things differently from other journals. That doesn’t mean it’ll always work, but I think editors and journals owe it to their communities to explore how they might improve their own practices.

What do you think are the most significant advances towards structural equity in OA publishing to date, and where do you see the most work to be done?

SV: I think the most pressing issue is moving away from the APC model. It’s a major barrier for a lot of researchers and a huge problem for structural equity. I’m excited to see new business models emerging (e.g., I’m on the board of PLOS and have been very excited to see their new business models gaining traction — of course, I have a conflict of interest here).

Another norm that needs to change is that there is currently very little recourse for holding editors accountable, especially in cases of corruption. There seems to be an assumption that editors always have noble intentions and never abuse their power, which is, of course, very naive. We need to make the process of peer review more transparent so that it’s easier to hold editors accountable. And we need better systems in place for investigating and removing editors who engage in misconduct. (For one example, see this story we covered in our podcast, also covered here.

Check out the next post in the “How We Open Knowledge” series — an interview with Anne Oberlink and Dave Melanson of the University of Kentucky Center for Applied Energy Research about the launch of the Center’s Diamond OA journal Coal Combustion and Gasification Products!
