Photo by Denys Nevozhai on Unsplash

Since its widespread adoption, beginning in the 1940s, peer review has been considered the gold standard of scholarly journal publishing — but in many ways, you could say that there is nothing standard about it. Of course, the fundamentals of peer review hold true across journals and there are many peer review best practices that have become expected of reputable journals, such as the COPE guidelines and disciplinary norms like the International Committee of Medical Journal Editors’ peer review recommendations. However, journals’ peer review models can vary widely — from different takes on single-blind, double-blind, and triple-blind to the somewhat elusive “open peer review,” a term that can have multiple definitions depending on whom you ask.

With so many possible approaches to peer review come critical choices for editorial teams to make about how to structure their processes and when and where to make adjustments over time. Pippa Smart, President and founder of PSP Consulting, has worked with journals across disciplines to support their development goals, including choosing and honing peer review models, a topic she’ll be covering in her upcoming Medical Editor’s online training course, running the 16th-18th of November. We caught up with Pippa to learn more about how she advises journals on peer review process decisions and about the upcoming course. In the interview below, Pippa gives an overview of the many variations of blind and open peer review that she’s come across and some of the core benefits and challenges of each. She also discusses key considerations for journals working to make peer review more rapid in the midst of the COVID-19 pandemic.

Q&A with Pippa Smart

In your editorial course you cover multiple peer review models — can you briefly unpack those?

PS: It’s interesting how many editors are researchers working in fields that tend to adhere to one peer review model. Often they aren’t aware of the many different types of peer review. To start, you have the two main closed models, which are double-blind and triple-blind. Double-blind is where the reviewers and the authors don’t know each other’s identities. Triple-blind is where the editors don’t know the identities of the authors either until after they’ve made a decision. That type of peer review tends to happen a lot in small communities where editors want to ensure that they’re avoiding any real or perceived conflict of interest. There is also the semi-closed system, single-blind, where the reviewer knows the identity of the author but not vice versa. I would guess that, around the world, the most common systems are single-blind and double-blind.

You’ve then got the more open systems, and here you have all sorts of terminology difficulties. At the moment, there is a group within the STM Association looking at the terminology of peer review to try to come up with a fixed taxonomy. When you talk about open peer review, it can mean that the authors and reviewers know each other’s identities, or it can mean that reviews are published alongside articles so readers know reviewers’ identities as well. Open peer review is being advocated for much more with the open science movement, the idea being that if we are working in an equitable environment, no one should be hidden behind an anonymous barrier. Having said that, open peer review is not a panacea for making peer review perfect, and many authors and reviewers do have concerns about it.

What are some of the main opportunities and challenges that you’ve seen with the above peer review models?

PS: When considering the opportunities and challenges in open peer review, you have examples of environments where it works really well, like at the BMJ. They are strong advocates for open peer review because, in their experience, it produces more useful and collegial reviews. But in other publishing situations, you see concerns with open peer review over issues of seniority and power balances. A lot of junior researchers will say: if I have to disclose my name and I’m asked to review an article for someone who will likely be in a position to decide on one of my job applications, I’m going to find it really difficult to be critical. That might mean that they either turn down reviews or write overly nice reviews. So there are a lot of communities that have concerns that open peer review could prevent reviewers from being honest. There are also, of course, many concerns about implicit biases.

Another interesting example is a conversation I had with a group of editors I had trained, who used single-blind review, about what peer review model would be best for them moving forward. They came to the moral argument that surely we should have either fully blind or fully open peer review, because by allowing the reviewers to know the authors’ names but not vice versa, you are otherwise introducing an imbalance of power. That was an interesting discussion that I hadn’t come across before.

Under the umbrellas of these main peer review models, what sorts of different editorial processes and structures have you come across that stand out?

PS: One publisher that comes to mind is Frontiers. They have a system they call “Collaborative Peer Review,” whereby everybody who reviews is part of a sort of social network, so you know that every article is being reviewed by someone who is part of that network. Their system also works such that when a reviewer posts a review via the platform, the author is put in correspondence with them: not directly, so the author doesn’t learn who the reviewer is, but they are in contact. The idea is that authors and reviewers can have discussions about the points covered in the review report to address any clarifying questions. Post-review, the handling editor then makes the final decision. The Frontiers system is also designed to try to make papers publishable. So unlike some of the big journals with very high rejection rates, Frontiers is philosophically looking for reasons to accept articles. Their system is not lowering quality standards at all. Rather it’s saying, what’s the point of finding one thing wrong with an article and then rejecting it? Surely we can work together to make that article suitable for publishing. Of course, if an article is not suitable it’s not, but we should be working more positively towards making articles publishable where possible.

eLife has another interesting model where the editors correspond with reviewers before making decisions. For example, an editor might say, “I was concerned about point three because reviewer 1 didn’t mention it but you did.” So the editors have more of a collegial discussion amongst the reviewers and, ultimately, what they send back to the authors is a single report. Part of the reason for the single report format is to make sure what goes back to the authors has been thoughtfully considered. It’s also to avoid the admin issue where you have one reviewer who says, “this is a great paper;” another reviewer who says, “this is rubbish because this and this are all wrong;” and a third reviewer who says, “it’s OK, but you have to change this and that.” In that case, either the editor has to fight to get some consensus or, unfortunately, as happens with a lot of journals when the editor doesn’t have time, the review reports go to the authors as is. Then the authors are presented with contradictory opinions that simply aren’t helpful.

Biology Direct also has an interesting system where, as an author, when you submit your article, you have to select members of the editorial board to do an initial desk review. I believe those editors only have 72 hours to say whether they think the paper is worth considering. As long as enough of the editors nominate the article for consideration, it goes on to peer review, and those editors are responsible for either doing the review or finding someone who will. What this model does well is distribute the admin work of finding reviewers across the whole editorial board rather than leaving it with a few handling editors. They also have lots of controls in place to avoid any unethical behavior. It’s an interesting model, but you need a big enough editorial board that’s inclined to do it.

Finally, you also see more publishers doing cascading reviews, either among their own journals or even across publishers. In this case, when a paper is submitted and there is nothing inherently wrong with it, but it’s not a fit for the journal, a message goes back to the author saying, “this paper is not a fit for us; however, we can cascade it to such and such journal along with the reviews if you’re interested.” I believe eLife and BMC both offer this option. The idea is to avoid authors submitting rejected papers to those other journals as new submissions and the editors then having to find new reviewers when, quite often, they end up asking the same people who reviewed the first time.

So there are many interesting examples like those that aren’t necessarily pilots of entirely new peer review models or people changing the primary role of peer review but rather people sort of tinkering around the edges in the administration of peer review.

Many journals have had to dramatically speed up their peer review processes in response to the COVID-19 pandemic. To what extent do you think editorial teams will be able to maintain faster review speeds, and how can they ensure accuracy with speed?

PS: The challenge at the moment is that everyone wants articles on COVID-19 to get through review really quickly. A recent study of articles published on COVID-19 found that the average submission-to-acceptance time was five days, which is amazing. Some are saying that, having had these much faster review times, researchers aren’t going to be willing to go back to the one- or two-month waits of before. I’m not sure the speed will continue. We also don’t yet really know how much it has affected the speed of publishing in areas not related to COVID-19 and whether those papers have potentially been delayed.

With COVID-19 research, you also have these limited agreements for faster review. For example, there’s the C19 Rapid Reviews initiative, where reviewers are committing to rapid review for COVID-19 research. The other thing we know is that editors have called in favors and asked their editorial boards to peer review where previously they would have looked externally. If you’re an editorial board member and you get a call from your editor in chief asking you to turn around some COVID-19 papers really quickly, you’re of course going to say yes, because this is really important. But people are not going to do that every month, so I think these short turnaround times are not likely to last.

It is interesting that quite a few more journals are accepting early reports than in previous years as a way to make research more transparent and accurate. I’m not sure if that’s due to COVID-19, or whether the idea has simply been on the books long enough that you now see more journals willing to jump in and do Registered Reports. I personally think it’s a really good idea, but I also know researchers who are wary of early reports because they don’t want to publish anything until they put the final results out. I think sometimes we underestimate competitiveness in research; it’s not in all areas, but in some there is a lot. Still, I think Registered Reports are a great idea, and open data is also a good idea; I’m certainly an advocate for anything that improves reproducibility. I suspect the acceleration of open science will continue and that journals will have more stringent data sharing policies in the future.

Are there any steps you think journals can take, regardless of their discipline, to improve peer review speed and outcomes?

PS: I believe a little extra time at the beginning of peer review saves you a lot of time in the end. Mainly, it’s worth making sure you have found the right reviewers, not just blasting out invitations to loads of people you think might be suitable. It also makes a huge difference if you can add something personal to your email, for example, “based on your previous work like X, we believe you’re the right reviewer for this.” Then your strike rate will be much higher. At my journal, Learned Publishing, I’ve also had instances where I looked at an article and saw it was worth reviewing; however, I knew the language in the abstract and title wasn’t very good. In those cases, I’ve personalized my reviewer invitations to explain that the article is actually quite good if you look past the abstract and title, because reviewers will usually make their decision based on those two factors. I might spend about 10 more minutes of my time sending out review requests like that, but that’s 10 minutes at the beginning versus one or two months down the line spending another 30 minutes trying to find new reviewers. I know it can be a bit of a pain in the neck spending more time when an article comes in, but it will save you time in the long term and improve peer review outcomes.
