When COVID-19 first hit, MIT Press was quick to respond, making relevant book and journal content freely available to help scholars and the general public better understand the pandemic. But the press's publishing team wanted to do something more. Like so many in academia, they were growing concerned about the rising number of false scientific claims entering the mainstream media and were eager to stop the spread. Recognizing misinformation in preprints, as well as misinterpretation of preprint findings, as two primary causes, they began considering ways to flag questionable preprint information while boosting the signal of promising new research.
“Our Press Director Amy Brand and I were talking one day about what we could do, and that’s when the notion of launching an overlay journal of preprint reviews popped up,” said Nick Lindsay, MIT Press’ Director of Journals and Open Access. Lindsay and Brand brought the idea back to their team and began planning what would become Rapid Reviews: COVID-19 (RR:C19), the first multi-disciplinary OA overlay journal for peer reviews of coronavirus-related preprints. MIT Press launched RR:C19 in August 2020.
As with any new initiative, the press wasn't sure exactly what the final output should look like, but they knew they needed to act fast to respond to a pressing need. So they opted to take an iterative approach to publication development, adopting many hallmarks of Agile project management in the process. In the interview below, Lindsay discusses the launch of RR:C19 and how Agile planning enabled them to get the journal off the ground at record speed.
NL: The idea of launching this type of overlay journal is something we had been kicking around at the press for a while before COVID-19, so in many ways the pandemic set that in motion. The need for the journal became clear to us in January when the media started picking up on a preprint indicating there was a link between HIV and coronavirus and suggesting COVID-19 was created in a lab and weaponized by the Chinese government. All of this was clearly false, but, nonetheless, it seeped out into YouTube and the mainstream media to the point where Tom Cotton was talking about it on the floor of the Senate. We saw a lot of examples of this kind of fake information starting to do serious damage, and that's when we revisited the overlay journal concept as a way to bring together verified viewpoints and peer reviews on COVID-19. From the start, we knew we wanted the journal to be fully OA.
That was all in early March. Then in late March/early April I started scouring lists of prominent figures in public health to find our Editor in Chief. I think I ended up having to ask around 30 people, because everyone was so stretched. In the end I found a fantastic editor, Stefano Bertozzi, who is the former dean of the School of Public Health at UC Berkeley. He was extremely interested because he was seeing many preprints arriving at incorrect conclusions or using poor data-gathering methods, and he recognized the challenge of trying to sift through all of it as a scholar. So he signed on right away. From there, we were able to quickly get funding. The visionaries at The Patrick J. McGovern Foundation agreed almost immediately, which allowed us to pull together a substantial editorial office at Berkeley.
Since we started discussing this journal idea, the question has been — A. will this actually work, and B. can it contribute to the speed at which science is interpreted and communicated, particularly around COVID-19? So I would love to say that we had grander things in mind, but we were very much sort of responding to the moment.
NL: That was the big problem to solve when starting this journal. There are literally thousands and thousands of preprints out there. We needed to figure out the best way to sift through all of that content to find notable new science that may not be getting the level of attention it deserves and flawed science that has the potential to do harm. What we ended up doing first was hiring a battalion of grad students to help. And there was a happy side benefit with this. A lot of grad students had lost their practicum assignments due to COVID-19, and this was a task that they could do that was still keeping them close to what’s going on in public health research while enabling them to learn the ins and outs of academic publishing at the same time.
From there, the editorial team had the fantastic idea to start working with a group at Lawrence Berkeley National Laboratory on an artificial intelligence tool they had developed called COVID Scholar. That gave them some computing horsepower to help sift through those thousands and thousands of preprints to find the most interesting ones.
In what ways have you implemented Agile project management principles in the development of this journal (either intentionally or by chance)?
NL: I wouldn't necessarily say that we came up with a formal iteration process, but we do iterate a lot. The journal is run using PubPub, an experimental publishing platform developed at MIT that is now part of the Knowledge Futures Group. We were really stress testing it with this project because the platform is not designed to publish peer reviews. From the start, we were wondering, how are we going to break this? I guess that was a fairly Agile mindset going in — we were okay with not having a complete solution and focused our initial efforts on just getting a sort of MVP version of the journal started to begin creating value. And we did end up breaking things, but the development team is so talented and has been able to work with us to find solutions.
The main challenges we faced were answering two questions: How can we ensure all of the reviews will be connected on the platform? And how should we display the preprints themselves within the journal? For the latter question, we ended up abandoning that idea and instead just linking over to the preprints on the preprint servers, which was much simpler. The biggest challenge we still face now is that we need to get links from the preprint servers back to the reviews that we're publishing, so people who encounter the preprints first understand that reviews of them exist. Then, they'll hopefully consult our reviews and have greater confidence in how they interpret the preprint's claims.
Since the journal launch, the UC Berkeley team, the Knowledge Futures Group developers who manage PubPub, and the MIT Press people have all been meeting weekly to discuss what needs to be changed and what needs to be fixed and how we’re going to iterate. Everybody has approached this initiative with such gusto. I think we’ve all recognized that what we’re doing is hard and that we’re going to make mistakes and we’re going to have to change things regularly, because that’s sort of just how it goes with something so brand new. So I would say that we have exhibited significant agility in our ability to adapt to the challenges that we’ve faced and still face.
What sorts of quantitative or qualitative indicators are you tracking to help guide Agile journal planning?
NL: Because this effort is so new, it has been somewhat difficult to come up with metrics. We had no idea how the community was going to approach this kind of journal or whether they would even accept it. So, for some things, like gauging reviewer responsiveness, we didn't have apples-to-apples benchmarks. Typically our journals will send out about a half dozen requests to reviewers, knowing they will likely get at least two to accept. With this particular journal, since we're asking for peer reviews of preprints, we're finding we have to send out more like ten review requests per acceptance. So, it's a good thing the editorial office does have a battalion of grad students to work on the journal!
In terms of other metrics, we’re looking at some more traditional things like how many peer reviews we’re publishing and how that number is growing over time. We’re also tracking how much attention the preprint reviews are getting in the mainstream.
NL: We recently published one of our most significant preprint reviews so far. There was a study that came out of a foundation affiliated with Steve Bannon that asserts almost the same claims as the previous paper I mentioned, also suggesting that COVID-19 was created in a lab by the Chinese to weaken the West. The author of this paper has appeared on Fox News and has been making their way into the mainstream. So we published four peer reviews, which I think are really the only reviews out there of the material, from serious scientists, including Robert Gallo, who co-discovered HIV, that all declare the claims to be misleading. That's gotten a lot of attention, and many of our other reviews have been picked up as well. So it's been a rewarding six months so far. I don't think we've ever pulled together a journal this quickly. We're hoping the journal will continue to take off and serve as a proof of concept of this unique preprint publishing model to support rapid validation of science during times of crisis.
NL: One thing the journal is going to do is offer a traditional publishing option to authors of favorably reviewed papers, which we're starting to plan now. It will include composition, copyediting, typesetting, and proofreading. I don't think we'll get a tremendous number of people taking us up on it, because scholars still want to publish in the most prestigious venues; but there is a lot of work being done on COVID-19 in regions of the world that don't necessarily have access to Western journals, and for those authors this option could be invaluable. It's a shame that the approach to coronavirus research isn't fully global. The quality science coming out of other regions needs to be valued and vetted and published in the same way that Western science is. And our journal isn't just about science; we are also interested in the humanities and social sciences. For example, there was a great series of reviews of preprints about the spread of COVID-19 in prisons, and we're hoping to have more on the economic impacts of COVID-19.