“The journey of a thousand miles begins with one step” — wise words from Lao Tzu. When embarking on any initiative to improve journal peer review workflows, you might tack onto that “…and repeatable performance metrics!” Whether you’re trying to reach a major journal milestone, such as cutting time to manuscript decision by half, or you’re simply working to keep refining your editorial processes, tracking repeatable metrics will help you determine the best course of action.
Jennifer Mahar, Executive Peer Review Manager at Origin Editorial, has helped journals reach workflow goals big and small. She always starts any project by digging into the data. “I’ve found that’s the best way to get to your desired result, whatever that may be,” she said. In the interview below, Mahar shares how she uses data insights to iteratively improve areas of peer review within editorial control, like the time it takes to complete manuscript quality checks, as well as areas outside of editorial control, like reviewer response rates.
Further Reading: For more peer review optimization tips from Jennifer Mahar and others, check out Scholastica’s free eBook Academic Journal Management Best Practices: Tales from the Trenches.
Q&A with Jennifer Mahar
What key performance metrics do you suggest all journals focus on?
JM: First, every journal should know (and want to know!) the time it’s taking them to make a first decision, as well as their total manuscript turnaround time. Those are KPIs that you really should share with authors. And if you have a great time to first decision — for example, if it takes you 25 days — that’s something you should be happy to post! Some editors don’t want to share manuscript handling times with their authors, but I don’t understand the point of trying to keep them a secret; that won’t do you any good. I think you should be as transparent as you can.
Another main metric I would say every journal should watch is reviewer time. That’s a pretty strong KPI. For example, if you’re giving reviewers two weeks to submit reviews and you find that authors are complaining the peer review process is taking too long, you’ll want to dig into that data. You may find that reviewers are taking longer than expected — and you may need to adjust your chasers — or, if not, you may have another bottleneck somewhere that you have to figure out (e.g., is it taking too long to secure the required number of reviewers?). At the same time, if your reviewers are getting comments back to you in three weeks but your due date is four weeks out, then you should move that due date up to match.
Overall, with any area you want to improve, you have to start with metrics. You won’t be able to fix a problem or make an improvement until you know where you stand. If you do a thorough audit of what’s going on, you’ll know exactly where your time is being taken. Often, unfortunately, it’s being taken up by authors during revisions. In a case like that, you may have to start requiring authors to get their revised papers back to you within 30 days. That can be a little tough to do. But, if you’re a highly competitive journal, you may have to take that kind of stance.
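To make that concrete, here is a minimal sketch of the kind of KPI audit Mahar describes, assuming you can export per-manuscript milestone dates from your peer review system as a CSV. The file name and column names (`submitted`, `first_decision`, `final_decision`) are hypothetical placeholders for whatever your system actually exports. It reports medians, for reasons Mahar explains below.

```python
import csv
from datetime import date
from statistics import median

def days_between(start: str, end: str) -> int:
    """Days elapsed between two ISO-format (YYYY-MM-DD) date strings."""
    return (date.fromisoformat(end) - date.fromisoformat(start)).days

# Hypothetical export: one row per manuscript with its milestone dates.
with open("manuscripts.csv", newline="") as f:
    rows = list(csv.DictReader(f))

time_to_first_decision = [days_between(r["submitted"], r["first_decision"])
                          for r in rows if r["first_decision"]]
total_turnaround = [days_between(r["submitted"], r["final_decision"])
                    for r in rows if r["final_decision"]]

print(f"Median time to first decision: {median(time_to_first_decision)} days")
print(f"Median total turnaround: {median(total_turnaround)} days")
```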
How should journals go about auditing their operations to decide what improvement areas and related metrics to focus on?
JM: First, it’s important to keep in mind that there are things that you can control in the editorial office, and then there are things that you can help control. For example, an area of the process that you have 100% control over is how quickly you complete initial quality checks when new papers come in the door. But once you pass off a manuscript to a journal editor, you don’t really have control over how long the next step in the process will take. The main thing at these handoff points is to focus on setting clear parameters.
Before you decide on any sort of deadlines, you have to audit your current process to see how you’re doing. Dig into why each step takes as long as it currently does to see what improvements, if any, are possible. You should go through and audit your time for quality checks, editor assignment, peer review, decision, revisions, and so on to determine where you are before you decide where you want to go. Within your peer review system, you should have the ability to figure out, turn by turn, how long your papers are taking to move through each of these steps.
I would also say to always use your medians, and not your averages. Medians keep a few extremely slow papers, the long data tails, from skewing your numbers. And, of course, always report the same data. You don’t want to ever veer from your core metrics.
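To see why medians are the safer choice, consider a toy example: two stalled manuscripts in an otherwise healthy queue drag the average far from what a typical author experiences, while the median barely moves. The numbers below are invented purely for illustration.

```python
from statistics import mean, median

# Days to first decision for ten manuscripts; two stalled papers form a long tail.
days_to_first_decision = [21, 24, 25, 26, 27, 28, 30, 31, 180, 240]

print(f"Mean:   {mean(days_to_first_decision):.1f} days")    # 63.2 -- skewed by the tail
print(f"Median: {median(days_to_first_decision):.1f} days")  # 27.5 -- the typical paper
```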
You mentioned that quality checks are an area of peer review that editorial offices have complete control over. What are your recommendations to improve quality check workflows?
JM: I think a light touch at initial QC is so important, and knowing how to do that goes back to the numbers. You have to dig into your data and look at how many papers you’re rejecting out of hand. If 30–40% of your papers are being desk rejected and you’re running full quality checks before that triage decision, then your staff is wasting a lot of time on formatting issues. What we’ve done with a group of my journals is start accepting a single PDF the first time around. We tell authors that we don’t care about separate figure file types at that stage. That has helped us and our authors save time. Of course, this solution will not work for all journals, particularly journals that need really high-quality figures, like immunoblots, for reviewers to evaluate. In that case, a PDF might not cut it. So it’s important to think through where you can trim some fat without issue.
I think, generally, you have to minimize the number of things you ask authors to do when they first submit. When you have too many specifications, authors start to get confused. Then you end up having to send papers back and potentially lose two or three days asking them to “skip down to section four” of your lengthy instructions page, for example. Most journals should minimize what they ask for in the initial submission, ideally keeping the requirements to a single page, and then add more formatting instructions at the revision stage if they have to. At that point, the author at least knows that their paper is probably going to get published.
Your author instructions should be as simple and straightforward as possible. Use bullets or tables for key information; it really helps the reader. Try to trim the fat and cut out unnecessary words wherever you can. And date the document — if you don’t date your instructions for authors, you’ll never know when you last updated them!
What steps can peer review managers and managing editors take to help keep editorial board members to agreed-upon deadlines?
JM: One of the slides I shared at an annual CSE meeting that everybody loved is the “editor report card.” We are very transparent with our editors for one of the journals I work with, and we share everybody’s performance stats on a slide. Then we don’t even have to talk about it, really: the editors see how their peers are doing, and if one editor is taking 45 days to make a decision when everyone else is taking 30, they will usually self-correct. If someone is historically taking a long time, you may have to bump it up the chain and ask the editor-in-chief (EIC) to have a talk with that editor to see if they can speed things along.
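For journals that want to assemble a report card like this, the underlying stat is straightforward: group decisions by handling editor and take the median days from assignment to decision. Here is one possible sketch, again assuming a hypothetical CSV export with `editor`, `assigned_date`, and `decision_date` columns.

```python
import csv
from collections import defaultdict
from datetime import date
from statistics import median

# Hypothetical export: one row per editorial decision.
days_by_editor = defaultdict(list)
with open("decisions.csv", newline="") as f:
    for row in csv.DictReader(f):
        elapsed = (date.fromisoformat(row["decision_date"])
                   - date.fromisoformat(row["assigned_date"])).days
        days_by_editor[row["editor"]].append(elapsed)

# The "report card": one line per editor, slowest first.
for editor, days in sorted(days_by_editor.items(), key=lambda kv: -median(kv[1])):
    print(f"{editor}: median {median(days)} days to decision ({len(days)} papers)")
```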
As I mentioned before, I think this is an area where it really all comes down to setting clear expectations. One of the biggest things that I’ve found over the years is that often editors just don’t know what’s expected of them. So during onboarding, you have to fully train each editor and explain to them the journal’s expectations. And then, of course, just know that life gets in the way sometimes, and you have to understand that. I think the coronavirus pandemic is the biggest example of that.
Another area that can be difficult for editors to control is reviewer responsiveness. What steps can journals take to try to improve their reviewer response rates?
JM: You don’t have to over-invite reviewers, but you should definitely overpopulate your candidate list. If you need two reviewers per paper, I would again look at the data and first figure out your current agreement rate. If you have a 30% agreement rate, then you know you should at least double the number of reviewer suggestions from the outset, because you’re probably not going to get the first two reviewers you ask. For most of my journals, I ask the editors to at least double the number of reviewers they suggest, and then maybe consider adding another one or two, because otherwise I’ll be coming back to ask for more options, and that will cause delays, which of course is no fun for the author or editor. But again, we are transparent about everything, so if we’re having trouble finding reviewers, we will reach out to the author to let them know what’s going on and that we’re working as quickly as we can. I think in everything, knowing your metrics and being transparent throughout your process is key.
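The arithmetic behind “at least double, then add one or two” is worth spelling out. If each suggested reviewer has roughly a 30% chance of agreeing, then on average it takes about reviewers-needed ÷ agreement-rate invitations to land the target number. A rough planning heuristic (an expected-value estimate, not a guarantee):

```python
import math

def reviewer_pool_size(reviewers_needed: int, agreement_rate: float) -> int:
    """Rough size for the reviewer candidate pool so that the *expected*
    number of acceptances meets the target. A planning heuristic only."""
    return math.ceil(reviewers_needed / agreement_rate)

print(reviewer_pool_size(2, 0.30))  # 7 -- well beyond double the target of 2
```

For two reviewers at a 30% agreement rate, that works out to about seven names, which is consistent with Mahar's advice to double the list and then add a couple more.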
As journals are using metrics to establish and update their workflows, should they document them in some way?
JM: You should always have a handbook for your editorial office with your journal policies and procedures. That handbook file should live somewhere it can stay forever, and it should be updated at least on an annual basis. I have a journal handbook and then different handbooks for my associate editors and EIC — I would suggest making a big doc and creating subsections in it with processes and policies by role.
The handbook is invaluable when onboarding new editors. It can also become a tome, so you need to try to keep it a “handbook light” with highlights. For example, “how to make a quick decision” should almost be a splash page kind of thing. Otherwise, you’re going to lose people’s attention. Having a handbook your editors can reference that reflects any workflow updates you’ve made does make a big difference.