Image Credit: Pexels

Every academic journal should have a system for tracking and reporting on publication performance. From average reviewer response rate to submission volume by geographic region, there are a host of things to consider. It’s up to journal teams to identify the key performance areas and corresponding metrics that matter most to their publication and to use those metrics to make data-informed decisions.

How are most journals doing in the pursuit of gathering and analyzing meaningful performance data? According to Jason Roberts, Senior Partner at Origin Editorial, the majority could use some work.

“I can tell you that the majority of the reports I see produced are usually really lacking,” he said.

From his work at Origin, as well as serving as managing editor for various publications, Roberts has been exposed to many editorial offices. In his time reviewing and helping teams improve their journal metrics tracking, Roberts said the key issue he’s found is that teams fail to maintain reproducible data and to track trends over time.

“A quick mean or median here and there without any measure of variance is hopeless. Many performance reports do not even compare current performance with past performance,” Roberts explained.
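To make that concrete, here is a minimal sketch of the kind of summary Roberts is describing: a central value plus a measure of spread, compared against the prior year. The table and column names below are hypothetical, and the code is only meant to illustrate the shape of such a report, not any specific journal's data.

```python
# Hypothetical first-decision times for two years; the numbers are invented.
import pandas as pd

decisions = pd.DataFrame({
    "year": [2015] * 5 + [2016] * 5,
    "days_to_first_decision": [34, 41, 29, 55, 38, 31, 36, 27, 48, 33],
})

# Report a median plus a spread measure (IQR), not just a point estimate,
# and compare the current year against the previous one.
summary = decisions.groupby("year")["days_to_first_decision"].agg(
    median="median",
    iqr=lambda s: s.quantile(0.75) - s.quantile(0.25),
    n="count",
)
summary["median_change_vs_prior_year"] = summary["median"].diff()
print(summary)
```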

In the interview below, Roberts outlines steps journals should take to ensure they have a detailed and reproducible metrics tracking plan.

Q&A with Jason Roberts

What factors do you focus on when developing a plan for tracking journal performance metrics?

JR: When it comes to metrics planning I try to think about:

  • What are the best measurements of performance and how can I best report variance?
  • What contexts and comparisons should I use as a baseline for the data I generate?
  • What parameters should I use to interpret the data in a way that is both statistically accurate and also of use to editorial office staff? (Two people may look at the same numbers and draw very different conclusions.)
  • With regard to the presentation of the data: What is the best method to draw out the results? Is there a type of chart that could be generated to highlight a problem where a number in a table may not expose it?
  • Who is going to read my reports and how will they approach interpreting the data?

I think about reports in two ways: summary and operational. Summary reports I use to report back on major performance indicators. Operational reports are ones I create specifically to help me detect manuscripts that already have problems or, based on previous experience and data analysis, manuscripts that are likely to evolve into a problem.
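As a rough illustration of the operational side, the sketch below flags manuscripts that look likely to become problems. The 21-day threshold, the table, and the column names are assumptions made for the example, not rules Roberts prescribes.

```python
# Flag manuscripts that have been out for review too long without enough
# reviews in hand. Thresholds and columns are illustrative assumptions.
from datetime import date
import pandas as pd

manuscripts = pd.DataFrame({
    "manuscript_id": ["M-101", "M-102", "M-103"],
    "sent_to_review": pd.to_datetime(["2016-05-01", "2016-05-20", "2016-06-02"]),
    "reviews_received": [0, 1, 2],
    "reviews_required": [2, 2, 2],
})

today = pd.Timestamp(date(2016, 6, 10))
manuscripts["days_in_review"] = (today - manuscripts["sent_to_review"]).dt.days
manuscripts["needs_attention"] = (
    (manuscripts["days_in_review"] > 21)
    & (manuscripts["reviews_received"] < manuscripts["reviews_required"])
)
print(manuscripts[manuscripts["needs_attention"]])
```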

If I am studying an editorial office management phenomenon, I think about PICO: Population, Intervention, Control and Outcome. I freely admit my epidemiologist-trained spouse taught me that. I’m not smart enough to figure that out by myself.

What are the top metrics you recommend all journals track?

JR: I will focus on summary metrics:

  • Submission levels
  • Submission level changes by geography
  • Article type submissions (and outcomes by article type)
  • Reviewer performance (timeliness and willingness to review)
  • Editor assignment speed/editor decision ratios and variance across the editorial board
  • Time to first decision
  • Time authors take to revise a manuscript
  • The success rate of new authors vs. returning authors (that one is tricky and requires a fair bit of work to identify who is new; a rough approach is sketched below)
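For the last point, one hedged way to approximate "new vs. returning" is to treat an author as new in the year of their first recorded submission. The hard part Roberts alludes to, disambiguating author identities across name variants and shared names, is skipped in this toy example.

```python
# Classify authors as new or returning by year of first recorded submission.
# The author_id column assumes identities have already been disambiguated.
import pandas as pd

submissions = pd.DataFrame({
    "author_id": ["a1", "a2", "a1", "a3", "a2"],
    "year": [2014, 2015, 2016, 2016, 2016],
    "accepted": [True, False, True, False, True],
})

first_year = submissions.groupby("author_id")["year"].transform("min")
submissions["author_status"] = (submissions["year"] == first_year).map(
    {True: "new", False: "returning"}
)

# Acceptance rate by year and author status.
acceptance_by_status = (
    submissions.groupby(["year", "author_status"])["accepted"]
    .mean()
    .rename("acceptance_rate")
)
print(acceptance_by_status)
```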

Where possible, I try to make sure I can use five years’ worth of data. Normally, I prefer to compare the same quarter across three to five consecutive years (so Q1 of 2012, 2013, 2014, 2015, and 2016) rather than Q1 vs. Q2. There is a seasonality to editorial offices.
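A small sketch of that same-quarter comparison, assuming a hypothetical submissions table with one row per manuscript:

```python
# Compare Q1 against Q1 of previous years rather than Q1 against Q2,
# which would conflate seasonality with real change. Dates are invented.
import pandas as pd

submissions = pd.DataFrame({
    "submitted": pd.to_datetime(
        ["2015-01-15", "2015-02-03", "2015-07-21", "2016-01-09", "2016-02-27", "2016-03-14"]
    ),
})
submissions["year"] = submissions["submitted"].dt.year
submissions["quarter"] = submissions["submitted"].dt.quarter

q1_volume = (
    submissions[submissions["quarter"] == 1]
    .groupby("year")
    .size()
    .rename("q1_submissions")
)
print(q1_volume)
```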

What, if any, advice do you have for maintaining consistent metrics reporting?

JR: Quite simply: set your parameters for each report and write them down. Also, if you don’t actually save the report, write down which fields you reported on in your submission system. I am not going to point fingers, but in all the major systems I know, a simple switch of a field can change the outcome while, on the surface, the field seems to be collecting data on just one point. Finally, write down your study population. What did you include or exclude? Keep all of this for posterity.
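One lightweight way to "write the parameters down" is to save a small machine-readable spec alongside each report so it can be reproduced later. The field names and filters below are placeholders for the example, not fields from any particular submission system.

```python
# Record which fields, filters, and population definitions produced a report.
# All names here are hypothetical placeholders.
import json

report_spec = {
    "report": "time_to_first_decision",
    "fields_used": ["submission_date", "first_decision_date", "article_type"],
    "population": {
        "included_article_types": ["Original Research", "Review"],
        "excluded_article_types": ["Editorial", "Letter"],
        "date_range": {"from": "2016-01-01", "to": "2016-03-31"},
    },
    "statistic": {"center": "median", "spread": "interquartile_range"},
}

with open("q1_2016_time_to_first_decision_spec.json", "w") as f:
    json.dump(report_spec, f, indent=2)
```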

Ultimately, as most journals work in isolation, what you want is consistency in the numbers between your comparison data points. People get obsessed with determining whether something is, say, 432 or 433 when in fact they need to pay more attention to making sure that whichever number is correct is calculated exactly the way it was the year before. Consistency is of greater importance. If you determine you have to make a change to a report, you must go back and recalculate the previously generated data for the past few years.

I see this all the time. A journal changes staff, and the new staff have no idea that in reporting, say, turnaround time to decision, the previous office excluded certain article types. They can’t understand why they appear to be slower, when in fact the old office included everything, even editorials and letters that may have been accepted with minimal peer review, while the new office is only reporting turnaround time for full-length articles, which take longer to review.

Are there any metrics you think journals tend to overlook?

JR: The big one for me right now is paying attention to reviewer conversion rates. By that I mean the percentage of invited reviewers who agree to provide a review. I am finding in biomedicine that many journals are now down to 50% of invitations to review being accepted, and that number has been trending down by a percentage point or two annually for about a decade now. In short, it’s getting harder to convince reviewers to provide a review. We all know that, but not many folks seem to bother measuring it.
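As a quick illustration, the conversion rate here is simply accepted invitations divided by total invitations, tracked per year; the invitation log below is invented.

```python
# Reviewer conversion rate per year from a hypothetical invitation log.
import pandas as pd

invitations = pd.DataFrame({
    "year": [2014, 2014, 2014, 2015, 2015, 2015, 2016, 2016, 2016, 2016],
    "agreed": [True, True, False, True, False, False, True, False, False, False],
})

conversion = invitations.groupby("year")["agreed"].mean().rename("conversion_rate")
change = conversion.diff().rename("change_vs_prior_year")
print(pd.concat([conversion, change], axis=1))
```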

How do you recommend journal teams use metrics reporting to improve their workflows? Do you have any general advice for translating metrics into change?

JR: Yes…never make any major workflow change without doing some research first. In an ideal world we could conduct trials, but that seems next to impossible with the rigid workflows of a submission system. Most of the time, we make a change and then do a pre-post retrospective analysis. The problem is that although there is a good chance your intervention made reviewers return their comments faster, for example, we cannot discount the possibility that they would have been faster post-intervention anyway. Only a trial would give us an answer closer to the truth. However, there are ways to be better informed before you set policy.
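A minimal sketch of such a pre/post retrospective check, comparing reviewer turnaround before and after a hypothetical workflow change. As Roberts notes, a difference in the distributions still cannot rule out that reviewers simply got faster for unrelated reasons.

```python
# Compare reviewer turnaround (days to return a review) before and after a
# hypothetical intervention. The turnaround figures are invented.
from scipy.stats import mannwhitneyu

days_to_return_pre = [21, 18, 30, 25, 27, 19, 33, 24]
days_to_return_post = [17, 15, 22, 19, 16, 21, 18, 20]

stat, p_value = mannwhitneyu(days_to_return_pre, days_to_return_post, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```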

For example, one journal I consulted for was going to make some crazy changes to their reviewer reminder schedule (essentially, they didn’t want to hassle reviewers until after the deadline). I designed a plot that showed them the frequency of when reviewers returned their comments, with “days after accepting the invitation to review” on the x-axis. Guess what? There was a measurable spike in the return of reviews within 24 hours of a reminder email being received by a reviewer, with the biggest spike of all on the due date. So, to some extent, they had a point about pre-deadline reminders: it seems many reviewers simply planned to return their comments on the due date, regardless of the number of reminders they received. In short, don’t make a policy change until you have carefully studied current data. I always say anecdote is the enemy of effective office management. It is staggering how often I see editors try to overrule the professional editorial office staff based on anecdote or a complaint from a solitary, but influential, individual in a field, and in doing so fly in the face of evidence. Meanwhile, they spend their whole working lives in evidence-based contexts, in their day jobs and even in reviewing papers. But when it comes to peer review management, apparently the same rules don’t apply. Well, that’s wrong. They do!
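A sketch of the kind of plot described in that example, assuming an invented reminder schedule and made-up review-return data:

```python
# Histogram of when reviews come back, in days after the reviewer accepted
# the invitation, with reminder dates and the due date marked.
# The data, reminder schedule, and deadline are all invented for illustration.
import matplotlib.pyplot as plt

days_to_return = [3, 5, 7, 7, 8, 10, 13, 14, 14, 14, 15, 16, 20, 21, 21, 21, 21, 21, 22]
reminder_days = [7, 14]   # assumed reminder schedule
due_day = 21              # assumed review deadline

plt.hist(days_to_return, bins=range(0, 25), edgecolor="black")
for day in reminder_days:
    plt.axvline(day, linestyle="--", label=f"reminder (day {day})")
plt.axvline(due_day, color="red", label="due date")
plt.xlabel("Days after accepting the invitation to review")
plt.ylabel("Reviews returned")
plt.legend()
plt.show()
```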

This post was written by Danielle Padula, Community Development