
Have you or your fellow editors ever received peer review feedback (on time!) only to find that it’s incomplete because the reviewer failed to address one of your journal’s primary manuscript assessment criteria? Or have you ever received a review recommending manuscript revisions whose comments were so vague or sparse that they gave authors little to go on?

Handling incomplete or unclear reviewer feedback can be challenging, especially if it becomes a recurring problem at your journal. It can be difficult to determine what to do in these situations—whether to ask the reviewer to send additional comments, which could cause delays in peer review, or to work with the comments you’ve been given and have your editorial team make further manuscript assessments and recommendations as needed.

Of course, you can’t control what reviewers write in their comments, but setting clear expectations and creating a structure for review feedback can make a big difference in the quality of responses you receive. In the past, we’ve talked about the importance of having clear reviewer guidelines on your journal website, as well as the benefits of creating a reviewer checklist. Reviewer guidelines and checklists help reviewers know how they should approach reading and assessing a manuscript for your journal. But, on their own, they do not guarantee structured feedback, as reviewers may skim or skip steps. To ensure a degree of standardization in the feedback you receive, an automated peer review feedback form is your best option.

Reviewer feedback forms are essentially templates of questions for reviewers to answer. Automated feedback forms are best because they require reviewers to work within the form itself, making it less likely for reviewers to skip questions.

Below we outline some best practices for creating a peer review feedback form and what yours should cover.

Remember your feedback form will be as structured as you make it

One of the primary benefits of having a designated peer review feedback form is that it can take the guesswork out of where reviewers should focus their comments. Asking reviewers to simply comment on a manuscript’s quality without giving them specific areas to address is like dropping them off in the woods and telling them to find their way to a destination without directions. Some will find the destination via more direct routes than others, and some may not get there at all. Think of your feedback form questions as a series of trail markers. While reviewers may go slightly off trail with their comments, they will all ultimately take the same general path and address the same necessary manuscript criteria to reach your “review destination” if you give them markers to follow.

You can only expect reviewers to answer the questions that you ask them. If your only required question is “leave comments,” you won’t really get the benefits of having a feedback form. That said, if you allow your form to become a laundry list of questions, you run the risk of reviewers becoming fatigued and answering questions less articulately. So it’s important to structure your form to include the key areas you need without becoming too detailed.

To determine the right mix of general and specific questions to include in your feedback form, start by thinking about the primary manuscript assessment criteria your editors will need all reviewers to address in order to make a manuscript decision. List out these must-haves and use them as a starting point for building your feedback form. We cover some common general assessment questions below.

Format questions based on the level of feedback needed

Once you’ve outlined the primary assessment areas that you need reviewers to address, you can determine the best way to present your questions. Scholastica’s peer review software gives editors the ability to ask three types of questions:

  1. Required overall assessment questions - all reviewers must give an overall manuscript rating of 1–5, with 5 being the highest, as well as a publication recommendation (Accept, Reject, or Revise & Resubmit)
  2. Open response questions - reviewers are given a box to write detailed comments
  3. Rating scale questions - reviewers are shown a question with 5 options to choose from: Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree.
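To make the structure of these three question types concrete, here is a minimal sketch of how such a form might be modeled in code. This is purely illustrative; the class and field names are hypothetical and do not reflect Scholastica’s actual software or schema.

```python
from dataclasses import dataclass

# Hypothetical model of the three question types described above.
# Names are illustrative only, not Scholastica's actual implementation.

RECOMMENDATIONS = {"Accept", "Reject", "Revise & Resubmit"}
LIKERT_OPTIONS = ["Strongly Agree", "Agree", "Neutral", "Disagree", "Strongly Disagree"]

@dataclass
class OverallAssessment:
    """Required question: a 1-5 rating plus a publication recommendation."""
    rating: int            # 1 (lowest) to 5 (highest)
    recommendation: str    # one of RECOMMENDATIONS

    def __post_init__(self):
        if not 1 <= self.rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        if self.recommendation not in RECOMMENDATIONS:
            raise ValueError("unknown recommendation")

@dataclass
class OpenResponse:
    """Open response question: a prompt with a free-text comment box."""
    prompt: str
    response: str = ""

@dataclass
class RatingScale:
    """Rating scale question: a statement rated on the five-point agreement scale."""
    statement: str
    choice: str = "Neutral"

    def __post_init__(self):
        if self.choice not in LIKERT_OPTIONS:
            raise ValueError("choice must be one of the Likert options")
```

Modeling the required question as its own type, with validation, mirrors the benefit described above: reviewers working within an automated form can’t skip or malform the answers your editors depend on.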

The first type, the required questions, covers general criteria that we recommend all journals ask reviewers about. A manuscript rating will give your editors a quick snapshot of what the reviewer thinks of the manuscript’s quality overall, and asking reviewers directly whether the manuscript should be accepted, rejected, or sent back for revisions will ensure that you know what the reviewer recommends.

The next two types of questions - open response and rating (Likert) scale - each serve a distinct purpose. Likert scale questions are useful for more general manuscript assessment areas that call for a level of agreement with a statement, such as “This manuscript is a novel addition to its field.” Open-ended questions enable you to ask reviewers to think more critically about specific aspects of the manuscript and to provide suggested revisions. For example, you can ask reviewers to comment on any aspects of the research methodology that appear flawed.

Start with general manuscript assessment criteria and then work toward specifics

When designing a peer review feedback form, it can be helpful for your editorial team and, ultimately, reviewers if you follow an inverted-pyramid question structure, starting with broad assessment questions and then working toward specifics. This way, you can begin with key criteria that need to be met for the manuscript to move forward in peer review and then give reviewers the opportunity to share more details about their recommendations.

Manuscript criteria will vary by journal of course, but there are some general assessment questions that will apply to most manuscripts. Your journal will likely want to address the novelty, soundness, and thoroughness of the manuscript. Some possible questions include:

  • Was the literature review thorough given the objectives?
  • Does this manuscript address a research topic that is relevant and important to the field?
  • Is the statement of the research problem/purpose thorough and clear?
  • Is the manuscript clearly organized with supporting references?

You can make these either open-ended or Likert scale questions, depending on the level of detail you need. Using Likert scale questions for more general areas of manuscript assessment often makes it easier not only for reviewers to give feedback quickly but also for your editors to compare overall reviewer sentiments and make decisions.
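One reason Likert responses are easy for editors to compare is that they can be reduced to numbers. As a small illustration (the mapping and function here are hypothetical, not a feature of any particular software), an editor could score each option and average responses across reviewers:

```python
# Illustrative only: map the five Likert options to numbers so
# answers from multiple reviewers can be compared at a glance.
LIKERT_SCORES = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

def average_agreement(responses):
    """Average numeric score for one question across all reviewers."""
    scores = [LIKERT_SCORES[r] for r in responses]
    return sum(scores) / len(scores)

# e.g., three reviewers responding to "The manuscript is clearly organized":
print(average_agreement(["Agree", "Strongly Agree", "Neutral"]))  # 4.0
```

A comparable summary isn’t possible with open-ended comments, which is why Likert questions suit the general criteria and open-ended questions suit the detailed feedback.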

Once you’ve covered these core assessment criteria, you can move on to asking reviewers for more detailed feedback. This is where open-ended questions come in. As we noted before, reviewers are busy people, and if you give them too many open-ended questions you may end up with a lot of short responses that are less helpful than more thorough answers to fewer questions would have been. Ideally, you should be able to get reviewers to expand on the areas you need via a few questions that provide a framework for addressing particular aspects of manuscript quality, such as:

  • Include a brief summary of the manuscript, including a statement about the significance of the research as well as the overall strengths and weaknesses of the manuscript.

  • Provide a brief assessment of the research process and presentation and note any obvious flaws or areas of weakness. Are the conclusions consistent with the evidence and arguments presented?

  • Comments to the editor - this is a field we include in the Scholastica feedback form to give reviewers a chance to address anything about the manuscript or their review that they weren’t able to cover in previous questions, such as stating any competing interests.

Remember to offer helpful reviewer reminders and support

Reviewers will likely appreciate your feedback form because it will make reviewing for your journal much more straightforward. To help your reviewers succeed, be sure to provide overall guidance and support in addition to your feedback form. Don’t just send a reviewer invitation and leave it at that. Send reviewers a link to your peer review guidelines and checklist if you have one, consider sending due date reminders to help them stay on task, and always send appreciation emails. With tools like Scholastica’s peer review software, you can create templates for all of these recurring correspondences and automate reviewer reminders so you know they always go out on time. Taking steps to guide reviewers can improve your journal’s peer review experience for everyone involved.