Review the scientific review process and find an efficient journal to publish your work in

Journal pages

Each journal has its own page with information about the review process. Data on experienced duration and quality of the review process are provided by researchers. Journal popularity scores are calculated based on the number of visits to the journal page, and editorial information is provided by the editor.

Compare journals

Compare journals within and between research fields on several aspects such as duration of first review round and decision time for desk rejections. Other interesting statistics include total handling time of accepted manuscripts, journal popularity score, and overall quality of the review process. Many reviews come with a motivation for the overall rating.

Share your experience

After receiving the final decision of a review process, visit the journal's page, click on 'Review this journal' and share your experience by filling out the SciRev questionnaire. All review experiences are provided by registered members of the academic community, and checked for systematic errors by the SciRev team.

Support our work

Our website is meant to be a service by researchers for researchers. As a non-profit organization, SciRev is one of the few players in the scientific field that is completely independent of any other party. That means that we depend on donations to cover our costs. Please help us remain independent by supporting us with a donation.

Page Content

  • Overview of the review report format
  • The first read-through
  • First read considerations
  • Spotting potential major flaws
  • Concluding the first reading
  • Rejection after the first reading
  • Before starting the second read-through
  • Doing the second read-through
  • The second read-through: section by section guidance
  • How to structure your report
  • On presentation and style
  • Criticisms & confidential comments to editors
  • The recommendation
  • When recommending rejection
  • Additional resources
  • Step by step guide to reviewing a manuscript

When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review. Try to respond to invitations promptly - it will prevent delays. It is also important at this stage to declare any potential Conflict of Interest.

The structure of the review report varies between journals. Some follow an informal structure, while others have a more formal approach.

" Number your comments!!! " (Jonathon Halbesleben, former Editor of Journal of Occupational and Organizational Psychology)

Informal Structure

Many journals don't provide criteria for reviews beyond asking for your 'analysis of merits'. In this case, you may wish to familiarize yourself with examples of other reviews done for the journal, which the editor should be able to provide or, as you gain experience, rely on your own evolving style.

Formal Structure

Other journals require a more formal approach. Sometimes they will ask you to address specific questions in your review via a questionnaire. Or they might want you to rate the manuscript on various attributes using a scorecard. Often you can't see these until you log in to submit your review. So when you agree to the work, it's worth checking for any journal-specific guidelines and requirements. If there are formal guidelines, let them direct the structure of your review.

In Both Cases

Whether specifically required by the reporting format or not, you should expect to compile comments to authors and possibly confidential ones to editors only.

Reviewing with Empathy

Having received the article abstract with the invitation to review, you should already understand the aims, key data and conclusions of the manuscript. If you don't, make a note now that you will need to give feedback on how to improve those sections.

The first read-through is a skim-read. It will help you form an initial impression of the paper and get a sense of whether your eventual recommendation will be to accept or reject the paper.

Keep a pen and paper handy when skim-reading.

Try to bear in mind the following questions - they'll help you form your overall impression:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?

While you should read the whole paper, making the right choice of what to read first can save time by flagging major problems early on.

Editors say, " Specific recommendations for remedying flaws are VERY welcome ."

Examples of possibly major flaws include:

  • Drawing a conclusion that is contradicted by the author's own statistical or qualitative evidence
  • The use of a discredited method
  • Ignoring a process that is known to have a strong influence on the area under study

If experimental design features prominently in the paper, first check that the methodology is sound - if not, this is likely to be a major flaw.

You might examine:

  • The sampling in analytical papers
  • The sufficient use of control experiments
  • The precision of process data
  • The regularity of sampling in time-dependent studies
  • The validity of questions, the use of a detailed methodology and the data analysis being done systematically (in qualitative research)
  • That qualitative research extends beyond the author's opinions, with sufficient descriptive elements and appropriate quotes from interviews or focus groups

Major Flaws in Information

If methodology is less of an issue, it's often a good idea to look at the data tables, figures or images first. Especially in science research, it's all about the information gathered. If there are critical flaws in this, it's very likely the manuscript will need to be rejected. Such issues include:

  • Insufficient data
  • Unclear data tables
  • Contradictory data that either are not self-consistent or disagree with the conclusions
  • Confirmatory data that adds little, if anything, to current understanding - unless strong arguments for such repetition are made

If you find a major problem, note your reasoning and clear supporting evidence (including citations).

After the initial read and using your notes, including those of any major flaws you found, draft the first two paragraphs of your review - the first summarizing the research question addressed and the second the contribution of the work. If the journal has a prescribed reporting format, this draft will still help you compose your thoughts.

The First Paragraph

This should state the main question addressed by the research and summarize the goals, approaches, and conclusions of the paper. It should:

  • Help the editor properly contextualize the research and add weight to your judgement
  • Show the author what key messages are conveyed to the reader, so they can be sure they are achieving what they set out to do
  • Focus on successful aspects of the paper so the author gets a sense of what they've done well

The Second Paragraph

This should provide a conceptual overview of the contribution of the research. So consider:

  • Is the paper's premise interesting and important?
  • Are the methods used appropriate?
  • Do the data support the conclusions?

After drafting these two paragraphs, you should be in a position to decide whether this manuscript is seriously flawed and should be rejected (see the next section), or whether it is publishable in principle and merits a detailed, careful read-through.

Even if you are coming to the opinion that an article has serious flaws, make sure you read the whole paper. This is very important because you may find some really positive aspects that can be communicated to the author. This could help them with future submissions.

A full read-through will also help you check that your initial concerns are correct and fair. After all, you need the context of the whole paper before deciding to reject. If you still intend to recommend rejection, see the section "When recommending rejection."

Once the paper has passed your first read and you've decided the article is publishable in principle, one purpose of the second, detailed read-through is to help prepare the manuscript for publication. You may still decide to recommend rejection following a second reading.

" Offer clear suggestions for how the authors can address the concerns raised. In other words, if you're going to raise a problem, provide a solution ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Preparation

To save time and simplify the review:

  • Don't rely solely upon inserting comments on the manuscript document - make separate notes
  • Try to group similar concerns or praise together
  • If using a review program to note directly onto the manuscript, still try grouping the concerns and praise in separate notes - it helps later
  • Note line numbers of text upon which your notes are based - this helps you find items again and also aids those reading your review

Now that you have completed your preparations, you're ready to spend an hour or so reading carefully through the manuscript.

As you're reading through the manuscript for a second time, you'll need to keep in mind the construction of the argument, the clarity of the language, and the content.

With regard to the argument’s construction, you should identify:

  • Any places where the meaning is unclear or ambiguous
  • Any factual errors
  • Any invalid arguments

You may also wish to consider:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Not every submission is well written. Part of your role is to make sure that the text’s meaning is clear.

Editors say, " If a manuscript has many English language and editing issues, please do not try and fix it. If it is too bad, note that in your review and it should be up to the authors to have the manuscript edited ."

If the article is difficult to understand, you should have rejected it already. However, if the language is poor but you understand the core message, see if you can suggest improvements to fix the problem:

  • Are there certain aspects that could be communicated better, such as parts of the discussion?
  • Should the authors consider resubmitting to the same journal after language improvements?
  • Would you consider looking at the paper again once these issues are dealt with?

On Grammar and Punctuation

Your primary role is judging the research content. Don't spend time polishing grammar or spelling. Editors will make sure that the text is at a high standard before publication. However, if you spot grammatical errors that affect clarity of meaning, then it's important to highlight these. Expect to suggest such amendments - it's rare for a manuscript to pass review with no corrections.

A 2010 study of nursing journals found that 79% of recommendations by reviewers were influenced by grammar and writing style (Shattel et al., 2010).

1. The Introduction

A well-written introduction:

  • Sets out the argument
  • Summarizes recent research related to the topic
  • Highlights gaps in current understanding or conflicts in current knowledge
  • Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
  • Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Originality and Topicality

Originality and topicality can only be established in the light of recent authoritative research. For example, it's impossible to argue that there is a conflict in current understanding by referencing articles that are 10 years old.

Authors may make the case that a topic hasn't been investigated in several years and that new research is required. This point is only valid if researchers can point to recent developments in data gathering techniques or to research in indirectly related fields that suggest the topic needs revisiting. Clearly, authors can only do this by referencing recent literature. Obviously, where older research is seminal or where aspects of the methodology rely upon it, then it is perfectly appropriate for authors to cite some older papers.

Editors say, "Is the report providing new information; is it novel or just confirmatory of well-known outcomes ?"

It's common for the introduction to end by stating the research aims. By this point you should already have a good impression of them - if the explicit aims come as a surprise, then the introduction needs improvement.

2. Materials and Methods

Academic research should be replicable, repeatable and robust - and follow best practice.

Replicable Research

This makes sufficient use of:

  • Control experiments
  • Repeated analyses
  • Repeated experiments

These are used to make sure observed trends are not due to chance and that the same experiment could be repeated by other researchers - and result in the same outcome. Statistical analyses will not be sound if methods are not replicable. Where research is not replicable, the paper should be recommended for rejection.

Repeatable Methods

These give enough detail so that other researchers are able to carry out the same research. For example, equipment used or sampling methods should all be described in detail so that others could follow the same steps. Where methods are not detailed enough, it's usual to ask for the methods section to be revised.

Robust Research

This has enough data points to make sure the data are reliable. If there are insufficient data, it might be appropriate to recommend revision. You should also consider whether there is any in-built bias not nullified by the control experiments.

Best Practice

During these checks you should keep in mind best practice:

  • Standard guidelines were followed (e.g. the CONSORT Statement for reporting randomized trials)
  • The health and safety of all participants in the study was not compromised
  • Ethical standards were maintained

If the research fails to reach relevant best practice standards, it's usual to recommend rejection. What's more, you don't then need to read any further.

3. Results and Discussion

This section should tell a coherent story - What happened? What was discovered or confirmed?

Certain patterns of good reporting need to be followed by the author:

  • They should start by describing in simple terms what the data show
  • They should make reference to statistical analyses, such as significance or goodness of fit
  • Once described, they should evaluate the trends observed and explain the significance of the results to wider understanding. This can only be done by referencing published research
  • The outcome should be a critical analysis of the data collected

Discussion should always, at some point, gather all the information together into a single whole. Authors should describe and discuss the overall story formed. If there are gaps or inconsistencies in the story, they should address these and suggest ways future research might confirm the findings or take the research forward.

4. Conclusions

This section is usually no more than a few paragraphs and may be presented as part of the results and discussion, or in a separate section. The conclusions should reflect upon the aims - whether they were achieved or not - and, just like the aims, should not be surprising. If the conclusions are not evidence-based, it's appropriate to ask for them to be re-written.

5. Information Gathered: Images, Graphs and Data Tables

If you find yourself looking at a piece of information from which you cannot discern a story, then you should ask for improvements in presentation. This could be an issue with titles, labels, statistical notation or image quality.

Where information is clear, you should check that:

  • The results seem plausible, in case there is an error in data gathering
  • The trends you can see support the paper's discussion and conclusions
  • There are sufficient data. For example, in studies carried out over time, are there sufficient data points to support the trends described by the author?

You should also check whether images have been edited or manipulated to emphasize the story they tell. This may be appropriate but only if authors report on how the image has been edited (e.g. by highlighting certain parts of an image). Where you feel that an image has been edited or manipulated without explanation, you should highlight this in a confidential comment to the editor in your report.

6. List of References

You will need to check referencing for accuracy, adequacy and balance.

Where a cited article is central to the author's argument, you should check the accuracy and format of the reference - and bear in mind different subject areas may use citations differently. Otherwise, it's the editor’s role to exhaustively check the reference section for accuracy and format.

You should consider if the referencing is adequate:

  • Are important parts of the argument poorly supported?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • If a manuscript only uses half the citations typical in its field, this may be an indicator that referencing should be improved - but don't be guided solely by quantity
  • References should be relevant, recent and readily retrievable

Check for a well-balanced list of references that:

  • Is helpful to the reader
  • Is fair to competing authors
  • Is not over-reliant on self-citation
  • Gives due recognition to the initial discoveries and related work that led to the work under assessment

You should be able to evaluate whether the article meets the criteria for balanced referencing without looking up every reference.

7. Plagiarism

By now you will have a deep understanding of the paper's content - and you may have some concerns about plagiarism.

Identified Concern

If you find - or already knew of - a very similar paper, this may be because the author overlooked it in their own literature search. Or it may be because it is very recent or published in a journal slightly outside their usual field.

You may feel you can advise the author how to emphasize the novel aspects of their own study, so as to better differentiate it from similar research. If so, you may ask the author to discuss their aims and results, or modify their conclusions, in light of the similar article. Of course, the research similarities may be so great that they render the work unoriginal and you have no choice but to recommend rejection.

"It's very helpful when a reviewer can point out recent similar publications on the same topic by other groups, or that the authors have already published some data elsewhere ." (Editor feedback)

Suspected Concern

If you suspect plagiarism, including self-plagiarism, but cannot recall or locate exactly what is being plagiarized, notify the editor of your suspicion and ask for guidance.

Most editors have access to software that can check for plagiarism.

Editors are not out to police every paper, but when plagiarism is discovered during peer review it can be properly addressed ahead of publication. If plagiarism is discovered only after publication, the consequences are worse for both authors and readers, because a retraction may be necessary.

For detailed guidelines see COPE's Ethical guidelines for reviewers and Wiley's Best Practice Guidelines on Publishing Ethics.

8. Search Engine Optimization (SEO)

After the detailed read-through, you will be in a position to advise whether the title, abstract and key words are optimized for search purposes. In order to be effective, good SEO terms will reflect the aims of the research.

A clear title and abstract will improve the paper's search engine rankings and will influence whether the user finds and then decides to navigate to the main article. The title should contain the relevant SEO terms early on. This has a major effect on the impact of a paper, since it helps it appear in search results. A poor abstract can then lose the reader's interest and undo the benefit of an effective title - whilst the paper's abstract may appear in search results, the potential reader may go no further.

So ask yourself, while the abstract may have seemed adequate during earlier checks, does it:

  • Do justice to the manuscript in this context?
  • Highlight important findings sufficiently?
  • Present the most interesting data?

Editors say, " Does the Abstract highlight the important findings of the study ?"

If there is a formal report format, remember to follow it. This will often comprise a range of questions followed by comment sections. Try to answer all the questions. They are there because the editor felt that they are important. If you're following an informal report format you could structure your report in three sections: summary, major issues, minor issues.

  • Give positive feedback first. Authors are more likely to read your review if you do so. But don't overdo it if you will be recommending rejection
  • Briefly summarize what the paper is about and what the findings are
  • Try to put the findings of the paper into the context of the existing literature and current knowledge
  • Indicate the significance of the work and if it is novel or mainly confirmatory
  • Indicate the work's strengths, its quality and completeness
  • State any major flaws or weaknesses and note any special considerations. For example, if previously held theories are being overlooked

Major Issues

  • Are there any major flaws? State what they are and what the severity of their impact is on the paper
  • Has similar work already been published without the authors acknowledging this?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • If major revisions are required, try to indicate clearly what they are
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough for you to accurately assess the work?
  • Are there any ethical issues? If you are unsure it may be better to disclose these in the confidential comments section

Minor Issues

  • Are there places where meaning is ambiguous? How can this be corrected?
  • Are the correct references cited? If not, which should be cited instead/also? Are citations excessive, limited, or biased?
  • Are there any factual, numerical or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled? If not, say which are not

Your review should ultimately help the author improve their article. So be polite, honest and clear. You should also try to be objective and constructive, not subjective and destructive.

You should also:

  • Write clearly, so that you can be understood by people whose first language is not English
  • Avoid complex or unusual words, especially ones that would even confuse native speakers
  • Number your points and refer to page and line numbers in the manuscript when making specific comments
  • If you have been asked to only comment on specific parts or aspects of the manuscript, you should indicate clearly which these are
  • Treat the author's work the way you would like your own to be treated

Most journals give reviewers the option to provide some confidential comments to editors. Often this is where editors will want reviewers to state their recommendation - see the next section - but otherwise this area is best reserved for communicating malpractice such as suspected plagiarism, fraud, unattributed work, unethical procedures, duplicate publication, bias or other conflicts of interest.

However, this doesn't give reviewers permission to 'backstab' the author. Authors can't see this feedback and are unable to give their side of the story unless the editor asks them to. So in the spirit of fairness, write comments to editors as though authors might read them too.

Reviewers should check the preferences of individual journals as to where they want review decisions to be stated. In particular, bear in mind that some journals will not want the recommendation included in any comments to authors, as this can cause editors difficulty later - see Section 11 for more advice about working with editors.

You will normally be asked to indicate your recommendation (e.g. accept, reject, revise and resubmit, etc.) from a fixed-choice list and then to enter your comments into a separate text box.

Recommending Acceptance

If you're recommending acceptance, give details outlining why, and note any areas that could still be improved. Don't just give a short, cursory remark such as 'great, accept'. See 'Improving the Manuscript'.

Recommending Revision

Where improvements are needed, a recommendation for major or minor revision is typical. You may also choose to state whether you opt in or out of the post-revision review too. If recommending revision, state specific changes you feel need to be made. The author can then reply to each point in turn.

Some journals offer the option to recommend rejection with the possibility of resubmission – this is most relevant where substantial, major revision is necessary.

What can reviewers do to help? "Be clear in their comments to the author (or editor) which points are absolutely critical if the paper is given an opportunity for revision." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Recommending Rejection

If recommending rejection or major revision, state this clearly in your review (and see the next section, 'When recommending rejection').

Where manuscripts have serious flaws you should not spend any time polishing the review you've drafted or give detailed advice on presentation.

Editors say, " If a reviewer suggests a rejection, but her/his comments are not detailed or helpful, it does not help the editor in making a decision ."

In your recommendations for the author, you should:

  • Give constructive feedback describing ways that they could improve the research
  • Keep the focus on the research and not the author. This is an extremely important part of your job as a reviewer
  • Avoid making critical confidential comments to the editor while being polite and encouraging to the author - otherwise the author may not understand why their manuscript has been rejected, won't get feedback on how to improve their research, and may be prompted to appeal

Remember to give constructive criticism even if recommending rejection. This helps developing researchers improve their work and explains to the editor why you felt the manuscript should not be published.

" When the comments seem really positive, but the recommendation is rejection…it puts the editor in a tough position of having to reject a paper when the comments make it sound like a great paper ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Visit our Wiley Author Learning and Training Channel for expert advice on peer review.

Watch the video, Ethical considerations of Peer Review

How Long Does Peer Review Take?

No one likes to wait. But this often happens in the peer review process.

You’ve submitted a manuscript to a journal. You’ve navigated the submission system and uploaded all your files correctly. The editor emailed to tell you your article was sent out for peer review. So far so good.

But then you wait. And wait…

How long should you have to wait to receive comments from peer reviewers and the journal editor? In other words, how long does peer review take?

Short answer: It takes up to about 3 months (studies have shown peer review typically takes 7–12 weeks), but there are a lot of variables to take into account. These include the journal’s internal processes and publication frequency, availability of peer reviewers, and other things out of your control.

Here’s some insight into what goes on and how you can give your next manuscript submission a shove in the right direction.

The good, the bad, and the crazy of waiting to get published

Authors often have little or no idea how long peer review takes, and how long they should wait before doing something. And they often end up waiting longer than they need to.

Some waiting is to be expected, but if it’s getting too long: Don’t wait, communicate!

Some waited over 5 years to get published

A 2016 survey by Nature Research showed that some 30% of authors ended up waiting 6 months to 1 year for their articles to be published. In 37% of cases the wait was 1–2 years.

A shocking 15% of survey respondents waited 2–3 years, while 8% waited for 3–5 years.

Even worse, 3% of the 3,644 authors surveyed had waited over 5 years for one of their articles to appear in print.

You can start a family in that time!

It doesn’t have to be that way, though.

What’s the average wait?

The average waiting time for authors across academic publishing is actually just 90 days from submission, through peer review, to publication.

This is better than it used to be. Some of the thanks goes to our era of fast online publishing and open access articles.

Specific examples

  • PNAS averages 10 days from submission to initial decision, 45 days from submission to decision on a full review, and 6.4 months from submission to publication. This is, however, a highly selective journal.
  • Open-access journal PLOS ONE takes around 43 days to first decision. Then all sorts of variables come into play, because the journal deals with such a high volume and breadth of studies.

A study of 3,000 articles

A study by Huisman and Smits extracted data from more than 3,000 articles submitted to one website (SciRev). It showed that peer review times ranged from under 4 weeks to more than 3 months, with 10% of authors having to wait even longer.

Interestingly, less than 20% of the articles surveyed were rejected without peer review and, indeed, the length of review (up to a point, about 2.5 months) correlated with higher perceived “quality” on the authors’ part.

Stages of the submission process include (summed roughly in the sketch below):

  • Processing the initial article: 1–2 weeks
  • Selecting peer reviewers: 1–2 weeks
  • Sending the paper out for review and waiting for comments: 3–6 weeks
  • Rendering an editorial decision: 1–2 weeks
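
As a rough sanity check, summing the low and high ends of these stage estimates reproduces the overall range quoted earlier. This is a back-of-envelope sketch using the article's ballpark figures, not measured data:

```python
# Back-of-envelope total of the stage estimates listed above, in weeks.
stages = {
    "processing the initial article": (1, 2),
    "selecting peer reviewers": (1, 2),
    "review and waiting for comments": (3, 6),
    "rendering an editorial decision": (1, 2),
}
low = sum(lo for lo, _ in stages.values())
high = sum(hi for _, hi in stages.values())
print(f"Smooth-case total: {low}-{high} weeks")  # 6-12 weeks, i.e. up to ~3 months
```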

The problem is: Peer review often doesn’t go smoothly.

  • Editors forget.
  • Editors can’t find well-matched peer reviewers to work on submissions (actually the most common reason for delay revealed in surveys).
  • Editors have a huge backlog to work through.
  • Editors have day jobs as full-time academics and are performing tasks for journals “on the side.”
  • Peer reviewers don’t always respond to editorial requests, or take a very long time before they submit their reviews.

These things are partly human nature, and partly just unacceptable.

So really, how long?

So, how long should you wait for your next paper to come back from peer review? Not more than 10–12 weeks, or about 3 months. Read on to find out how to avoid hitting that point, and what to do if you actually do hit it.

How to speed up the review process

There are a number of ways you, the author, can speed up the review.

Newer and open access journals

One way is to select newer, open access journals. A study found reviewers and editors at these journals tend to be more enthusiastic. It's often harder, however, for editors to find suitable, able reviewers at older and more established journals.

Engaged journal editors search for eager peer reviewers who can do their review and give comments relatively quickly.

Journals often do this by adding effective reviewers to their editorial boards, so that some recognition passes back to the individual's CV or resume.

Publishers know that peer review tasks, performed by busy working academics, are very often not considered important by their institutions in academic assessment processes.

Another explanation, though, is that reviewers are simply not being careful enough. This was shown to be partly the case at a well-known pathology journal that had cut its review time to 16 days for an initial decision.

Yet still, no matter how responsible and well-managed journals may be, there are times when you can give them a nudge.

You can write a polite email to your journal editor and ask what’s going on.

Write the journal editor and ask for an update

Take a deep breath first, no matter how much pressure you’re under. You must keep your cool and mind your manners when your journal is leaving you hanging.

Be polite and professional. Make sure you write your email so as to give something back to the editor, to help and support them, rather than being aggressive or angry (as so many authors are in these situations).

Help them out

You need to make this as easy as possible for the editor. They’re not likely to remember you right away.

Put all the necessary information into your inquiry email: names of authors, paper title, the initial manuscript number, and date of submission.

If you didn’t include peer reviewer suggestions in your initial cover letter, add them now. If you did include them (as you should have), suggest a few more.

What to write

Address the editor by name (typically, Professor + Last Name or Dr. + Last Name).

Using someone’s name directly in correspondence is one of the most effective ways to get their attention and put them in a favorable mindset to help you: “A person’s name, in any language, is the sweetest and most agreeable sound,” as Carnegie said.

Dear Professor Jones:

Thank you very much for your time taken with our recently submitted manuscript (add title and number here).

We wondered if there is something we can do to expedite the processing of our manuscript. Please inform us about any issue you may have encountered.

We also understand that it can be challenging to find suitable peer reviewers. Accordingly, we are providing a number of candidate peer reviewers with their contact information. These are as follows:

[add peer review info here]

Thank you again for your time and consideration. We await your response.

Corresponding Author

Suggesting peer reviewers

An effective inquiry email to an editor about a manuscript should ideally contain a number of additional peer reviewer suggestions. That’s because it’s quite likely that your paper’s review is being held up by failure to secure reviewers.

Some journals have removed the option for authors to suggest peer reviewers from their submission systems because these were often open to abuse.

But in a cover letter or inquiry letter to a journal you can make suggestions. (Check our Reviewer Recommendation service if you want a customized list of international researchers in your area of specialty).

Here are a few simple rules for choosing a reviewer:

  • Look to your reading and references. A good place to start looking for potential reviewers is in the articles you read or the references you’ve used. Authors of these papers will be knowledgeable in fields related to your work and therefore have a good background from which to assess the various aspects of your manuscript.
  • Network. You may have met people at conferences, poster sessions, or other networking events. These people are active in your field and may have shown interest in your work. They will also be up to date on the literature and techniques in the field, and so would make excellent candidates to review your manuscript.
  • Aim for younger and mid-level researchers. Heads of department or high-level professors may seem like the most ideal people to evaluate your manuscript; however, they are likely too busy to take on much peer review. Younger scientists are in the process of establishing their careers and authority in the field, so they are more likely to be active in the peer review process.
  • Be cross-disciplinary. If your work is interdisciplinary or uses an analysis method from another field, consider suggesting researchers in this area as well. Although they may not be as familiar with your primary field, they will have the expertise to evaluate your use of the method, which is an important overall contribution to improving your manuscript.

Who not to suggest

Do not suggest people you work closely with, or colleagues you’ve published with recently, as these are conflicts of interest.

You also cannot nominate students who’ve worked with you, or other close colleagues. Editors will check your (and their) recent publication lists. They want to know:

  • Is it their field?
  • Do they have time?
  • Is there any conflict of interest?

Who to suggest

Nominate those working in the same field with whom you’ve had no conflict. Ideally, these are researchers who will likely provide an overall favorable view of your work.

This is why it’s a good idea to talk about your work pre-publication, to share it on preprint servers or send it out to colleagues internationally and ask them for feedback.

Receiving positive feedback on work yet to be published means you have a potential peer review nomination you can put into your inquiry letter when you write and ask for an update on your submission.

Assistant professors and earlier stage researchers are also better candidates, as they are actively building their reputations. They’ll have more time and energy for peer review than top researchers leading their own labs.

In sum: Patience is a virtue, but you don’t have to wait years

Three months is a good rough deadline for when you can get in touch with the journal. Hopefully, it’s worth the wait, because even Hollywood movies see the value of peer review in validating your study.

Be polite. Be courteous. Give something back to the editor and you’ll more than likely get a positive response. Check out our webinar “Effective communication during the submission and publication process” for practical tips.

The bottom line is: Don’t wait, communicate. The painful wait you’re going through may be something you can help resolve.

Open Access

Peer-reviewed

Research Article

How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals

* E-mail: [email protected]

Affiliation Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, Ottawa, Ontario, Canada

Affiliation MISTRA EviEM, Royal Swedish Academy of Sciences, Stockholm, Sweden

Affiliations Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, United States of America, Beneath the Waves, Inc., Syracuse, NY, United States of America

Affiliation Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, United States of America

Affiliations Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, Ottawa, Ontario, Canada, Institute of Environmental Science, Carleton University, Ottawa, Ontario, Canada

  • Vivian M. Nguyen, 
  • Neal R. Haddaway, 
  • Lee F. G. Gutowsky, 
  • Alexander D. M. Wilson, 
  • Austin J. Gallagher, 
  • Michael R. Donaldson, 
  • Neil Hammerschlag, 
  • Steven J. Cooke

  • Published: August 12, 2015
  • https://doi.org/10.1371/journal.pone.0132557

29 Sep 2015: The PLOS ONE Staff (2015) Correction: How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals. PLOS ONE 10(9): e0139783. https://doi.org/10.1371/journal.pone.0139783 View correction

Abstract

Delays in peer reviewed publication may have consequences for both the assessment of scientific prowess in academia and the communication of important information to the knowledge receptor community. We present an analysis of the perspectives of authors publishing in conservation biology journals regarding the importance of speed in peer-review and how review times might be improved. Authors were invited to take part in an online questionnaire, the data from which were subjected to both qualitative (open coding, categorizing) and quantitative (generalized linear model) analyses. We received 637 responses to 6,547 e-mail invitations sent. Peer-review speed was generally perceived as slow, with authors experiencing a typical turnaround time of 14 weeks while their perceived optimal review time was six weeks. Male and younger respondents seem to have higher expectations of review speed than female and older respondents. The majority of participants attributed lengthy review times to reviewer and editor fatigue, while editor persistence and journal prestige were believed to speed up the review process. Negative consequences of lengthy review times were perceived to be greater for early career researchers and to have an impact on author morale (e.g. motivation or frustration). Competition among colleagues was also of concern to respondents. Incentivizing peer-review was among the top suggested alterations to the system, along with training graduate students in peer-review, increased editorial persistence, and changes to the norms of peer-review such as opening the peer-review process to the public. It is clear that the authors surveyed in this study viewed the peer-review system as under stress, and we encourage scientists and publishers to push the envelope for new peer-review models.

Citation: Nguyen VM, Haddaway NR, Gutowsky LFG, Wilson ADM, Gallagher AJ, Donaldson MR, et al. (2015) How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals. PLoS ONE 10(8): e0132557. https://doi.org/10.1371/journal.pone.0132557

Editor: Miguel A. Andrade-Navarro, Johannes-Gutenberg University of Mainz, GERMANY

Received: March 1, 2015; Accepted: June 16, 2015; Published: August 12, 2015

Copyright: © 2015 Nguyen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available in the paper and supporting information files.

Funding: This work was supported by the Natural Sciences and Engineering Research Council, 315918-166, http://www.nserc-crsng.gc.ca/index_eng.asp and the Canada Research Chair, 320517-166, http://www.chairs-chaires.gc.ca/home-accueil-eng.aspx . The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Peer reviewed publications remain the cornerstone of the scientific world [1,2] despite the fact that the review process is not infallible [3,4]. Such publications are an essential means of disseminating scientific information through credible and accessible channels. Moreover, academic institutions evaluate scientists based on the quantity and quality of their research via publication output. Given the importance of peer-review to the dissemination of information and to the researchers themselves, it is of little surprise that the process of scientific publishing has been a subject of discussion itself. For example, researchers have explored the many and various biases associated with contemporary peer-review (e.g., gender [5], nationality/language [6], and presence of a “known” name and academic age [7]), with a goal of improving the objectivity, fairness, and rigor of the review process [8]. What has received less attention is the duration of peer review. Given the significance of peer-reviewed publications for science and evidence-based conservation [9], efforts to improve the peer-review system are warranted to ensure that delays in publication do not have significant impacts on the transition of scientific evidence into policy.

Despite the switch from surface mail to online communication channels and article submission [10,11], review processes may still stretch into months or even years. Such extreme delays have consequences for the assessment of scientific prowess (e.g., tenure, employment, promotion) in academia and also delay the communication of important information about threatened habitats or species. Presumably rapid turnaround times are desirable for authors [12], particularly early career researchers [13], but they also put “stress” on the peer-review system. Although review time is certainly discussed informally, very little is known about what authors themselves think about the speed of peer-review, and how it could be improved. For example, what is an acceptable timeline for a review? How long should authors wait before contacting editors about the progress of a review? What do authors perceive as trade-offs in quality versus speed of a review? What strategies can an author use to try to elicit a more rapid review process? What are the underlying factors that influence variation in review time? Do author demographics shape perspectives on review time? Finally, what does a “long” review mean for career development, scientific progress, and the future behavior of authors with respect to selecting potential publishing outlets? These questions might seem obvious or inherent given our publishing roles and requirements as active researchers, but they have yet to be addressed formally in the scientific literature.

Here, we present an analysis of perspectives on the speed and importance of review times among a subset of authors of papers within the realm of “conservation biology.” Conservation biology is a field with particular urgency for evidence to inform decisions [14], but its peer-review system has not received as much attention as those of other urgent fields such as health and medical sciences [15,16]. We discuss the findings as they relate to peer-review duration and present author perspectives on how to improve the speed of peer-review.

Data Collection and Sampling

We extracted the e-mail addresses of authors who published in the field of “conservation biology” from citation records within the Web of Science online database. A search was undertaken on 9 April, 2014 using Web of Science [consisting of Web of Science Core Collections, Biosis Previews (subscription up to 2008), MEDLINE, SciELO and Zoological Record]. We used the following search string, and limited the search to 2013 (to ensure all authors were still active): “conservation AND *diversity”. Search results were refined to include entries for the following Web of Science subject categories alone: environmental sciences ecology, biodiversity conservation, zoology, plant sciences, marine freshwater biology, agriculture, forestry, entomology, fisheries. A total of 6,142 results were obtained, from which 4,606 individual e-mail addresses were extracted. E-mails were sent to this mailing list inviting authors to participate in an anonymous online questionnaire hosted on Fluid Surveys; however, 312 of these addresses were inactive. Individuals whose e-mails bounced back indicating a change of e-mail were sent an invitation at the new address indicated. We sent an additional invitation on 22 May, 2014 using a mailing list produced from an additional extraction of 2,679 e-mail addresses obtained from another search using the above string and subject categories but restricted to 2012; 426 of these addresses were non-functional or no longer active. Reminders were sent to all e-mail addresses between 18–20 June, 2014, and access to the online questionnaire was closed on 3 July, 2014.

Survey Instrument

The entire questionnaire was composed of 38 open- and closed-ended questions, of which a subset relevant to review times was used for this study. We asked respondents to focus on their experiences from the last five years, given the major phase shift in review protocols in earlier years associated with the move to electronic-based communication [17,18]. However, we did anticipate observing different responses between those who were active in publishing in the pre-electronic era and those who have only published since electronic submission and review became standard practice. While it is not possible to decouple author age/career stage as a potential response driver in the questionnaire [13], we nonetheless explored the association between time since first peer-reviewed publication and author responses. The questionnaire began with questions that assessed the participants’ opinions on various “review metrics” (e.g., opinions of slow vs. rapid review durations, optimal review duration; see supporting information for the full survey questions [S1 File]). This section was followed by questions associated with the respondent’s experience and expectations as an author, and their potential behaviour with respect to lengthy review times. Additionally, we assessed participants’ perspectives on factors that ultimately influence review speed using open-ended questions and Likert-type questions. We then asked whether the peer-review system should be altered and, if so, how. Lastly, we recorded respondent characteristics such as socio-demographic information, publishing experience and frequency, as well as other experiences with the peer-review system (e.g. referee experience). It is important to note that there could be inaccuracies in perceptions of time and events due to self-reporting and recall bias, where someone may perceive a length of time as quicker or slower than it was in reality. All but the author characteristic questions in the survey were optional, and the number of responses (the sample size, n) therefore varies at or below the total number of respondents. The questionnaire was pre-tested with five authors, and protocols were approved by the Carleton University Research Ethics Board (100958).

Data Analysis

For open-ended responses, we categorized the data by common themes that emerged among responses (i.e. open coding; [19]) using QSR NVivo 10. We use narrative-style quotes from the responses throughout the paper to illustrate the details and properties of each category or theme. We quantified certain responses using frequency counts of the coded themes, to provide the proportion of respondents agreeing with an idea/theme or the number of responses corresponding with a theme. For clarity and conciseness, we report most responses as percentages and omit the remainder where they reflect no opinion or neutrality (e.g., where a respondent answered “neither”).

Generalized linear models were used to identify how demographic information (e.g., gender), career status (e.g., number of publications), and experienced review times (the number of weeks for a “typical” (TYPICAL), “short” (SHORT), or “long” (LONG) review period) explained respondents’ expectations (i.e., opinions) of the length of time that constitutes an optimal (Model 1), short (Model 2) and long (Model 3) review time. Response variables (modeled as a number of weeks) were assumed to follow a Poisson distribution, or a negative binomial distribution when residuals were overdispersed. The best model to explain respondent opinion was selected using backwards model selection [20,21]. Details on the statistical methods and the results are found in the supporting information [S2 File].
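
To make the modeling concrete, here is a minimal sketch of the approach described above, written in Python with statsmodels on synthetic data. It is not the authors' code, and the column names (optimal_weeks, typical_weeks, gender, age_group) are illustrative assumptions:

```python
# Illustrative sketch of the GLM analysis described above, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "typical_weeks": rng.poisson(14, n),         # experienced "typical" review time (TYPICAL)
    "gender": rng.choice(["female", "male"], n),
    "age_group": rng.choice(["21-30", "31-40", "41-50", "51+"], n),
})
# Synthetic response: opinion of the optimal review time, centred near 6 weeks
df["optimal_weeks"] = rng.poisson(np.clip(0.3 * df["typical_weeks"] + 2, 1, None))

formula_full = "optimal_weeks ~ typical_weeks * gender + age_group"

# Model 1: Poisson GLM for the optimal review time (in weeks)
full = smf.glm(formula_full, data=df, family=sm.families.Poisson()).fit()

# Overdispersion check: Pearson chi-square / residual df well above 1
# suggests refitting with a negative binomial family instead
if full.pearson_chi2 / full.df_resid > 1.5:
    full = smf.glm(formula_full, data=df,
                   family=sm.families.NegativeBinomial()).fit()

# One backwards-selection step: drop the TYPICAL*Gender interaction and
# compare the nested models with a likelihood-ratio (L-ratio) test
reduced = smf.glm("optimal_weeks ~ typical_weeks + gender + age_group",
                  data=df, family=full.model.family).fit()
lratio = 2 * (full.llf - reduced.llf)
ddf = int(full.df_model - reduced.df_model)
print(f"L-ratio = {lratio:.1f}, df = {ddf}, P = {stats.chi2.sf(lratio, ddf):.3f}")
```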

Results and Discussion

Response rate and overall respondent characteristics.

We received 673 responses out of all the invited participants (N = 6,547), of which 461 completed the questionnaire to the end, with the possibility of skipping some questions (see S3 File for raw data). The remainder of participants partially completed the questionnaire, so the number of responses varied by question. While we recognize that the response rate is low and the potential for sampling bias exists, we do not attempt to generalize the perspectives reported to the entire population of authors in the field of conservation biology, but rather provide insights on the issue. It is also important to recognize that respondents who were more likely to participate in our questionnaire are perhaps also those who are proactive in voicing their opinions. Of all the respondents, 28% were female and 63% were male (9% left the question blank or preferred not to say). This may lead to a male-dominant perspective in our results. Most respondents were 31–40 years old (38.2%), followed by 41–50 years (24%), 51–64 years (18%), and 21–30 years (11%); less than 5% of respondents were 65 years or older, and <1% were under 21 years old (2 respondents).

Overall, responses came from 119 countries. We categorized countries based on the economic income classifications set out by the World Bank (2014). The majority of respondents (N = 640) worked in countries with high-income economies (78%), followed by upper-middle-income economies (17%), lower-middle-income (4%), and less than 2% for low-income economies. The top countries participating in this study included the United States (17%), the United Kingdom (10%), Australia (8%) and Brazil (7%). The majority of respondents (N = 611) were from academia (77%, of whom 15% were graduate students), followed by governmental or state agencies (11%), non-government or non-profit organizations (10%), and the private sector (2%), which includes consulting and non-academic research institutes among others. The participant characterization suggests that the author perspectives in this article are largely biased towards industrialized nations and academia, which reflects the characteristics we would expect from the research community.

Author publishing and referee experiences

A larger proportion of participants published their first paper within the last decade (44% of 451 respondents first published in 2000–2009, and 19% in 2010 or after), which indicates a bias toward authors who are potentially in their mid-careers. About half of the respondents have published >20 papers (with 21% of 623 respondents publishing >50), and only 10% have published <10. Half of the participants publish <3 papers per year, 35% publish 4–6, 10% publish 7–10, and only 3% publish >10 papers per year. Furthermore, nearly half of the respondents act as journal referees 1–5 times per year (48% of N = 450). Twenty percent of respondents are highly active referees (reviewing manuscripts >10 times per year), 25% referee 6–10 times, and <10% review manuscripts only once a year. Overall, the majority of respondents have been publishing for at least 10 years, and at least half are highly experienced with the peer-review process as both authors and referees. As such, the perspectives gathered in our questionnaire come from highly experienced authors who are actively publishing and therefore familiar with the peer-review system.

Peer review duration: experiences and expectations

We asked participating authors about their experience with peer-review durations (i.e. the period between initial submission and first editorial decision following formal peer review), and 368 respondents gave useable/complete answers. The average (mean ± SD) shortest or quickest review time was reported to be 5.1 ± 6.0 weeks (Table 1), while the opinion of a “fast” review period was on average 4.4 ± 2.9 weeks. While the opinion of a “slow” review period was on average 14.4 ± 8.2 weeks, the longest or slowest review time was reported on average to be 31.5 ± 23.8 weeks (Table 1) - nearly double what the respondents perceive as slow. Furthermore, respondents reported that a “typical” turnaround time for a manuscript submission was on average 14.4 ± 6.0 weeks (ranging between 2–52 weeks), and that the optimal review period was on average (median) 6.4 ± 4 weeks. Optimal peer review durations ranged from 1 to 20 weeks, with the majority falling within eight weeks or under (86% of 366 responses).

Table 1. https://doi.org/10.1371/journal.pone.0132557.t001

The fact that respondents' opinions and actual experiences of short or long review durations are not aligned, and that their experienced review durations are lengthier (nearly double the "optimal" time), indicates an overall perception that the peer-review system is slow. The results reported here may provide benchmarks for conservation biology journals to gauge their performance on review time and improve author experience and satisfaction. In a broad survey (over 4,000 respondents from across disciplines), Mulligan et al. [ 22 ] noted that 43% of respondents felt that the time to first decision for their last article was slow or very slow. Mulligan et al. [ 22 ] also asked authors whether the review of their last manuscript (to first decision) took longer than 6 months; a mean of 31% said it did, with some differences among disciplines. For example, reviews in the physical sciences and chemistry rarely (15%) took longer than 6 months, while those in the humanities, social sciences, and economics were more likely to do so (59%). For a category called "agricultural and biological sciences", Mulligan et al. [ 22 ] reported that 29% of respondents indicated reviews took longer than 6 months, with 45% reporting 3 to 6 months. In general, these findings are consistent with the responses we obtained from a focused survey of scientists working in conservation biology.

Most respondents did not perceive "fast" or "slow" reviews to influence review quality (75% of 547 usable responses); 8% believed that fast reviews are of higher quality, another 8% believed that fast reviews are of lower quality, and 10% had no opinion. Given the prevailing belief that review speed does not affect quality, faster review times should presumably benefit authors, journals, and the field, although this has not been tested empirically. We discuss mechanisms to improve review times based on this information later in this article.

Who expects what in peer review duration?

A respondent's opinion of an optimal review time depended on a weak two-way interaction between respondent experience and gender (TYPICAL*Gender, L-ratio test = 5.9, df = 1, P = 0.015). According to both male and female respondents, the optimal length of a review should always be shorter than what they have experienced as "typical" ( Fig 1 ). Opinion on what constitutes a short review period (Model 2) depended on several weak two-way interactions: Age*Gender (L-ratio test = 10.6, df = 3, P = 0.01), SHORT*Gender (L-ratio test = 5.1, df = 1, P = 0.02), and SHORT*Age (L-ratio test = 11.5, df = 3, P = 0.01). For respondents over 41 years old, experience and opinion were more closely related than for younger respondents, who suggested a short review is ≤ 10 weeks regardless of experience ( Fig 1 ). Female experience and opinion were more closely matched than males'; however, this was most evident for respondents 41–50 years old ( Fig 2 ). Finally, opinion on a long review period (Model 3) depended on LONG (L-ratio test = 61.7, df = 1, P < 0.001) and Gender (L-ratio test = 6.0, df = 1, P = 0.01). Here, respondents always expected "long" review periods to be many weeks shorter than what they had experienced as a "long" review ( Fig 3 ). For example, a female respondent who had experienced a long review of 60 weeks expects a long review to take just over 20 weeks (95% CI: 18.2–22.3). Based on researcher experience and generalizing across all ages, those who identify as male appear to be the least satisfied with the speed of the peer-review process.
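The likelihood-ratio tests above compare nested models fitted with and without a given term (the full model specifications are in S2 File). As an illustration only (the variable names and file below are hypothetical, not the authors' code), a minimal sketch of such a nested-model comparison in Python with statsmodels might look like this:

```python
# Minimal sketch (our illustration, not the authors' analysis code) of a
# likelihood-ratio test for an interaction term in a GLM.
# Variable names (OPTIMAL, TYPICAL, Gender) and the file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("responses.csv")  # one row per respondent

# Full model: opinion of the optimal review time as a function of the
# experienced "typical" time, gender, and their interaction.
full = smf.glm("OPTIMAL ~ TYPICAL * Gender", data=df,
               family=sm.families.Gaussian()).fit()

# Reduced model: the same model without the interaction term.
reduced = smf.glm("OPTIMAL ~ TYPICAL + Gender", data=df,
                  family=sm.families.Gaussian()).fit()

# Likelihood-ratio statistic: twice the difference in log-likelihoods,
# referred to a chi-squared distribution with df equal to the number of
# parameters dropped (here, 1).
lr = 2 * (full.llf - reduced.llf)
p = stats.chi2.sf(lr, df=1)
print(f"L-ratio = {lr:.1f}, P = {p:.3f}")
```

A significant statistic indicates that the interaction term improves model fit, i.e., that the relationship between experienced and expected review times differs by gender.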

Fig 1. https://doi.org/10.1371/journal.pone.0132557.g001

Fig 2. https://doi.org/10.1371/journal.pone.0132557.g002

Fig 3. https://doi.org/10.1371/journal.pone.0132557.g003

Interactions with editors and journals

When a decision had not yet been made on a manuscript, participants (N = 479) waited on average 12.9 ± 7.5 weeks before making first contact with the editor or journal regarding the status of the manuscript "in review". Of those who make first contact, most will make additional attempts (77% of 479 responses) if time passes without a response or decision. Only 9% of the 479 respondents would never attempt to contact the editor or journal, suggesting that the author population in this study is quite proactive in voicing their concerns, keeping in mind that proactive authors are also more likely to agree to partake in the questionnaire. Nevertheless, this finding is useful for editors who may wonder how long authors typically wait before contacting them. Approximately 12% of participants (N = 469) believed that contacting the editor or journal would jeopardize the decision for acceptance, 6% thought it would benefit the decision, while the majority did not believe there was any influence.

Only 14% of respondents (N = 480) had threatened to withdraw their manuscript from a journal, and 15% (of the 480 respondents) had actually withdrawn a submitted manuscript because the review process was unsatisfactorily long; where such action was deemed necessary, the review had lasted on average 30 ± 31 weeks (ranging from 2–100 weeks, N = 72). This review duration for a potential withdrawal is over double the average time that respondents perceive as slow, indicating that most authors had been quite patient with the peer-review process. Despite their apparent patience, respondents generally believe that long reviews should be shorter than what they have experienced ( Fig 3 ), indicating an overall perception that peer-review durations are too slow within the realm of conservation biology.

The majority of participants (72% of 480 responses) did not believe that a long or a short review period indicates whether a manuscript is likely to be accepted or rejected. By contrast, 14% of respondents believed that a "short" review period would likely lead to rejection of the manuscript, and only 6% believed it would likely lead to acceptance, leaving 8% without an opinion. In general, authors did not seem to believe there was any bias toward acceptance or rejection of their manuscript based on whether they had contacted the editor or whether the review period was quick or long.

Factors influencing review time and accountability

Of the completed responses (N = 471), over half of the respondents (56%) held reviewers accountable for the review duration, while 33% held the editors accountable, and 6% attributed delays to journal staff. The remaining respondents (5%) believed it was a combination of all the players. Likert-type questions revealed that, in general, reviewer fatigue (e.g., lack of time) was ranked as the most influential factor in slowing review speed, followed by editor fatigue and, to a lesser extent, the length of the manuscript and the number of reviewers ( Table 2 ). One respondent expressed this reviewer fatigue as follows:

While editors try to find suitable reviewers, in practice there is a relatively small pool of reviewers who can be relied on to do useful reviews. I am an associate editor on 5 journals and am convinced that there is substantial reviewer fatigue out there as the number of publications has grown annually as have the number of journals.

Table 2. https://doi.org/10.1371/journal.pone.0132557.t002

This may correspond with the increased number of publications and publication outlets that contemporary scientists must contend with. Indeed, in 2007 it was reported that over 1,000 new papers appear daily in the scientific and medical literature alone, and this number is likely rising rapidly [ 12 ]. Kumar [ 23 ] listed five reasons for publication delay, two of which were reviewer availability and reviewers having other commitments that push manuscript reviews to the bottom of their lists. The other three reasons were: editors sending the manuscript out for multiple rounds of review (when reviews are conflicting or inadequate); the journal outsourcing manuscript management (e.g., to a Business Process Outsourcing agency); and the reviewer intentionally delaying the publication of a manuscript for various reasons (e.g., rivalry or intent to plagiarize).

On the other hand, respondents perceived the persistence of the editorial team, the maximum review time allocated by each journal, and the journal's prestige or impact factor as factors that somewhat speed up the review process ( Table 2 ):

I will always take the full amount of time they [editors] give me. Moreover, only once have I been asked to review a paper by an open access journal, which required my review submission in 2 weeks. But all the others were non-open access journals that gave me a month or more, which increased the average time to decision.

Consequences of long or short review durations

We asked participating authors about their perspectives on the consequences of long or short review durations. Our findings indicate a number of consequences, which we have grouped into themes below.

Consequences for the journals.

After a long review period, most respondents (74% of 472 responses) said they are less likely to submit to that journal again relative to other journals; however, some (19%) said it would depend on the journal's impact factor or prestige. As expected, at the other extreme, if the review period was short, respondents (69% of N = 471) said they are more likely to submit to that journal again, with some respondents (17%) again weighing journal impact factor or prestige, and 12% of participants neither more nor less likely to submit to that journal again. We also found that review duration is an important factor when respondents (N = 470) consider which journal to submit their research to (43% said yes and 46% said sometimes), while <10% of participants said they never consider review duration when submitting a manuscript. Review time is therefore an important consideration for journals seeking to maintain their reputation, as the majority of respondents have given thought to review times when deciding where to submit. There are, however, indications of trade-offs between review duration and impact factor, as approximately 1 in 5 respondents consider journal prestige and impact factor an influential part of deciding where to submit.

In general, respondents (N = 465) discuss the speed of review with their colleagues: 54% discuss it monthly, 30% once a year, 12% weekly, 1% daily, and 4% never discuss review speed. Interestingly, respondents (N = 466) were evenly split between authors who have "blacklisted" a journal for its lengthy review times (49%; i.e., chosen not to submit manuscripts to that journal in the future) and those who have not (48%). These findings send two messages to journal editors: 1) review time is an important factor for authors when considering publication outlets, and 2) review time is actively discussed by half of the respondents, which can harm or enhance a particular journal's reputation. Publication of research can ultimately affect society at large if the manuscript has significant scientific and policy implications. Therefore, editors, journals, and publishers have a responsibility to disseminate credible scientific information in a timely manner and must play an active role by setting standards and facilitating the peer-review process [ 23 ].

Consequences for careers.

Just over half of the respondents (55% of 466) felt that a lengthy peer-review process affects their career, while 30% did not believe it did. Open-ended responses suggested that lengthier peer-review durations generally have negative impacts on "early career researchers" and "young scientists" (mentioned in 65 of 212 responses) because of the "publish or perish" system, which affects opportunities for jobs and career advancement. One respondent wrote:

As an early career researcher trying to build a list of publications, it is important to have papers reviewed quickly. The longer the time lag between a research project and accepted publication the more difficult it is to apply for new grants or job opportunities.

Furthermore, some respondents mentioned the delay in graduation or acceptance in graduate school for students due to lengthy peer-review processes:

I received the first response about my first article only after 54 weeks. At that time I was not able to start my PhD because the institution only accepted candidates with at least one accepted article.
Even after successful completion of my Ph.D. research topic, I was unable to submit my thesis because it's a rule that at the day of Ph.D. thesis submission, [one] must have a minimum one peer reviewed publication.

The comments of these early-career respondents are perhaps reflected in the predictions from Model 2: despite the length of time they have experienced as a "short" review, respondents consistently expect review periods to be much shorter ( Fig 2 ). It seems that regardless of their experience, the review period cannot be short enough for early-career professionals who publish in conservation biology. In addition, irrespective of age, respondents believe a lengthy review period should be considerably shorter than what they have experienced ( Fig 3 ).

For respondents with tenure or later in their careers, a slow review process can impact applications for grants/funding (approximately 28% of responses) and promotions (approximately 19% of responses):

Publications are important for ranking of scientists and institution achievements so long reviews and long editorial process could violate this process.

Furthermore, respondents voiced concerns about competition among research groups (5% of responses), subjective treatment, malpractice by certain reviewers and editors, conflicts of interest, and the potential for being "scooped" (i.e., another group publishing the same idea/findings first). Intentional delay of review was also listed as one of Kumar's [ 23 ] five reasons for peer-review delay, lending some weight to this concern. Although not the focus of this study, we found that the association between review time and the potential for being "scooped" worries a number of authors and should be acknowledged, as the topic was raised relatively frequently when respondents were given the opportunity to comment freely (open responses). For example:

If people play the game well and get their "friends" to review their papers. I am sure in many cases that speeds up the process more so when people cite their friends (the reviewers) in these papers.
If a person has an "in" with the journal. In other words, subjectivity and preferential treatment increase speed.

Several respondents (<8%) urged that if a manuscript is to be rejected, journals should do so in a timely manner so the researcher can resubmit to another journal sooner. Others voiced concerns that delaying a manuscript could hinder subsequent work built on the manuscript in review, and some mentioned difficulty remembering specifics of the study or the content of the manuscript when review times are particularly long.

Consequences for authors' morale.

Respondents also revealed that lengthy peer reviews can affect motivation and cause conflict as well as frustration (8% of responses):

The frustration associated with a lengthy process discourages the writer. Incentives for conducting research are diminished when rewards are not forthcoming. Less incentive means less motivation which both translate into less productivity. Less productivity means less likelihood for promotions. This in turn sets up a vicious cycle very similar to the one related to applying unsuccessfully for grants.
A long peer review process reduces drastically your efficiency of publishing papers, because you need to go back to your previous work and you cannot focus on your current work. Sometimes you need to spend quite a bit of time figuring out how to answer reviewer's concerns because it was too long ago that you submitted your manuscript.
It is very frustrating, and sometimes embarrassing, to have papers endlessly "in review". I had a paper where the subject editor sat on the paper for 5 months without sending it for review; after 3 contacts they finally sent it for review and it has been another month and we have not heard back. This was a key paper needed to build a grant proposal, and my collaborators consistently asked if it was published yet—the grant was ultimately submitted before the paper was accepted.

These consequences are not often discussed, yet they are often interlinked with the consequences for a researcher's career and aspirations. Most of the time, long review durations may not have dramatic consequences; however, a lengthy review that comes at the wrong place and time may lead to a cascade of consequences.

Alternative responses to consequences of review times.

A number of respondents (<10%) provided alternative responses worth mentioning, such as (but not limited to): consequences for research quality because of the race to publish; competition among colleagues; greater opportunity costs when taking the time to submit a "quality" manuscript; and limiting the peer-review process to academic research, because researchers in other sectors are not rewarded based on publication counts and productivity:

Research quality suffers—as opportunities to publish high quality research can be lost when other groups publish (often lower quality) research first. The focus then becomes speed and simplicity of research rather than quality.
Because of career pressure, especially for younger scientists, or the need to complete a degree program, choices are often made (I witness them here) to submit smaller, simpler studies to journals with a quick turnaround, or with a presumed higher acceptance rate for a particular work, rather than invest more time in extending analysis and/or facing rejection or extensive revisions.

Should the review process be altered?

When respondents were asked whether the review process should be altered to change the review time, 61% (of 463) responded yes, 12% responded no, and the remainder were neutral. Of 462 respondents, 43% believed the review process should be improved, while only 8% said no. When asked how the review process should be improved, 211 participants provided open-ended responses (summarized in Table 3 ).

Table 3. https://doi.org/10.1371/journal.pone.0132557.t003

Referee reward system.

About one quarter of the suggestions for improvement were to pay reviewers/editors or to provide reviewer incentives, consequences, or rewards, such as: a free year's subscription to the journal; rewarding reviewers by adding value to their CVs (e.g., "20 best reviews" or "20 best reviewers" awards); "a 1 in 2 out policy… each paper you submit as a lead author means you have to review 2 for that journal before you can publish again in that journal"; discounts on the reviewer's own submissions or on items from the publishing house (e.g., books, open access discounts); and reward systems at home institutions for researchers who regularly review papers.

Editors should remove slow reviewers from their lists. There should be a central bank where good reviewers receive benefits such as fast track review of their material if submitted to the same company (e.g. Wiley, Elsevier, etc.). A reduction in publication costs for good reviewers (not just time but quality of revision).
Engagement for reviewing should be better acknowledged as a performance indicator; some exemplary review processes should be made public so that authors and reviewers can learn from them. Reviewers should be able to see the other reviewer's comments after the editor's decision.
For instance, the journal Molecular Ecology is publishing the list of the best reviewers every year based on the quality and speed of the review. This is one example of a reward that the reviewers can put in their CV to show their importance in the field.

Our findings suggest a substantial call for reviewer incentives and reward systems. It is challenging to get accurate data on the cost of peer review and, in economic terms, the 'opportunity cost' to reviewers. The editor of BMJ, Richard Smith [ 24 ], estimated the average total cost of peer review per paper at approximately £100 for BMJ (keeping in mind that 60% of submissions are rejected without external review), whereas the cost for papers that made it to review was closer to £1000, without considering opportunity costs (i.e., time spent editing and reviewing manuscripts that could be spent on other activities). A recent survey reported that two-thirds of academics agreed that $100–200 would motivate them to review, while one-third refused to accept monetary compensation [ 25 ]. Kumar [ 23 ] reports differing results from two recent studies: in one, 1,500 peer reviewers in the field of economics responded to both monetary and non-monetary incentives by expediting the return of reports [ 26 ], while in 2013 Squazzoni et al. [ 27 ] reported that financial incentives decreased the quality and efficiency of peer reviewers.

Reward systems and incentives for reviewers have been proposed in the literature [ 28 ], including penalties for those who decline reviews and non-monetary rewards for completed reviews, such as published lists of reviewers as a means of acknowledgment (e.g., Journal of Ecosystems and Management). However, some journals already use this system and there is still no indication of change in referee behavior [ 29 ]. One common incentive for peer review is a temporary subscription to the journal in question. It is perhaps not surprising that such an incentive might fail to change reviewer behavior, since many reviewers belong to institutions that already subscribe to a host of journals.

It may just be a matter of time before being a "top reviewer", or time spent on reviews, becomes "prestigious" and valued in more tangible ways (whereas the current system values the number of publications). Peerage of Science is a novel approach to externalized peer review, through which manuscripts are submitted for peer review by members of a community of researchers in a transparent and non-blinded way, after which journals can be contacted with an amended draft [ 30 ]. This system incentivizes peer reviewers by providing metrics and ratings of their reviewing activities that members can use to demonstrate their contributions.

Deadlines and defined policies.

Approximately one third of responses (N = 211) suggested that stricter deadlines and policies, shorter allocated times to review a manuscript, and procedures ensuring adherence to those deadlines should be established to improve review duration:

Current review process should follow the model of the PLOS (online journals). Reviewers are constrained to address specific scientific elements: the question, the method, the results and the discussion that these are scientifically acceptable. This should encourage young researchers to publish without the need to include big names/popular personalities in research to have the paper through journal review.

Again, improvements in peer-review turnaround and quality are something that journal editors can control by setting out standards and policies that facilitate the peer-review process. A recent review of time management in manuscript peer review acknowledged several suggestions for improving the review process and its duration, but noted that it is the responsibility of editors, publishers, and academic sponsors of journals to implement these improvements [ 23 ].

Editorial persistence and journal management.

Related to more stringent deadlines and policies is the suggestion that editors should put more pressure on reviewers and follow up on deadlines (30 responses), while others suggested better journal management (13 responses):

Some journals restart the time counting during a revision process, for example, asking to re-submit as a new manuscript in order to reduce the revision time, instead of keeping track of the time during the whole revision process and to be more realistic about the time that a revision takes. I believe that is a way of cheating or deceiving the system.

As illustrated by the quote above, many journals ask authors to re-submit a revised manuscript as a "new submission" rather than a "resubmission", and send it to new referees instead of the previous ones, which increases the overall length of peer review. Fox and Petchey [ 29 ] suggested that if a manuscript is rejected from one journal, the reviews should be carried forward to the next journal the manuscript is submitted to. They argued that this helps with quality control and facilitates the review process by ensuring that authors revise their manuscripts appropriately, and that it reduces duplication of effort by referees. At present, at least one ecology journal allows authors of previously rejected manuscripts to provide the earlier reviews, and the publisher Wiley is trialing peer-review transfer across nine of its neuroscience journals [ 31 ]. A more formal system for sharing reviews has been suggested to increase the speed and quality of the peer-review system, and is now feasible given the pervasive use of electronic submission and review systems [ 29 ].

Peer review training.

Including graduate students or early career researchers as reviewers may increase the "supply" of reviewers to meet the increasing demand. Some may argue that graduate students lack the experience and knowledge to appropriately assess a manuscript; formal training has been suggested to improve the quality of reviews and to widen the network of reviewers. Furthermore, senior researchers recommending reliable and qualified graduate students or early career researchers as potential reviewers may help with the deficit [ 32 ]. Indeed, the British Ecological Society recommends that academic supervisors assign their own peer-review invitations to graduate students [ 33 ], although it is certainly sensible to verify that individual journal editors are happy with this practice.

Changes to the norms of the peer-review system.

A number of respondents (12%) wanted to see more drastic changes to the norms of publishing: for example, a permanent, paid group of reviewers; standardizing all journals; permitting submission of manuscripts to more than one journal; including more early career researchers as reviewers; following model journals that do it well (e.g., Geoscience, PLOS ONE); maintaining a database of reviewers; or using sub-reviewers with specific expertise (e.g., statistics, methods, taxa, tools).

A "PubCreds" currency has been proposed as a system in which reviewers "pay" for their submissions using PubCreds earned by performing reviews [ 29 ]. Although a radical idea, Fox and Petchey [ 29 ] state that "doing nothing will lead to a system in which external review becomes a thing of the past, decision-making by journals is correspondingly stochastic, and the most selfish among us are the most rewarded". Furthermore, Smith [ 24 ] suggested adopting a "quick and light" form of peer review, with the aim of opening the peer-review system to the broader world to critique a paper or even rank it in the way that Amazon and other retailers ask users to rank their products. Alternatively, some journals (e.g., Biogeosciences) employ a two-stage peer review, whereby articles are published in a discussions format open to public review prior to final publication of an amended version. Other journals (e.g., PLOS ONE) and platforms ( www.PubPeer.com ) offer the opportunity for continued review following publication. The argument for a radical change in the norms is not uncommon, and change may be required in a peer-review system that will soon be in crisis [ 29 ]; however, suggestions that increase the labour required of editors and referees, such as submitting to more than one journal concurrently, may exacerbate the already stressed system.

Role of open access and journal prestige on review duration

The majority of respondents do not review a manuscript more quickly for higher-tier journals (71% of 445 respondents). When asked whether journal prestige justifies differences in turnaround time, 50% of 369 respondents did not believe that publishing in a top-tier journal justifies a rapid or delayed review, while 37% believed it does (the remainder had no opinion). Of those who believed publishing in a top-tier journal justifies a longer or shorter review, 64% believed it justifies rapid reviews, 14% believed it justifies a delayed review, and 20% believed it justifies both (<5% believed neither). On the other hand, a higher proportion of respondents (75% of 367) believed that publishing in a low-tier journal does not justify a rapid or delayed review. Overall, journal prestige and impact factor seem to be important indicators for many authors, although a journal's ability to turn around peer review in a timelier manner may reflect its perceived prestige and the higher-quality manuscripts that make it through primary editorial screening. One respondent noted:

There is likely a link between review duration and impact factors, as impact factors are based on citations during the first two years after publication. If those citing papers take longer to go through the review, they won't count towards the journal's impact factor.
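For context on this comment (our addition, not the respondent's): the standard two-year journal impact factor for year $y$ counts only citations made in year $y$ to items the journal published in the two preceding years:

$$\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}$$

where $C_{y}(t)$ is the number of citations made in year $y$ to items published in year $t$, and $N_{t}$ is the number of citable items published in year $t$. A citing paper whose own review drags on past that two-year window therefore contributes nothing to the cited journal's impact factor, which is the mechanism the respondent describes.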

We were also interested in participants' perspectives on the review process at open access (OA) journals, particularly because authors pay a fee to publish in them. About a third (32% of 461) agreed that OA journals should provide higher-quality "customer service", such as faster review and publication times, with a further 13% strongly agreeing. Another third (31%) were neutral about this statement, whereas 16% disagreed and 7% strongly disagreed. This finding is interesting because it provides insight into authors' expectations of OA journals: authors expect more from OA journals even though peer-review standards should be disconnected from cost and from who pays. This is most likely the result of a shift in the customer base. In subscription-based publishing the customer is the librarian, and product quality is assessed primarily through metrics such as the impact factor. In OA publishing the customer becomes the submitting researcher, and quality is assessed through publishing service and, perhaps incorrectly, standards of editorial review. It has yet to be shown that publishers will see substantial increases in profits following a switch to OA, and if profit margins are not significantly increased then expectations of improved service may be unwarranted.

Although open access journals were not the primary focus of our study, we believe this is an increasingly relevant topic: there are ongoing debates about the quality of OA journals, yet open access may be viewed as mandatory, particularly where research is funded with public money. Future research on perspectives and the perceived value of OA journals within the conservation science community should be considered.

Conclusions

Our findings show that the peer-review process within conservation biology is perceived by authors to be slow, with typical turnaround times (14 weeks) over double what authors perceive as "optimal" (6 weeks). In particular, males seem to expect shorter review times than females, whose expectations were more closely related to the typical review times they had actually experienced. Similarly, older participants (>40 years) held expectations of review times more closely aligned with their experience, while younger authors considered a short review time to be <10 weeks regardless of their experiences. Overall, the primary reason participants gave for the lengthy peer-review process is "stress" on the peer-review system, mainly reviewer and editor fatigue, while editor persistence and journal prestige/impact factor were believed to speed up the review process. The institutional incentive for productivity has its fallacies: the demand created by increased publication strains the peer-review system, and the "publish or perish" environment can create strong demand for publication outlets and heightened expectations of quick turnaround times.

Early career researchers appear to be the most vulnerable to slow peer-review durations in a "publish or perish" system, as these relate to graduation, employment opportunities, and other career advancement. Closely related to impacts on careers are the consequences of lengthy peer review for an author's "morale" (i.e., motivation, frustration, conflict, embarrassment). Some respondents commented that lengthy review durations may result in a lack of motivation and forgotten details about the manuscript, leading to reduced productivity and potentially a lower-quality manuscript. A few respondents thought that competition among colleagues encourages publication of shorter and simpler studies in order to gain a quicker turnaround, rather than investing more time in complex and extensive analyses or revisions. These concerns have merit and may have implications for the quality of research and publications.

Although the objective of our research was not to assess the quality of the peer-review system, we believe all aspects of the process are interlinked; peer-review quality and speed are not independent and must be discussed together. The majority (61%) of respondents believe that the review process should be altered, offering suggestions such as a referee reward system, defined deadlines and policies, editorial persistence, better journal management, and changes to the norms of the peer-review process. Currently, researchers are rewarded based on productivity, which may result in a system breakdown by increasing demand on a short supply of reviewers and subsequently degrading the quality of publications in the race to publish [ 32 ]. We suggest a partial shift in institutional rewards and incentives from researcher productivity toward greater outreach efforts and public interactions/activities, as there is evidence that conservation goals may be more effectively achieved by engaging the public. Implementing a system that rewards these actions in conjunction with productivity may alleviate pressure on the peer-review system overall and increase conservation successes. Training in peer review could improve the quality of reviews and increase the pool of reviewers by including early career scientists and graduate students. Generally, there is a call from a number of authors to revise and review our own peer-review system to ensure its persistence and quality control.

Open access and opening up the peer-review process are at the forefront of publishing innovation. For example, PeerJ ( www.peerj.com ) offers a novel approach that combines open access with a pre-print system, enabling articles to be made available online more rapidly than in traditional scholarly publishing. ScienceOpen ( www.scienceopen.com ) immediately publishes manuscripts open access and supports continuous open review in a transparent post-publication peer-review process. Such approaches will require time to prove their value to the scientific community, but as scholarly publishing continues to evolve rapidly, experimental approaches to enhancing the communication of peer-reviewed research are warranted. We encourage other scientists and publishers to build on these approaches and continue to push the envelope in scholarly publishing.

Peer-reviewed journals will continue to be the primary means by which we vet scientific research and communicate novel discoveries to fellow scientists and the community at large but, as shown here, there is much room for improvement. We have provided one of the first evaluations of an important component of the publishing machine, and our results indicate a desire among researchers to streamline the peer-review process. While our sample may not be generalizable to the entire global community of researchers in conservation biology, we believe the opinions, perceptions, and information presented here form an important collective voice that should be discussed more broadly. While the technology is in place to accelerate peer review, the process itself still lags behind the needs of researchers, managers, policy-makers, and the public, particularly for time-sensitive research areas such as conservation biology. Moving forward, we should encourage experimental and innovative approaches to enhance and expedite the peer-review process.

Supporting Information

S1 File. Complete list of survey questions.

https://doi.org/10.1371/journal.pone.0132557.s001

S2 File. GLM data analysis supplement for Models 1–3.

https://doi.org/10.1371/journal.pone.0132557.s002

S3 File. Raw questionnaire data.

https://doi.org/10.1371/journal.pone.0132557.s003

Acknowledgments

We thank all of the study participants who took the time to share their perspectives. Funding was provided by the Canada Research Chairs Program and the Natural Sciences and Engineering Research Council of Canada.

Author Contributions

Conceived and designed the experiments: SJC NH AJG MRD NRH ADMW VMN. Performed the experiments: VMN LFGG NRH. Analyzed the data: VMN LFGG. Contributed reagents/materials/analysis tools: VMN LFGG SJC. Wrote the paper: VMN NRH LFGG ADMW AJG MRD NH SJC.

References
  • 10. Harnad S (1996) Implementing peer review on the Net: scientific quality control in scholarly electronic journals. In: Peek R, Newby G, editors. Scholarly publishing: the electronic frontier. Cambridge, MA: MIT Press.
  • 19. Strauss AL (1998) Basics of qualitative research: techniques and procedures for developing grounded theory. 2nd ed. Thousand Oaks, CA: SAGE Publications.
  • 20. Chambers JM (1992) Linear models. Chapter 4. In: Chambers JM, Hastie TJ, editors. Statistical models in S. Wadsworth & Brooks/Cole.
  • 21. Zuur AF, Ieno EN, Walker N, Saveliev AA, Smith GM (2009) Mixed effects models and extensions in ecology with R. New York: Springer.
  • 25. Davis P (2013) Rewarding reviewers: money, prestige, or some of both? Society for Scholarly Publishing [Internet; updated 2013 Feb 22; cited 2015 Feb 27]. Available: http://scholarlykitchen.sspnet.org/2013/02/22/rewarding-reviewers-money-prestige-or-some-of-both/
  • 31. Wiley Online Library [Internet]. Transferable Peer Review Pilot [cited 2015 Feb 27]. Available: http://olabout.wiley.com/WileyCDA/Section/id-819213.html
  • 33. British Ecological Society [Internet]. A guide to peer review in ecology and evolution [cited 2015 Feb 27]. Available: http://www.britishecologicalsociety.org/wp-

How Long Should Authors Wait for a Journal's Response? (and When to Reach Out)


Researchers should wait 6-8 weeks before contacting a journal editor to inquire about the status of a submitted paper, according to advice from American Journal Experts. The initial submission process, including ethical checks and finding suitable peer reviewers, can take up to three weeks, and the rate at which invited peer reviewers decline can be as high as 70%, leading to potential publishing delays.

Updated on May 4, 2023


FAQ from a researcher : How long should I wait after journal article submission before writing to the editor for an update?

Researchers ask our AJE team questions all the time. The question we are answering today came in from an author via one of our Research Communication Partners (RCPs).

Wait! What's an RCP?

An AJE Research Communication Partner (RCP) is an expert who works with our authors who have bought Premium Editing packages. RCPs are standing by to answer any and all of your research questions.

Back to the question. Just the other day, an author contacted one of our RCPs to ask: “I’ve submitted an article to a journal. How long should I wait before contacting the journal editor to ask about the status of my submitted paper? I've been waiting for a couple of weeks without hearing anything. The status of the paper on the journal management system is still ‘submitted to the journal.’”

Don’t wait! Communicate!

In response to questions like this – about articles submitted to journals – AJE always likes to say "Don't wait! Communicate!" In this case, our advice to the author, passed along via the RCP, was to wait 6-8 weeks to hear back from the journal (based on the average time to first response stated on the journal's website) and then to write a politely worded email to the editor requesting more information and suggesting some additional names and emails of suitable peer reviewers.

Some helpful background on submitting to a journal

In this case, the question we received was about how long to wait before asking about the status of a submitted paper. On average, the length of time it takes an editor to process a paper submitted to their journal and send it out for peer review is 2-3 weeks.

The editor (or editorial office) has to check the submission to make sure it complies with all ethical guidelines (e.g., declarations, data protection, local and national ethical board approvals) as well as the journal’s aims and scope and internal checklists. Once the submission passes these initial checks (so-called ‘Editorial Triage’) the paper is then assigned to two or more suitable peer reviewers. This also takes time.

Up to 70% of requests for peer review sent out by journals are rejected by researchers because they have limited time or feel the article they are being asked to assess is too far from their expertise. Editors often have to ask a number of potential peer reviewers before two or more accept the job. Then, it takes more time for comments to come back, and – even then – these might be conflicting and require additional reviewers to be consulted before the editor is able to reach a balanced opinion.
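A rough back-of-envelope illustration (our arithmetic, treating the decline rate above as an assumption): if each invitation is accepted independently with probability $p \approx 0.3$, the expected number of invitations needed to secure $r = 2$ reviewers is

$$E[\text{invitations}] = \frac{r}{p} = \frac{2}{0.3} \approx 7$$

Since each invitee may take days to respond, and editors often invite sequentially rather than all at once, this stage alone can plausibly consume several weeks.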

In short, it takes time. Usually, it takes up to 6-8 weeks before you can expect to hear anything back from your journal editor.

How to draft an email to check on your article submission progress

It's important to be polite and courteous at all times when writing an email to a publisher. Clearly ask for information, but also provide the editor with the title, the authors, and the manuscript number to help them find your paper in their system.

It is also helpful to provide suggestions for additional peer reviewers. It's quite likely your paper has become stalled in the journal's system because of a lack of peer reviewers, so including some peer reviewer suggestions in your enquiry email will go a long way toward clearing the lag.

As an author, you also have the chance to make some suggestions for peer reviewers in your cover letter when submitting your paper for the first time. Who should these peer reviewers be? Other colleagues in your field who would be suitable to give comments about your article. 

What you can write

You can write something along the lines of: 

“Dear Editor,

I'm writing to enquire about the status of my article submitted on [date], manuscript number [X, Y and Z]. The authors and title of our paper are as follows: [authors and title].

It has now been [insert the amount of time that has passed] since submission, and I would appreciate any update you can provide on our manuscript's status.

Here are some suggestions for additional potential colleagues who would be in a position to provide suitable peer review.

Thank you for your time. 

Sincerely, 

[Your name].”

Final thoughts

This is the way to do it. Wait a while. Then, communicate with the journal editor after an appropriate amount of time has passed.

Don't wait, communicate. If you have questions about article writing, article publishing or managing the journal publication process, get in touch with us at AJE.

Because academic writing is no-one’s first language.

The AJE Team



How to Review a Journal Article


For many kinds of assignments, like a  literature review , you may be asked to offer a critique or review of a journal article. This is an opportunity for you as a scholar to offer your  qualified opinion  and  evaluation  of how another scholar has composed their article, argument, and research. That means you will be expected to go beyond a simple  summary  of the article and evaluate it on a deeper level. As a college student, this might sound intimidating. However, as you engage with the research process, you are becoming immersed in a particular topic, and your insights about the way that topic is presented are valuable and can contribute to the overall conversation surrounding your topic.

IMPORTANT NOTE!!

Some disciplines, like Criminal Justice, may only want you to summarize the article without including your opinion or evaluation. If your assignment is to summarize the article only, please see our literature review handout.

Before getting started on the critique, it is important to review the article thoroughly and critically. To do this, we recommend taking notes, annotating, and reading the article several times before critiquing. As you read, be sure to note important items like the thesis, purpose, research questions, hypotheses, methods, evidence, key findings, major conclusions, tone, and publication information. Depending on your writing context, some of these items may not be applicable.

Questions to Consider

To evaluate a source, consider some of the following questions. They are broken down into different categories, and answering them will help you decide which areas to examine. For each category, we recommend identifying strengths and weaknesses, since that is a critical part of evaluation.

Evaluating Purpose and Argument

  • How well is the purpose made clear in the introduction through background/context and thesis?
  • How well does the abstract represent and summarize the article’s major points and argument?
  • How well does the objective of the experiment or of the observation fill a need for the field?
  • How well is the argument/purpose articulated and discussed throughout the body of the text?
  • How well does the discussion maintain cohesion?

Evaluating the Presentation/Organization of Information

  • How appropriate and clear is the title of the article?
  • Where could the author have benefited from expanding, condensing, or omitting ideas?
  • How clear are the author’s statements? Challenge ambiguous statements.
  • What underlying assumptions does the author have, and how does this affect the credibility or clarity of their article?
  • How objective is the author in his or her discussion of the topic?
  • How well does the organization fit the article’s purpose and articulate key goals?

Evaluating Methods

  • How appropriate are the study design and methods for the purposes of the study?
  • How detailed are the methods being described? Is the author leaving out important steps or considerations?
  • Have the procedures been presented in enough detail to enable the reader to duplicate them?

Evaluating Data

  • Scan and spot-check calculations. Are the statistical methods appropriate?
  • Do you find any content repeated or duplicated?
  • How many errors of fact and interpretation does the author include? (You can check on this by looking up the references the author cites).
  • What pertinent literature has the author cited, and have they used this literature appropriately?

Below is an example of a summary and an evaluation of a research article. Note that in most literature review contexts, the summary and evaluation would be much shorter. This extended example shows the different ways a student can critique and write about an article.

Chik, A. (2012). Digital gameplay for autonomous foreign language learning: Gamers’ and language teachers’ perspectives. In H. Reinders (ed.),  Digital games in language learning and teaching  (pp. 95-114). Eastbourne, UK: Palgrave Macmillan.

Be sure to include the full citation either in a reference page or near your evaluation if writing an  annotated bibliography .

In her article "Digital Gameplay for Autonomous Foreign Language Learning: Gamers' and Teachers' Perspectives", Chik explores the ways in which "digital gamers manage gaming and gaming-related activities to assume autonomy in their foreign language learning" (96), which is presented in contrast to how teachers view the "pedagogical potential" of gaming. The research was described as an "umbrella project" consisting of two parts. The first part examined the perspectives of 34 language teachers who had limited experience with gaming (only five stated they played games regularly) (99). Their data were recorded through a survey, class discussion, and a seven-day gaming trial completed by six teachers who recorded their reflections in personal blog posts. The second part explored the gaming habits of ten Hong Kong undergraduates who were regular gamers. Their habits were recorded through language learning histories, videotaped gaming sessions, blog entries on gaming practices, group discussion sessions, stimulated recall sessions on gaming videos, interviews with other gamers, and posts from online discussion forums. The research shows that while students recognize the educational potential of games and have seen its benefits in their lives, the instructors overall do not see the positive impacts of gaming on foreign language learning.

The summary includes the article’s purpose, methods, results, discussion, and citations when necessary.

This article did a good job representing the undergraduate gamers’ voices through extended quotes and stories. Particularly for the data collection of the undergraduate gamers, there were many opportunities for an in-depth examination of their gaming practices and histories. However, the representation of the teachers in this study was very uneven when compared to the students. Not only were teachers labeled as numbers while the students picked out their own pseudonyms, but also when viewing the data collection, the undergraduate students were more closely examined in comparison to the teachers in the study. While the students have fifteen extended quotes describing their experiences in their research section, the teachers only have two of these instances in their section, which shows just how imbalanced the study is when presenting instructor voices.

Some research methods, like the recorded gaming sessions, were only used with students whereas teachers were only asked to blog about their gaming experiences. This creates a richer narrative for the students while also failing to give instructors the chance to have more nuanced perspectives. This lack of nuance also stems from the emphasis of the non-gamer teachers over the gamer teachers. The non-gamer teachers’ perspectives provide a stark contrast to the undergraduate gamer experiences and fits neatly with the narrative of teachers not valuing gaming as an educational tool. However, the study mentioned five teachers that were regular gamers whose perspectives are left to a short section at the end of the presentation of the teachers’ results. This was an opportunity to give the teacher group a more complex story, and the opportunity was entirely missed.

Additionally, the context of this study was not entirely clear. The instructors were recruited through a master’s level course, but the content of the course and the institution’s background is not discussed. Understanding this context helps us understand the course’s purpose(s) and how those purposes may have influenced the ways in which these teachers interpreted and saw games. It was also unclear how Chik was connected to this masters’ class and to the students. Why these particular teachers and students were recruited was not explicitly defined and also has the potential to skew results in a particular direction.

Overall, I was inclined to agree with the idea that students can benefit from language acquisition through gaming while instructors may not see the instructional value, but I believe the way the research was conducted and portrayed in this article made it very difficult to support Chik’s specific findings.

Some professors like you to begin an evaluation with something positive, but this isn't always necessary.

The evaluation is clearly organized and uses transitional phrases when moving to a new topic.

This evaluation includes a summative statement that gives the overall impression of the article at the end, but this can also be placed at the beginning of the evaluation.

This evaluation mainly discusses the representation of data and methods. However, other areas, like organization, are open to critique.


How Long Is Too Long in Contemporary Peer Review? Perspectives from Authors Publishing in Conservation Biology Journals

Vivian M. Nguyen

1 Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, Ottawa, Ontario, Canada

Neal R. Haddaway

2 MISTRA EviEM, Royal Swedish Academy of Sciences, Stockholm, Sweden

Lee F. G. Gutowsky

Alexander D. M. Wilson, Austin J. Gallagher

3 Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, United States of America

4 Beneath the Waves, Inc., Syracuse, NY, United States of America

Michael R. Donaldson

Neil Hammerschlag, Steven J. Cooke

5 Institute of Environmental Science, Carleton University, Ottawa, Ontario, Canada


Associated Data

Data are available in the paper and supporting information files.

Abstract

Delays in peer-reviewed publication may have consequences both for the assessment of scientific prowess in academia and for the communication of important information to the knowledge receptor community. We present an analysis of the perspectives of authors publishing in conservation biology journals regarding the importance of speed in peer review and how review times might be improved. Authors were invited to take part in an online questionnaire, and the data were subjected to both qualitative (open coding, categorizing) and quantitative (generalized linear models) analyses. We received 637 responses to 6,547 e-mail invitations sent. Peer-review speed was generally perceived as slow, with authors experiencing a typical turnaround time of 14 weeks while their perceived optimal review time was six weeks. Male and younger respondents seem to have higher expectations of review speed than female and older respondents. The majority of participants attributed lengthy review times to reviewer and editor fatigue, while editor persistence and journal prestige were believed to speed up the review process. The negative consequences of lengthy review times were perceived to be greater for early career researchers and to affect author morale (e.g., motivation or frustration). Competition among colleagues was also of concern to respondents. Incentivizing peer review was among the top suggested alterations to the system, along with training graduate students in peer review, increased editorial persistence, and changes to the norms of peer review such as opening the process to the public. It is clear that the authors surveyed in this study view the peer-review system as under stress, and we encourage scientists and publishers to push the envelope for new peer-review models.

Introduction

Peer-reviewed publications remain the cornerstone of the scientific world [ 1 , 2 ] despite the fact that the review process is not infallible [ 3 , 4 ]. Such publications are an essential means of disseminating scientific information through credible and accessible channels. Moreover, academic institutions evaluate scientists based on the quantity and quality of their research via publication output. Given the importance of peer review to the dissemination of information and to researchers themselves, it is of little surprise that the process of scientific publishing has itself become a subject of discussion. For example, researchers have explored the many biases associated with contemporary peer review (e.g., gender [ 5 ], nationality/language [ 6 ], and the presence of a "known" name and academic age [ 7 ]), with the goal of improving the objectivity, fairness, and rigor of the review process [ 8 ]. What has received less attention is the duration of peer review. Given the significance of peer-reviewed publications for science and evidence-based conservation [ 9 ], efforts to improve the peer-review system are warranted to ensure that delays in publication do not significantly impede the transition of scientific evidence into policy.

Despite the switch from surface mail to online communication and article submission [ 10 , 11 ], review processes may still stretch into months or even years. Such extreme delays have consequences both for the assessment of scientific prowess (e.g., tenure, employment, promotion) in academia and for the communication of important information about threatened habitats or species. Rapid turnaround is presumably desirable for authors [ 12 ], particularly early career researchers [ 13 ], but it also puts "stress" on the peer-review system. Although review time is certainly discussed informally, very little is known about what authors themselves think of the speed of peer review and how it could be improved. For example, what is an acceptable timeline for a review? How long should authors wait before contacting editors about the progress of a review? What do authors perceive as trade-offs between quality and speed of a review? What strategies can an author use to try to elicit a more rapid review process? What underlying factors influence variation in review time? Do author demographics shape perspectives on review time? Finally, what does a "long" review mean for career development, scientific progress, and the future behavior of authors when selecting potential publishing outlets? These questions might seem obvious given our publishing roles and requirements as active researchers, but they have yet to be addressed formally in the scientific literature.

Here, we present an analysis of perspectives on the speed and importance of review times among a subset of authors of papers within the realm of "conservation biology." Conservation biology is a field with particular urgency for evidence to inform decisions [ 14 ], but its peer-review system has not received as much attention as those of other urgent fields such as the health and medical sciences [ 15 , 16 ]. We discuss the findings as they relate to peer-review duration and present author perspectives on how to improve the speed of peer review.

Data Collection and Sampling

We extracted the e-mail addresses of authors who published in the field of "conservation biology" from citation records within the Web of Science online database. A search was undertaken on 9 April, 2014 using Web of Science [consisting of Web of Science Core Collections, Biosis Previews (subscription up to 2008), MEDLINE, SciELO and Zoological Record]. We used the following search string, and limited the search to 2013 (to ensure all authors were still active): "conservation AND *diversity". Search results were refined to include entries for the following Web of Science subject categories alone: environmental sciences ecology, biodiversity conservation, zoology, plant sciences, marine freshwater biology, agriculture, forestry, entomology, fisheries. A total of 6,142 results were obtained, from which 4,606 individual e-mail addresses were extracted. E-mails were sent to this mailing list inviting authors to participate in an anonymous online questionnaire hosted on Fluid Surveys; 312 of these addresses were inactive. Individuals whose e-mails bounced back indicating a change of address were sent an invitation at the new address. We sent an additional invitation on 22 May, 2014 using a mailing list produced from a further extraction of 2,679 e-mail addresses obtained from another search using the above string and subject categories but restricted to 2012; 426 of these addresses were non-functional or no longer active. Reminders were sent to all e-mail addresses between 18–20 June, 2014, and access to the online questionnaire was closed on 3 July, 2014.
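For illustration, a minimal sketch of the address-extraction step is shown below. This is our reconstruction, not the authors' script: the file name is a placeholder, and the "EM" field tag follows the Web of Science plain-text export convention for author e-mail addresses (an assumption about the export format used).

```python
# Hypothetical sketch: collect unique author e-mail addresses from a
# Web of Science plain-text export. "wos_export.txt" is a placeholder;
# "EM" is assumed to be the e-mail field tag of each exported record.
import re

emails = set()
with open("wos_export.txt", encoding="utf-8") as fh:
    for line in fh:
        if line.startswith("EM "):  # e-mail field (semicolon-separated)
            for candidate in re.split(r"[;,\s]+", line[3:]):
                candidate = candidate.strip().lower()
                if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", candidate):
                    emails.add(candidate)

print(f"{len(emails)} unique e-mail addresses extracted")
```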

Survey Instrument

The entire questionnaire comprised 38 open- and closed-ended questions, of which a subset relevant to review times was used for this study. We asked respondents to focus on their experiences in the last five years, given the major phase shift in review protocols in earlier years associated with the move to electronic communication [ 17 , 18 ]. We did, however, anticipate observing different responses between those who were active in publishing in the pre-electronic era and those who have only published since electronic submission and review became standard practice. While it is not possible to decouple author age/career stage as a potential response driver in the questionnaire [ 13 ], we nonetheless explored the association between time since first peer-reviewed publication and author responses. The questionnaire began with questions that assessed the participants' opinions on various "review metrics" (e.g., opinions of slow vs. rapid review durations, optimal review duration; see supporting information for full survey questions [ S1 File ]). This section was followed by questions on the respondent's experience and expectations as an author, and their potential behaviour in response to lengthy review times. Additionally, we assessed participants' perspectives on factors that ultimately influence review speed using open-ended and Likert-type questions. We then asked whether the peer-review system should be altered and, if so, how. Lastly, we recorded respondent characteristics such as socio-demographic information, publishing experience and frequency, and other experiences with the peer-review system (e.g. referee experience). It is important to note that self-reporting and recall bias could introduce inaccuracies in perceptions of time and events, as someone may perceive a length of time to be quicker or slower than it was in reality. All but the author characteristic questions in the survey were optional, and the number of responses (the sample size, n) therefore varies at or below the total number of respondents. The questionnaire was pre-tested with five authors, and protocols were approved by the Carleton University Research Ethics Board (100958).

Data Analysis

For open-ended responses, we categorized the data by common themes that emerged among responses (i.e. open coding; [ 19 ]) using QSR NVivo 10. We use narrative-style quotes from the responses throughout the paper to illustrate the details and properties of each category or theme. We quantified certain responses using frequency counts of the coded themes, providing the proportion of respondents who agreed with an idea/theme or the number of responses corresponding to a theme. For clarity and conciseness, we report the majority of responses as percentages and omit the remainder when they express no opinion or neutrality (e.g., when a respondent chooses "neither").
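As a simple illustration of the quantification step, the sketch below (ours, not the authors' NVivo workflow) counts how often each coded theme appears across a set of responses; the theme labels and input structure are hypothetical.

```python
# Hypothetical sketch: frequency counts of coded themes across open-ended
# responses. In the study this coding was done in QSR NVivo 10; here each
# response is simply represented by the set of themes it was coded to.
from collections import Counter

coded_responses = [
    {"reviewer fatigue", "editor fatigue"},
    {"reviewer fatigue"},
    {"journal prestige"},
]

counts = Counter(theme for themes in coded_responses for theme in themes)
n = len(coded_responses)
for theme, k in counts.most_common():
    print(f"{theme}: {k} of {n} responses ({100 * k / n:.0f}%)")
```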

Generalized linear models were used to identify how demographic information (e.g., gender), career status (e.g., number of publications), and experience of review times (number of weeks for a "typical" (TYPICAL), "short" (SHORT), or "long" (LONG) review period) explained respondents' expectations (i.e., opinions) of the length of time that constitutes an optimal (Model 1), short (Model 2), and long (Model 3) review time. Response variables (modelled as number of weeks) were assumed to follow a Poisson or negative binomial distribution (the latter when residuals were overdispersed). The best model for respondent opinion was selected using backwards model selection [ 20 , 21 ]. Details of the statistical methods and results are found in the supporting information [ S2 File ].
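To make the modelling strategy concrete, here is a minimal sketch in Python's statsmodels of how one such model (Model 1) could be fitted and reduced. This is not the authors' code: the column names (optimal_weeks, typical_weeks, gender, age_class) and the input file are hypothetical placeholders, and the overdispersion threshold is an illustrative rule of thumb.

```python
# Sketch of the described GLM workflow: fit a Poisson model, switch to a
# negative binomial family if residuals are overdispersed, then compare
# nested models with a likelihood-ratio test (the "L-Ratio Tests" reported
# in the Results). All column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # placeholder input file

family = sm.families.Poisson()
formula_full = "optimal_weeks ~ typical_weeks * gender + age_class"
full = smf.glm(formula_full, data=df, family=family).fit()

# Rough overdispersion check: Pearson chi-squared per residual df >> 1.
if full.pearson_chi2 / full.df_resid > 1.5:
    family = sm.families.NegativeBinomial()
    full = smf.glm(formula_full, data=df, family=family).fit()

# One step of backwards selection: drop the TYPICAL*Gender interaction.
reduced = smf.glm("optimal_weeks ~ typical_weeks + gender + age_class",
                  data=df, family=family).fit()
lr = 2 * (full.llf - reduced.llf)
p = stats.chi2.sf(lr, df=full.df_model - reduced.df_model)
print(f"L-Ratio = {lr:.1f}, P = {p:.3f}")
```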

Results and Discussion

Response rate and overall respondent characteristics

We received 637 responses from all the invited participants (N = 6,547), of which 461 completed the questionnaire to the end, with the possibility of skipping some questions (see S3 File for raw data). The remaining participants partially completed the questionnaire, so the number of responses varies by question. While we recognize that the response rate is low and the potential for sampling bias exists, we do not attempt to generalize the perspectives reported here to the entire population of authors in the field of conservation biology, but rather provide insights on the issue. It is also important to recognize that respondents who chose to participate in our questionnaire are perhaps more likely to be those who are proactive in voicing their opinions. Of all the respondents, 28% were female and 63% were male (9% left the question blank or preferred not to say). This may lead to a male-dominant perspective in our results. Most respondents were between 31–40 years old (38%), followed by 41–50 years (24%), 51–64 years (18%), and 21–30 years (11%); less than 5% of respondents were 65 years or older, and <1% (2 respondents) were under 21.

Overall, responses came from 119 countries, which we categorized by economic income following the World Bank (2014). The majority of respondents (N = 640) worked in high-income economies (78%), followed by upper-middle-income (17%), lower-middle-income (4%), and low-income economies (<2%). The top participating countries were the United States (17%), the United Kingdom (10%), Australia (8%), and Brazil (7%). Most respondents (N = 611) were from academia (77%, of which 15% were graduate students), followed by governmental or state agencies (11%), non-government or non-profit organizations (10%), and the private sector (2%), which includes consulting and non-academic research institutes among others. This characterization suggests that the author perspectives in this article are largely biased towards industrialized nations and academia, which reflects what we would expect of the research community.

Author publishing and referee experiences

A large proportion of participants published their first paper within the last decade (44% of 451 respondents first published in 2000–2009, and 19% in 2010 or later), indicating a bias toward authors who are potentially in mid-career. About half of the respondents had published >20 papers (with 21% of 623 respondents publishing >50), and only 10% had published <10. Half of the participants publish <3 papers per year, 35% publish 4–6, 10% publish 7–10, and only 3% publish >10 papers per year. Furthermore, nearly half of the respondents act as journal referees 1–5 times per year (48% of N = 450); 20% are highly active referees (reviewing manuscripts >10 times per year), 25% referee 6–10 times, and <10% review manuscripts only once a year. Overall, the majority of respondents have been publishing for at least 10 years, and at least half are highly experienced with the peer-review process as both authors and referees. As such, the perspectives gathered in our questionnaire come from experienced authors who are actively publishing and therefore familiar with the peer-review system.

Peer review duration: experiences and expectations

We asked participating authors about their experience with peer-review durations (i.e. the period between initial submission and first editorial decision following formal peer review); 368 respondents gave useable/complete answers. The shortest or quickest review time was reported to be on average (mean ± SD) 5.1 ± 6.0 weeks ( Table 1 ), while the opinion of a "fast" review period was on average 4.4 ± 2.9 weeks. The opinion of a "slow" review period was on average 14.4 ± 8.2 weeks, whereas the longest or slowest review time was reported to average 31.5 ± 23.8 weeks ( Table 1 ), nearly double what respondents perceive as slow. Furthermore, respondents reported that a "typical" turnaround time for a manuscript was on average 14.4 ± 6.0 weeks (range 2–52 weeks), and that the optimal review period was on average 6.4 ± 4.0 weeks (median 6 weeks). Suggested optimal review durations ranged from 1–20 weeks, with the majority falling at eight weeks or under (86% of 366 responses).

Table 1. Experienced and perceived peer-review durations (in weeks).

Category | Mean ± SD* | 25th percentile | Median | 75th percentile | Range
Shortest or quickest review time reported | 5.1 ± 6.0 | 3 | 4 | 6 | 1–88
Opinion of fast review time | 4.4 ± 2.9 | 3 | 4 | 4 | 1–26
Longest or slowest review time reported | 31.5 ± 23.8 | 16 | 24 | 40 | 1–200
Opinion of slow review time | 14.4 ± 8.2 | 8 | 12 | 16 | 1–100
Typical turnaround time reported | 14.4 ± 6.0 | 7 | 10 | 12 | 1–54
Opinion of optimal review time | 6.4 ± 4.0 | 4 | 6 | 8 | 1–52

*SD = Standard Deviation

The fact that respondents' opinions and actual experiences of short or long review durations are not aligned, and that their experienced review durations are lengthier (nearly double the "optimal" time), indicates an overall perception that the peer-review system is slow. The results reported here may provide indicators for conservation biology journals to gauge their performance on review time and improve author experience and satisfaction. In a broad review (over 4,000 respondents across disciplines), Mulligan et al. [ 22 ] noted that 43% of respondents felt that the time to first decision for their last article was slow or very slow. Mulligan et al. [ 22 ] also asked authors whether their last manuscript review (to first decision) took longer than 6 months; a mean of 31% said it had, with some differences among disciplines. For example, reviews in the physical sciences and chemistry rarely (15%) take longer than 6 months, while those in the humanities, social sciences and economics were more likely to do so (59%). Mulligan et al. [ 22 ] included a category called "agricultural and biological sciences", in which 29% of respondents indicated reviews took longer than 6 months and 45% reported 3 to 6 months. In general, these findings are consistent with the responses we obtained from a focused survey of scientists working in conservation biology.

Respondents did not perceive "fast" or "slow" reviews to influence review quality (75% of 547 useable responses), with the exception of 8% who believed that fast reviews have higher quality and another 8% who believed fast reviews have lower quality (10% had no opinion). Given the belief that review speed does not affect quality, faster review times should presumably benefit authors, journals, and the relevant field, although this has not been tested empirically. We discuss mechanisms to improve review times based on this information later in this article.

Who expects what in peer review duration?

A respondent's opinion of an optimal review time depended on a weak two-way interaction between respondent experience and gender (TYPICAL*Gender, L-Ratio Test = 5.9, df = 1, P = 0.015). According to both male and female respondents, the optimal length of a review should always be shorter than what they have experienced as "typical" ( Fig 1 ). Opinion on what constitutes a short review period (Model 2) depended on several weak two-way interactions: Age*Gender (L-Ratio Test = 10.6, df = 3, P = 0.01), SHORT*Gender (L-Ratio Test = 5.1, df = 1, P = 0.02), and SHORT*Age (L-Ratio Test = 11.5, df = 3, P = 0.01). For respondents over 41 years old, experience and opinion were more closely related than for younger respondents, who suggested a short review is ≤10 weeks regardless of experience ( Fig 1 ). Female experience and opinion were more closely matched than male, although this was most evident for respondents 41–50 years old ( Fig 2 ). Finally, opinion on a long review period (Model 3) depended on LONG (L-Ratio Test = 61.7, df = 1, P < 0.001) and Gender (L-Ratio Test = 6.0, df = 1, P = 0.01). Here, respondents always expected "long" review periods to be many weeks shorter than what they had experienced as a "long" review ( Fig 3 ). For example, although a female respondent might have experienced a long review of 60 weeks, she expects a long review to take just over 20 weeks (95% CI: 18.2–22.3). Based on researcher experience and generalizing across ages, those who identify as male appear to be the least satisfied with the speed of the peer-review process.

[Fig 1: image pone.0132557.g001.jpg from the original article]

Interactions with editors and journals

If a decision had not been made on a manuscript, participants (N = 479) waited on average 12.9 ± 7.5 weeks before first contacting the editor or journal about the status of the manuscript "in review". Of those who make first contact, most will make additional attempts (77% of 479 responses) if time passes without a response or decision. Only 9% of 479 respondents would never attempt to contact the editor or journal, suggesting that the author population in this study is quite proactive in voicing their concerns, keeping in mind that proactive authors are perhaps also more likely to take part in the questionnaire. Nevertheless, this finding is useful for editors wondering how long authors typically wait before contacting them. Approximately 12% of participants (N = 469) believed that contacting the editor or journal would jeopardize the decision for acceptance, 6% thought it would benefit the decision, while the majority did not believe there was any influence.

Only 14% of respondents (N = 480) had threatened to withdraw their manuscript from a journal, and 15% (of the 480) had actually withdrawn a submitted manuscript because the review process was unsatisfactorily long; where such action was deemed necessary, the review had lasted on average 30 ± 31 weeks (range 2–100 weeks, N = 72). This duration for a potential withdrawal is over double the average time that respondents perceive as slow, indicating that most authors had been quite patient with the peer-review process. Despite this apparent patience, respondents generally believed that long reviews should be shorter than what they have experienced ( Fig 2 ), indicating an overall perception that peer-review durations are too long within the realm of conservation biology.

The majority of participants (72% of 480 responses) did not believe that a long or a short review period signalled whether the manuscript was likely to be accepted or rejected. By contrast, 14% of respondents believed that a "short" review period would likely lead to rejection and only 6% believed it would likely lead to acceptance, leaving 8% without an opinion. In general, authors did not seem to believe there was any bias toward acceptance or rejection of their manuscript based on whether they contacted the editor or whether the review period was quick or long.

Factors influencing review time and accountability

Of the completed responses (N = 471), over half of the respondents (56%) held reviewers accountable for the review duration, while 33% held editors accountable, and 6% attributed delays to journal staff. The remainder (5%) believed it was a combination of all the players. Likert-type questions revealed that reviewer fatigue (e.g., lack of time) was ranked as the most influential factor in slowing review speed, followed by editor fatigue and, to a lesser degree, the length of the manuscript and the number of reviewers ( Table 2 ).

Table 2. Perceived influence of various factors on review speed.

Factor | Greatly slows review speed | Somewhat slows review speed | No impact | Somewhat speeds up review | Greatly speeds up review | Mode
Scientific significance for advancing the field of study (N = 461) | 1% | 10% | 46% | 34% | 9% | 3
Conservation implications of results (N = 208) | 1% | 5% | 74% | 17% | 3% | 3
Policy implications of results (N = 456) | 2% | 10% | 72% | 14% | 3% | 3
Potential public interest or potential for media attention (N = 458) | 1% | 4% | 53% | 33% | 10% | 3
Length of paper (N = 462) | 12% | 55% | 29% | 3% | 1% | 2
Journal prestige or impact factor (N = 459) | 4% | 12% | 27% | 42% | 16% | 4
Maximum 'allocated' review times for each journal (N = 454) | 10% | 21% | 25% | 34% | 10% | 4
Persistence of editorial team (N = 460) | 3% | 10% | 18% | 44% | 25% | 4
Number of reviewers (N = 464) | 22% | 58% | 13% | 7% | 2% | 2
Editor fatigue (lack of time, etc.) (N = 465) | 51% | 42% | 5% | 1% | 1% | 1
Reviewer fatigue (lack of time, etc.) (N = 467) | 71% | 26% | 1% | 1% | 1% | 1

One respondent expressed this reviewer fatigue as follows:

While editors try to find suitable reviewers, in practice there is a relatively small pool of reviewers who can be relied on to do useful reviews. I am an associate editor on 5 journals and am convinced that there is substantial reviewer fatigue out there, as the number of publications has grown annually, as have the number of journals.

This may correspond with the increased number of publications and publication outlets that contemporary scientists must contend with. Similarly, it was reported in 2007 that over 1,000 new papers appear daily in the scientific and medical literature alone, and this number is likely increasing rapidly [ 12 ]. Kumar [ 23 ] listed five reasons for publication delay, two of which were reviewer availability and reviewers having other commitments that push manuscript reviews to the bottom of their list. The other three reasons were: editors sending the manuscript for multiple rounds of review (when reviews are conflicting or inadequate); the journal outsourcing manuscript management (e.g., to a Business Process Outsourcing agency); and the reviewer intentionally delaying the publication of a manuscript for various reasons (e.g., rivalry or intentions to plagiarize).

On the other hand, respondents perceived the persistence of the editorial team as a factor that somewhat speeds up the review process, along with maximum allocated review times for each journal and the journal's prestige or impact factor ( Table 2 ):

I will always take the full amount of time they [editors] give me. Moreover, only once have I been asked to review a paper by an open access journal, which required my review submission in 2 weeks. But all the others were non-open access journals that gave me a month or more, which increased the average time to decision.

Consequences of long or short review durations

We asked participating authors about their perspectives on the consequences of long or short review durations. Our findings indicate a number of consequences, which we have grouped into themes below.

Consequences for the journals

After a long review period, most respondents (74% of 472 responses) said they would be less likely to submit to that journal again relative to other journals, although some (19%) said it would depend on the journal's impact factor or prestige. At the other end, as expected, if the review period was short, respondents (69% of N = 471) said they would be more likely to submit to that journal again, with some respondents (17%) again weighing journal impact factor or prestige, and 12% neither more nor less likely to submit to that journal again. We also found that review duration is an important factor when respondents (N = 470) consider which journal to submit their research to (43% said yes and 46% said sometimes), while <10% said they never consider review duration when submitting a manuscript. Review time is therefore an important consideration for journals in maintaining their reputation, as the majority of respondents have given thought to review times when deciding where to submit. There are, however, indications of trade-offs between review duration and impact factor, as approximately 1 in 5 respondents consider journal prestige and impact factor an influential part of deciding where to submit.

In general, respondents (N = 465) discuss the speed of review with their colleagues: 54% discuss it monthly, 30% once a year, 12% weekly, 1% daily, and 4% never. Interestingly, respondents (N = 466) were evenly split between those (49%) who have "blacklisted" a journal for its lengthy review times (i.e. chosen not to resubmit manuscripts to that journal in the future) and those who have not (48%). These findings send two messages to journal editors: 1) review time is an important factor for authors when considering publication outlets, and 2) review time is actively discussed by half of the respondents, which can hinder or endorse a particular journal's reputation. Publication of research can ultimately affect society at large if the manuscript has significant scientific and policy implications. Editors, journals, and publishers therefore have a responsibility to disseminate credible scientific information in a timely manner, and must play an active role by setting standards and facilitating the peer-review process [ 23 ].

Consequences on careers

Just over half of the respondents (55% of 466) felt that a lengthy peer-review process affects their career, while 30% did not. Open-ended responses suggested that lengthier peer-review durations generally have negative impacts on "early career researchers" and "young scientists" (mentioned in 65 of 212 responses) because of the "publish or perish" system, which affects opportunities for jobs and career advancement. One respondent wrote:

As an early career researcher trying to build a list of publications, it is important to have papers reviewed quickly. The longer the time lag between a research project and accepted publication, the more difficult it is to apply for new grants or job opportunities.

Furthermore, some respondents mentioned delays in students' graduation or acceptance to graduate school caused by lengthy peer-review processes:

I received the first response about my first article only after 54 weeks. At that time I was not able to start my PhD because the institution only accepted candidates with at least one accepted article.

Even after successful completion of my Ph.D. research topic, I was unable to submit my thesis because it's a rule that at the day of Ph.D. thesis submission, [one] must have a minimum of one peer reviewed publication.

The comments of these early-career respondents are perhaps reflected in the predictions from Model 2: despite the length of time they have experienced as a "short" review, respondents consistently expect review periods to be much shorter ( Fig 2 ). It seems that regardless of their experience, the review period cannot be short enough for early-career professionals who publish in conservation biology. In addition, irrespective of age, respondents believe a lengthy review period should be considerably shorter than what they have experienced ( Fig 3 ).

For respondents with tenure or later in their career, a slow review process can affect applications for grants/funding (approximately 28% of responses) and promotions (approximately 19% of responses):

Publications are important for ranking of scientists and institution achievements, so long reviews and a long editorial process could violate this process.

Furthermore, concerns were voiced about competition among research groups (5% of responses), subjective treatment, malpractice by certain reviewers and editors, conflicts of interest, and the potential for being "scooped" (i.e., another group publishing the same idea/findings first). Intentional delay of review was also listed as one of Kumar's [ 23 ] five reasons for peer-review delay, lending some weight to this concern. Although not the focus of this study, we found that the association between review time and the potential for being "scooped" worries a number of authors and should be acknowledged, as the topic was raised relatively frequently when respondents were given the opportunity to comment freely (open responses). For example:

If people play the game well and get their "friends" to review their papers. I am sure in many cases that speeds up the process, more so when people cite their friends (the reviewers) in these papers.

If a person has an "in" with the journal. In other words, subjectivity and preferential treatment increase speed.

Several respondents (<8%) urged that if a manuscript is to be rejected, journals should do so in a timely manner so that the researcher can resubmit to another journal sooner. Others voiced concerns that delaying a manuscript could hinder subsequent work built on the manuscript in review, and some mentioned the difficulty of remembering the specifics of a study or the content of a manuscript when review times are particularly long.

Consequences to authors’ morale

Respondents also revealed that lengthy peer reviews can affect motivation, causing conflict as well as frustration (8% of responses):

The frustration associated with a lengthy process discourages the writer. Incentives for conducting research are diminished when rewards are not forthcoming. Less incentive means less motivation, which both translate into less productivity. Less productivity means less likelihood for promotions. This in turn sets up a vicious cycle very similar to the one related to applying unsuccessfully for grants.

A long peer review process reduces drastically your efficiency of publishing papers, because you need to go back to your previous work and you cannot focus on your current work. Sometimes you need to spend quite a bit of time figuring out how to answer a reviewer's concerns because it was too long ago that you submitted your manuscript.

It is very frustrating, and sometimes embarrassing, to have papers endlessly "in review". I had a paper where the subject editor sat on the paper for 5 months without sending it for review; after 3 contacts they finally sent it for review and it has been another month and we have not heard back. This was a key paper needed to build a grant proposal, and my collaborators consistently asked if it was published yet—the grant was ultimately submitted before the paper was accepted.

These consequences are not often discussed, but they are interlinked with consequences for a researcher's career and aspirations. Most of the time, long review durations may not have dramatic consequences; however, a lengthy review that comes at the wrong time may set off a cascade of consequences.

Alternative responses to consequences of review times

A number of respondents (<10%) provided noteworthy alternative responses, such as (but not limited to): consequences for research quality because of the race to publish; competition among colleagues; the greater opportunity cost of taking the time to submit a "quality" manuscript; and the effective limitation of peer review to academic research, because researchers in other sectors are not rewarded for publication counts and productivity:

Research quality suffers—as opportunities to publish high quality research can be lost when other groups publish (often lower quality) research first. The focus then becomes speed and simplicity of research rather than quality.

Because of career pressure, especially for younger scientists, or the need to complete a degree program, choices are often made (I witness them here) to submit smaller, simpler studies to journals with a quick turnaround, or with a presumed higher acceptance rate for a particular work, rather than invest more time in extending analysis and/or facing rejection or extensive revisions.

Should the review process be altered?

When asked whether the review process should be altered to change the review time, 61% (of 463) responded yes, 12% responded no, and the remainder were neutral. Of 462 respondents, 43% believed that the review process should be improved, while only 8% said no. When asked how the review process should be improved, 211 participants provided open-ended responses (summarized in Table 3 ).

Table 3. Suggested improvements to the peer-review process.

Theme | Description | Approximate proportion of responses
Deadlines and defined policies | Shorter allocated time to review a manuscript and strict procedures to ensure adherence to deadlines | 30%
Referee reward system | Providing incentives and compensation for reviewers and editors | 25%
Editorial persistence | Proactivity from editors in sending reminders, following up on deadlines, and setting the tone | 14%
Alternative responses | Permitting submission to more than one journal; including early career researchers as reviewers; following the model of journals that do it well; a bank or database of reviewers; sub-reviewers (e.g. expertise for statistics, methods, taxa, tools, etc.) | 13%
Change norms of publishing | Author empowerment, journal standardization, open peer review, double-blind reviews | 12%
Improved journal management | Overall management of editorial staff and inter-journal management | 6%

Referee reward system

About one quarter of the suggestions for improvement involved paying reviewers/editors or providing reviewer incentives, consequences, or reward systems, such as: a free year's subscription to the journal; rewarding reviewers by adding value to their CV (e.g., "20 best reviews" or "20 best reviewers" awards); "a 1 in 2 out policy… each paper you submit as a lead author means you have to review 2 for that journal before you can publish again in that journal"; discounts on the reviewer's own submissions or on items from the scientific publishing house (e.g., books, open access discounts); and reward systems at home institutions for researchers who regularly review papers.

Editors should remove slow reviewers from their lists. There should be a central bank where good reviewers receive benefits such as fast track review of their material if submitted to the same company (e.g. Wiley, Elsevier, etc.). A reduction in publication costs for good reviewers (not just time but quality of revision).

Engagement in reviewing should be better acknowledged as a performance indicator; some exemplary review processes should be made public so that authors and reviewers can learn from them. Reviewers should be able to see the other reviewer's comments after the editor's decision.

For instance, the journal Molecular Ecology is publishing the list of the best reviewers every year based on the quality and speed of the review. This is one example of a reward that the reviewers can put in their CV to show their importance in the field.

Our findings suggest there is weight behind calls for reviewer incentives and reward systems. It is challenging to get accurate data on the cost of peer review and, in economic terms, the "opportunity cost" to reviewers. Richard Smith [ 24 ], editor of BMJ, estimated the average total cost of peer review per paper at approximately £100 for BMJ (keeping in mind that 60% of submissions are rejected without external review), whereas the cost for papers that made it to review was closer to £1000, not counting opportunity costs (i.e., time spent editing and reviewing manuscripts that could be spent on other activities). A recent survey reported that two-thirds of academics agreed that $100–200 would motivate them to review, while one-third refused to accept monetary compensation [ 25 ]. Kumar [ 23 ] reports differing results from two recent studies: in one, 1,500 peer reviewers in the field of economics responded to both monetary and non-monetary incentives by returning reports faster [ 26 ], while in 2013, Squazzoni et al. [ 27 ] reported that financial incentives decreased the quality and efficiency of peer reviewers.

Reward systems and incentives for reviewers have been proposed in the literature [ 28 ], including penalties for those who decline reviews and non-monetary rewards for completed reviews, such as published lists of reviewers as a means of acknowledgment (e.g. Journal of Ecosystems and Management). However, some journals already use this system, and there is still no indication of a change in referee behavior [ 29 ]. One common incentive for peer review is a temporary subscription to the journal in question. It is perhaps not surprising that such an incentive might fail to change reviewer behavior, since many reviewers belong to institutions that already possess subscriptions to a host of journals.

It may just be a matter of time before "top reviewers" or time spent on reviews becomes prestigious and valued in more tangible ways (whereas the current system values the number of publications). Peerage of Science is a novel approach to externalized peer review, through which manuscripts are submitted for peer review by members of a community of researchers in a transparent and non-blinded way, after which journals can be contacted with an amended draft [ 30 ]. This system incentivizes peer reviewers by providing metrics and ratings relating to their reviewing activities that members can use to demonstrate their contributions.

Deadlines and defined policies

Approximately one third of responses (N = 211) suggested that stricter deadlines and policies, shorter allocated review times, and procedures to ensure adherence to deadlines should be established to improve review duration:

Current review process should follow the model of the PLOS (online journals). Reviewers are constrained to address specific scientific elements: the question, the method, the results and the discussion, that these are scientifically acceptable. This should encourage young researchers to publish without the need to include big names/popular personalities in research to have the paper through journal review.

Again, improvements in peer-review turnaround and quality are something that journal editors can control by setting standards and policies that facilitate the peer-review process. A recent review of time management for manuscript peer review acknowledged several suggestions for improving the review process and its duration, but noted that it is the responsibility of editors, publishers, and the academic sponsors of journals to implement these improvements [ 23 ].

Editorial persistence and journal management

Related to these more stringent deadlines and policies is the suggestion that editors should put more pressure on reviewers and follow up on deadlines (30 responses), while others suggested better journal management (13 responses):

Some journals restart the time counting during a revision process, for example, asking to re-submit as a new manuscript in order to reduce the revision time, instead of keeping track of the time during the whole revision process and being more realistic about the time that a revision takes. I believe that is a way of cheating or deceiving the system.

As illustrated by the quote above, many journals ask authors to re-submit as a "new submission" rather than a "resubmission", and send the revision to new referees instead of the previous ones, which increases the length of peer-review time. Fox and Petchey [ 29 ] suggested that if a manuscript is rejected from one journal, the reviews should be carried forward to the next journal the manuscript is submitted to. They argued that this helps with quality control and facilitates the review process by ensuring that authors revise their manuscripts appropriately, and that it reduces duplication of effort by referees. At present, at least one ecology journal allows authors of previously rejected manuscripts to provide previous reviews, and the publisher Wiley is trialing peer-review transfer across nine of its neuroscience journals [ 31 ]. A more formal system for sharing reviews has been suggested to increase the speed and quality of the peer-review system, which is now feasible given the pervasive use of electronic submission and review systems [ 29 ].

Peer review training

Including graduate students or early career researchers as reviewers may increase the "supply" to meet the increasing demand. Some may argue that graduate students lack the experience and knowledge to appropriately assess a manuscript; formal training has been suggested to improve the quality of reviews and increase the network of reviewers. Furthermore, recommendations by senior researchers of reliable and qualified graduate students or early career researchers as potential reviewers may help with the deficit [ 32 ]. Indeed, the British Ecological Society recommends that academic supervisors assign their own peer-review invitations to graduate students [ 33 ], although it is certainly sensible to verify that individual journal editors are happy with this practice.

Changes to the norms of peer-review system

A number of respondents (12%) wanted to see more drastic changes to the norms of publishing: for example, a permanent and paid group of reviewers; standardizing all journals; permitting submission of manuscripts to more than one journal; including more early career researchers as reviewers; following model journals that do it well (e.g., Geoscience, PLOS ONE); a database of reviewers; or sub-reviewers (e.g. expertise for statistics, methods, taxa, tools, etc.).

A "PubCreds" currency has been proposed as a system in which reviewers "pay" for their submissions using PubCreds earned by performing reviews [ 29 ]. Although a radical idea, Fox and Petchey [ 29 ] state that "doing nothing will lead to a system in which external review becomes a thing of the past, decision-making by journals is correspondingly stochastic, and the most selfish among us are the most rewarded". Furthermore, Smith [ 24 ] suggested adopting a "quick and light" form of peer review, with the aim of opening the peer-review system to the broader world to critique a paper or even rank it in the way that Amazon and other retailers ask users to rank their products. Alternatively, some journals (e.g. Biogeosciences) employ two-stage peer review, whereby articles are published in a discussions format open to public review prior to final publication of an amended version. Other journals (e.g. PLOS ONE) and platforms ( www.PubPeer.com ) offer the opportunity for continued review following publication. The argument for radical change in the norms is not uncommon and may be required for today's peer-review system, which some argue will soon be in crisis [ 29 ], although suggestions that increase the labour required of editors and referees, such as submitting to more than one journal concurrently, may exacerbate the already stressed peer-review system.

Role of open access and journal prestige on review duration

The majority of respondents do not review a manuscript more quickly for higher-tier journals (71% of 445 respondents). When asked whether journal prestige justifies turnaround time, 50% of 369 respondents did not believe publishing in a top-tier journal justifies a rapid or delayed review time, while 37% believed it does (the remainder had no opinion). Of those who believed publishing in a top-tier journal justifies a longer or shorter review time, 64% believed it explains rapid reviews, 14% believed it justifies a delayed review, and 20% believed it justifies both (<5% believed neither). On the other hand, it is interesting that more respondents (75% of 367) believed that publishing in a low-tier journal does not justify a rapid or delayed review time. Overall, journal prestige and impact factor seem to be important indicators for many authors, although a journal's ability to turn peer review around in a timelier manner may reflect its perceived prestige and the higher quality of manuscripts that make it through primary editorial screening. One respondent noted:

There is likely a link between review duration and impact factors, as impact factors are based on citations during the first two years after publication. If those citing papers take longer to go through the review, they won't count towards the journal's impact factor.

We were interested in participants' perspectives on the review process at open access (OA) journals, particularly because authors pay a fee to publish in such journals. About a third (32% of 461) agreed that OA journals should offer a higher quality of "customer service", such as faster review and publication times, and an additional 13% strongly agreed. Another third (31%) were neutral about this statement, whereas 16% disagreed and 7% strongly disagreed. This finding is interesting because it provides insight into authors' perspectives and expectations of OA journals: authors have higher expectations of OA journals even though peer-review standards should be disconnected from cost and from who pays. This is most likely the result of a shift in the customer base. In subscription-based publishing the customer is the librarian, and product quality was assessed primarily through metrics such as the Impact Factor. In OA publishing the customer becomes the submitting researcher, and quality is assessed through publishing service and, perhaps incorrectly, standards of editorial review. It has yet to be shown that publishers see substantial increases in profits following a switch to OA, and if profit margins are not significantly increased, expectations of improved service may be unwarranted.

Although open access was not the primary focus of our study, we believe it is an increasingly relevant topic: there are debates about the quality of OA journals, but open access may also be viewed as mandatory, particularly where research is funded with public money. Future research on perspectives on, and the perceived value of, OA journals within the conservation science community should be considered.

Conclusions

Our findings show that the peer-review process within conservation biology is perceived by authors as slow, with typical turnaround times (14 weeks) more than double what authors perceive as optimal (6 weeks). In particular, males seem to expect shorter review times than females, whose expectations were more closely aligned with what they had actually experienced as typical review times. Similarly, older participants (>40 years) had expectations of review times more closely aligned with their experience, while younger authors considered a short review time to be <10 weeks regardless of their experience. Overall, participants primarily attributed the lengthy peer-review process to "stress" on the peer-review system, mainly reviewer and editor fatigue, while editor persistence and journal prestige/impact factor were believed to speed up the review process. The institutional incentive for productivity has its flaws: the demand created by increased publication strains the peer-review system, and the "publish or perish" environment can create strong demand for publication outlets and heightened expectations of quick turnaround times.

Early career researchers appear more vulnerable to slow peer-review durations in a "publish or perish" system, as these relate to graduation, employment opportunities, and other career advancement. Closely related to impacts on careers are the consequences of lengthy peer-review durations for an author's "morale" (i.e. motivation, frustration, conflict, embarrassment). Some respondents commented that lengthy review durations may result in lost motivation and forgotten details about the manuscript, leading to reduced productivity and potentially a lower quality manuscript. A few respondents thought that competition among colleagues encourages publication of shorter and simpler studies in order to gain a quicker turnaround, rather than investing more time in complex and extensive analyses or revisions. These concerns have merit and may have implications for the quality of research and publications.

Although the objective of our research was not to assess the quality of the peer-review system, we believe all aspects of the process are interlinked: peer-review quality and speed are not mutually exclusive and must be discussed together. The majority (61%) of respondents believed that the review process should be altered, offering suggestions such as a referee reward system, defined deadlines and policies, editorial persistence, better journal management, and changing the norms of the peer-review process. Currently, researchers are rewarded based on productivity, which may lead to a system breakdown by increasing demand on a short supply of reviewers and subsequently degrading the quality of publications in the race to publish [ 32 ]. We suggest a partial shift in institutional rewards and incentives from researcher productivity toward greater outreach efforts and public interaction, as there is evidence that conservation goals may be more effectively achieved by engaging the public. Implementing a system that rewards these actions in conjunction with productivity may alleviate pressure on the peer-review system overall and increase conservation successes. Training for peer review could improve the quality of reviews and increase the pool of reviewers by including early career scientists and graduate students. More generally, a number of authors call for revising and reviewing our own peer-review system to ensure its persistence and quality control.

Open access and the opening of the peer-review process are at the forefront of publishing innovation. For example, PeerJ ( www.peerj.com ) offers a novel approach combining open access with a pre-print system that makes articles available online more rapidly than traditional scholarly publishing. ScienceOpen ( www.scienceopen.com ) immediately publishes manuscripts open access and accepts continuous open review in a transparent post-publication peer-review process. Such approaches will require time to prove their value to the scientific community, but as scholarly publishing continues to evolve rapidly, experimental approaches to enhancing the communication of peer-reviewed research are warranted. We encourage other scientists and publishers to build on these approaches and continue to push the envelope for new publishing models.

Peer-reviewed journals will continue to be the primary means by which we vet scientific research and communicate novel discoveries to fellow scientists and the community at large, but as shown here, there is much room for improvement. We have provided one of the first evaluations of an important component of the publishing machine, and our results indicate a desire among researchers to streamline the peer-review process. While our sample may not generalize to the entire global community of researchers in conservation biology, we believe the opinions, perceptions, and information provided here represent an important collective voice that should be discussed more broadly. While the technology to accelerate peer review is in place, the process itself still lags behind the needs of researchers, managers, policy-makers, and the public, particularly for time-sensitive research areas such as conservation biology. Moving forward, we should encourage experimental and innovative approaches to enhance and expedite the peer-review process.

Supporting Information

S1 File. Full survey questions.
S2 File. Details of the statistical methods and results.
S3 File. Raw data.

Acknowledgments

We thank all of the study participants who took the time to share their perspectives. Funding was provided by the Canada Research Chairs Program and the Natural Sciences and Engineering Research Council of Canada.

Funding Statement

This work was supported by the Natural Sciences and Engineering Research Council, 315918-166, http://www.nserc-crsng.gc.ca/index_eng.asp and the Canada Research Chair, 320517-166, http://www.chairs-chaires.gc.ca/home-accueil-eng.aspx . The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability

All data are available in the paper and its supporting information files.

Review time in peer review: quantitative analysis and modelling of editorial workflows

Open access | Published: 09 February 2016 | Volume 107, pages 271–286 (2016)


Maciej J. Mrowinski, Agata Fronczak, Piotr Fronczak, Olgica Nedic & Marcel Ausloos


Abstract

In this paper, we undertake a data-driven theoretical investigation of editorial workflows. We analyse a dataset containing information about 58 papers submitted to the Biochemistry and Biotechnology section of the Journal of the Serbian Chemical Society. We separate the peer-review process into stages that each paper has to go through and introduce the notion of completion rate: the probability that an invitation sent to a potential reviewer will result in a finished review. Using empirical transition probabilities and probability distributions of the duration of each stage, we create a directed weighted network whose analysis yields theoretical probability distributions of review time for different classes of reviewers. These theoretical distributions underlie our numerical simulations of different editorial strategies. Through these simulations, we test the impact of modifications of editorial policy on the efficiency of the whole review process. We find that the distribution of review time is similar for all classes of reviewers, and that the completion rate of reviewers known personally to the editor is very high: they are much more likely than other reviewers to answer the invitation and finish the review. Thus, the completion rate is the key factor determining the efficiency of each editorial policy. Our results may be of great importance for editors and act as a guide in determining the optimal number of reviewers.


Introduction

Despite a variety of criticisms of its effectiveness (Wager and Jefferson 2001; Cooper 2009), peer review is a fundamental mechanism for validating the quality of the research published in today's scientific literature (Baker 2002; Ware and Monkman 2008; Mulligan et al. 2013; Ware and Mabe 2015; Nicholas et al. 2015). It is a complex, multi-phase process that seems to be largely understudied (Squazzoni and Takács 2011), and there are growing concerns about how to improve its functioning. Given the increasing number of submitted articles and the limited pool of reviewers, acquiring a good and timely review is becoming progressively more challenging. Several journals emphasize the rapidity of their review process in order to attract submissions. Reviews can take as long as a year, depending on the complexity of the topic, the number of reviewers involved, and the details of the editorial procedures. In contrast, reviews can sometimes be very quick, for example when the paper is rejected directly by the editor.

In the face of these problems, many suggestions have been made to render peer review and the editorial process more efficient and equitable (Bornmann 2011). In particular, the role of editors in selecting and managing reviewers has been increasingly discussed (Schwartz and Zamboanga 2009; Kravitz et al. 2010; Newton 2010). However, these discussions mainly focus on quality, ethical issues, or qualitative recommendations for editors or reviewers (Cawley 2011; Resnik et al. 2008; Hames 2013; Wager 2006; Kovanis et al. 2015) that do not lead to measurable improvements in the efficiency of the peer-review process as viewed from the editor's perspective. Do editors send out enough reviewer invitations to obtain two or three timely reviews of a manuscript? How often should they draw on the expertise of the same reviewers, consuming their time and energy? How long should they wait for a review before repeating the invitation or assuming that a response is unlikely? What is the statistical chance that reviewers will respond? Does it depend on whether they have previously reviewed for the same journal? Although editors likely try to answer these and other questions when optimizing their workflow, they must do so on their own, by trial and error. Without a more systematic discussion of these questions, one can be sure that submission-to-publication editorial lags will keep increasing in the years to come.

Our paper is meant to fill this gap with the help of quantitative analysis. We examine selected aspects of peer review and suggest possible improvements. To this end, we analyse a dataset containing information about 58 papers submitted to the Biochemistry and Biotechnology section of the Journal of the Serbian Chemical Society (JSCS). After separating the peer review process into stages that each review has to go through, we use a weighted directed graph to describe it in a probabilistic manner under the weak assumption that the process is Markovian. We test the impact of some modifications of the editorial policy on the efficiency of the whole process. Our quantitative findings allow us to provide editors with practical suggestions for improving their workflow.

The paper is organized as follows:

" Review process and initial data analysis " section describes the dataset used in the paper as well as the methodology employed to analyse the data. " Review time " section is devoted to the data driven theoretical analysis of review time. Simulations of various editorial policy scenarios and their impact on the efficiency of the process are presented in " Simulations of the review process " section. In " Discussion with conclusion " section we give concluding remarks and point out open problems that may be researched within the presented methodology in the future.

Review process and initial data analysis

The sample we studied contains information about the reviews of 58 manuscripts submitted electronically to one of the sub-editors of JSCS between November 2011 and July 2014. Each of the 323 records in the sample corresponds to an invitation sent to a single reviewer and comprises the group the reviewer belongs to, the ID of the reviewed manuscript and the dates associated with the phases of the review process. Reviewers were divided into two groups: 65 known reviewers are known personally by the sub-editor, while 258 other reviewers were chosen through various other means (e.g. picked from the SCOPUS database as experts in the topic of the submitted manuscript). Reviews in JSCS are single-blind, meaning that reviewers know the names of the authors but remain anonymous themselves.
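
Each record can be pictured as a small data structure like the one below. This is only a hypothetical schema mirroring the description above; the dataset's actual field names are not published.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewInvitation:
    """Hypothetical record layout for one invitation in the sample."""
    reviewer_group: str                      # "known" or "other"
    manuscript_id: str
    phase_dates: dict[str, date] = field(default_factory=dict)
    # e.g. {"invitation": date(2012, 3, 1), "report": date(2012, 3, 24)}
```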

It is worth noting that, of the 65 invitations sent to known reviewers, 34 went to reviewers who were invited only once. The remaining 31 invitations were sent to a group of 13 reviewers: 9 of them were asked to review 2 manuscripts, 3 to review 3 manuscripts, and 1 to review 4 manuscripts. Reviewers who are invited to review multiple manuscripts within a short period of time may suffer from burnout (Arns 2014). In our case, repeat reviewers received each new invitation after 345 days on average. While the sample is not large enough to make a definitive statement, we did not observe any relation between subsequent review times for repeat reviewers. All other reviewers were unique.

The review process itself is separable into distinct phases that mirror the interactions between the sub-editor, authors and reviewers. It begins with the invitation phase, when the sub-editor, after receiving a new submission, sends out invitations to a number of reviewers (5 on average in the JSCS case: 1 known and 4 other) and waits for their responses. If an invited reviewer does not respond, then after about 7 days an inquiry is sent, which begins the inquiry phase. If that inquiry also remains unanswered for 10 days, the review process for that particular reviewer ends at the no response phase and is considered finished with a negative outcome. After receiving the initial invitation or the inquiry, reviewers who do answer either confirm their willingness to write the review, which begins the confirmation phase, or reject the invitation. In the latter case, much as for reviewers who did not answer at all, the review process ends at the rejection phase and is considered finished with a negative outcome. In the former case, the JSCS sub-editor waits 25 days for the report before beginning the second inquiry phase by sending an inquiry. This may result either in the reviewer finishing the review and sending the report, which ends the process at the report phase and is the only outcome considered positive, or in a lack of answer, which ends the process at the no response phase. In sum, there are three possible outcomes of the review process: report, no response or rejection.
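
The sequence of phases and waiting periods can be encoded as a small transition table, sketched below. The 7-, 10- and 25-day waits are the ones described above; the table layout itself is only an illustrative choice, not the journal's actual system.

```python
# Phases of the JSCS review process for a single reviewer. Only the
# "report" phase counts as a positive outcome.
WORKFLOW = {
    "invitation":     {"wait_days": 7,  "on_silence": "inquiry",
                       "on_accept": "confirmation", "on_decline": "rejection"},
    "inquiry":        {"wait_days": 10, "on_silence": "no_response",
                       "on_accept": "confirmation", "on_decline": "rejection"},
    "confirmation":   {"wait_days": 25, "on_silence": "second_inquiry",
                       "on_report": "report"},
    "second_inquiry": {"on_report": "report", "on_silence": "no_response"},
    "report": {}, "no_response": {}, "rejection": {},   # terminal phases
}
```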

A directed graph in which nodes correspond to phases and edges to allowable transitions between subsequent phases can be used as a visual representation of the review process. Graphs describing the workflow in our sample can be found in Figs. 1, 2 and 3. The percentage next to each edge is the probability that a realisation of the review process will pass through that edge, that is, the number of records in our sample for which the transition between the nodes connected by the edge occurred, divided by the size of the sample. Edge widths are scaled in proportion to these probabilities.

What is immediately striking is that only 43 % of all invitations actually result in a finished review (Fig. 1). Most reviewers, 64 %, do not even respond to the initial invitation, and 42 % ignore the inquiry. These poor results are mostly driven by reviewers belonging to the other group (Fig. 2), which constitutes the majority of all reviewers. Only 31 % of other reviewers finish the review, 73 % ignore the initial inquiry, 51 % do not answer at all and 16 % reject the invitation. On the other hand, known reviewers, who are in the minority, are far more reliable (Fig. 3). Most of them, 74 %, respond to the invitation and 89 % finish the review. Only 3 % do not answer and 8 % reject. As we will show in the following sections, this disparity between known and other reviewers may play a crucial role in the review process and is the key factor that determines its effectiveness.

A graph corresponding to the review process with known and other reviewers. Next to each edge are probabilities (calculated as explained in the main text) of a realisation of this process passing through the edge. Transitions are only possible from the upper sections of the graph to the lower sections

A graph corresponding to the review process with only other reviewers. Next to each edge are probabilities (calculated as explained in the main text) of a realisation of this process passing through the edge. Transitions are only possible from the upper sections of the graph to the lower sections

A graph corresponding to the review process with only known reviewers. Next to each edge are probabilities (calculated as explained in the main text) of a realisation of this process passing through the edge. Transitions are only possible from the upper sections of the graph to the lower sections

Review time

Review time, that is, the number of days between the invitation phase and the report phase, is one of the most direct and tangible measures of the efficiency of the review process. Since our sample contains information about the beginning and end of each phase, we were able to obtain distributions of review time for known and other reviewers, as well as partial distributions of the days elapsed between all intermediate phases. These partial distributions are especially interesting, as they can serve as building blocks with which one can simulate the entire review process and recreate the cumulative distribution of review time under various assumptions.

The distribution of review time can be reassembled from the partial distributions in the following way (see Fig. 4). To each node (phase) j of the review process graph (Figs. 1, 2 and 3) one can assign the probability \(q_j\) that a realisation of the process will pass through node j, and the probability distribution \(G_j(t)\) of the number of days between the invitation phase and phase j. Similarly, each edge is characterised by the probability \(p_{i,j}\) that the review process will pass from phase i to phase j, and the probability distribution \(P_{i,j}(t)\) of the number of days associated with such a transition. Given all these probabilities, \(G_j(t)\) can be calculated as

\[G_j(t) = \sum_{i \in \{i\}_j} w_{i,j}\,\bigl(G_i * P_{i,j}\bigr)(t),\tag{1}\]

where the summation is over the set \(\{i\}_j\) of all predecessors of node j and the symbol \(*\) represents the discrete convolution

\[(f * g)(t) = \sum_{\tau=0}^{t} f(\tau)\,g(t-\tau).\tag{2}\]

Weights \(w_{i,j}\) are defined as

\[w_{i,j} = \frac{q_i\,p_{i,j}}{q_j},\tag{3}\]

and the probability \(q_j\) can be expressed as

\[q_j = \sum_{i \in \{i\}_j} q_i\,p_{i,j}.\tag{4}\]

Equations (1–4) are recursive. The distribution \(G_j(t)\) associated with node j depends on the corresponding distributions of its predecessors, and the probabilities \(q_j\) exhibit a similar dependence. As such, these equations can be solved recursively, provided one assumes appropriate initial conditions for nodes without parents (in our case \(q_{\text{invitation}} = 1\) and \(G_{\text{invitation}}(t)=\delta_{0,t}\) for the node that corresponds to the invitation phase) and obtains the probabilities \(P_{i,j}\) and \(p_{i,j}\) from the sample. Finally, note that the quantity \(q_i p_{i,j}\) in the numerator of Eq. (3) is precisely the probability shown next to each edge in Figs. 1, 2 and 3.
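
A minimal sketch of this recursion, assuming the phase graph is acyclic and its nodes are supplied in topological order. The function is generic; in practice one would feed it the empirical \(p_{i,j}\) and \(P_{i,j}\) extracted from the sample.

```python
import numpy as np

T = 200  # time horizon in days

def convolve(f, g):
    # Discrete convolution of Eq. (2), truncated to the horizon T.
    return np.convolve(f, g)[:T]

def propagate(nodes, edges, p, P):
    """Solve Eqs. (1)-(4) on a DAG of phases.
    nodes : phase names in topological order (nodes[0] is the invitation)
    edges : list of (i, j) pairs
    p[i, j] : transition probability of edge i -> j
    P[i, j] : length-T array, duration distribution of edge i -> j
    """
    delta = np.zeros(T)
    delta[0] = 1.0                                  # G_invitation(t) = delta_{0,t}
    q, G = {nodes[0]: 1.0}, {nodes[0]: delta}
    for j in nodes[1:]:
        preds = [i for i, jj in edges if jj == j]
        q[j] = sum(q[i] * p[i, j] for i in preds)   # Eq. (4)
        G[j] = sum(q[i] * p[i, j] / q[j]            # weights of Eq. (3)
                   * convolve(G[i], P[i, j])        # Eq. (1)
                   for i in preds)
    return q, G
```

Applied to the graphs in Figs. 1, 2 and 3, the distributions G at the report node would be the theoretical review-time distributions discussed below.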

A schematic representation of a node from the review process graph, its predecessors and all associated probabilities. A detailed description can be found in the "Review time" section

Using the aforementioned procedure, we recreated the distribution of review times for both known and other reviewers, which we then compared with the corresponding empirical distributions from the sample (Figs. 5, 6, 7 and 8). According to our theoretical calculations based on Eqs. (1–4), the average review time for known reviewers is 23 days with a standard deviation of 12 days, in agreement with the average review time obtained from the sample. As for other reviewers, the theoretical average review time is 20 days with a standard deviation of 11 days, and the sample, again, yields the same values. A one-sample Kolmogorov–Smirnov test comparing the theoretical distribution with the sample gives a p value of 0.88 for known reviewers and 0.97 for other reviewers. This means that the distributions of review times calculated from the partial distributions are essentially the same as those obtained directly from the data.
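
Comparisons of this kind can be reproduced with standard tools. The sketch below uses SciPy with synthetic stand-ins for the two samples (the underlying data are not public) and a normal CDF as a placeholder for the theoretical distribution assembled from Eqs. (1)-(4); the means and standard deviations mimic the values reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the empirical review times (in days).
sample_known = rng.normal(23, 12, size=60).clip(min=1)
sample_other = rng.normal(20, 11, size=80).clip(min=1)

# One-sample KS test of a sample against a theoretical CDF.
stat, p = stats.kstest(sample_known, lambda t: stats.norm.cdf(t, 23, 12))
print("one-sample KS p value:", round(p, 2))

# Two-sample KS test between the known and other samples, as used
# later to argue that the two review-time distributions coincide.
stat2, p2 = stats.ks_2samp(sample_known, sample_other)
print("two-sample KS p value:", round(p2, 2))
```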

The theoretical probability distribution of review time for known reviewers who responded to the initial invitation (black line), who received an inquiry (white line), and their sum, which gives the distribution for all known reviewers (filled polygon)

The probability distribution of review time for known reviewers: theoretical (black line), from data (grey bars)

The theoretical probability distribution of review time for other reviewers who responded to the initial invitation (black line), who received an inquiry (white line), and their sum, which gives the distribution for all other reviewers (filled polygon)

The probability distribution of review time for other reviewers: theoretical (black line), from data (grey bars)

This is an important and non-obvious observation, as the only underlying assumption behind Eqs. (1–4) is that the review process is memoryless (Markovian), that is, the partial distributions assigned to edges do not depend on the history of the process. The results presented thus far seem to confirm this reasonable assumption. Moreover, the findings are reinforced further in the following section through simulations of the model.

Beyond the validity of the theoretical distributions, there are two main conclusions to be drawn from the results presented in Figs. 5, 6, 7 and 8. Firstly, the review time distribution is bimodal. Reviewers who either confirmed or sent in their reviews after receiving the invitation are the ones who contribute to the leftmost maximum (and they constitute the majority of those who actually completed the reports: 69 % of other and 82 % of known reviewers). Secondly, the distributions of review time are similar for known and other reviewers. The difference between the means and standard deviations of the two groups is negligible from any practical standpoint: a two-sample Kolmogorov–Smirnov test for the two empirical distributions gives a p value of approximately 0.40. Based on these facts, one can make the strong assumption that the distribution of review time is the same across the entire population of reviewers and does not depend on the reviewer group.

While in this work we are mostly interested in the time needed to acquire a given number of reviews, technically this is only the first major stage of the full peer review process. The second stage begins when the reviews are sent to the authors and ends with the notification of acceptance or rejection. However, the dynamics of that second stage are rather linear and straightforward. In the case of our data from JSCS, one revision of the original manuscript was necessary to address the remarks of reviewers (though one has to keep in mind that we only had access to data pertaining to accepted manuscripts). On average, it took authors 34 days to deliver the revised version, and final notifications were sent after 8 more days. Thus, manuscripts were accepted on average 42 days after the sub-editor received all reviews. This means that the second stage of the peer review process is longer than the first one, which is consistent with the findings of other researchers (Trimble and Ceja 2011).

Simulations of the review process

So far we have considered the review times of a single reviewer. However, editors usually need more than one review in order to judge whether to publish an article. In the case of our data from JSCS, the sub-editor aims for two reviews per article and sent invitations to five reviewers on average: one known and four other. While this strategy indeed resulted in two reviews per article on average (2.34 to be exact), 9 articles were published after receiving only one review, 24 after 2 reviews, 21 after 3 and 4 after 4 reviews. This discrepancy between the target number of reviews and the number of reviews actually received stems from the difference between known and other reviewers in the probability of finishing the report. We call this probability the completion rate.
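
To see where this discrepancy comes from, one can compute the distribution of the number of finished reports implied by the completion rates alone. The sketch below treats each invited reviewer as an independent Bernoulli trial (an assumption) and uses the average invitation mix of 1 known and 4 other reviewers with the empirical rates quoted later in this section (89 % and 31 %).

```python
from itertools import product

rates = [0.89] + [0.31] * 4          # 1 known + 4 other reviewers

dist = {}
for outcome in product([0, 1], repeat=len(rates)):
    prob = 1.0
    for done, c in zip(outcome, rates):
        prob *= c if done else 1 - c
    dist[sum(outcome)] = dist.get(sum(outcome), 0.0) + prob

print({k: round(v, 3) for k, v in sorted(dist.items())})
print("expected reports:", round(sum(k * v for k, v in dist.items()), 2))
```

The expected value, roughly 2.13 reports per article, sits close to the observed 2.34, and the spread of this distribution helps explain why individual articles ended up with anywhere between one and four reviews.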

Using the partial distributions, we can easily simulate the effects of any editorial strategy and find the number of reviewers needed to achieve a certain number of reviews per article. We use the average time of receiving two reviews as a measure of the effectiveness of each strategy. Figure 9 shows these average times, under the assumption that an invited reviewer always writes the report (the completion rate equals 1 for both known and other reviewers), as a function of the number of reviewers. The average time decreases as the number of reviewers increases. Results for known and other reviewers are very similar, which is intuitive and consistent with our prediction in the "Review time" section.

Average time of acquiring two reviews for known (empty circles) and other (filled black circles) reviewers when all reviewers finish their reviews

The assumption that an invitation always results in a report is not realistic. If we want to take into account the fact that the actual completion rate for a single reviewer is smaller than 1, especially for other reviewers, then some additional strategy needs to be introduced for the cases in which two reviews are never received. In our simulations, we decided to use a simple strategy: if two reviews are not received, invitations are resent to the same number of reviewers. This procedure is repeated, if necessary, until reviewers produce two reports in total. While this is not the most effective and time-efficient strategy we could suggest to editors, it still allows us to study the consequences of the difference between the completion rates of known and other reviewers.
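
The sketch below is a minimal Monte Carlo version of this resend policy. The completion rates are the empirical ones quoted in the next paragraph; the Gaussian stand-in for the review-time distribution and the 42-day timeout between rounds are assumptions made purely for illustration.

```python
import random
import statistics

def sample_review_time(rng):
    # Placeholder for the empirical (bimodal) review-time distribution;
    # the paper reports means of 20-23 days with sd 11-12.
    return max(1.0, rng.gauss(21, 11))

def time_to_n_reviews(n_invited, completion_rate, rng, needed=2,
                      round_timeout=42):
    """Invite n_invited reviewers; while fewer than `needed` reports
    have arrived, resend the same number of invitations after
    round_timeout days (an assumed value)."""
    report_times, clock = [], 0.0
    while True:
        for _ in range(n_invited):
            if rng.random() < completion_rate:
                report_times.append(clock + sample_review_time(rng))
        if len(report_times) >= needed:
            return sorted(report_times)[needed - 1]
        clock += round_timeout

rng = random.Random(1)
for label, rate in (("known", 0.89), ("other", 0.31)):
    runs = [time_to_n_reviews(2, rate, rng) for _ in range(10_000)]
    print(f"2 {label} reviewers: {statistics.mean(runs):.0f} days on average")
```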

Figure 10 is analogous to Fig. 9, in that it shows the average time of receiving two reviews, but this time we used the actual completion rates taken from the sample (89 % for known, 31 % for other reviewers) and employed the policy described in the previous paragraph. As can be clearly seen, the difference in completion rates between known and other reviewers results in completely different dynamics. Other reviewers are far less effective and their average review time is much higher: for example, two reviews can be received from 2 known reviewers after 32 days, whereas other reviewers finish a set of 2 reviews only after 70 days. Even as the number of reviewers increases, this difference remains significant.

Average time of acquiring two reviews for known (empty circles) and other (filled black circles) reviewers with the completion rate taken into account. The filled polygon represents the standard deviation

However, in the "Review time" section we showed that the distributions of review time for known and other reviewers are very similar, which suggests that the completion rate is the leading factor in the review process. This claim is partially supported by the results presented in Fig. 9. If the claim is valid, then one known reviewer should be "worth" 89/31 ≈ 2.9 other reviewers and, conversely, one other reviewer should be "worth" 31/89 ≈ 0.35 known reviewers. By "worth" we mean that proportionally substituting one type of reviewer for the other should yield the same results; for example, a group of 7 other reviewers should behave like a group of about 2.4 known reviewers. Figure 11, in which the X axis for one type of reviewer was rescaled to match their worth in terms of the other type, confirms this prediction. The average numbers of days after which 2 reviews are acquired are similar, and the standard deviations, while not exactly the same (which is to be expected), are comparable.

Same as Fig.  10 but with the X axis rescaled for other reviewers

So far we have studied known and other reviewers separately. However, as explained in the "Review process and initial data analysis" section, the group of reviewers invited to review an article usually contains reviewers of both kinds. Figure 12 shows the average time of acquiring two reviews when reviewer types are mixed in different proportions. As one would expect, the average time decreases with the increasing total number of reviewers, and known reviewers are far more effective than other reviewers. Still, by rescaling the X axis, that is, by expressing the worth of one kind of reviewer in terms of the other, we obtain similar results (Fig. 13).

Average time of acquiring two reviews for a group of mixed reviewers. The X axis shows the total number of reviewers. Curves correspond to various numbers of known reviewers, from 0 known (top curve) to 10 known (bottom curve)

Same as Fig.  12 but with rescaled X axis

The information about average times in groups of mixed reviewers, expressed in a slightly different way in Fig. 14 and summarised in Table 1, can potentially be of great importance for editors and act as a guide in determining the optimal number of reviewers. For example, in order to receive two reviews after about 30 days, one needs to invite 7 other reviewers, 2 known reviewers, or a mixed group of 1 known and 4 other reviewers. That last option is consistent with the choice made by the sub-editor of JSCS who provided us with the data.

Average time of acquiring two reviews for a group of mixed reviewers

It is important to note that editors may be tempted to invite only known reviewers, which would lead to shorter review times. However, such a policy would be not only unrealistic but also inadvisable. The pool of potential known reviewers is limited, and editors would be forced to invite the same reviewers several times within a short time frame. This, in turn, could discourage such reviewers and make them more likely to turn down invitations, further reducing the pool. This suggests that the process of selecting reviewers could be modelled as an optimisation problem within an agent-based simulation framework, in which other factors, e.g. the quality of reviewers (Ausloos et al. 2015), could be taken into account; we leave this to further studies.

Discussion with conclusion

In summary, we have examined selected aspects of peer review through a case study: an analysis of the review stages of 58 papers submitted to the Biochemistry and Biotechnology section of the Journal of the Serbian Chemical Society. While it would be interesting to compare these results with those obtained by studying other journals, such data are not easily available. On the one hand, large publishers treat such information as a trade secret and are unwilling to share it with external researchers. On the other hand, smaller publishers often lack the IT infrastructure that would allow automatic data retrieval (e.g. submissions arrive via e-mail only), so data must be collected manually by editors, which is a very time-consuming process. However, during the last PEERE EU project workshop on "New models of peer review" (November 2015), where we presented our findings, we were approached by editors willing to give us access to larger collections of data. Thus, we may be able to provide a comparative analysis in the future.

We have studied the review time that characterises the entire process, as well as the durations of all its stages. We have used a directed graph to describe the process and found the empirical weights that correspond to the probability of passing through each edge. We have introduced two kinds of reviewers, known and other, and found that:

  • the distribution of review time is similar for both kinds of reviewers,
  • but the completion rate is much higher for known reviewers than for other reviewers.

Therefore, the completion rate is the main factor that determines the effectiveness of the review process.

We have simulated the editorial workflow using a Markov-like model and tested the impact of some modifications of the editorial policy on the efficiency of the whole review process, focusing in particular on the number of reviewers of each type. Our results suggest that known reviewers are objectively better than other reviewers and that there is no advantage in choosing the latter over the former. In an ideal world, editors would invite only known reviewers; unfortunately, since these reviewers are effectively a finite resource, this is not possible.

In our opinion, the difference between the completion rates of known and other reviewers can be explained in two ways:

1. It is a purely statistical effect. It is possible that the completion rate of other reviewers is a good estimate of the average completion rate of the entire population, but by sheer luck the sub-editor happens to know only reviewers with a high completion rate, who belong to the tail of the distribution. The opposite is also possible: the completion rate of the entire population is high, but the sub-editor happens to choose other reviewers from the tail of the distribution.

2. The relationship with the editor, that is, the fact that some reviewers know the editor personally, determines the completion rate of known reviewers.

We believe that the first explanation is highly unlikely and that the second one is correct. Reviewers have only a finite amount of time at their disposal. They usually cannot accept all the invitations they receive, which forces them to make a choice. It seems intuitive that reviewers will prioritise invitations from editors they know, in order to maintain their reputation and avoid disappointing those editors. In fact, according to the JSCS sub-editor, known reviewers are not only more likely to accept an invitation than other reviewers, but are also more diligent and write reviews of high quality. In the absence of such a personal relationship with the editor, however, reviewers may employ different criteria. For example, it stands to reason that reviewers will generally choose more prestigious journals with a high impact factor over less prestigious ones.

Based on these observations, we propose the hypothesis that the completion rate is not necessarily a property of reviewers but of their relationships with other entities, be they journals, editors or even other reviewers. As such, the same reviewer can be treated as reliable by some journals or journal editors, i.e. likely to answer and write the review, and as unreliable by others. Editors should be able to estimate, at least roughly, the completion rate of potential reviewers. For example, a journal with a low impact factor cannot feasibly expect a review from a Nobel laureate. Moreover, since relations between people change, the completion rate does not have to be constant and may evolve with time.

Authors of manuscripts, reviewers and editors form a complex network of mutual connections, the structure of which has a direct influence on the effectiveness of the review process. However, since editors are the ones who actually manage the entire process, their workflow is equally important as that structure, if not more so. With the right kind of workflow, one can potentially overcome many shortcomings in the behaviour of both authors and reviewers. We have shown that even naive and certainly suboptimal means, namely sending invitations to a sufficiently large group of potential reviewers, can achieve a short review time. The results presented in this manuscript can serve as a foundation for studying the dynamics of the review process and determining the optimal workflow for an editor, which could be the subject of interesting future research.

Arns, M. (2014). Open access is tiring out peer reviewers. Nature , 515 , 467. doi: 10.1038/515467a .


Ausloos, M., Nedic, O., Fronczak, A., & Fronczak, P. (2015). Quantifying the quality of peer reviewers through Zipf’s law. Scientometrics . doi: 10.1007/s11192-015-1704-5

Baker, D. (2002). The peer review process in science education journals. Research in Science Education , 32 (2), 171–180.

Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology , 45 (1), 197–245.

Cawley, V. (2011). An analysis of the ethics of peer review and other traditional academic publishing practices. International Journal of Social Science and Humanity , 1 (3), 205–213.

Cooper, M. L. (2009). Problems, pitfalls, and promise in the peer-review process: Commentary on Trafimow & Rice. Perspectives on Psychological Science , 4 (1), 84–90.

Hames, I. (2013). COPE ethical guidelines for peer reviewers . http://publicationethics.org/files/Peer%20review%20guidelines_0.pdf . Accessed 30 September 2015.

Kovanis, M., Porcher, R., Ravaud, P., & Trinquart, L. (2015). Complex systems approach to scientific publication and peer-review system: Development of an agent-based model calibrated with empirical journal data. Scientometrics . doi: 10.1007/s11192-015-1800-6

Kravitz, R. L., Franks, P., Feldman, M. D., Gerrity, M., Byrne, C., & Tierney, W. M. (2010). Editorial peer reviewers’ recommendations at a general medical journal: Are they reliable and do editors care? PLoS One , 5 (4), e10072.

Mulligan, A., Hall, L., & Raphael, E. (2013). Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology , 64 (1), 132–161.

Newton, D. P. (2010). Quality and peer review of research: An adjudicating role for editors. Accountability in Research , 17 (3), 130–145.

Nicholas, D., Watkinson, A., Jamali, H., Herman, E., Tenopir, C., Volentine, R., et al. (2015). Peer review: Still king in the digital age. Learned Publishing , 28 (1), 15–21.

Resnik, D. B., Gutierrez-Ford, C., & Peddada, S. (2008). Perceptions of ethical problems with scientific journal peer review: An exploratory study. Science and Engineering Ethics , 14 (3), 305–310.

Schwartz, S. J., & Zamboanga, B. L. (2009). The peer-review and editorial system: Ways to fix something that might be broken. Perspectives on Psychological Science , 4 (1), 54–61.

Squazzoni, F., & Takács, K. (2011). Social simulation that ’peers into peer review’. Journal of Artificial Societies and Social Simulation , 14 (4), 3.

Trimble, V., & Ceja, J. A. (2011). Are American astrophysics papers accepted more quickly than others? Part I. Scientometrics , 89 (1), 281–289.

Wager, E. (2006). Ethics: What is it for? Nature: Web Debate–Peer-Review . doi: 10.1038/nature04990 .

Wager, E., & Jefferson, T. (2001). Shortcomings of peer review in biomedical journals. Learned Publishing , 14 (4), 257–263.

Ware, M., & Mabe, M. (2015). The STM report: An overview of scientific and scholarly journal publishing (4th ed.). Technical report, International Association of Scientific, Technical and Medical Publishers.

Ware, M., & Monkman, M. (2008). Peer review in scholarly journals: Perspective of the scholarly community—An international study . Technical report, Mark Ware Consulting, Bristol. http://publishingresearchconsortium.com/index.php/prc-documents/prc-research-projects/36-peer-review-full-prc-report-final/file . Accessed 30 September 2015.


Acknowledgments

A.F. & P.F. were supported by the Foundation for Polish Science (grant no. POMOST/2012-5/5) and by the European Union within European Regional Development Fund (Innovative Economy). This paper is a part of scientific activities in COST Action TD1306 New Frontiers of Peer Review (PEERE).

Author information

Authors and Affiliations

Faculty of Physics, Warsaw University of Technology, Koszykowa 75, 00-662, Warsaw, Poland

Maciej J. Mrowinski, Agata Fronczak & Piotr Fronczak

Institute for the Application of Nuclear Energy (INEP), University of Belgrade, Banatska 31b, Belgrade-Zemun, Serbia

Olgica Nedic

School of Management, University of Leicester, University Road, Leicester, LE1 7RH, UK

Marcel Ausloos

eHumanities Group, Royal Netherlands Academy of Arts and Sciences (KNAW), Joan Muyskenweg 25, 1096 CJ, Amsterdam, The Netherlands

Group of Researchers for Applications of Physics in Economy and Sociology (GRAPES), Rue de la Belle Jardiniere 483, 4031, Angleur, Belgium


Corresponding author

Correspondence to Maciej J. Mrowinski.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Mrowinski, M.J., Fronczak, A., Fronczak, P. et al. Review time in peer review: quantitative analysis and modelling of editorial workflows. Scientometrics 107 , 271–286 (2016). https://doi.org/10.1007/s11192-016-1871-z


Received: 03 October 2015

Published: 09 February 2016

Issue Date: April 2016

DOI: https://doi.org/10.1007/s11192-016-1871-z


  • Peer review
  • Editorial process
  • Weighted directed graph

Manuscript Manager

Your Journal’s Review Time

There has been a big push over the last 10 years to get academic work published as quickly as possible. As time to publication becomes more and more critical, there is a trend to make review times shorter than before. How short is advisable, and do short review times yield results? Here is our advice to you.

1. Set a realistic review time

It seems most journals these days run on a peer review software system in which time parameters can be set for each task. Some journals allow more time for the evaluation of a manuscript, some allow less. A 14-day window is a fairly common parameter for a peer review, but whether you choose 14 days or another amount of time, remember to give a few days of leeway where necessary. Set the review time to a reasonable 14 to 21 days, but expect some reviewers to take slightly longer.

2. Automate your reminders

When choosing an online editorial system, know that an automatic reminder option is a true advantage. Set your reminders at regular intervals to keep a bit of gentle pressure on reviewers to submit their evaluations. This is very important for avoiding overdue reviews, as some editors are better than others at monitoring review progress. If your system is not online, it can be worth setting up reminders manually. Much valuable time can be lost while waiting for a review, and setting the proper time parameters can make your review process more efficient overall.
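
As an illustration, reminder scheduling reduces to a few lines of code in any editorial system; the 14-day deadline and the offsets below are hypothetical choices, not values from any particular product.

```python
from datetime import date, timedelta

REVIEW_DEADLINE_DAYS = 14
REMINDER_OFFSETS = (7, 12, 14)   # days after the invitation

def reminder_schedule(invited_on: date):
    """Dates on which gentle reminders should go out to the reviewer."""
    return [invited_on + timedelta(days=d) for d in REMINDER_OFFSETS]

for when in reminder_schedule(date(2024, 7, 1)):
    print("send reminder on", when.isoformat())
```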



How to Review & Evaluate a Journal Publication

Last Updated: April 20, 2024

  • Active Reading
  • Critical Evaluation
  • Final Review

This article was co-authored by Richard Perkins . Richard Perkins is a Writing Coach, Academic English Coordinator, and the Founder of PLC Learning Center. With over 24 years of education experience, he gives teachers tools to teach writing to students and works with elementary to university level students to become proficient, confident writers. Richard is a fellow at the National Writing Project. As a teacher leader and consultant at California State University Long Beach's Global Education Project, Mr. Perkins creates and presents teacher workshops that integrate the U.N.'s 17 Sustainable Development Goals in the K-12 curriculum. He holds a BA in Communications and TV from The University of Southern California and an MEd from California State University Dominguez Hills. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 151,683 times.

Whether you’re publishing a journal article review or completing one for a class, your critique should be fair, thorough, and constructive. Don't worry—this article will walk you through exactly how to review a journal article step-by-step. Keep reading for tips on how to analyze the article, assess how successful it is, and put your thoughts into words. 

Step 1 Familiarize yourself with your publication’s style guide.

  • Familiarizing yourself with format and style guidelines is especially important if you haven’t published with that journal in the past. For example, a journal might require you to recommend an article for publication, meet a certain word count, or provide revisions that the authors should make.
  • If you’re reviewing a journal article for a school assignment, familiarize yourself with the guidelines your instructor provided.

Step 2 Skim the article to get a feel for its organization.

  • While giving the article a closer read, gauge whether and how well the article resolves its central problem. Ask yourself, “Is this investigation important, and does it uniquely contribute to its field?”
  • At this stage, note any terminological inconsistencies, organizational problems, typos, and formatting issues.

Step 1 Decide how well the abstract and introduction map out the article.

  • How well does the abstract summarize the article, the problem it addresses, its techniques, results, and significance? For example, you might find that an abstract describes a pharmaceutical study's topic and skips to the results without discussing the experiment's methods in much detail.
  • Does the introduction map out the article’s structure? Does it clearly lay out the groundwork? A good introduction gives you a clear idea of what to expect in the coming sections. It might state the problem and hypothesis, briefly describe the investigation's methods, then state whether the experiment proved or disproved the hypothesis.

Step 2 Evaluate the article’s references and literature review.

  • If necessary, spend some time perusing copies of the article’s sources so you can better understand the topic’s existing literature.
  • A good literature review will say something like, "Smith and Jones, in their authoritative 2015 study, demonstrated that adult men and women responded favorably to the treatment. However, no research on the topic has examined the technique's effects and safety in children and adolescents, which is what we sought to explore in our current work."

Step 3 Examine the methods.

  • For example, you might observe that the subjects in a medical study didn’t accurately represent a diverse population.

Step 4 Assess how the article presents data and results.

  • For example, you might find that tables list too much undigested data that the authors don’t adequately summarize within the text.

Step 5 Evaluate non-scientific evidence and analyses.

  • For example, if you’re reviewing an art history article, decide whether it analyzes an artwork reasonably or simply leaps to conclusions. A reasonable analysis might argue, “The artist was a member of Rembrandt’s workshop, which is evident in the painting’s dramatic light and sensual texture.”

Step 6 Assess the writing style.

  • Is the language clear and unambiguous, or does excessive jargon interfere with its ability to make an argument?
  • Are there places that are too wordy? Can any ideas be stated in a simpler way?
  • Are grammar, punctuation, and terminology correct?

Step 1 Outline your review.

  • Your thesis and evidence should be constructive and thoughtful. Point out both strengths and weaknesses, and propose alternative solutions instead of focusing only on weaknesses.
  • A good, constructive thesis would be, “The article demonstrates that the drug works better than a placebo in specific demographics, but future research that includes a more diverse subject sampling is necessary.”

Step 2 Write your review’s first draft.

  • The introduction summarizes the article and states your thesis.
  • The body provides specific examples from the text that support your thesis.
  • The conclusion summarizes your review, restates your thesis, and offers suggestions for future research.

Step 3 Revise your draft before submitting it.

  • Make sure your writing is clear, concise, and logical. If you mention that an article is too verbose, your own writing shouldn’t be full of unnecessarily complicated terms and sentences.
  • If possible, have someone familiar with the topic read your draft and offer feedback.


  • ↑ https://www.science.org/content/article/how-review-paper
  • ↑ https://www.uis.edu/learning-hub/writing-resources/handouts/learning-hub/how-to-review-a-journal-article
  • ↑ https://library.queensu.ca/inforef/criticalreview.htm

About This Article

Richard Perkins

If you want to review a journal article, you’ll need to carefully read it through and come up with a thesis for your piece. Read the article once to get a general idea of what it says, then read it through again and make detailed notes. You should focus on things like whether the introduction gives a good overview of the topic, whether the writing is concise, and whether the results are presented clearly. When you write your review, present both strengths and weaknesses of the article so you’re giving a balanced assessment. Back up your points with examples in the main body of your review, which will make it more credible. You should also ensure your thesis about the article is clear by mentioning it in the introduction and restating it in the conclusion of your review.



The History of Peer Review Is More Interesting Than You Think

The term “peer review” was coined in the 1970s, but the referee principle is usually assumed to be as old as the scientific enterprise itself. (It isn’t.)


Peer review has become a cornerstone of academic publishing, a fundamental part of scholarship itself. With peer review, independent, third-party experts in the relevant field(s) assess manuscripts submitted to journals. The idea is that these expert peers referee the process, especially when it comes to technical matters that may be beyond the knowledge of editors.


“In all fields of academia, reputations and careers are now expected to be built on peer-reviewed publication; concerns with its efficacy and appropriateness thus seem to strike at the heart of scholarship,” write historians Noah Moxham and Aileen Fyfe.

The peer review system, continue Moxham and Fyfe, is “crucial to building the reputation both of individual scientists and of the scientific enterprise at large” because the process

is believed to certify the quality and reliability of research findings. It promises supposedly impartial evaluation of research, through close scrutiny by specialists, and is widely used by journal editors, grant-making bodies, and government.

As with any human enterprise, peer review is far from foolproof. Errors and downright frauds have made it through the process. In addition, as Moxham and Fyfe note, there can be “inappropriate bias due to the social dynamics of the process.” (Some peer review types may introduce less bias than others.)

The term “peer review” was coined in the early 1970s, but the referee principle is usually assumed to be about as old as the scientific enterprise itself, dating to the Royal Society of London’s Philosophical Transactions , which began publication in 1665.

Moxham and Fyfe complicate this history, using the Royal Society’s “rich archives” to trace the evolution of editorial practices at one of the earliest scientific societies.

Initially, the publication of Philosophical Transactions was a private venture managed by the Society’s secretaries. Secretary Henry Oldenburg, the first editor, ran it from 1665 to 1677, without, write Moxham and Fyfe, any “clear set of standards.”

Research sponsored by the Royal Society itself was published separately from the Transactions . In fact, the royally chartered Society had the power to license publication of books and periodicals (like the Transactions ) as “part of a wider mechanism of state censorship intended to ensure the proscription of politically seditious or religious heterodox material.” But as time passed, there wasn’t really much Society oversight over the publication at all.

The situation came to a crisis in the early 1750s, when an unsuccessful candidate for a Society fellowship raised a ruckus, conflating the separate administrations of the Society and the now rather stodgy Transactions. The bad press compelled the Society to take over financial and editorial control—by committee—of the Transactions in 1752. The editorial committee could refer submissions to fellows with particular expertise—but papers were already being vetted since they needed to be referred by fellows in the first place.

Formalization of the use of expert referees would be institutionalized by 1832. A “written report of fitness” of submissions by one or more fellows was to be made before acceptance. This followed similar procedures already introduced abroad, particularly at the Académie des sciences in Paris.

All of this, Moxham and Fyfe argue, was more about institution-building (and fortification) than what we know as peer reviewing today.

“Refereeing and associated editorial practices” were intended to “disarm specific attacks upon the eighteenth-century Society; sometimes, to protect the Society’s finances; and, by the later nineteenth century, to award prestige to members of the nascent profession of natural scientists.”


From 1752 to 1957, the front of every Transactions included an “Advertisement” noting that the Society could not pretend to “answer for the certainty of the facts, or propriety of the reasonings” of the papers contained within; all that “must still rest on the credit or judgement of their respective authors.”

The twentieth century saw a plethora of independent scientific journals and an exponential increase in scientific papers. “Professional, international scientific research” burst the bounds of the old learned societies with their gentlemanly ways. In 1973, the journal Nature (founded in 1869) made refereeing standard practice, to “raise the journal above accusations of cronyism and elitism.” Since then, peer review, as it came to be called in preference to refereeing, has become universal. At least in avowed “peer-reviewed journals.”





UnidosUS kicks off Las Vegas convention; Biden expected to speak

The largest Latino-centered civil rights organization kicked off its annual convention on the Strip.

A general view of the UnidosUS convention at the MGM Grand on Monday, July 15, 2024, in Las Veg ...

The nation’s largest Latino-centered civil rights organization on Monday kicked off its annual convention on the Strip.

This year’s “Our Time is Now!” three-day event is centered around November’s elections and the crucial Latino vote that could swing them.

President Joe Biden is scheduled to give keynote remarks on Wednesday to about the 1,500 in attendance.

Attendees at two packed ballrooms at MGM Grand began hearing Monday from more than 100 leaders who were set to discuss a variety of topics, including democracy and election integrity.

Secretary of Health and Human Services Xavier Becerra, the first Latino to ever serve in that position, was among the convention’s first-day speakers.

Rep. Steven Horsford, D-Nevada, and Nevada Secretary of State Cisco Aguilar also spoke.

‘We have tremendous power’

UnidosUS President and CEO Janet Murguia opened her remarks expressing condolences to the victims of the attempted assassination of former President Donald Trump two days before.

“I just wanted to take a moment to address the appalling attack on the former president at the political rally this past Saturday,” Murguia said. “Despite our differences, we are glad that the (former) president is safe, and our thoughts go out to the victims.”

She added that “violence has no place in American politics.”

Murguia said that political differences must be settled in the ballot box, noting that Latino voters could play a “deciding factor” in the upcoming election.

The convention aimed to bolster Latino voices.

Murguia said that more than 2,500 Latinos reach voting age every day and that more than a fifth of the community’s eligible voters were projected to cast a ballot for the first time in November.

“Nearly 40 percent of newly eligible voters in the battleground states of Arizona and right here in Nevada are Latino,” Murguia said.

“We have tremendous power,” she said, “And it is essential that we exercise that power this November.”

While the nonpartisan organization doesn’t endorse candidates, Murguia said: “You should vote. There’s just too much at stake.”

Nevada officials among speakers

Horsford called the local Latino community a “powerhouse at all levels of governments: in business, in labor and in advocacy.”

“To me, the Latino spending dollar is an important part of our political capital as citizens in this nation,” Horsford said. “We need to be more than just the consumers in our community. We also need to be the owners and the creators of that wealth and to share in that success.”

In a bilingual speech, Becerra touted the White House’s health and human services efforts, including the vaccine roll out during the COVID-19 pandemic.

Becerra spoke about the Latino fight inspired by his father who used to tell him in Spanish, “Son, if I can get up in the morning to go to work, it’s a good day.”

But he said that a subset of the community needed more than “luck” to survive.

“We all have to understand the value of being able to get up in the morning, to go to a job, and consider that a good day,” he said. “Because there’s still Americans for whom that’s very difficult.”

Aguilar and Arizona Secretary of State Adrian Fontes were part of the “Latino Power en Acción: Defending our Democracy” panel.

Aguilar said there was more work to be done to get the Latino population out to vote, noting that 30 percent of Nevadans identify as Latino and that 20 percent of voters overall are Latino.

“However, only 50 percent of that one fifth turn out, and if you look at how close our elections are, you increase turnout 3-4 percent, you’re going to flip the dynamic for some races.”

“Before our community starts to understand that, they have to see us running for office, they have to see government working for them,” Aguilar said. “It’s my responsibility as an elected official to get out there and show them that government does work for them.”

Contact Ricardo Torres-Cortez at [email protected].



submission review is taking too long [duplicate]

I submitted an article to a journal whose stated review time was 80 to 120 days, that is, approximately 4 months at most. I submitted roughly 13 months ago and received no answer within the stated window. I emailed the editor after 8 months and was told that the review process had suffered a delay, but that they would soon fix the issue.

I waited until September and sent another email, and again I received an apology and nothing more. I believe too much time has passed; it has now been over a year, and I do not know what to do.

Would there be any problem if I submitted my article to another journal, or should I send another email to the editor of this journal? Or should I just wait until I get a response?

  • peer-review

(asked by Klangen)

  • 7 Note that if you decide to submit elsewhere (which is your right) you still need to first withdraw from this journal. This bad experience does not give you carte blanche to double-dip. –  xLeitix Commented Nov 26, 2018 at 14:07
  • 1 It may help to get in touch with the editor-in-chief. Individual editors occasionally have a hard time finding willing reviewers. And if the volunteer reviewer gets wrapped up in their real life duties (leading to a delay), the bar for looking for alternate reviewers is relatively high. Been there. Not pleasant for anyone involved. –  Jyrki Lahtonen Commented Nov 27, 2018 at 4:44
  • academia.meta.stackexchange.com/questions/1967/… –  Anonymous Physicist Commented Sep 22, 2020 at 2:32

5 Answers

It seems like a long time by any standard. Contact them. You can give them a date by which you will formally withdraw your article from consideration, maybe a couple of weeks out. If they don't jump, you can submit elsewhere without worry.

The final nudge is just a courtesy. You could actually just inform them that you are withdrawing it for submission elsewhere. But if the journal is reputable, the courtesy might be worth it.

(answered by Buffy)

  • 2 There is a vague possibility that something unethical is going on - purposeful delays in order to trump a publication. Though I hope not... –  Spark Commented Nov 26, 2018 at 3:43
  • 5 @YairZick I would assume that if the paper is about some critical finding. If not, it is more likely that reviewers are slow or editors have a problem finding reviewers. Generally, I personally found that reviewers have become slower, less motivated and less reliable over the last decade and a half, probably because of the spate of review requests while submission quality and originality have decreased on average (from my personal observation). –  Captain Emacs Commented Nov 26, 2018 at 9:52
  • 10 Maybe important to emphasize that you indeed need to withdraw first before submitting elsewhere. I am not sure if the OP is aware of that based on their original phrasing. –  xLeitix Commented Nov 26, 2018 at 14:06

Write to the editor with pointed questions. How many reviewers have been invited? How many agreed/declined? When are the review due dates?

Without knowing the answers to these questions, deciding whether to wait or to withdraw and resubmit is just a crapshoot. With the answers, it's possible to make a much more informed decision about whether the reviewers are likely to finish their reviews.

If the editors refuse to answer, you can still guess if the delay is because of them or because of the reviewers based on how long it takes to answer your question. If they take a long time to answer, I'd guess that the delay is because of them, in which case I'd be more inclined to withdraw and submit elsewhere.

(answered by Allure)

  • Good advice here, get the editor to state what has happened & is happening... –  Solar Mike Commented Nov 26, 2018 at 5:10

I agree with @Buffy's answer. A polite letter to the editors asking for clarification would be good. If you or a colleague know anyone on the editorial board - contact them. I had a case where a paper was sitting idle for close to a year after an accept with minor revisions because of a miscommunication between the reviewers and the editor in charge.

Another point to consider: perhaps I am being a bit paranoid, but if your paper has been sitting there for a very long time and has not been published, it may be a good idea to have a version of it on arXiv or some other relevant open repository. This serves the purpose of timestamping your publication.

There are (thankfully rare) horror stories of unscrupulous reviewers purposely delaying decisions in order to get the results themselves. If your review is taking so long and results in your field take a long time to come by (say, experiments need to be run), this may be a cause for concern.

Also seriously consider whether this journal is worth submitting to in the future. Journals should not be rewarded for this kind of behavior.

(answered by Spark)

  • 2 Submitting to the journal has already "timestamped" the work. –  David Richerby Commented Nov 26, 2018 at 13:15
  • 2 @DavidRicherby: ... but not if the paper is rejected. –  Oleg Lobachev Commented Nov 26, 2018 at 16:59
  • 1 @OlegLobachev Even if the paper is rejected, you can still ask the journal's editor to vouch for the fact that you submitted it. In reality, it seems very unlikely that one would need to prove that a paper existed on some specific date, anyway. –  David Richerby Commented Nov 26, 2018 at 17:00
  • 1 Maybe, but if another person just happens to release similar results, it may take a really long time to get the editor to move on this. They may even be reluctant to get into this mess altogether as it may paint them and their editorial process as unethical/incompetent. Why not just go for an independent, free, open format that’s indisputable and has zero hassle? –  Spark Commented Nov 27, 2018 at 0:36
  • If all you want is a timestamp you can get that from a notary –  candied_orange Commented Oct 2, 2019 at 13:05

Three times in my (pretty long) career editors took an outrageously long time (more than a year) to decide on a submission. I was confident that the papers were correct and appropriate for those journals, so did not want to withdraw them and resubmit elsewhere. Eventually my frequent mail to the editors (snail and later e-) led to acceptance in each case. In at least one of them I think the editor gave up on nagging the referees and checked the paper herself.

(answered by Ethan Bolker)

Any credible and efficiently managed journal should be able to complete the review process and reach a decision in 6-8 weeks. Any period beyond that is usually the fault of editors and co-editors who accept the prestige of the title but pay little attention to their duties. The excuse is always that they are volunteers with other responsibilities; in my view, why accept an editorship if you will be unable to devote sufficient time to its tasks? Some of the fault also lies with publishers whose electronic systems don't flag submissions for which reviews are missing after a given time period.

The way I deal with lengthy reviews is to first contact the editor for an update after 8 weeks have elapsed. If the response is more than a simple "we are still waiting for reviews," and it appears that some attempt will be made to get the reviews, I allow another 4 weeks at most. Beyond that, I either withdraw the paper and offer it elsewhere or offer it elsewhere without withdrawal. In both scenarios, as far as I am concerned, the journal to which the paper was first sent has lost exclusive rights to my submission. Thereafter, whichever journal comes back first with an acceptance is the journal that will be given the copyright to publish.

(answered by Stackhouse)

  • I think it's considered unethical to duplicate submissions. –  user354948 Commented Jul 1, 2022 at 17:08


Figure legend (forest plot of pooled changes in screen time): The studies are presented in order of smallest to largest change in screen time. The square data markers indicate the degree of change, with the lines through the markers indicating 90% CIs. The diamond data marker indicates the overall pooled effect based on the included studies.

eTable 1. Search Strategy From Ovid MEDLINE

eTable 2. Quality Assessment Criteria

eTable 3. Quality Assessment of Included Studies

eFigure. Assessment of Publication Bias Using a Scatterplot of the Random-Effect Solution and the SE for Each Sample Estimate

eReferences.


Madigan S, Eirich R, Pador P, McArthur BA, Neville RD. Assessment of Changes in Child and Adolescent Screen Time During the COVID-19 Pandemic: A Systematic Review and Meta-analysis. JAMA Pediatr. 2022;176(12):1188–1198. doi:10.1001/jamapediatrics.2022.4116


Assessment of Changes in Child and Adolescent Screen Time During the COVID-19 Pandemic: A Systematic Review and Meta-analysis

  • 1 Department of Psychology, University of Calgary, Calgary, Alberta, Canada
  • 2 Alberta Children’s Hospital Research Institute, Calgary, Alberta, Canada
  • 3 School of Public Health, Physiotherapy and Sports Science, University College Dublin, Dublin, Ireland

Question   To what extent has the COVID-19 pandemic been associated with changes in the duration, content, and context of daily screen time among children and adolescents globally?

Findings   In this systematic review and meta-analysis of 46 studies including 29 017 youths (≤18 years), pooled estimates comparing estimates taken before and during the COVID-19 pandemic revealed an increase in screen time of 84 min/d, or 52%. Screen time increases were highest for individuals aged 12 to 18 years and for handheld devices and personal computers.

Meaning   This study shows an association between the COVID-19 pandemic and increases in screen time; practitioners and pandemic recovery initiatives should focus on fostering healthy device habits, including moderating use, monitoring content, prioritizing device-free time, and using screens for creativity or connection.

Importance   To limit the spread of COVID-19, numerous restrictions were imposed on youths, including school closures, isolation requirements, social distancing, and cancelation of extracurricular activities, which independently or collectively may have shifted screen time patterns.

Objective   To estimate changes in the duration, content, and context of screen time of children and adolescents by comparing estimates taken before the pandemic with those taken during the pandemic and to determine when and for whom screen time has increased the most.

Data Sources   Electronic databases were searched between January 1, 2020, and March 5, 2022, including MEDLINE, Embase, PsycINFO, and the Cochrane Central Register of Controlled Trials. A total of 2474 nonduplicate records were retrieved.

Study Selection   Study inclusion criteria were reported changes in the duration (minutes per day) of screen time before and during the pandemic; children, adolescents, and young adults (≤18 years); longitudinal or retrospective estimates; peer reviewed; and published in English.

Data Extraction and Synthesis   A total of 136 articles underwent full-text review. Data were analyzed from April 6, 2022, to May 5, 2022, with a random-effects meta-analysis.

Main Outcomes and Measures   Change in daily screen time comparing estimates taken before vs during the COVID-19 pandemic.

Results   The meta-analysis, which included 46 studies (146 effect sizes; 29 017 children; 57% male; mean [SD] age, 9 [4.1] years), revealed that, from a baseline prepandemic value of 162 min/d (2.7 h/d), screen time during the pandemic increased by 84 min/d (1.4 h/d), representing a 52% increase. Increases were particularly marked for individuals aged 12 to 18 years (k [number of sample estimates] = 26; 110 min/d) and varied by device type (handheld devices [k = 20; 44 min/d] and personal computers [k = 13; 46 min/d]). Moderator analyses showed that increases were possibly larger in retrospective (k = 36; 116 min/d) vs longitudinal (k = 51; 65 min/d) studies. Mean increases were observed in samples examining both recreational screen time alone (k = 54; 84 min/d) and total daily screen time combining recreational and educational use (k = 33; 68 min/d).

Conclusions and Relevance   The COVID-19 pandemic has led to considerable disruptions in the lives and routines of children, adolescents, and families, which is likely associated with increased levels of screen time. Findings suggest that when interacting with children and caregivers, practitioners should place a critical focus on promoting healthy device habits, which can include moderating daily use; choosing age-appropriate programs; promoting device-free time, sleep, and physical activity; and encouraging children to use screens as a creative outlet or a means to meaningfully connect with others.

To limit the spread of the COVID-19 virus, numerous restrictions were imposed on the daily lives of children and adolescents globally, including repeated school closures, cancellation of extracurricular activities, social and physical distancing from peers and other sources of interpersonal support (eg, teachers and coaches), and mandated home quarantining due to COVID-19 exposure. Parents, in parallel, also experienced substantial challenges, including financial instability, job insecurity, loss of child care, and increased home-schooling responsibilities, which individually and collectively resulted in increased family stress and mental distress. 1 - 3 To cope with such unparalleled disruptions to normal living conditions, many children and families likely used digital devices to occupy their time during the pandemic. Population-level increases in child and adolescent screen time have therefore been expected. 4 , 5 Trajectories of screen use demonstrate that children with high screen use often remain high users throughout preschool and middle childhood. 6 , 7 Meta-analyses have also documented significant associations of child screen time with poor sleep, 8 physical activity, 9 language and communication skills, 10 mental health, 11 and academic 12 outcomes. Up to 80% of apps for children are also purposely built with manipulative design features (eg, fabricated time pressure, gifts, and attractive lures to encourage longer gameplay), 13 which can be persuasive in maintaining children’s attention. Therefore, a critical time-sensitive research focus should be to determine the degree to which child and adolescent screen time increased during the COVID-19 pandemic in terms of the duration of use as well as the content and context of use.

Although most empirical studies suggest that screen time increased during the pandemic, there is considerable variability in the direction and magnitude of change between studies. For example, Welling et al 14 reported no significant changes, Morrison et al 15 reported a decrease of 15 min/d, and McArthur et al 4 and Pietrobelli et al 16 reported increases of 102 min/d and 292 min/d, respectively, before vs during the pandemic. Thus, there is a need to explain between-study variability in COVID-19–associated changes in screen time. The variation in design affordances across devices and platforms, such as their mobility and intended use, may yield variations in the patterns of change across device type. With more than 1.5 billion children worldwide moving to online school at the outset of the pandemic, 17 context of use should also be examined because screen time could have increased for educational use.

One expected, developmentally relevant moderator of changes in screen time is child age because screen time increases across childhood. 18 , 19 Variability could also be sex specific, with studies showing that screen time is higher for boys than for girls, 19 - 21 and informant dependent because youths (vs parents) may be more reliable estimators of their own behavior. 11 , 22 Between-study variability may also be associated with the populations under investigation, such as children and adolescents with medical (eg, obesity) or clinical (eg, autism spectrum disorder) diagnoses who may have been prone to receiving or requesting more screen time. 23 - 26 Another source of heterogeneity could be study design, with some studies providing longitudinal change in cohorts of children by comparing pandemic data with historical prepandemic data, whereas other studies were cross-sectional and asked participants to retrospectively recall prepandemic screen time (an approach prone to recall bias). 27 Finally, government-mandated restrictions and their seasonal timing varied across countries, which could have affected estimates across studies.

The objectives of this study were to conduct a systematic review and meta-analysis of global changes in child and adolescent screen time before vs during the COVID-19 pandemic and to determine the degree to which these changes differed across devices, context of use, age groups, sexes, devices, population types, methods, and region and season (ie, geographic latitude). Together, these objectives can inform practitioners, programs, and policies seeking to put child and adolescent sedentary behaviors at the forefront of global pandemic recovery efforts.

In this meta-analysis, 4 electronic databases (MEDLINE, Embase, PsycINFO, and the Cochrane Central Register of Controlled Trials) were searched for studies published between January 1, 2020, and March 5, 2022. Search strategy terms included screen time, sedentary behavior, and COVID-19 (eTable 1 in the Supplement ). Retrieved studies were imported into Covidence, 28 where duplicates were automatically removed. Reference lists of included studies and relevant systematic reviews were also hand searched. This review was registered as a protocol with PROSPERO ( CRD42022320709 ).

Study inclusion criteria were reported changes in the duration (minutes per day) of screen time before and during the COVID-19 pandemic within the same group of children; children, adolescents, and young adults (≤18 years); longitudinal or retrospective study; peer reviewed; and published in English. Exclusion criteria were case studies, reports, and qualitative analyses. Study inclusion was determined by 2 independent coders (S.M. and P.P.), who coded all titles or abstracts in Covidence (mean random agreement probability, 93%). Independent coders (S.M. and P.P.) reviewed all full-text articles against the inclusion criteria. Discrepancies were resolved via consensus.

Changes in the duration of daily screen time before vs during the pandemic were extracted from each study. Inferential statistics ( P value, z score, t value, and CI) were extracted to calculate the SE of these changes. When studies included male and female individuals, separate subsample data were extracted to account for heterogeneity arising from real differences in screen use between sexes. Data extraction was conducted by 2 coders (P.P. and R.D.N.). Intercoder agreement was 94%.

Continuous moderators were baseline (prepandemic) screen time (minutes per day), number of months between assessments of screen time, sample geographic latitude, and study quality. Categorical moderators were device type (handheld devices, personal computers, television, video gaming, and social media), content (recreational vs recreational plus educational [ie, total]), age group (preschool [≤5 years], primary school [>5 to ≤12 years], and secondary school [>12 to ≤18 years]), sex (percentage of female individuals), study design (longitudinal or retrospective), informant (parent or youth), and population (clinical [autism spectrum disorder and psychiatric patients, k = 4] vs nonclinical; medical [obesity and diabetes, k = 16] vs nonmedical samples, where k is the number of sample estimates).

Study quality was assessed with items from the National Institutes of Health Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. 29 Each study received a score of 0 (criterion unmet) or 1 (criterion met) for 11 quality indicators, which were tallied to give a quality score from 0 to 11 (eTable 2 and eTable 3 in the Supplement ). The study followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses ( PRISMA ) reporting guideline. Data for this study were freely available through published studies.

Random-effects meta-analyses 30 were conducted in SAS, version 9.4 (SAS Institute Inc) from April 6 through May 5, 2022. The inverse square method was used to weight sample estimates. 31 Between-sample heterogeneity was summarized with the τ statistic, representing the typical differences in the meta-analyzed mean between samples. Effect sizes were calculated by following the Cohen 32 principle of standardization (ie, by dividing outcomes by their respective between-person SD of pre–COVID-19 screen time). Standardized thresholds for small, moderate, large, and very large effect sizes were 0.2, 0.6, 1.2, and 2 SDs, respectively. 33 Sampling uncertainty is represented as 90% CIs. Precision of estimation 33 was deemed inadequate or unclear when the 90% CI included substantial positive and negative values (ie, −0.2 and 0.2 SDs, respectively). When the 90% CI included both trivial and substantial (positive or negative) values, the outcome was interpreted as “possibly” substantial. Publication bias and potential outliers were evaluated with the random-effects output (ie, the random-effect solutions for each sample estimate) from the meta-analytic model described earlier. Publication bias was evaluated with a scatterplot of the random-effect solutions and the SEs for each sample estimate. Potential outliers were detected when the P value for the random-effect solution was less than a threshold given by P  < .05 divided by the degrees of freedom for the sample estimate random-effect solution in question.

Our search strategy produced 2474 nonduplicate records, and 136 underwent full-text review ( Figure 1 ). Forty-six studies met the full inclusion criteria, with 146 available estimates. Of the 146 estimates, 87 represented changes for all devices combined, 20 for handheld devices, 13 for personal computers, 11 for television, 9 for video gaming, and 6 for social media.

Across the 46 studies (Table 1), 29 017 children and young adults aged 18 years or younger were represented (57% male and 43% female). 4,14-16,24-26,34-72 The mean (SD) age was 9 (4.1) years. Only 9 (20%) of the studies included in this meta-analysis reported data on the race or ethnicity of their sample, and the data on racial and ethnic categories among these 9 studies were inconsistently reported. Studies used parent-reported (29 studies [63%]) or child-reported (17 studies [37%]) data. In terms of context of use, 29 studies reported changes in recreational screen use (17 studies reported recreational plus educational use). Most studies (28 [61%]) reported longitudinal estimates of change in screen time; the remaining 18 studies (39%) were retrospective estimates of prepandemic data. Of the 46 included studies, 14 were from Asia (30%), 12 from Europe (26%), 12 from North America (26%), 3 from Australia or New Zealand (7%), 2 from South America (4%), and 2 from the Middle East (4%), and 1 study (2%) had pooled data from multiple countries. The mean study quality score was 6.8 (range, 3-9) (eTable 3 in the Supplement).

From a baseline value of 162 min/d (2.7 h/d), total daily screen time across all children increased during the COVID-19 pandemic by 84 min/d (90% CI, 51-116 min/d), corresponding to a moderate effect size when standardized ( Figure 2 ). Between-study heterogeneity was small as summarized by a τ statistic of 0.3 SDs (90% CI, 0.2-0.5 SDs).

Moderator analyses (Table 2) 73 revealed that increases in screen time were particularly marked for individuals 12 to 18 years of age, whose total daily screen time increased by 110 min/d (k = 26; 90% CI, 72-149 min/d), corresponding to a moderate to large effect size. The increase in total daily screen time for preschoolers and primary school children was smaller—approximately 65 min/d—corresponding to a moderate effect size (preschool k = 12 [mean, 66 min/d; 90% CI, 27-106 min/d]; primary school k = 49 [mean, 65 min/d; 90% CI, 36-95 min/d]). Time spent on handheld devices and on personal computers each increased by approximately 45 min/d, corresponding to a moderate to large effect size (handheld device k = 20 [mean, 44 min/d; 90% CI, 11-77 min/d]; personal computer k = 13 [mean, 46 min/d; 90% CI, 12-81 min/d]). Moderator analyses also revealed that changes in total daily screen time were larger for sample estimates in which the data were reported retrospectively (116 min/d; 90% CI, 95-137 min/d; k = 36) rather than longitudinally (65 min/d; 90% CI, 50-80 min/d; k = 51). Both estimates were in the range of moderate effect sizes.

Moderator analyses ( Table 2 ) signaled possible increases in television viewing, video gaming, and social media use. Changes in daily screen time were also possibly larger for sample estimates with higher baseline (pre–COVID-19) screen time levels, sample estimates of recreational screen time, sample estimates representing children and adolescents with weight-related medical diagnoses, and sample estimates based on parental reports. However, sampling uncertainty in each of these outcomes was too large to be definitive (ie, 90% CIs included a wide range of trivial values). Sampling uncertainty for the remaining moderators shown in Table 2 (ie, sex, regional and seasonal characteristics, studies of samples with clinical diagnoses, and studies conducted over different durations) should be interpreted as unclear.

The standardized slope of the regression line representing publication bias was a trivial effect size (β = 0.09; 90% CI, −0.06 to 0.25) (eFigure in the Supplement ). A single outlier was identified against the weighted threshold of P  < .001. The direction or effect sizes of study outcomes were not sensitive to the removal of this outlier.

This meta-analysis of 46 studies (146 effect sizes) from 29 017 children and adolescents revealed that, on average, screen time increased by 52%, or 84 min/d (1.4 h/d), during the pandemic. Compared with a prepandemic baseline value of 162 min/d (2.7 h/d), this increase corresponds to a daily mean of 246 minutes of screen time per day (4.1 h/d) across all children and adolescents during the pandemic. This substantial change in screen time is more than what can be expected according to developmental changes 19 , 20 and time trends. 21 Substantial mean increases were observed in samples examining changes in recreational screen time alone (increase of 84 min/d) as well as combined estimates of recreational plus educational (increase of 68 min/d) screen time from prior to during the pandemic. As such, changes in screen time estimated in this study can very likely be associated with the unprecedented disruptions of the COVID-19 pandemic. These findings should be considered along with another meta-analysis suggesting a 32% decrease in children’s engagement in moderate to vigorous physical activity during the pandemic. 74 Policy-relevant pandemic recovery planning and resource allocation should therefore consider how to help children, adolescents, and families to “sit less and play more” to meet the 24-hour movement guidelines. 75

In this meta-analysis, we identified several moderators that explained existing heterogeneity across studies examining changes in screen time before vs during the pandemic. Changes were larger for individuals 12 to 18 years of age (110 min/d) compared with preschoolers (66 min/d) and primary school children (65 min/d). Adolescents were more likely than their younger counterparts to own and access digital devices. 76 This finding could also be explained by the fact that adolescence is marked by an increased emphasis on both a wider interpersonal and virtual peer network as well as the development of romantic relationships. 77 In most circumstances, the social distancing restrictions implemented during the pandemic prohibited face-to-face social interactions between children and adolescents from different households, especially early in the pandemic. Therefore, it is likely that they resorted to and relied on digital devices to stay connected. This finding aligns with a recent census of screen use among children and adolescents, in which 83% of respondents reported using screens to stay connected with family and friends. 78 Adolescents were also more likely than younger children during the pandemic to seek new outlets for creative expression, learning new skills and building on existing skills in a remote context, much of which took place on digital devices. 78

The estimated mean changes in screen time spent on handheld devices (44 min/d) and personal computers (46 min/d) were particularly marked, whereas changes in television, gaming, and social media were similar. This finding aligns with the observation that, as devices became a central component of daily living and interactions during the pandemic—for work, schooling, learning, socialization, and recreation alike—1 in 5 parents reportedly purchased new devices for their children, primarily computers and handheld devices. 79 Handheld devices and personal computers also provide access to text messaging, instant messaging, video chatting and sharing, etc, which children and adolescents are more likely to engage in to connect with peers.

Although the observed mean values were both moderate effect sizes, there was a larger range of increases in screen time estimated when prepandemic screen time data were collected retrospectively (90% CI, 95-137 min/d) rather than longitudinally (90% CI, 50-80 min/d). Given the unprecedented nature of the pandemic as well as the time-sensitive need to study pandemic-related associations in real time, some scholars collected pandemic data in a largely pragmatic manner, including the use of retrospective recall of prepandemic experiences and behaviors. However, retrospective study designs are vulnerable to recall bias. 27 For example, parents may have become more acutely aware of their children’s screen time during lockdowns, which may have biased their perception of and ability to accurately recall their children’s prepandemic screen time. Comparatively speaking, longitudinal designs are often more methodologically rigorous. As such, within-person studies of child and adolescent screen time should be more heavily relied on to inform decision-making regarding policy and practice given their scope for enhanced precision of estimation.

Although we examined duration, content, and context of use in this meta-analysis, we could not examine how children and adolescents were using screens (eg, solitary viewing, gaming with others, or video chatting). It is possible, for example, that some youths used screens as a supportive tool for connecting with peers and other supports during physical distancing, which could explain their increased use. Children and adolescents who used screens to coview or connect with others during the pandemic had half as much screen time as their peers who viewed screens in a solitary manner. 80 Thus, future research should examine duration of screen time and its association with whatever devices or platforms children and adolescents are using, examine how they are engaging with screens, and determine when and for whom problematic screen use may develop. 81

Studies have found small associations between increased screen use among children and poor mental health both before (see Eirich et al 11 for a meta-analysis) and during the pandemic 82-85 ; however, the association may be nonlinear. That is, there is support for an inverted U-shaped association between screen time and well-being—the “Goldilocks hypothesis”—in which children who receive less than 1 hour of screen time per day and those who receive high doses of screen time have been shown to have the poorest psychosocial functioning compared with children with moderate screen use. 86 Thus, restricting screens altogether is likely not a feasible or optimal solution to managing children’s and adolescents’ screen use during the pandemic or afterward. Understanding how screens have been used during the COVID-19 pandemic, for better and for worse, 87 and determining who is at greatest risk for sustained problematic outcomes require priority in future studies. Cohort study designs with repeated measures that can account for changes in screen use and mental health before, during, and after the COVID-19 pandemic will be particularly important for this endeavor.

The observed increase in screen time during the COVID-19 pandemic may be temporary and context dependent for some youths (eg, those isolated during school closures). However, for others, sustained problematic screen use habits may be formed. Practitioners working with children, adolescents, and families should focus on promoting healthy device habits among youths, which can include moderating and monitoring daily use, choosing age-appropriate programs, and prioritizing device-free time with family and friends. Youths should be prompted to think about how they use screens and whether they can focus their time on screens to meaningfully connect with others or as a creative outlet. It is also critical to discuss balancing screen use with other important daily functions, such as sleep and physical activity. Last, given that screen use is often interconnected among family members, that parents’ level of screen use is strongly associated with children’s screen use, 88 and that parents’ stress during the pandemic was associated with children’s increased duration of screen use, 4 it is important for practitioners to speak jointly with youths and their caregivers to effect change in familywide screen use. 89

This study had several limitations. First, although there was representative coverage of various continents in this meta-analysis, there were no samples from Africa and limited samples from South America and the Middle East. Thus, findings may be relevant only to specific geographic regions of the world. Second, no reports of screen time were validated against passive sensing apps. 90 Third, only 1 study explicitly reported that all participants were engaging in virtual learning, and included samples were homogeneous in terms of socioeconomic status, precluding consideration of these variables as potential moderators. Greater diversity in sampling for future research studies on child and adolescent screen use is urgently needed.

The COVID-19 pandemic led to substantial changes in daily routines of children and adolescents. This systematic review and meta-analysis revealed that their screen time during the pandemic increased by 52% compared with prepandemic baseline estimates, which is greater than what would be expected based on age changes and time trends. Recovery initiatives should focus on promoting healthy device habits among children and adolescents, including moderating daily use, monitoring content, and promoting the use of screens as a creative outlet and to meaningfully connect with others. Cohort study designs with repeated measurement of screen time that can account for developmental change, as well as preexisting risks and stable contextual factors or vulnerabilities, are needed to disentangle the associations of the COVID-19 pandemic with the screen time and mental health outcomes of children and adolescents.

Accepted for Publication: August 10, 2022.

Published Online: November 7, 2022. doi:10.1001/jamapediatrics.2022.4116

Corresponding Author: Sheri Madigan, PhD, Department of Psychology, University of Calgary, 2500 University Ave, Calgary, AB T2N 1N4, Canada ( [email protected] ).

Author Contributions: Drs Madigan and Neville had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Madigan, Eirich, Neville.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Madigan, Eirich, Pador, Neville.

Critical revision of the manuscript for important intellectual content: Eirich, McArthur, Neville.

Statistical analysis: Eirich, Neville.

Administrative, technical, or material support: Madigan, Eirich, Pador.

Supervision: Madigan.

Conflict of Interest Disclosures: None reported.

Additional Information: Data extracted from included studies, data used for the meta-analysis, and SAS mixed-model code are available on reasonable request to the corresponding author.


COMMENTS

  1. editors

    When you review papers submitted for publication, is there an "optimal" length for reviews? In my experience as an author and referee, I have seen a large range of review lengths (for reference, a paper in my field is typically between 3 and 8 printed pages):

  2. SciRev

    Journal pages. Each journal has its own page with information about the review process. Data on experienced duration and quality of the review process are provided by researchers.

  3. Duration and quality of the peer review process: the author's

There are around 28,000 scientific journals worldwide, which publish 2.5 million scientific articles annually, produced by a research community of 6-9 million scientists (Ware and Mabe 2015; Jinha 2010; Björk et al. 2009; Plume and Van Weijen 2014; Etkin 2014). Many of the published articles have been rejected at least once before they reached the editor's desk of the journal in which they ...

  4. Editorial and Peer Review Process

Discover a faster, simpler path to publishing in a high-quality journal. PLOS ONE promises fair, rigorous peer review, broad scope, and wide readership - a perfect fit for your research every time.

  5. Step by Step Guide to Reviewing a Manuscript

    Step by step guide to reviewing a manuscript. When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review.

  6. How Long Does Peer Review Take?

Another explanation, though, is that reviewers are simply not being careful enough. This was shown to be partly the case in a well-known pathology journal that had lowered review time to 16 days for an initial decision. Yet still, no matter how responsible and well-managed journals may be, there are times when you can give them a nudge.

  7. How to write a good scientific review article

    Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up-to-date with developments in a particular area of research.

  8. How Long Is Too Long in Contemporary Peer Review? Perspectives ...

Delays in peer reviewed publication may have consequences for both assessment of scientific prowess in academia as well as communication of important information to the knowledge receptor community. We present an analysis of the perspectives of authors publishing in conservation biology journals regarding their opinions on the importance of speed in peer-review as well as how to improve review ...

  9. How to Write an Effective Journal Article Review

    Role of the Editor. Each manuscript that is submitted to a peer-reviewed journal is assigned to a managing editor who is responsible for the following tasks: (1) selection of reviewers, (2) independent review of manuscript content, (3) integration of both reviewers' comments and feedback from his or her own independent review in order to prepare an editorial decision, and (4) preparation of ...

  10. How long does the review process take?

The review process currently averages 61 days from submission to acceptance across our 50+ journals. It varies across journals for a number of reasons (e.g. some fields have reviewers who are on field work and out of contact for a time, and some fields do more iterations in the discussion forum).

  11. How to Write a Peer Review

    Here's how your outline might look: 1. Summary of the research and your overall impression. In your own words, summarize what the manuscript claims to report.

  12. How to find out the average duration of the peer-review process for a

    The other point I made in my previous answer about rapid peer review is that depending on your situation, it might not be as important as you think to have your paper published; in many cases, submission to a reputable journal counts for almost as much as publication - it indicates to potential admissions committees, employers, etc. that your work is actually ready for prime time (as opposed ...

  13. How Long Should Authors Wait for a Journal's Response? (and When to

Wait 6-8 weeks to hear back from the journal, and then write a politely worded email to the editor to request more information, suggesting some additional names and emails of suitable peer reviewers.

  14. How to check how fast the review and publication process in a journal

Some journals are quite open and self-report submitted texts, rejection rate, time of the review process, etc. For instance, one of the leading journals in my field (political science) does this every year here. Maybe there is something similar in your field of computer science or educational science? Nice overviews of journals and their peer review process can be found on 2 websites: https ...

  15. How long should I wait for a response from the journal?

Answer: 1.5 months? It has been a long time since you submitted your paper to the journal. I submitted one paper to American Journal of Industrial and Business Management two months ago, and I received the review report about 3 weeks after my submission. I think they are highly efficient.

  16. How to Review a Journal Article

For many kinds of assignments, like a literature review, you may be asked to offer a critique or review of a journal article. This is an opportunity for you as a scholar to offer your qualified opinion and evaluation of how another scholar has composed their article, argument, and research. That means you will be expected to go beyond a simple summary of the article and evaluate it on a deeper ...

  17. How Long Is Too Long in Contemporary Peer Review? Perspectives from

Introduction. Peer reviewed publications remain the cornerstone of the scientific world [1, 2] despite the fact that the review process is not infallible [3, 4]. Such publications are an essential means of disseminating scientific information through credible and accessible channels.

  18. Review time in peer review: quantitative analysis and modelling of

Despite a variety of criticisms of its effectiveness (Wager and Jefferson 2001; Cooper 2009), peer review is a fundamental mechanism for validating the quality of the research that is published in today's scientific literature (Baker 2002; Ware and Monkman 2008; Mulligan et al. 2013; Ware and Mabe 2015; Nicholas et al. 2015). It is a complex, multi-phase process that seems to be largely ...

  19. Systematic reviews: Structure, form and content

In recent years, there has been an explosion in the number of systematic reviews conducted and published (Chalmers & Fox 2016, Fontelo & Liu 2018, Page et al 2015) - although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions. Systematic reviews can be inadvisable for a variety of reasons.

  20. Review Time: Setting Time for Academic Manuscript Peer Review

Review Time. There has been a big push over the last 10 years to get academic work published as quickly as possible. As time to publication becomes more and more critical, there is a trend to make review times shorter now than before.

  21. How to Review a Journal Article: A Guide to Peer Reviewing

    This article was co-authored by Richard Perkins.Richard Perkins is a Writing Coach, Academic English Coordinator, and the Founder of PLC Learning Center. With over 24 years of education experience, he gives teachers tools to teach writing to students and works with elementary to university level students to become proficient, confident writers.

  22. The History of Peer Review Is More Interesting Than You Think

    Peer review has become a cornerstone of academic publishing, a fundamental part of scholarship itself. With peer review, independent, third-party experts in the relevant field(s) assess manuscripts submitted to journals. The idea is that these expert peers referee the process, especially when it comes to technical matters that may be beyond the knowledge of editors.

  23. Average time alloted for manuscript review across disciplines

    I am asking this question because I've always had the impression that manuscripts submitted to a journal in my field (theoretical linguistics) take an insanely long time to get published, and sometimes I get the impression that this is because, most of the time, nobody other than the author cares about getting things done within a reasonable amount of time.

  24. McDonald's same-store sales fall for the 1st time since the pandemic

    The Associated Press is an independent global news organization dedicated to factual reporting. Founded in 1846, AP today remains the most trusted source of fast, accurate, unbiased news in all formats and the essential provider of the technology and services vital to the news business.

  25. 'Time Bandits' Review: Ragtag Robbers Return

    Starring Lisa Kudrow and Kal-El Tuck as time-traveling treasure-hunters, Taika Waititi's reboot series is an educational romp through history that departs from Terry Gilliam's 1981 original.

  26. UnidosUS kicks off Las Vegas convention; Biden expected to speak

    The nation's largest Latino-centered civil rights organization on Monday kicked off its annual convention on the Strip. This year's "Our Time is Now!" three-day event is centered around ...

  27. Natural Medicine: Go Outside for Better Mental Health, Study Finds

MONDAY, July 22, 2024 (HealthDay News) -- Spending time in nature can provide a boost for people with mental illness, a new review finds. Even as little as 10 minutes spent in a city park can improve a person's symptoms, researchers found. The positive effects of nature appeared particularly ...

  28. submission review is taking too long

    I have submitted an article for a journal in which it was stated that the review time was from 80 to 120 days, that is approximately 4 months at most. The submission of my article was approximately...

  29. Assessment of Changes in Child and Adolescent Screen Time During the

    Key Points. Question To what extent has the COVID-19 pandemic been associated with changes in the duration, content, and context of daily screen time among children and adolescents globally?. Findings In this systematic review and meta-analysis of 46 studies including 29 017 youths (≤18 years), pooled estimates comparing estimates taken before and during the COVID-19 pandemic revealed an ...