University of Maryland Libraries

Systematic Review

  • Library Help
  • What is a Systematic Review (SR)?

Steps of a Systematic Review

  • Framing a Research Question
  • Developing a Search Strategy
  • Searching the Literature
  • Managing the Process
  • Meta-analysis
  • Publishing your Systematic Review

Forms and templates


  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

  • PRISMA Flow Diagram - Record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.
  • PRISMA Checklist - Checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)


Key guidance documents for conducting systematic reviews:

  • Methods guide for effectiveness and comparative effectiveness reviews (AHRQ, 2017)
  • Finding what works in health care: Standards for systematic reviews (Institute of Medicine, 2011)
  • Systematic reviews: CRD's guidance for undertaking reviews in health care (CRD, 2008)

Identify your research question. Formulate a clear, well-defined research question of appropriate scope. Define your terminology. Find existing reviews on your topic to inform the development of your research question, identify gaps, and confirm that you are not duplicating the efforts of previous reviews. Consider using a framework such as PICO to define the scope of your question, and use a template (such as the PICO Template above) to record search terms under each concept.

It is a good idea to register your protocol in a publicly accessible way. This helps prevent others from duplicating a review on your topic. Likewise, before you start a systematic review, check the various registries to confirm that nobody else has already registered a protocol on the same topic.

Registries and platforms for protocols include:

  • Systematic reviews of health care and clinical interventions
  • Systematic reviews of the effects of social interventions
  • CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies)
  • A platform where the protocol is published immediately and subjected to open peer review; when two reviewers approve it, the paper is sent to Medline, Embase, and other databases for indexing
  • A registry for uploading a protocol for your scoping review
  • Systematic reviews of healthcare practices to assist in the improvement of healthcare outcomes globally
  • OSF - registering a protocol on OSF creates a frozen, time-stamped record of the protocol, ensuring a level of transparency and accountability for the research. There are no limits to the types of protocols that can be hosted on OSF.
  • PROSPERO - the international prospective register of systematic reviews. This is the primary database for registering systematic review protocols and searching for published protocols. PROSPERO accepts protocols from all disciplines (e.g., psychology, nutrition) with the stipulation that they must include health-related outcomes.
  • A service similar to PROSPERO: based in the UK, fee-based, with a quick turnaround time
  • A venue for submitting a pre-print or a protocol for a scoping review
  • A platform for sharing your search strategy and research protocol, with no limit on format, size, access restrictions, or license

Clearly state the criteria you will use to determine whether or not a study will be included in your search. Consider study populations, study design, intervention types, comparison groups, and measured outcomes. You can also apply database-supplied limits such as language, dates, humans, female/male, age groups, and publication/study types (randomized controlled trials, etc.).
Run your searches in the databases relevant to your topic. Work with a librarian to help you design comprehensive search strategies across a variety of databases. Approach the grey literature methodically and purposefully. Collect ALL of the retrieved records from each search into a citation manager, such as EndNote or Zotero, and remove duplicate records prior to screening.
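Deduplication itself is simple bookkeeping and can be scripted. Below is a minimal Python sketch (not any particular tool's algorithm) that keys hypothetical records on DOI, falling back to a normalized title:

# Hypothetical exported records; real exports would come from EndNote, Zotero, etc.
records = [
    {"title": "Aspirin for primary prevention", "doi": "10.1000/j.1"},
    {"title": "ASPIRIN FOR PRIMARY PREVENTION.", "doi": "10.1000/j.1"},
    {"title": "Statins and stroke risk", "doi": ""},
]

def dedup_key(rec: dict) -> str:
    """Prefer the DOI; otherwise fall back to a normalized title."""
    doi = rec["doi"].strip().lower()
    return doi if doi else "".join(ch for ch in rec["title"].lower() if ch.isalnum())

seen, unique = set(), []
for rec in records:
    k = dedup_key(rec)
    if k not in seen:
        seen.add(k)
        unique.append(rec)

print(f"{len(records)} records -> {len(unique)} after deduplication")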
Export your EndNote results into screening software. Start with a title/abstract screening to remove studies that are clearly not related to your topic. Use your inclusion/exclusion criteria to screen the full text of studies. It is highly recommended that two independent reviewers screen all studies, resolving areas of disagreement by consensus.
Use a spreadsheet, or systematic review software, to extract all relevant data from each included study. It is recommended that you pilot your data extraction tool to determine whether other fields should be included or existing fields clarified.
Risk of Bias (Quality) Assessment - Use a risk of bias tool (such as the Cochrane Tool for Assessing Risk of Bias in Included Studies) to assess the potential biases of studies with regard to study design and other factors. Consult the guidance literature to learn more about assessing risk of bias in included studies. You can adapt an existing tool to best meet the needs of your review, depending on the types of studies included.


Clearly present your findings, including detailed methodology (such as search strategies used, selection criteria, etc.) such that your review can be easily updated in the future with new research findings. Perform a meta-analysis, if the studies allow. Provide recommendations for practice and policy-making if sufficient, high quality evidence exists, or future directions for research to fill existing gaps in knowledge or to strengthen the body of evidence.
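If the included studies are similar enough to pool, the core of a basic meta-analysis is an inverse-variance weighted average of the study effect sizes. Here is a minimal Python sketch of a fixed-effect pooled estimate with a 95% confidence interval; the effect sizes and standard errors are made up for illustration:

import math

# Hypothetical study effects (e.g., log odds ratios) and their standard errors
effects = [0.42, 0.31, 0.58, 0.12]
ses = [0.21, 0.15, 0.30, 0.18]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")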

For more information, see the writing resources below:

  • Get some inspiration and find terms and phrases for writing your manuscript.
  • Automated high-quality spelling, grammar, and rephrasing corrections using artificial intelligence (AI) to improve the flow of your writing. Free and subscription plans available.

8. Find the best journal to publish your work. Identifying the best journal to submit your research to can be a difficult process. To help you choose where to submit, simply insert your title and abstract into any of the journal-matching tools listed under the corresponding tab.

Adapted from  A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

This diagram illustrates in a visual way and in plain language what review authors actually do in the process of undertaking a systematic review.

This diagram illustrates what is actually in a published systematic review and gives examples from the relevant parts of a systematic review housed online on The Cochrane Library. It will help you to read or navigate a systematic review.

Source: Cochrane Consumers and Communications  (infographics are free to use and licensed under Creative Commons )

Check the following visual resources titled "What Are Systematic Reviews?"

  • Video  with closed captions available
  • Animated Storyboard

 


- The methods of the systematic review are generally decided before conducting it.
- Search for studies that match the preset criteria in a systematic manner.
- Sort all retrieved articles (included or excluded) and assess the risk of bias for each included study.
- Code each study with a preset form, and synthesize the data either qualitatively or quantitatively.
- Place the results of the synthesis into context, noting the strengths and weaknesses of the studies.
- The report provides a description of the methods and results in a clear and transparent manner.

 

Source: Foster, M. (2018). Systematic reviews service: Introduction to systematic reviews. Retrieved September 18, 2018, from

  • << Previous: What is a Systematic Review (SR)?
  • Next: Framing a Research Question >>
  • Last Updated: Jul 11, 2024 6:38 AM
  • URL: https://lib.guides.umd.edu/SR

Cochrane Interactive Learning

Module 1: Introduction to Conducting Systematic Reviews

About this module

Part of the Cochrane Interactive Learning course on Conducting an Intervention Review, this module introduces you to what systematic reviews are and why they are useful. This module describes the various types and preferred format of review questions, and outlines the process of conducting systematic reviews.

 45-60 minutes

What you can expect to learn (learning outcomes)

This module will teach you to:

  • Recognize features of systematic reviews as a research design
  • Recognize the importance of using rigorous methods to conduct a systematic review
  • Identify the types of review questions
  • Identify the elements of a well-defined review question
  • Understand the steps in a systematic review

Authors, contributors, and how to cite this module

Module 1 has been written and compiled by Dario Sambunjak, Miranda Cumpston and Chris Watts,  Cochrane Central Executive Team .

A full list of acknowledgements, including our expert advisors from across Cochrane, is available at the end of each module page. 

This module should be cited as: Sambunjak D, Cumpston M, Watts C. Module 1: Introduction to conducting systematic reviews. In: Cochrane Interactive Learning: Conducting an intervention review. Cochrane, 2017. Available from https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews .

Update and feedback

The module was last updated on September 2022.

We're pleased to hear your thoughts. If you have any questions, comments or feedback about the content of this module, please contact us .

University of Tasmania, Australia

Systematic Reviews for Health: Online Tutorials & Courses

  • Handbooks / Guidelines for Systematic Reviews
  • Standards for Reporting
  • Registering a Protocol
  • Tools for Systematic Review
  • Online Tutorials & Courses
  • Books and Articles about Systematic Reviews
  • Finding Systematic Reviews
  • Critical Appraisal
  • Library Help
  • Bibliographic Databases
  • Grey Literature
  • Handsearching
  • Citation Searching
  • 1. Formulate the Research Question
  • 2. Identify the Key Concepts
  • 3. Develop Search Terms - Free-Text
  • 4. Develop Search Terms - Controlled Vocabulary
  • 5. Search Fields
  • 6. Phrase Searching, Wildcards and Proximity Operators
  • 7. Boolean Operators
  • 8. Search Limits
  • 9. Pilot Search Strategy & Monitor Its Development
  • 10. Final Search Strategy
  • 11. Adapt Search Syntax
  • Documenting Search Strategies
  • Handling Results & Storing Papers

Cochrane Interactive Learning

Cochrane Interactive Learning offers an online course Conducting an Intervention Review . The first introductory module is free for everyone. You need to register.

Module 1: Introduction to conducting systematic reviews  (45-60 minutes)

This module introduces you to what systematic reviews are and why they are useful. This module describes the various types and preferred format of review questions, and outlines the process of conducting systematic reviews.

Online Tutorials

Systematic Searches (Yale University)

This series of tutorials on searching for systematic reviews has been developed by Yale University's Cushing/Whitney Medical Library. The goal of these tutorials is to ensure that your search is comprehensive, methodical, transparent, and reproducible, so that your conclusions are as unbiased and as close to the truth as possible. Topics include building search strategies, using filters, and finding grey literature.

Coursera Course

The Johns Hopkins University offers the online course Introduction to Systematic Review and Meta-Analysis .

It covers how to formulate an answerable research question, define inclusion and exclusion criteria, search for the evidence, extract data, assess the risk of bias in clinical trials, and perform a meta-analysis.

Check out the Coursera website for more details and the next course offering.

Need More Help? Book a consultation with a  Learning and Research Librarian  or contact  [email protected] .

  • << Previous: Tools for Systematic Review
  • Next: Books and Articles about Systematic Reviews >>
  • Last Updated: Aug 6, 2024 10:44 AM
  • URL: https://utas.libguides.com/SystematicReviews


Annual Review of Psychology

Volume 70, 2019, Review Article

How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

  • Andy P. Siddaway 1 , Alex M. Wood 2 , and Larry V. Hedges 3
  • Affiliations: 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected] 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected]
  • Vol. 70:747-770 (Volume publication date January 2019) https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 08, 2018
  • Copyright © 2019 by Annual Reviews. All rights reserved

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.




How to write a systematic literature review [9 steps]

Systematic literature review

What is a systematic literature review?

  • Where are systematic literature reviews used?
  • What types of systematic literature reviews are there?
  • How to write a systematic literature review
  • 1. Decide on your team
  • 2. Formulate your question
  • 3. Plan your research protocol
  • 4. Search for the literature
  • 5. Screen the literature
  • 6. Assess the quality of the studies
  • 7. Extract the data
  • 8. Analyze the results
  • 9. Interpret and present the results
  • Registering your systematic literature review
  • Frequently asked questions about writing a systematic literature review
  • Related articles

A systematic literature review is a summary, analysis, and evaluation of all the existing research on a well-formulated and specific question.

Put simply, a systematic review is a study of studies that is popular in medical and healthcare research. In this guide, we will cover:

  • the definition of a systematic literature review
  • the purpose of a systematic literature review
  • the different types of systematic reviews
  • how to write a systematic literature review

➡️ Visit our guide to the best research databases for medicine and health to find resources for your systematic review.

Systematic literature reviews can be utilized in various contexts, but they’re often relied on in clinical or healthcare settings.

Medical professionals read systematic literature reviews to stay up-to-date in their field, and granting agencies sometimes need them to make sure there’s justification for further research in an area. They can even be used as the starting point for developing clinical practice guidelines.

A classic systematic literature review can take different approaches:

  • Effectiveness reviews assess the extent to which a medical intervention or therapy achieves its intended effect. They’re the most common type of systematic literature review.
  • Diagnostic test accuracy reviews produce a summary of diagnostic test performance so that their accuracy can be determined before use by healthcare professionals.
  • Experiential (qualitative) reviews analyze human experiences in a cultural or social context. They can be used to assess the effectiveness of an intervention from a person-centric perspective.
  • Costs/economics evaluation reviews look at the cost implications of an intervention or procedure, to assess the resources needed to implement it.
  • Etiology/risk reviews usually try to determine to what degree a relationship exists between an exposure and a health outcome. This can be used to better inform healthcare planning and resource allocation.
  • Psychometric reviews assess the quality of health measurement tools so that the best instrument can be selected for use.
  • Prevalence/incidence reviews measure both the proportion of a population who have a disease, and how often the disease occurs.
  • Prognostic reviews examine the course of a disease and its potential outcomes.
  • Expert opinion/policy reviews are based around expert narrative or policy. They’re often used to complement, or in the absence of, quantitative data.
  • Methodology systematic reviews can be carried out to analyze any methodological issues in the design, conduct, or review of research studies.

Writing a systematic literature review can feel like an overwhelming undertaking. After all, they can often take 6 to 18 months to complete. Below we’ve prepared a step-by-step guide on how to write a systematic literature review.

  • Decide on your team.
  • Formulate your question.
  • Plan your research protocol.
  • Search for the literature.
  • Screen the literature.
  • Assess the quality of the studies.
  • Extract the data.
  • Analyze the results.
  • Interpret and present the results.

When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

You may also need to team up with a librarian to help with the search, literature screeners, a statistician to analyze the data, and the relevant subject experts.

Define your answerable question. Then ask yourself, “has someone written a systematic literature review on my question already?” If so, yours may not be needed. A librarian can help you answer this.

You should formulate a “well-built clinical question.” This is the process of generating a good search question. To do this, run through PICO (a sketch of turning the four PICO blocks into a search string follows the list):

  • Patient or Population or Problem/Disease : who or what is the question about? Are there factors about them (e.g. age, race) that could be relevant to the question you’re trying to answer?
  • Intervention : which main intervention or treatment are you considering for assessment?
  • Comparison(s) or Control : is there an alternative intervention or treatment you’re considering? Your systematic literature review doesn’t have to contain a comparison, but you’ll want to stipulate at this stage, either way.
  • Outcome(s) : what are you trying to measure or achieve? What’s the wider goal for the work you’ll be doing?
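Each PICO element maps onto a block of synonyms that are OR-ed together within the block and AND-ed across blocks. A minimal Python sketch of that pattern; the concepts and terms below are placeholders, not a validated search strategy:

# Hypothetical PICO concept blocks; synonyms within a block are OR-ed,
# and the blocks themselves are AND-ed together.
pico_blocks = {
    "population": ["adolescents", "teenagers"],
    "intervention": ["cognitive behavioural therapy", "CBT"],
    "comparison": ["usual care"],
    "outcome": ["anxiety", "anxiety disorder"],
}

def build_query(blocks: dict) -> str:
    """Combine each concept block with OR, then join the blocks with AND."""
    ors = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
           for terms in blocks.values() if terms]
    return " AND ".join(ors)

print(build_query(pico_blocks))
# ("adolescents" OR "teenagers") AND ("cognitive behavioural therapy" OR "CBT")
#   AND ("usual care") AND ("anxiety" OR "anxiety disorder")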

Now you need a detailed strategy for how you’re going to search for and evaluate the studies relating to your question.

The protocol for your systematic literature review should include:

  • the objectives of your project
  • the specific methods and processes that you’ll use
  • the eligibility criteria of the individual studies
  • how you plan to extract data from individual studies
  • which analyses you’re going to carry out

For a full guide on how to systematically develop your protocol, take a look at the PRISMA checklist . PRISMA has been designed primarily to improve the reporting of systematic literature reviews and meta-analyses.

When writing a systematic literature review, your goal is to find all of the relevant studies relating to your question, so you need to search thoroughly .

This is where your librarian will come in handy again. They should be able to help you formulate a detailed search strategy, and point you to all of the best databases for your topic.

➡️ Read more on how to efficiently search research databases .

The places to consider in your search are electronic scientific databases (the most popular are PubMed , MEDLINE , and Embase ), controlled clinical trial registers, non-English literature, raw data from published trials, references listed in primary sources, and unpublished sources known to experts in the field.

➡️ Take a look at our list of the top academic research databases .


Don’t miss out on “gray literature” sources: those sources outside of the usual academic publishing environment. They include:

  • non-peer-reviewed journals
  • pharmaceutical industry files
  • conference proceedings
  • pharmaceutical company websites
  • internal reports

Gray literature sources are more likely to contain negative conclusions, so you'll improve the reliability of your findings by including them. You should document details such as the following (a sketch of a simple search log follows the list):

  • The databases you search and which years they cover
  • The dates you first run the searches, and when they’re updated
  • Which strategies you use, including search terms
  • The numbers of results obtained
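A minimal sketch of such a log in Python; the database strategies and result counts are placeholders, not real searches:

import csv

# One row per database search run; fields mirror the documentation list above
fieldnames = ["database", "coverage", "run_date", "strategy", "results"]
rows = [
    {"database": "PubMed", "coverage": "1966-present", "run_date": "2024-07-01",
     "strategy": '"stroke rehabilitation" AND randomized controlled trial[pt]',
     "results": 412},
    {"database": "Embase", "coverage": "1974-present", "run_date": "2024-07-01",
     "strategy": "'stroke rehabilitation'/exp AND 'randomized controlled trial'/de",
     "results": 530},
]

with open("search_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)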

➡️ Read more about gray literature .

This should be performed by your two reviewers, using the criteria documented in your research protocol. The screening is done in two phases:

  • Pre-screening of all titles and abstracts, and selecting those appropriate
  • Screening of the full-text articles of the selected studies

Make sure reviewers keep a log of which studies they exclude, with reasons why.
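Many teams also report how well the two screeners agreed before reaching consensus; Cohen's kappa is a common choice for this. A minimal Python sketch on hypothetical include/exclude decisions:

# Hypothetical title/abstract decisions from two independent screeners
r1 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
r2 = ["include", "exclude", "include", "include", "exclude", "exclude"]

n = len(r1)
observed = sum(a == b for a, b in zip(r1, r2)) / n  # raw agreement

# Expected chance agreement, from each screener's marginal proportions
labels = set(r1) | set(r2)
expected = sum((r1.count(lab) / n) * (r2.count(lab) / n) for lab in labels)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")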

➡️ Visit our guide on what is an abstract?

Your reviewers should evaluate the methodological quality of your chosen full-text articles. Make an assessment checklist that closely aligns with your research protocol, including a consistent scoring system, calculations of the quality of each study, and sensitivity analysis.

The kinds of questions you'll come up with are:

  • Were the participants really randomly allocated to their groups?
  • Were the groups similar in terms of prognostic factors?
  • Could the conclusions of the study have been influenced by bias?

Every step of the data extraction must be documented for transparency and replicability. Create a data extraction form and set your reviewers to work extracting data from the qualified studies.

Here’s a free detailed template for recording data extraction, from Dalhousie University. It should be adapted to your specific question.

Establish a standard measure of outcome which can be applied to each study on the basis of its effect size.

Measures of outcome for studies with:

  • Binary outcomes (e.g. cured/not cured) are odds ratio and risk ratio (see the sketch after this list)
  • Continuous outcomes (e.g. blood pressure) are means, difference in means, and standardized difference in means
  • Survival or time-to-event data are hazard ratios
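For binary outcomes, both measures come straight from each study's 2x2 table, as in this small Python sketch (the counts are made up):

# Hypothetical 2x2 table for one study:
#                 event   no event
#   treatment       a         b
#   control         c         d
a, b = 12, 88   # treatment group
c, d = 24, 76   # control group

risk_treat = a / (a + b)
risk_ctrl = c / (c + d)

risk_ratio = risk_treat / risk_ctrl   # ratio of risks
odds_ratio = (a / b) / (c / d)        # ratio of odds, i.e. (a*d) / (b*c)

print(f"risk ratio = {risk_ratio:.2f}, odds ratio = {odds_ratio:.2f}")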

Design a table and populate it with your data results. Draw this out into a forest plot , which provides a simple visual representation of variation between the studies.

Then analyze the data for issues. These can include heterogeneity: for example, when a study's line (confidence interval) in the forest plot doesn't overlap with those of the other studies. Again, record any excluded studies here for reference.
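Heterogeneity is usually quantified as well as eyeballed: Cochran's Q sums each study's weighted squared deviation from the pooled fixed-effect estimate, and I-squared expresses the share of variability beyond chance. A minimal Python sketch with illustrative numbers:

# Hypothetical study effects (e.g., log risk ratios) and standard errors
effects = [0.10, 0.35, 0.60, 0.20]
ses = [0.12, 0.15, 0.20, 0.10]

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate
Q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: percentage of total variation attributable to heterogeneity (floored at 0)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")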

Consider different factors when interpreting your results. These include limitations, strength of evidence, biases, applicability, economic effects, and implications for future practice or research.

Apply appropriate grading of your evidence and consider the strength of your recommendations.

It’s best to formulate a detailed plan for how you’ll present your systematic review results. Take a look at these guidelines for interpreting results from the Cochrane Institute.

Before writing your systematic literature review, you can register it with OSF for additional guidance along the way. You could also register your completed work with PROSPERO .

Systematic literature reviews are often found in clinical or healthcare settings. Medical professionals read systematic literature reviews to stay up-to-date in their field and granting agencies sometimes need them to make sure there’s justification for further research in an area.

The first stage in carrying out a systematic literature review is to put together your team. You should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

Your systematic review should include the following details: the objectives of your project, the specific methods and processes you'll use, the eligibility criteria for individual studies, how you plan to extract data from individual studies, and which analyses you're going to carry out.

A literature review simply provides a summary of the literature available on a topic. A systematic review, on the other hand, is more than just a summary. It also includes an analysis and evaluation of existing research. Put simply, it's a study of studies.

The final stage of conducting a systematic literature review is interpreting and presenting the results. It's best to formulate a detailed plan for how you'll present your systematic review results; guidelines are available, for example, from the Cochrane Institute.


How to Perform a Systematic Literature Review

A Guide for Healthcare Researchers, Practitioners and Students

  • © 2020
  • Edward Purssell   ORCID: https://orcid.org/0000-0003-3748-0864 0 ,
  • Niall McCrae   ORCID: https://orcid.org/0000-0001-9776-7694 1

School of Health Sciences, City, University of London, London, UK


Florence Nightingale Faculty of Nursing Midwifery & Palliative Care, King’s College London, London, UK

  • Presents a logical approach to systematic literature reviewing
  • Offers a corrective to flawed guidance in existing books
  • An accessible but intellectually stimulating guide with illuminating examples and analogies

85k Accesses

35 Citations

10 Altmetric


About this book

The systematic review is a rigorous method of collating and synthesizing evidence from multiple studies, producing a whole greater than the sum of parts. This textbook is an authoritative and accessible guide to an activity that is often found overwhelming. The authors steer readers on a logical, sequential path through the process, taking account of the different needs of researchers, students and practitioners. Practical guidance is provided on the fundamentals of systematic reviewing and also on advanced techniques such as meta-analysis. Examples are given in each chapter, with a succinct glossary to support the text.  

This up-to-date, accessible textbook will satisfy the needs of students, practitioners and educators in the sphere of healthcare, and contribute to improving the quality of evidence-based practice. The authors recommend freely available or inexpensive open-source/open-access resources (such as PubMed, R and Zotero) to help students, particularly those with limited resources, learn how to perform a systematic review.


  • Methodology
  • Evidence-based practice

Table of contents (11 chapters)

  • Introduction
  • A Brief History of the Systematic Review
  • The Aim and Scope of a Systematic Review: A Logical Approach
  • Searching the Literature
  • Screening Search Results: A 1-2-3 Approach
  • Critical Appraisal: Assessing the Quality of Studies
  • Reviewing Quantitative Studies: Meta-Analysis and Narrative Approaches
  • Reviewing Qualitative Studies and Metasynthesis
  • Reviewing Qualitative and Quantitative Studies and Mixed-Method Reviews
  • Meaning and Implications: The Discussion
  • Making an Impact: Dissemination of Results

Authors and Affiliations

Edward Purssell

Florence Nightingale Faculty of Nursing Midwifery & Palliative Care, King’s College London, London, UK

Niall McCrae

About the authors

Dr. Niall McCrae teaches mental health nursing and research methods at the Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care at King’s College London. His research interests are dementia, depression, the impact of social media on younger people, and the history of mental health care. Niall has written two previous books: The Moon and Madness (Imprint Academic, 2011) and The Story of Nursing in British Mental Hospitals: Echoes from the Corridors (Routledge, 2016). He is a regular writer for Salisbury Review magazine. 

In partnership, Purssell and McCrae have written several papers on research methodology and literature reviewing for healthcare journals. Both have extensive experience of teaching literature reviewing at all academic levels, and of explaining complex concepts in a way that is accessible to all.

Bibliographic Information

Book Title : How to Perform a Systematic Literature Review

Book Subtitle : A Guide for Healthcare Researchers, Practitioners and Students

Authors : Edward Purssell, Niall McCrae

DOI : https://doi.org/10.1007/978-3-030-49672-2

Publisher : Springer Cham

eBook Packages : Medicine , Medicine (R0)

Copyright Information : The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

Softcover ISBN : 978-3-030-49671-5 Published: 05 August 2020

eBook ISBN : 978-3-030-49672-2 Published: 04 August 2020

Edition Number : 1

Number of Pages : VII, 188

Number of Illustrations : 7 b/w illustrations, 12 illustrations in colour

Topics : Nursing Research , Nursing Education , Research Skills


Systematic Reviews and Meta Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy
  • Software and Tools
  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Systematic review Q & A

What is a systematic review?

A systematic review is guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduce the risk of bias in identifying, selecting and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single or small set of related interventions, exposures, or outcomes, will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review . This framework allows iterative searching over a reduced number of data sources and no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets) but may give you means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Including creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work that can span several months. Moreover, a systematic review requires subject expertise, statistical support and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. Finally, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.

How can I know if my topic has already been reviewed?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
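If you want to script this check (for example, to re-run it while drafting a protocol), the NCBI E-utilities esearch endpoint accepts the same query syntax. A minimal Python sketch using the requests library; the query is just the example above:

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_pubmed_hits(term: str) -> int:
    """Return the number of PubMed records matching a query."""
    params = {"db": "pubmed", "term": term, "retmode": "json"}
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

broad = count_pubmed_hits('"neoadjuvant chemotherapy" AND systematic[sb]')
strict = count_pubmed_hits('"neoadjuvant chemotherapy" AND systematic[ti]')
print(f"systematic[sb]: {broad} records; systematic[ti]: {strict} records")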

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .

  • Next: Guides and Standards >>
  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis
         




Systematic Review Tutorial

  • Other Types of Evidence Synthesis
  • Special Types of Systematic Reviews
  • Systematic Review Tools
  • 1.1 Develop a Research Question
  • 2.1 Select Databases
  • 2.2 Develop Terms
  • 2.3 Subject Headings vs. Keywords
  • 2.5 Test the Searches

Get Systematic Review Help

Schedule A Consultation

When to do a Systematic Review?

Systematic reviews are most useful

  • when there is a large body of published literature pertaining to a specific question
  • when a transparent search methodology and replicability are needed
  • when multiple published studies point to contradictory or uncertain results or outcomes

Systematic Reviews: Transparent, Rigorous and Replicable

Systematic Review (from the Cochrane Glossary): a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies that are included in the review.

Systematic Review vs Literature Review

  • Definition - Systematic review: a high-level overview of primary research on a focused question that identifies, selects, synthesizes, and appraises all high-quality research evidence relevant to that question. Literature review: qualitatively summarizes evidence on a topic using informal or subjective methods to collect and interpret studies.
  • Goals - Literature review: provide a summary or overview of the topic.
  • Question - Literature review: can be a general topic or a specific question.
  • Number of authors - Systematic review: three or more. Literature review: one or more.
  • Timeline - Systematic review: half a year or longer (the average takes 18 months). Literature review: several weeks or months.
  • Value - Literature review: provides a summary of the literature on the topic.

Adapted from Kysh, Lynn (2013): Difference between a systematic review and a literature review. [MLGSCA]. Available at: http://dx.doi.org/10.6084/m9.figshare.766364

  • Next: Other Types of Evidence Synthesis >>
  • Last Updated: Jun 24, 2024 4:30 PM
  • URL: https://guides.library.uwm.edu/SystematicReview


Stanford Online

Introduction to Systematic Reviews

Stanford School of Medicine

This course is for members of the Stanford Medicine community. Valid Stanford login is required to access some of the content in this course. This course was created to facilitate more meaningful consultations between librarians and Stanford Medicine community members interested in conducting systematic reviews. It opens with a definition of the necessary requirements for a systematic review and comparison between systematic review methodologies and those of other types of reviews.

There are multiple organizations that provide guidelines for successful completion of a systematic review and we provide an overview of these guidelines from the Cochrane Collaboration, the National Academy of Medicine (formerly Institute of Medicine), and Joanna Briggs Institute. Next is a discussion of the importance of protocols for determining whether or not a systematic review on your topic of interest has already been completed. Tools for supporting an organized systematic review project are then highlighted, followed by a detailed review of how/why librarians collaborate on these reviews. In the final module, we highlight how you can search for systematic reviews in three major databases: PubMed, Embase, and CINAHL. Throughout the course are small assessments to reinforce concepts and encourage reflection.

Who Should Enroll

  • Understand the definition of a systematic review and its distinguishing features as compared to other types of reviews
  • Know the different resource guidelines for conducting a systematic review
  • Understand the facets of question development
  • Introduction to software tools to facilitate the systematic review process
  • Be able to search for systematic reviews on a given topic in PubMed and EMBASE

Strauss Health Sciences Library, CU Anschutz

Systematic Reviews and Searching the Literature

  • Video Tutorials
  • Introduction
  • Best Practice
  • Where To Search
  • More Resources

Resources in Videos

Introduction to Systematic Reviews: Parts 1-4

Part 2: Guidelines and Standards

  • National Academies of Science, Engineering, and Medicine: Finding What Works in Health Care: Standards for Systematic Reviews .
  • Systematic Reviews of Interventions.  
  • Systematic Reviews of Diagnostic Test Accuracy (DTA).  
  • JBI Manual for Evidence Synthesis .
  • Enhancing the Quality and Transparency of Health Research (EQUATOR) .
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) .
  • PRISMA for Scoping Reviews .
  • Meta-analysis of Observational Studies in Epidemiology (MOOSE) .

Part 4: Developing a Protocol

  • Open Science Framework (OSF) . 
  • Prospero . 
  • Biomed Central: Systematic Reviews, Protocol . 
  • PRISMA for Systematic Review Protocols (PRISMA-P).
  • Shamseer, L. (2015) Planning a systematic review? Think protocols. Research in Progress Blog. 

Welcome to the Systematic Review video tutorial series. This series of videos is designed to introduce you to the systematic review process. The first 4 videos will provide you with background information about why systematic reviews are conducted and the best practices around designing and beginning a review. If you have any questions, please use the orange AskUs button to your right to contact us.

Part 1: Introduction and Understanding Review Types

Part 3: Compiling a Team

You can find the links provided in the videos in the following systematic review resources handout:

Christi Piper, MLIS, Reference Librarian

Kristen Desanto, MSLS, MS, RD, AHIP, Clinical Librarian

Lilian Hoffecker, PhD, MLS, Research Librarian

  • << Previous: More Resources
  • Last Updated: Jun 27, 2024 11:46 AM
  • URL: https://library-cuanschutz.libguides.com/literaturesearching

Literature Reviews

  • Introduction
  • Tutorials and resources
  • Step 1: Literature search
  • Step 2: Analysis, synthesis, critique
  • Step 3: Writing the review

If you need any assistance, please contact the library staff at the Georgia Tech Library Help website . 

Literature review tutorials

There are many helpful Literature Review video tutorials online. Here is an excellent, succinct (10 min) introduction to how to succeed at a literature review:

Literature Reviews: An Overview for Graduate Students from NC State University Libraries on Vimeo .

For a longer, high quality in-depth look at how literature reviews come together, see this set of  literature review tutorials  from RMIT University.

Literature review resources

We recommend these resources for more information.


This literature review tutorial is from SAGE Research Methods, which has additional resources for learning about literature reviews.

  • << Previous: Introduction
  • Next: Step 1: Literature search >>
  • Last Updated: Apr 2, 2024 11:21 AM
  • URL: https://libguides.library.gatech.edu/litreview



How to do a systematic review

Affiliations

  • 1 1 Nursing Midwifery and Allied Health Professions (NMAHP) Research Unit, Glasgow Caledonian University, Glasgow, UK.
  • 2 2 Department of Internal Medicine and Cardiology, Oslo University Hospital, Oslo, Norway.
  • PMID: 29148960
  • DOI: 10.1177/1747493017743796

High quality up-to-date systematic reviews are essential in order to help healthcare practitioners and researchers keep up-to-date with a large and rapidly growing body of evidence. Systematic reviews answer pre-defined research questions using explicit, reproducible methods to identify, critically appraise and combine results of primary research studies. Key stages in the production of systematic reviews include clarification of aims and methods in a protocol, finding relevant research, collecting data, assessing study quality, synthesizing evidence, and interpreting findings. Systematic reviews may address different types of questions, such as questions about effectiveness of interventions, diagnostic test accuracy, prognosis, prevalence or incidence of disease, accuracy of measurement instruments, or qualitative data. For all reviews, it is important to define criteria such as the population, intervention, comparison and outcomes, and to identify potential risks of bias. Reviews of the effect of rehabilitation interventions or reviews of data from observational studies, diagnostic test accuracy, or qualitative data may be more methodologically challenging than reviews of effectiveness of drugs for the prevention or treatment of stroke. Challenges in reviews of stroke rehabilitation can include poor definition of complex interventions, use of outcome measures that have not been validated, and poor generalizability of results. There may also be challenges with bias because the effects are dependent on the persons delivering the intervention, and because masking of participants and investigators may not be possible. There are a wide range of resources which can support the planning and completion of systematic reviews, and these should be considered when planning a systematic review relating to stroke.

Keywords: Cochrane; Systematic review; methods; protocol; rehabilitation; synthesis.



Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA


Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years, when EBM first began to gain traction, to recent times, when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Table 1 Guidance for development of evidence syntheses

 Cochrane (formerly Cochrane Collaboration)
 JBI (formerly Joanna Briggs Institute)
 National Institute for Health and Care Excellence (NICE)—United Kingdom
 Scottish Intercollegiate Guidelines Network (SIGN) —Scotland
 Agency for Healthcare Research and Quality (AHRQ)—United States

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not simply copy previous citations or substandard work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19, [ 2 , 42 ] but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, the extent to which each of these factors contributes remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Table 2.1 Types of traditional systematic reviews

  • Intervention [ , ]: Benefits and harms of interventions used in healthcare. Mnemonic: Population, Intervention, Comparator, Outcome (PICO)
  • Diagnostic test accuracy [ ]: How well a diagnostic test performs in diagnosing and detecting a particular disease. Mnemonic: Population, Index test(s), and Target condition (PIT)
  • Qualitative, Cochrane [ ]: Questions are designed to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences. Mnemonics: Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation (SPICE); Sample, Phenomenon of Interest, Design, Evaluation, Research type (SPIDER); Perspective, Setting, Phenomena of interest/Problem, Environment, Comparison (optional), Time/timing, Findings (PerSPEcTiF)
  • Qualitative, JBI [ ]: Questions inform meaningfulness and appropriateness of care and the impact of illness through documentation of stakeholder experiences, preferences, and priorities. Mnemonic: Population, the Phenomena of Interest, and the Context (PICo)
  • Prognostic [ ]: Probable course or future outcome(s) of people with a health problem. Mnemonic: Population, Intervention (model), Comparator, Outcomes, Timing, Setting (PICOTS)
  • Etiology and risk [ ]: The relationship (association) between certain factors (e.g., genetic, environmental) and the development of a disease or condition or other health outcome. Mnemonic: Population or groups at risk, Exposure(s), associated Outcome(s) (disease, symptom, or health condition of interest), and the context/location or the time period and the length of time when relevant (PEO)
  • Measurement properties [ , ]: What is the most suitable instrument to measure a construct of interest in a specific study population? Mnemonic: Population, Instrument, Construct, Outcomes (PICO)
  • Prevalence and incidence [ ]: The frequency, distribution, and determinants of specific factors, health states, or conditions in a defined population (eg, how common is a particular disease or condition in a specific group of individuals?). Mnemonic: Factor, disease, symptom, or health Condition of interest; the epidemiological indicator used to measure its frequency (prevalence, incidence); the Population or groups at risk; as well as the Context/location and time period where relevant (CoCoPop)
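Question frameworks such as PICO map naturally onto a small structured record that can anchor a protocol and a search log. The following is a purely illustrative sketch (our code, not the authors'; the class and field names are hypothetical):

```python
# Illustrative only: a minimal record for a PICO-framed intervention question.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparator: str
    outcome: str

    def as_text(self) -> str:
        # Renders the question in the conventional "In P, does I vs C affect O?" form.
        return (f"In {self.population}, does {self.intervention} compared with "
                f"{self.comparator} affect {self.outcome}?")

question = PicoQuestion(
    population="adults with chronic low back pain",
    intervention="supervised exercise therapy",
    comparator="usual care",
    outcome="pain intensity at 12 weeks",
)
print(question.as_text())
```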

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Table 2.2 Evidence syntheses published by Cochrane and JBI

Cochrane a (total = 8900):
  • Intervention: 8572 (96.3%)
  • Diagnostic: 176 (1.9%)
  • Overview: 64 (0.7%)
  • Methodology: 41 (0.45%)
  • Qualitative: 17 (0.19%)
  • Prognostic: 11 (0.12%)
  • Rapid: 11 (0.12%)
  • Prototype c: 8 (0.08%)

JBI b (total = 707):
  • Effectiveness: 435 (61.5%)
  • Diagnostic Test Accuracy: 9 (1.3%)
  • Umbrella: 4 (0.6%)
  • Mixed Methods: 2 (0.3%)
  • Qualitative: 159 (22.5%)
  • Prevalence and Incidence: 6 (0.8%)
  • Etiology and Risk: 7 (1.0%)
  • Measurement Properties: 3 (0.4%)
  • Economic: 6 (0.6%)
  • Text and Opinion: 1 (0.14%)
  • Scoping: 43 (6.0%)
  • Comprehensive d: 32 (4.5%)

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis
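The percentages in Table 2.2 are each category's share of its column total. A few lines of Python (ours, for illustration only) recompute the Cochrane column; the results agree with the published figures, allowing for the table's mixed rounding precision:

```python
# Recompute the Cochrane column of Table 2.2 as percentages of the total (8900).
cochrane_counts = {
    "Intervention": 8572, "Diagnostic": 176, "Overview": 64, "Methodology": 41,
    "Qualitative": 17, "Prognostic": 11, "Rapid": 11, "Prototype": 8,
}
total = 8900
for review_type, n in cochrane_counts.items():
    print(f"{review_type}: {n} ({100 * n / total:.2f}%)")
# e.g. Intervention: 8572 (96.31%), Diagnostic: 176 (1.98%), ...
```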

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

Fig. 1 Distinguishing types of research evidence
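The scheme's branches are few enough to encode directly. The sketch below is our illustration of the distinctions in Fig. 1 (primary vs secondary; qualitative vs quantitative data; group vs single-case; randomized vs non-randomized), not code from the article:

```python
# Our encoding of the basic distinctions in Fig. 1; names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyClassification:
    primary: bool                        # primary study vs secondary (evidence synthesis)
    quantitative: Optional[bool] = None  # type of data reported (primary studies only)
    group_design: Optional[bool] = None  # group vs single-case design
    randomized: Optional[bool] = None    # randomized vs non-randomized

def describe(study: StudyClassification) -> str:
    if not study.primary:
        return "secondary study (evidence synthesis)"
    return "primary study: " + ", ".join([
        "quantitative" if study.quantitative else "qualitative",
        "group design" if study.group_design else "single-case design",
        "randomized" if study.randomized else "non-randomized",
    ])

print(describe(StudyClassification(primary=True, quantitative=True,
                                   group_design=True, randomized=False)))
# -> primary study: quantitative, group design, non-randomized
```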

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of interventions (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, thus obfuscating the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Table 3.1 Tools specifying standards for systematic reviews with and without meta-analysis

Reporting standards:
  • Quality of Reporting of Meta-analyses (QUOROM) Statement: Moher 1999 [ ]
  • Meta-analyses Of Observational Studies in Epidemiology (MOOSE): Stroup 2000 [ ]
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA): Moher 2009 [ ]
  • PRISMA 2020 a: Page 2021 [ ]

Conduct (methodological quality and risk of bias):
  • Overview Quality Assessment Questionnaire (OQAQ): Oxman and Guyatt 1991 [ ]
  • Systematic Review Critical Appraisal Sheet: Centre for Evidence-based Medicine 2005 [ ]
  • A Measurement Tool to Assess Systematic Reviews (AMSTAR): Shea 2007 [ ]
  • AMSTAR-2 a: Shea 2017 [ ]
  • Risk of Bias in Systematic Reviews (ROBIS) a: Whiting 2016 [ ]

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1 but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake and impact and limitations are also discussed.

Evaluation of conduct

Development

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2. Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section; this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].

Table 3.2 Comparison of AMSTAR-2 and ROBIS

  • Guidance documents: AMSTAR-2, extensive; ROBIS, extensive
  • Review types covered: AMSTAR-2, intervention; ROBIS, intervention, diagnostic, etiology, prognostic a
  • Domains: AMSTAR-2, 7 critical and 9 non-critical; ROBIS, 4
  • Total number of items: AMSTAR-2, 16; ROBIS, 29
  • Response options: AMSTAR-2, items 1, 3, 5, 6, 10, 13, 14, and 16 are rated yes or no; items 2, 4, 7, 8, and 9 b are rated yes, partial yes, or no; items 11 b, 12, and 15 are rated yes, no, or no meta-analysis conducted. ROBIS, 24 assessment items are rated yes, probably yes, probably no, no, or no information; 5 items regarding level of concern are rated low, high, or unclear
  • Construct: AMSTAR-2, confidence based on weaknesses in critical domains; ROBIS, level of concern for risk of bias
  • Categories for overall rating: AMSTAR-2, high, moderate, low, critically low; ROBIS, low, high, unclear

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.
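Because neither tool sums item responses, the overall rating logic is categorical. A minimal sketch, assuming the domain-based scheme described in the AMSTAR-2 publication (the function and its name are ours, not an official implementation):

```python
# Our sketch of the AMSTAR-2 overall confidence scheme (Shea 2017):
# the rating hinges on weaknesses in critical vs non-critical domains, not a total score.
def amstar2_overall_confidence(critical_flaws: int, noncritical_weaknesses: int) -> str:
    if critical_flaws > 1:
        return "critically low"  # more than one critical flaw
    if critical_flaws == 1:
        return "low"             # one critical flaw, with or without non-critical weaknesses
    return "high" if noncritical_weaknesses <= 1 else "moderate"

print(amstar2_overall_confidence(0, 1))  # high
print(amstar2_overall_confidence(0, 3))  # moderate
print(amstar2_overall_confidence(2, 0))  # critically low
```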

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines,” “ROBIS AND clinical practice guidelines” 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools that they impose. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect an association of AMSTAR-2 with improved methodological rigor and an association of ROBIS with lower RoB in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience is also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

Table 3.3 PRISMA extensions

  • PRISMA for systematic reviews with a focus on health equity [ ]: PRISMA-E, 2012
  • Reporting systematic reviews in journal and conference abstracts [ ]: PRISMA for Abstracts a, 2015; 2020
  • PRISMA for systematic review protocols [ ]: PRISMA-P, 2015
  • PRISMA for Network Meta-Analyses [ ]: PRISMA-NMA, 2015
  • PRISMA for Individual Participant Data [ ]: PRISMA-IPD, 2015
  • PRISMA for reviews including harms outcomes [ ]: PRISMA-Harms, 2016
  • PRISMA for diagnostic test accuracy [ ]: PRISMA-DTA, 2018
  • PRISMA for scoping reviews [ ]: PRISMA-ScR, 2018
  • PRISMA for acupuncture [ ]: PRISMA-A, 2019
  • PRISMA for reporting literature searches [ ]: PRISMA-S, 2021

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts, yet the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. Moreover, PRISMA checklists evaluate how completely an element of review conduct was reported; they do not evaluate the caliber of that conduct or the performance of a review. Thus, review authors and readers should not think that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Table 4.1 Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

  • Methods for study selection: AMSTAR-2 #5; ROBIS #2.5
  • Methods for data extraction: AMSTAR-2 #6; ROBIS #3.1
  • Methods for RoB assessment: AMSTAR-2 NA; ROBIS #3.5
    Expectation for these three components: all must be done in duplicate, and methods fully described. Rationale: helps to mitigate CoI and bias; also may improve accuracy.
  • Study description: AMSTAR-2 #8; ROBIS #3.2. Expectation: research design features, components of research question (eg, PICO), setting, funding sources. Rationale: allows readers to understand the individual studies in detail.
  • Sources of funding: AMSTAR-2 #10; ROBIS NA. Expectation: identified for all included studies. Rationale: can reveal CoI or bias.
  • Publication bias: AMSTAR-2 #15*; ROBIS #4.5. Expectation: explored, diagrammed, and discussed. Rationale: publication and other selective reporting biases are major threats to the validity of systematic reviews.
  • Author CoI: AMSTAR-2 #16; ROBIS NA. Expectation: disclosed, with management strategies described. Rationale: if CoI is identified, management strategies must be described to ensure confidence in the review.

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *
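Teams that script their appraisal workflow may find it convenient to hold the component-to-item mapping of Table 4.1 as a lookup structure. This is our restructuring of the table, not an official artifact; item numbers are as given, and None marks components a tool does not address:

```python
# Table 4.1 as a lookup: review component -> appraisal items (our restructuring).
APPRAISAL_ITEMS = {
    "methods for study selection":  {"AMSTAR-2": "#5",  "ROBIS": "#2.5"},
    "methods for data extraction":  {"AMSTAR-2": "#6",  "ROBIS": "#3.1"},
    "methods for RoB assessment":   {"AMSTAR-2": None,  "ROBIS": "#3.5"},
    "study description":            {"AMSTAR-2": "#8",  "ROBIS": "#3.2"},
    "sources of funding":           {"AMSTAR-2": "#10", "ROBIS": None},
    "publication bias":             {"AMSTAR-2": "#15 (critical)", "ROBIS": "#4.5"},
    "author conflict of interest":  {"AMSTAR-2": "#16", "ROBIS": None},
}

for component, items in APPRAISAL_ITEMS.items():
    print(f"{component}: AMSTAR-2 {items['AMSTAR-2']}, ROBIS {items['ROBIS']}")
```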

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with a particular review methodology, it does not necessarily make a research question appropriate. Table 4.2 lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop a runaway scope that allows them to stray from predefined choices relating to key comparisons and outcomes.

Table 4.2 Research question development

  • FINER a: feasible, interesting, novel, ethical, and relevant
  • SMART b: specific, measurable, attainable, relevant, timely
  • TOPICS + M c: time, outcomes, population, intervention, context, study design, plus (effect) moderators

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran, GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35-6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Systematic reviews with published prospective protocols have been reported to show better attainment of AMSTAR standards [ 134 ]. However, completeness of reporting does not seem to differ between reviews with a protocol and those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires authors to register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Table 4.3 Options for protocol registration of evidence syntheses

Journals a
  • BMJ Open
  • BioMed Central
  • JMIR Research Protocols
  • World Journal of Meta-analysis

Registries and repositories
  • Cochrane b
  • JBI c
  • PROSPERO
  • Research Registry
  • Registry of Systematic Reviews/Meta-Analyses
  • International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY)
  • Center for Open Science
  • Protocols.io
  • Figshare
  • Open Science Framework
  • Zenodo

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses or to provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Some research topics may warrant even less delay; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of their RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB 2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].
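For instance, one standard transformation described in the Cochrane Handbook re-expresses a standardized mean difference (SMD) as a log odds ratio, under the assumption that the underlying continuous outcome has an approximately logistic distribution:

$$ \ln(\mathrm{OR}) = \frac{\pi}{\sqrt{3}} \times \mathrm{SMD} \approx 1.81 \times \mathrm{SMD} $$

The same relationship can be applied in reverse, allowing continuous and dichotomized versions of the same outcome to be pooled on a single scale.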

Table 4.4 Common methods for quantitative synthesis a

Meta-analysis: aggregate data, or individual participant data c
  • Compared: pairwise comparisons of an intervention and comparator, combined as a weighted average of effect estimates
  • Results: overall effect estimate, CI, P value; evaluation of heterogeneity
  • Presentation: forest plot with summary statistic for the average effect estimate b

Meta-analysis: network
  • Compared: a variable number of interventions, compared directly and indirectly; effect estimates for intervention pairings
  • Results: comparisons of relative effects between any pair of interventions; summary relative effects for pair-wise comparisons with evaluations of inconsistency and heterogeneity; treatment rankings (ie, probability that an intervention is among the best options)
  • Presentation: network diagram or graph, tabular presentations; forest plot, other methods; rankogram plot

Synthesis without meta-analysis e
  • Summarizing effect estimates from separate studies (without combination that would provide an average effect estimate): results are the range and distribution of observed effects, such as median, interquartile range, range; presentation: box-and-whisker plot, bubble plot, forest plot (without summary effect estimate)
  • Combining P values: results are the combined P value and number of studies; presentation: albatross plot (study sample size against P values per outcome)
  • Vote counting by direction of effect (eg, favors intervention over the comparator): results are the proportion of studies with an effect in the direction of interest, CI, P value; presentation: harvest plot, effect direction plot

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. When appropriately conducted, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].
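As a minimal sketch of the weighted-average principle, the common inverse-variance method weights each study's effect estimate by the reciprocal of its variance; under a fixed-effect model the pooled estimate and its standard error are

$$ \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i}, \qquad \mathrm{SE}(\hat{\theta}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}} $$

where $\hat{\theta}_i$ and $v_i$ are the effect estimate and variance of study $i$; random-effects models modify the weights to incorporate an estimate of between-study variance.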

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the approach most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].
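To illustrate the core logic of indirect comparison underlying NMA, consider the simplest case, the anchored (Bucher) indirect comparison: if treatments A and C have each been compared with a common comparator B, but not with each other, an indirect estimate on an additive scale (eg, log odds ratio) is

$$ \hat{\theta}_{AC} = \hat{\theta}_{AB} - \hat{\theta}_{CB}, \qquad \mathrm{Var}(\hat{\theta}_{AC}) = \mathrm{Var}(\hat{\theta}_{AB}) + \mathrm{Var}(\hat{\theta}_{CB}) $$

The summed variance shows why indirect evidence is less precise than direct evidence based on the same amount of data, and why NMA depends on the transitivity assumption that the A vs B and C vs B trials are sufficiently similar.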

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.
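To make the mechanics concrete, the following minimal Python sketch implements random-effects pooling with the classic DerSimonian-Laird estimator. It is an illustration of the standard algorithm only, not the implementation used by RevMan, MetaXL, or SUMARI, and the input data are hypothetical.

import math

def dersimonian_laird(effects, variances):
    """Pool study effects with DerSimonian-Laird random-effects weights."""
    k = len(effects)
    w = [1.0 / v for v in variances]              # fixed-effect (inverse-variance) weights
    theta_f = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q statistic quantifies between-study heterogeneity
    q = sum(wi * (ei - theta_f) ** 2 for wi, ei in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    theta_re = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return theta_re, se, tau2

# Hypothetical data: log risk ratios and their variances from three studies
theta, se, tau2 = dersimonian_laird([-0.6, -0.1, -0.5], [0.04, 0.04, 0.06])
print(f"pooled logRR={theta:.3f}, 95% CI=({theta - 1.96 * se:.3f}, {theta + 1.96 * se:.3f}), tau^2={tau2:.3f}")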

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentations of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
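As a concrete sketch of acceptable vote counting by direction of effect, the following Python snippet (with hypothetical counts) computes the proportion of studies favoring the intervention, a Wilson 95% confidence interval, and an exact two-sided sign-test P value against the null of no directional tendency. It is written in the spirit of the Cochrane Handbook's guidance rather than reproducing any packaged implementation.

import math

def sign_test_p(k, n):
    """Exact two-sided binomial (sign) test of proportion = 0.5."""
    p_tail = sum(math.comb(n, i) for i in range(min(k, n - k) + 1)) * 0.5 ** n
    return min(1.0, 2 * p_tail)

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for the proportion of favorable studies."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Hypothetical example: 9 of 12 studies report effects favoring the intervention
k, n = 9, 12
lo, hi = wilson_ci(k, n)
print(f"proportion={k / n:.2f}, 95% CI=({lo:.2f}, {hi:.2f}), P={sign_test_p(k, n):.3f}")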

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team's awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors' training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Reporting an assessment of the overall certainty of a body of evidence is an important new standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal published in 2004 of the systems used by prominent health care organizations revealed limitations in sensibility, reproducibility, applicability to different questions, and usability by different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine the approach, first introduced GRADE in 2004 [ 188 ]. Currently, more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, “certainty” of a body of evidence is preferred over the term “quality” [ 191 ]. Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

Table 5.1 GRADE criteria for rating certainty of evidence

Criteria for rating down a | Criteria for rating up b
Risk of bias | Large magnitude of effect
Imprecision | Dose–response gradient
Inconsistency | All residual confounding would decrease magnitude of effect (in situations with an effect)
Indirectness |
Publication bias |

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

Table 5.2 GRADE certainty ratings and their interpretation symbols a

 ⊕  ⊕  ⊕  ⊕ High: We are very confident that the true effect lies close to that of the estimate of the effect
 ⊕  ⊕  ⊕ Moderate: We are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different
 ⊕  ⊕ Low: Our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect
 ⊕ Very low: We have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
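The bookkeeping behind these ratings can be sketched simply: bodies of evidence from randomized studies start at high certainty and those from non-randomized studies start at low certainty, after which each serious concern rates the level down and each applicable criterion rates it up, within the bounds of the four categories. The short Python sketch below illustrates only this arithmetic; the actual judgments, as noted above, lie on a continuum and require an explicit rationale.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized, levels_down, levels_up):
    """Illustrative GRADE arithmetic: starting level plus adjustments, clamped to the scale."""
    start = 3 if randomized else 1  # high for RCTs, low for NRSI
    return LEVELS[max(0, min(3, start - levels_down + levels_up))]

# An RCT body of evidence rated down one level (eg, for imprecision):
print(grade_certainty(True, 1, 0))   # -> moderate
# An NRSI body rated up one level (eg, for a large magnitude of effect):
print(grade_certainty(False, 0, 1))  # -> moderate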

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
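For orientation, interrater figures like those above are chance-corrected agreement statistics. A minimal Python sketch of Cohen's kappa for two raters follows, using hypothetical GRADE certainty ratings; the cited evaluation may have used a different statistic (eg, a weighted kappa or intraclass correlation), so this only shows what values on this scale represent.

from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters (assumes expected agreement < 1)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1.keys() | c2.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical certainty ratings assigned to six outcomes by two raters
r1 = ["high", "moderate", "low", "low", "very low", "moderate"]
r2 = ["high", "low", "low", "moderate", "very low", "moderate"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # -> 0.54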

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Table 5.3 Criteria for using GRADE in a systematic review a

1. The certainty in the evidence (also known as quality of evidence or confidence in the estimates) should be defined consistently with the definitions used by the GRADE Working Group.
2. Explicit consideration should be given to each of the GRADE domains for assessing the certainty in the evidence (although different terminology may be used).
3. The overall certainty in the evidence should be assessed for each important outcome using four or three categories (such as high, moderate, low and/or very low) and definitions for each category that are consistent with the definitions used by the GRADE Working Group.
4. Evidence summaries … should be used as the basis for judgments about the certainty in the evidence.

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included in Table 6.3 , which is introduced in the next section.

Table 6.3 Concise Guide to best practices for evidence syntheses, version 1.0 a

Recommended methods and tools, by review component and type of evidence synthesis:

Conduct: Cochrane and/or JBI manuals, according to the type of review b

Reporting c
  • Protocol: PRISMA-P [ ] (all review types)
  • Completed review: PRISMA 2020 [ ] (most review types); PRISMA-DTA [ ] (diagnostic test accuracy); eMERGe [ ] or ENTREQ [ ] (qualitative) d; PRIOR [ ] (umbrella/overviews of reviews); PRISMA-ScR [ ] (scoping)
  • Synthesis without MA: SWiM [ ] e; PRISMA-DTA [ ] (diagnostic test accuracy); eMERGe [ ] or ENTREQ [ ] (qualitative); PRIOR [ ] (overviews of reviews)

Risk of bias of included studies f
  • Intervention: Cochrane RoB 2 [ ] for RCTs; ROBINS-I [ ] for NRSI; other primary research g
  • Diagnostic test accuracy: QUADAS-2 [ ]
  • Prognosis: QUIPS [ ] (factor review); PROBAST [ ] (model review)
  • Qualitative: CASP qualitative checklist [ ] or JBI Critical Appraisal Checklist [ ] h
  • Prevalence and incidence: JBI checklist for studies reporting prevalence data [ ]
  • Measurement properties: COSMIN RoB Checklist [ ]
  • Umbrella (appraisal of included systematic reviews): AMSTAR-2 [ ] or ROBIS [ ]
  • Scoping: not required i

Certainty of the body of evidence
  • Intervention: GRADE [ ]
  • Diagnostic test accuracy: GRADE adaptation j
  • Prognosis: GRADE adaptation k
  • Qualitative: CERQual [ ] or ConQual [ ] l
  • Prevalence and incidence: GRADE adaptation m
  • Risk factors: adapted criteria n
  • Measurement properties: GRADE adaptation o
  • Umbrella: GRADE (for intervention reviews); adapted criteria for risk factors n
  • Scoping: not applicable i

AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM synthesis without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Table 6.1 Terms relevant to the reporting of health care–related evidence syntheses a

Systematic review: A review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question.
Statistical synthesis: The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect.
Meta-analysis of effect estimates: A statistical technique used to synthesize results when study effect estimates and their variances are available, yielding a quantitative summary of results.
Outcome: An event or measurement collected for participants in a study (such as quality of life, mortality).
Result: The combination of a point estimate (such as a mean difference, risk ratio or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome.
Report: A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.
Record: The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.
Study: An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses.

a Reproduced from Page and colleagues [ 93 ]

Table 6.2 Terminology suggestions for health care–related evidence syntheses

Preferred | Potentially problematic

Evidence synthesis with meta-analysis, or systematic review with meta-analysis | Meta-analysis
Overview or umbrella review | Systematic review of systematic reviews; review of reviews; meta-review
Randomized | Experimental
Non-randomized | Observational
Single case experimental design | Single-subject research; N-of-1 design
Case report or case series | Descriptive study
Methodological quality | Quality
Certainty of evidence | Quality of evidence; grade of evidence; level of evidence; strength of evidence
Qualitative systematic review | Qualitative synthesis
Synthesis of qualitative data a | Qualitative synthesis
Synthesis without meta-analysis | Narrative synthesis b; narrative summary; qualitative synthesis; descriptive synthesis; descriptive summary

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence it summarizes is sparse, weak, and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the shift in paradigm to living systematic review and NMA platforms [ 232 , 233 ] and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously reported and conducted.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

Funding

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews, and the Journal of Pediatric Rehabilitation Medicine.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Local control, survival, and toxicity outcomes with high-dose-rate peri-operative interventional radiotherapy (brachytherapy) in head and neck cancers: a systematic review.


1. Introduction

2. Methods and Materials

2.1. Eligibility

  • Study design: Clinical trials, prospective/retrospective cohorts, and case-control studies were included.
  • Population: Studies that included patients with primary or recurrent HNC, of any histology without distant metastasis, with or without prior irradiation, and treated with surgical resection and POIRT were eligible. Studies with the following co-interventions were allowed: reconstruction, external radiotherapy, and chemotherapy.
  • Outcomes: Studies that reported on any of the following outcomes were eligible: survival (recurrence-free survival, RFS; overall survival, OS), radiation toxicity (acute or late toxicity), peri-operative complications, and quality of life (QOL).
  • Setting: Studies that reported on patients treated from 1990 onwards were eligible; this restriction was intended to account for significant changes in diagnostic, medical, and surgical standards.
  • Follow-up: Studies with at least six months of follow-up were eligible.
  • Language: Only articles reported in the English, French, German, and Italian languages were included, given resource constraints.
  • Study design: Case series, case reports, and pre-clinical studies were excluded. Relevant reviews were listed for bibliography scanning. Studies that were available only as an abstract or a conference proceeding were excluded.
  • Outcomes: Studies that did not report on the above outcomes of interest (such as feasibility or dosimetric studies) were excluded.

2.2. Information Sources and Search Strategy

  • Head and neck cancer [MeSH Major Topic].
  • Brachytherapy [MeSH Terms].
  • Interventional radiotherapy [Title/Abstract].
  • Numbers: 2 OR 3.
  • Peri-operative [Title/Abstract].
  • Perioperative [Title/Abstract].
  • Numbers: 5 OR 6.
  • Numbers: 1 AND 4 AND 7 (combined into a single query below).
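Read together, the numbered lines above correspond approximately to the following single PubMed query, using the field tags as listed; the exact syntax registered by the review authors may differ:

    Head and neck cancer[MeSH Major Topic] AND (Brachytherapy[MeSH Terms] OR Interventional radiotherapy[Title/Abstract]) AND (Peri-operative[Title/Abstract] OR Perioperative[Title/Abstract])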

2.3. Study Records

2.4. Data Items

  • Setting: period of treatment, country.
  • Study design and size: e.g., clinical trial, prospective cohort, retrospective cohort; number of patients.
  • Patient characteristics: median/mean age, performance status, history of irradiation.
  • Disease characteristics: histology, site, tumor size, T-stage, N-stage, setting (primary, recurrence, second primary).
  • Treatment characteristics: resection status (clear margins, microscopic residual, macroscopic residual), chemotherapy, external radiotherapy, interventional radiotherapy dose and fractionation.
  • Dosimetric parameters.
  • Outcomes: RFS, OS, incidence of acute and late toxicity, peri-operative complications, QOL.
  • Duration of follow-up: median, range.

2.5. Outcomes and Prioritization

2.6. Risk of Bias Assessment

2.7. Data Synthesis

3.1. Search Results

3.2. Screening

4. Critical Appraisal

5. Scope of Extracted Data

6. POIRT in the Primary Setting

7. POIRT in the Re-Irradiation Setting

8. Discussion

8.1. POIRT in the Primary Setting

8.2. POIRT in the Re-Irradiation Setting

8.3. Enhancing the Therapeutic Ratio with POIRT

8.4. Study Limitations and Recommendations

9. Conclusions

Supplementary Materials

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • National Comprehensive Cancer Network. Head and Neck Cancers, Version 4. NCCN Clinical Practice Guidelines in Oncology [Internet]. 2024. Available online: https://www.nccn.org/professionals/physician_gls/pdf/head-and-neck.pdf (accessed on 11 May 2024).
  • Kovács, G.; Martinez-Monge, R.; Budrukkar, A.; Guinot, J.L.; Johansson, B.; Strnad, V.; Skowronek, J.; Rovirosa, A.; Siebert, F.-A. GEC-ESTRO ACROP recommendations for head & neck brachytherapy in squamous cell carcinomas: 1st update—Improvement by cross sectional imaging based treatment planning and stepping source technology. Radiother. Oncol. 2016, 122, 248–254.
  • Tagliaferri, L.; Fionda, B.; Bacorro, W.; Kovacs, G. Advances in Head-and-Neck Interventional Radiotherapy (Brachytherapy). J. Med. Univ. St. Tomas 2024, 8, 1338–1341.
  • Jayalie, V.F.; Johnson, D.; Sudibio, S.; Rudiyo, R.; Jamnasi, J.; Hendriyo, H.; Resubal, J.R.; Manlapaz, D.J.; Cua, M.; Genson, J.M.; et al. Interdisciplinary and Regional Cooperation Towards Head and Neck Cancer Interventional Radiotherapy (Brachytherapy) Implementation in Southeast Asia. J. Med. Univ. St. Tomas 2024, 8, 1381–1389.
  • Cua, M.M.; Jainar, C.J.; Calapit, J.A.J.; Mejia, M.B.; Bacorro, W. The evolving landscape of head-and-neck brachytherapy (2017–2023): A scoping review. Radiother. Oncol. 2024, 194 (Suppl. S1), 317–319.
  • Tagliaferri, L.; Bussu, F.; Fionda, B.; Catucci, F.; Rigante, M.; Gambacorta, M.A.; Autorino, R.; Mattiucci, G.C.; Miccichè, F.; Placidi, E.; et al. Perioperative HDR brachytherapy for reirradiation in head and neck recurrences: Single-institution experience and systematic review. Tumori J. 2017, 103, 516–524.
  • Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. J. Clin. Epidemiol. 2021, 134, 178–189.
  • Critical Appraisal Skills Programme. CASP Cohort Standard Checklist [Internet]. 2020. Available online: https://casp-uk.net/casp-tools-checklists/ (accessed on 28 July 2022).
  • Bussu, F.; Tagliaferri, L.; Mattiucci, G.; Parrilla, C.; Rizzo, D.; Gambacorta, M.A.; Lancellotta, V.; Autorino, R.; Fonnesu, C.; Kihlgren, C.; et al. HDR interventional radiotherapy (brachytherapy) in the treatment of primary and recurrent head and neck malignancies. Head Neck 2019, 41, 1667–1675.
  • Martínez-Monge, R.; Gómez-Iturriaga, A.; Cambeiro, M.; Garrán, C.; Montesdeoca, N.; Aristu, J.J.; Alcalde, J. Phase I-II trial of perioperative high-dose-rate brachytherapy in oral cavity and oropharyngeal cancer. Brachytherapy 2009, 8, 26–33.
  • Martínez-Monge, R.; Alcalde, J.; Concejo, C.; Cambeiro, M.; Garrán, C. Perioperative high-dose-rate brachytherapy (PHDRB) in previously irradiated head and neck cancer: Initial results of a Phase I/II reirradiation study. Brachytherapy 2006, 5, 32–40.
  • Martínez-Monge, R.; Divassón, M.P.; Cambeiro, M.; Gaztañaga, M.; Moreno, M.; Arbea, L.; Montesdeoca, N.; Alcalde, J. Determinants of Complications and Outcome in High-Risk Squamous Cell Head-and-Neck Cancer Treated With Perioperative High-Dose Rate Brachytherapy (PHDRB). Int. J. Radiat. Oncol. Biol. Phys. 2011, 81, e245–e254.
  • Potharaju, M.; Raj, H.; Muthukumaran, M.; Venkataraman, M.; Ilangovan, B.; Kuppusamy, S. Long-term outcome of high-dose-rate brachytherapy and perioperative brachytherapy in early mobile tongue cancer. J. Contemp. Brachytherapy 2018, 10, 64–72.
  • Teudt, I.U.; Meyer, J.E.; Ritter, M.; Wollenberg, B.; Kolb, T.; Maune, S.; Kovàcs, G. Perioperative image-adapted brachytherapy for the treatment of paranasal sinus and nasal cavity malignancies. Brachytherapy 2014, 13, 178–186.
  • Martínez-Fernández, M.I.; Alcalde, J.; Cambeiro, M.; Peydró, G.V.; Martínez-Monge, R. Perioperative high dose rate brachytherapy (PHDRB) in previously irradiated head and neck cancer: Results of a phase I/II reirradiation study. Radiother. Oncol. 2017, 122, 255–259.
  • Bussu, F.; Fionda, B.; Rigante, M.; Rizzo, D.; Loperfido, A.; Gallus, R.; De Luca, L.M.; Corbisiero, M.F.; Lancellotta, V.; Tondo, A.; et al. Interventional radiotherapy (brachytherapy) for re-irradiation of recurrent head and neck malignancies: Oncologic outcomes and morbidity. Acta Otorhinolaryngol. Ital. 2024, 44 (Suppl. S1), S28–S36.
  • Soror, T.; Paul, J.; Melchert, C.; Idel, C.; Rades, D.; Bruchhage, K.-L.; Kovács, G.; Leichtle, A. Salvage High-Dose-Rate Interventional Radiotherapy (Brachytherapy) Combined with Surgery for Regionally Relapsed Head and Neck Cancers. Cancers 2023, 15, 4549.
  • Ritter, M.; Teudt, I.U.; Meyer, J.E.; Schröder, U.; Kovács, G.; Wollenberg, B. Second-line treatment of recurrent HNSCC: Tumor debulking in combination with high-dose-rate brachytherapy and a simultaneous cetuximab-paclitaxel protocol. Radiat. Oncol. 2016, 11, 6.
  • Rudzianskas, V.; Inciura, A.; Juozaityte, E.; Rudzianskiene, M.; Kubilius, R.; Vaitkus, S.; Kaseta, M.; Adliene, D. Reirradiation of recurrent head and neck cancer using high-dose-rate brachytherapy. Acta Otorhinolaryngol. Ital. 2012, 32, 297–303.
  • Pellizzon, A.C.; Salvajoli, J.V.; Kowalski, L.P.; Carvalho, A.L. Salvage for cervical recurrences of head and neck cancer with dissection and interstitial high dose rate brachytherapy. Radiat. Oncol. 2006, 1, 27.
  • Ianovski, I.; Mlynarek, A.M.; Black, M.J.; Bahoric, B.; Sultanem, K.; Hier, M.P. The role of brachytherapy for margin control in oral tongue squamous cell carcinoma. J. Otolaryngol. Head Neck Surg. 2020, 49, 74.
  • Gaztañaga, M.; Pagola, M.; Cambeiro, M.; Ruiz, M.E.R.; Aristu, J.; Montesdeoca, N.; Alcalde, J.; Martínez-Monge, R. Comparison of limited-volume perioperative high-dose-rate brachytherapy and wide-field external irradiation in resected head and neck cancer. Head Neck 2012, 34, 1081–1088.
  • de Almeida-Silva, L.A.; dos Santos Lupp, J.; Sobral-Silva, L.A.; Dos Santos, L.A.R.; Marques, T.O.; da Silva, D.B.R.; Caneppele, T.M.F.; Bianchi-de-Moraes, M. The incidence of osteoradionecrosis of the jaws in oral cavity cancer patients treated with intensity-modulated radiotherapy: A systematic review and meta-analysis. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2024, 138, 66–78.
  • Chiu, K.W.; Yu, T.P.; Kao, Y.S. A systematic review and meta-analysis of osteoradionecrosis following proton therapy in patients with head and neck cancer. Oral Oncol. 2024, 148, 106649.
  • Bernier, J.; Domenge, C.; Ozsahin, M.; Matuszewska, K.; Lefèbvre, J.L.; Greiner, R.H.; Giralt, J.; Maingon, P.; Rolland, F.; Bolla, M.; et al. Postoperative Irradiation with or without Concomitant Chemotherapy for Locally Advanced Head and Neck Cancer. N. Engl. J. Med. 2004, 350, 1945–1952.
  • Cooper, J.S.; Pajak, T.F.; Forastiere, A.A.; Jacobs, J.; Campbell, B.H.; Saxman, S.B.; Kish, J.A.; Kim, H.E.; Cmelak, A.J.; Rotman, M.; et al. Postoperative Concurrent Radiotherapy and Chemotherapy for High-Risk Squamous-Cell Carcinoma of the Head and Neck. N. Engl. J. Med. 2004, 350, 1937–1944.
  • Tagliaferri, L.; Budrukkar, A.; Lenkowicz, J.; Cambeiro, M.; Bussu, F.; Guinot, J.L.; Hildebrandt, G.; Johansson, B.; Meyer, J.E.; Niehoff, P.; et al. ENT COBRA ONTOLOGY: The covariates classification system proposed by the Head & Neck and Skin GEC-ESTRO Working Group for interdisciplinary standardized data collection in head and neck patient cohorts treated with interventional radiotherapy (brachytherapy). J. Contemp. Brachytherapy 2018, 10, 260–266.


Risk of Bias Assessment (CASP Checklist for Cohort Studies)
Study ID | Research Question | Selection Bias | Measurement Bias (Exposure) | Measurement Bias (Outcomes) | Confounding Factors | Follow-Up | Magnitude of Effect | Precision of Estimate | Credibility | Empiric Congruence | Applicability | Implications to Practice
[ ] | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | High risk | Low risk | Low risk | Low risk | Low risk
[ ] | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | High risk | Low risk | High risk | Low risk | Low risk
[ ] | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | High risk | Low risk | Low risk | Low risk | Low risk
[ ] | Low risk | Low risk | High risk | Low risk | High risk | Low risk | Uncertain risk | High risk | Low risk | Low risk | High risk | Low risk
[ ] | Low risk | Low risk | High risk | Low risk | High risk | Low risk | Low risk | High risk | Low risk | Low risk | High risk | Low risk
[ ] | Low risk | Low risk | Low risk | Low risk | High risk | Low risk | High risk | High risk | Low risk | Low risk | High risk | Low risk
[ ] | Low risk | High risk | High risk | Low risk | High risk | High risk | High risk | High risk | High risk | Low risk | High risk | Low risk
[ ] | Low risk | High risk | High risk | Low risk | Low risk | High risk | Low risk | Low risk | Uncertain risk | Uncertain risk | High risk | Low risk
[ ] | Low risk | Uncertain risk | High risk | Low risk | High risk | Low risk | Uncertain risk | High risk | Uncertain risk | Uncertain risk | High risk | Low risk
[ ] | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | Low risk | High risk | Low risk | Uncertain risk | Low risk | Low risk
[ ] | Low risk | High risk | High risk | High risk | Low risk | High risk | Low risk | High risk | Low risk | Low risk | Low risk | Low risk
Study ID (References) | Country | Study Period | n (%) | M/F | Mdn Age (Range) | Site, % | Histology, % | Stage, % | Resection/Margin Status, %; Reconstruction, % | EBRT, %; Dosing | BRT Dosing | Implant Technique and CTV | Dosimetry CTV | Start of BRT (Day Post-op) | Chemo, %; Regimen, %

[ ]
Canada | Sep 2009 to Apr 2017 | 55 (75) | 0.90 | 62 (24–92)
OT, 100 | SCC, 100 | pT1, 49
pT2, 47
pT3, 4
pT4, 0
pN0, 65
pN1, 15
pN2, 20
R0, 0
Close (2.1–5 mm), 58
R1, 42
Recon, 100
39
Involved neck, 55 Gy/25 F
Uninvolved neck, 50 Gy/25 F
Close margins
34 Gy/10 F, 63
R1
40.8 Gy/12 F, 37
ISIRT
CTV: Tumor bed
CTV: 5 mm around catheters | Start of BRT: D3–5 | Chemo: 34
If ENE, EBRT + concurrent weekly carboplatin 100 mg/m² + Taxol 40 mg/m²

[ , , ]
Spain | Oct 2000 to Oct 2008 | 57 (70) | 2.17 | 59 (25–85)
OC, 52
OPx, 21
HPx, 7
Neck, 18
SCC, 100 | cN0, 21
cN1-2, 79
pN0, 30
pN+, 63
pNx, 7
R0 (10 mm), 12
Close (Mdn 3.0 mm), 35
R1, 53
100
45 Gy/25 F
R0
32 Gy/8 F BID 6 h apart
R1
40 Gy/10 F BID 6 h apart
ISIRT
CTV: Tumor bed and all surgical bed considered recurrence risk category 2 (≥2 nodes or ENE) or 3 (R1)
CTV: Tumor bed and high-risk volumes | Start of BRT: D2–3 | Chemo: 63
Cisplatin–paclitaxel, 60
Cisplatin–other, 4

[ ]
India | Jan 2000 to Sep 2010 | 73 (36) | 2.25 | 52 | OT, 100 | SCC, 100 | T1, 14
T2, 12
N0, 100
<5 mm, x
≥5–10 mm, x
Recon, 0
None | 40 Gy/10 F BID 6 h apart | ISIRT, single-plane
CTV: Tumor bed
CTV: 5 mm around catheters | Start of BRT: D5–7 | Chemo: None

[ ]
Germany | Jan 2006 to Jan 2013 | 35 (63) | 2.89 | 60 | NC, 46
PNS, 54
SCC, 63
Adeno, 20
Other, 17
I, 17
II, 20
III, 11
IV, 51
R0, 54
R1, 31
R2, 3
Rx, 11
Osteosynthesis plates as needed
57
Mdn 50.4 Gy (40–63 Gy)
Mdn 20 Gy (10–35 Gy)/2.5 Gy-F BID 6 h apart | ISIRT
Intensity-modulation by variable catheter spacing (5–12 mm)
CTV: Maximum 10 mm around catheters | Start of BRT: Mdn D7 (D2–14) | Chemo: 31 (given only for SCC)
Cisplatin, 26
Taxane, 9
Etoposide, 3
Adeno, adenocarcinoma; BID, twice daily; c, clinical; CTV, clinical target volume; D, day; EBRT, external beam radiotherapy; ENE, extranodal extension; F, fraction; Gy, Gray; h, hour; HPx, hypopharynx; IRT, interventional radiotherapy; ISIRT, interstitial interventional radiotherapy; Mdn, median; N, nodal stage; NC, nasal cavity; OC, oral cavity; OPx, oropharynx; OT, oral tongue; PNS, paranasal sinus; p, pathologic; R, resection status; SCC, squamous cell carcinoma; T, primary tumor stage; x, unknown
a. Percentage comprising the population and intervention of interest, if from a mixed cohort.
b. Separate numbers not derivable for the population or intervention of interest, numbers reported for the entire cohort.







[ , , , ]
Spain | Feb 2001 to Nov 2015 | 63 (100) | 2.7 | 63 (26–82)
Neck, 32
OT, 24
OPx, 21
Other, 23
SCC, 95
Adeno, 2
Other, 4
Second primary, 24
T1-2N0, 18
T3/N+, 6

Recurrence
76
pN0, 38
pN+, 38
pNx, 24
ECE, 67
100
EBRT, 98
IRT, 14
Prior surgery, 64
Chemo, 32
R0 (10 mm), 11
Close (Mdn 3.0 mm), 35
R1, 54
None | ≤32 Gy, 29
40 Gy, 71
R0: 32 Gy/8 F BID 6 h apart
R1: 40 Gy/10 F BID 6 h apart
ISBT
CTV: Tumor bed and all surgical bed considered recurrence risk category 2 (≥2 nodes or ENE) or 3 (positive margins)
CTV: Tumor bed and high-risk volumes | Start of BRT: Mdn D4 (D0–D10) | Chemo: None
[ , , ] | Italy | Dec 2010 to Jun 2023 | 34 (85) | 2.6 | Mean 64.5
ICIRT group
NPx, 64
Ethmoid, 21
NC, 14

ISIRT group
OC, 27
Lx, 20
HPx, 13
OPx, 13
Other, 27
ICIRT group
SCC, 72
Adeno, 14
Other, 14

ISIRT group
SCC, 87
Other, 13
ICIRT group
LR, 100
(Second reRT, 3
Third reRT, 3)

ISIRT group
LR, x%
RR, x%
ICIRT group
Definitive, 64
Adjuvant, 36

ISIRT group
Definitive, 33
Adjuvant, 67
>65 Gy, 100%
GTR, 100
Recon
ICBT, 7
ISBT, 87
None | 30 Gy/12 F BID 6 h apart | ISIRT, ICIRT
CTV: Tumor bed and high-risk volumes
CTV: Tumor bed and high-risk volumes | Start of BRT: D3–5 | Chemo: ICBT, 21
ISBT, 0
[ ] | Germany | Jan 2016 to Dec 2020 | 60 (70) | 3.29 | 65.6
(15.4–92.7)
OPx, 25
Neck, 23
OC, 23
Other, 26
SCC, 90
Adeno, 8 Other, 2
LR, 68
RR, 23
Second primary, 8.
70
Mdn 60 Gy (32–70)
Chemo, 45
R0, 32
Close margin (<5 mm), 5
R1, 18
R2, 45
Recon
Pedicled or free flap, as indicated, x%
12
30–50 Gy
Mdn 30 Gy (12–40)/3 Gy-F BID 6 h apart | ISIRT
CTV: Tumor bed + 15–20 mm and high-risk volumes
8–12 mm spacing
CTV: Tumor bed + 15–20 mm and high-risk volumes | Start of BRT: D2–5 | Chemo: None
[ ] | Germany | Jan 2006 to May 2013 | 94 (71) | Not reported | <60, 38
≥60, 62
OPx/NPx, 28
OC, 26
Neck, 8
HPx/Lx, 6
Other 32
SCC, 80
Other, 44
I-II, 33
III-IV, 67
T1-2, 40
T3-4, 48
Tx, 10
N0, 71
N1-2, 22
N3, 3
67
Mdn 64.2 Gy (33–105)
Chemo, 26
Time to first recurrence
Mdn 24 mo (10–73)
<3 mo, 10
≥3 mo, 84
R0, 39
R1, 34
R2, 12
Rx, 6
No resection, 8
Recon
Pedicled, microvascular or random pattern flap, as indicated, x%
26
Mdn 48.7 Gy (30–60)
Mdn 25.9 Gy (10–35)/2.5 Gy (2.5–4.5) F BID 6 h apart | ISIRT | Intensity-modulation allowed for up to 200% within macroscopic tumor; OAR doses less than the reference isodose | 16
Platinum, 5
Cetuximab–taxane, 19
[ ] | Germany | Jan 2006 to Jan 2013 | 35 (47) | 2.89 | 60 | NC, 46
PNS, 54
SCC, 63
Adeno, 20
Other, 17
I, 17
II, 20
III, 11
IV, 51
Not reported | R0, 54
R1, 31
R2, 3
Rx, 11
Recon
Osteosynthesis plates as needed
57
Mdn 50.4 Gy (40–63 Gy)
Mdn 20 Gy (10–35 Gy)/2.5 Gy-F BID 6 h apart | ISIRT
CTV: Tumor bed
Intensity-modulation by variable catheter spacing (5–12 mm)
CTV: Maximum 10 mm around catheters | Start of BRT: Mdn D7 (D2–14) | Chemo: 31 (given only for SCC)
Cisplatin, 26
Taxane, 9
Etoposide, 3
[ ] | Lithuania | Dec 2008 to Mar 2010 | 30 (43) | 2.33 | 59
(41–79)
OC, 27
NC/PNS, 13
Parotid, 3
OPx, 13
Neck, 44
SCC, 100 | LR, 57
RR, 43
100
Definitive, 33
Adjuvant, 67
Mdn 66 Gy (50–72)
Chemo, 30
Time to first recurrence
Mdn 12 mo (3–19)
Not reportedNone30 Gy/12 F BID 6 h apartISIRT
Catheter spacing 10–15 mm
3D: CTV D90 isodose
[ ] | Brazil | Oct 1994 to Jun 2004 | 21 (71) | 3.2 | 53.5
(31–73)
Pharynx, 48
OC, 29
Skin, 19
Neck, 5
SCC, 100 | RR, 100 | 71
Mdn 52 Gy (30–66 Gy)
Chemo, 5
Time to salvage therapy
Mdn 32 mo (14–86)
GTR, 100
Recon
As needed, x%
100
ReRT subset
Mdn 30 Gy (25–50)
ReRT subset
Mdn 24 Gy
ISIRT
CTV: Tumor bed + 15–20 mm margins
Single plane, 90.5%
Double plane, 9.5%
CTV: Tumor bed + 5 mm | Start of BRT: D5 (D4–D12) | Chemo: 3
Platinum
Adeno, adenocarcinoma; BID, twice daily; c, clinical; CTV, clinical target volume; D, day; D90, dose received by 90% of the volume; EBRT, external beam radiotherapy; ENE, extranodal extension; F, fraction; GTR, gross total resection; Gy, Gray; h, hour; HPx, hypopharynx; ICIRT, intracavitary interventional radiotherapy; IRT, interventional radiotherapy; ISIRT, interstitial interventional radiotherapy; LR, local recurrence; Lx, larynx; Mdn, median; mo, month; N, nodal stage; NC, nasal cavity; NPx, nasopharynx; OAR, organ at risk; OC, oral cavity; OPx, oropharynx; OT, oral tongue; PNS, paranasal sinus; p, pathologic; R, resection status; reRT, reirradiation; RR, regional recurrence; RT, radiotherapy; SCC, squamous cell carcinoma; T, primary tumor stage; x, unknown; 3D, three-dimensional
a. Percentage comprising the population and intervention of interest, if from a mixed cohort.
b. Separate numbers not derivable for the population or intervention of interest, numbers reported for the entire cohort.
Peri-Operative Interventional Radiotherapy in the Primary Setting
Baseline characteristics, intervention, and survival outcomes:

Study ID | n, % | Mdn Age | Site, % | SCC, % | T1-2, % | N0, % | GTR, % | EBRT, % | Mdn EBRT Dose (Gy) | Mdn POIRT Dose (Gy) | Mdn FU (mo) | 3y RFS, % | 3y OS, % | 5y RFS, % | 5y OS, %
[ ] | 55, 75 | 62 | OT, 100 | 100 | 96 | 65 | 100 | 39 | 50–55 | 34 | 25 | 74 | 76 | 69 | 59
[ , , ] | 57, 70 | 59 | OT, 35; OPx, 21; FOM, 11; Other, 33 | 100 | -- | 30 | 100 | 100 | 45 | 40 | 52 | -- | -- | 52 (9y) | 55 (9y)
[ ] | 73, 36 | 52 | OT, 100 | 100 | 100 | 100 | 100 | 0 | 0 | 40 | 74 | -- | -- | 92 (6y) | 92 (6y)
[ ] | 35, 63 | 60 | PNS, 54; NC, 46 | 63 | -- | 37 | 85 | 57 | 50.4 | 20 | 28 | 83 | 72 | -- | --
a. Percentage of the cohort that received POIRT in the primary setting; b. for the entire cohort (n); c. non-overlapping with POIRT; d. DFS.
DFS, disease-free survival; EBRT, external beam radiotherapy; FOM, floor of mouth; FU, follow up; Gy, Gray; GTR, gross total resection; Mdn, median; mo, month; N, nodal stage; NC, nasal cavity; OPx, oropharynx; OS, overall survival; OT, oral tongue; PNS, paranasal sinus; POIRT, peri-operative interventional radiotherapy; RFS, recurrence-free survival; SCC, squamous cell carcinoma; T, primary tumor stage; y, year
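The POIRT schedules tabulated above span roughly 20 to 40 Gy in fractions of 2 to 4 Gy. As a point of orientation only (a worked conversion added here; the review reports physical doses), the linear-quadratic model expresses such schedules as biologically effective dose (BED) and 2 Gy equivalents (EQD2):

\[
\mathrm{BED} = nd\left(1 + \frac{d}{\alpha/\beta}\right), \qquad \mathrm{EQD2} = \frac{\mathrm{BED}}{1 + 2/(\alpha/\beta)}
\]

For the 40 Gy/10 F schedule (n = 10, d = 4 Gy, assuming α/β = 10 Gy for tumor), BED = 40 × (1 + 4/10) = 56 Gy and EQD2 = 56/1.2 ≈ 46.7 Gy; the 34 Gy/10 F schedule gives BED ≈ 45.6 Gy and EQD2 ≈ 38 Gy.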







Peri-Operative Interventional Radiotherapy in the Re-Irradiation Setting

Study ID | n, % | Mdn Age | Site, % | SCC, % | T1-2, % | N0, % | Recurrence, %; SP, % | Mdn Prior RT Dose (Gy) | Mdn Time to Recurrence (mo) | GTR, % | EBRT, % | Mdn EBRT Dose (Gy) | Mdn POIRT Dose (Gy) | Mdn FU (mo) | 3y RFS, % | 3y OS, % | 5y RFS, % | 5y OS, %
[ , , , ] | 63, 100 | 63 | Neck, 32; OT, 24; BOT, 13; OPx, 8; Other, 23 | 95 | -- | 38 | 76; 24 | -- | -- | 100 | 0 | 0 | 40 | 82 | -- | -- | 55 | 36
[ , , ] | 34, 85 | 65 | NPx, 31; OC, 14; Ethmoid, 10; Lx, 10; Other, 35 | 79 | -- | -- | 100; 0 | >65 | -- | 100 | 0 | 0 | 30 | 25 | 29 (2y) | 46 (2y) | -- | --
[ ] | 60, 70 | 66 | OPx, 25; OC, 23; Neck, 23; Other, 29 | 90 | -- | -- | 92; 8 | 60 | -- | 55 | 12 | 30–50 | 30 | 22 | 88 | 39 | 37 | 17
[ ] | 94, ~67 | ≥60 | OPx/NPx, 28; OC, 26; Neck, 9; HPx/Lx, 9; Other, 32 | 80 | 40 | 71 | 100; 0 | 64 | 24 | 73 | 26 | 49 | 26 | 13 | -- | -- | -- | --
[ ] | 35, ~37 | 60 | PNS, 54; NC, 46 | 63 | -- | 37 | -- | -- | -- | 85 | 55 | 28 | 20 | 28 | 34 | 72 | -- | --
[ ] | 30, 43 | 59 | Neck, 44; OC, 27; OPx, 13; Other, 16 | 100 | -- | -- | 100; 0 | 66 | ~12 | -- | 0 | 0 | 30 | 16 | 53 (2y) | 62 (2y) | -- | --
[ ] | 21, 71 | 54 | Pharynx, 47; OC, 29; Other, 24 | 100 | (rT0) | 0 | 100; 0 | 52 | ~32 | 100 | 100 | 30 | 24 | 36 | -- | -- | 43 | 50
a. Percentage of the cohort (N) that received POIRT in the re-irradiation setting; b. for the entire cohort (n); c. endocavitary; d. local RFS; e. including pre-op or post-op EBRT; f. DFS; g. time to recurrence; h. time to salvage therapy.
BOT, base of tongue; EBRT, external beam radiotherapy; FU, follow up; Gy, Gray; GTR, gross total resection; HPx, hypopharynx; Lx, larynx; Mdn, median; mo, month; N, nodal stage; NC, nasal cavity; NPx, nasopharynx; OC, oral cavity; OPx, oropharynx; OS, overall survival; OT, oral tongue; PNS, paranasal sinus; POIRT, peri-operative interventional radiotherapy; r, recurrence; reRT, reirradiation; RFS, recurrence-free survival; SCC, squamous cell carcinoma; SP, secondary primary; T, primary tumor stage; y, year
Peri-Operative Interventional Radiotherapy in the Primary Setting
Baseline characteristics, intervention, and toxicity outcomes:

Study ID | n, % | Mdn Age | Site, % | GTR, % | Recon, % | EBRT, % | Mdn EBRT Dose (Gy) | Mdn POIRT Dose (Gy) | Dosimetry Constraints | POIRT Start (Day PO) | Mdn FU (mo) | Acute Grade 1–2, % | Acute Grade 3–4, % | Acute Grade 5, % | Late Grade 1–2, % | Late Grade 3–4, % | Late Grade 5, %

[ ]
55, 75 | 62 | OT, 100 | 100 | 100 | 39 | 50–55 | 34 | -- | 3–5 | 25 | Acute G1–2: Glossitis, 100 | Acute G3–4: Bleeding, 20 | Late G1–2: Local pain, 70 | Late G3–4: 0

[ , , ]
57, 70 | 59 | OT, 35
OPx, 21
FOM, 11
Other, 33
100 | -- | 100 | 45 | 40 | DHI ≥ 0.6 | 2–3 | 52 | Acute G1–2: -- | Acute G3–4: Fistula, 5
Bleeding, 2
Graft failure, 2
Wound complication, 2
Acute G5: Bleeding, 2 | Late G1–2: -- | Late G3–4: Fibrosis, 5
STN, 5
Bleeding, 2
Fistula, 2
Nerve damage, 2
Wound complication, 2 | ORN, 0
Bleeding, 4

[ ]
73, 36 | 52 | OT, 100 | 100 | 0 | 0 | 0 | 40 | -- | 5–7 | 74 | -- | -- | 0 | -- | STN, 0
ORN, 0
0

[ ]
35, 63 | 60 | PNS, 54; NC, 46 | 85 | -- | 57 | 50.4 | 20 | -- | 2–14 | 28 | Acute G1–2: Mucosal crusting, 11
Peri-orbital edema, 9
Allodynia, 6
Wound complication, 6
Alopecia, 3
Dysesthesia, 3
Epiphora, 3
Fatigue, 3
Flushing, 3
Wound complication, 3 | Acute G5: 0 | Late G1–2: Mucosal crusting, 17
Wound complication, 14
Dysgeusia due to hyposmia, 14
Allodynia, 6
Epiphora, 6 Peri-orbital Edema, 6
Eustachian tube dysfunction, 3
Late G3–4: 0 | Late G5: 0
a. Percentage of the cohort that received POIRT in the primary setting; b. for the entire cohort (n); c. non-overlapping with POIRT.
DHI, dose homogeneity index; EBRT, external beam radiotherapy; FOM, floor of mouth; FU, follow up; GTR, gross total resection; Gy, Gray; Mdn, median; N, nodal stage; NC, nasal cavity; OPx, oropharynx; ORN, osteoradionecrosis; OT, oral tongue; PNS, paranasal sinus; PO, post-op; POIRT, peri-operative interventional radiotherapy; STN, soft tissue necrosis; T, primary tumor stage











Peri-Operative Interventional Radiotherapy in the Re-Irradiation Setting

[ , , , ]
63, 100 | 63 | Neck, 32
OT, 24
BOT, 13
OPx, 8
Other, 23
-- | -- | 100 | -- | 0 | 0 | 40 | V150 (6 Gy) <13 cc
Mandibular/vascular D10 cc <4 Gy
0–10 | 82 | Acute G1–2: -- | Acute G3–4: Wound dehiscence, 8
Graft failure, 6
Bleeding, 5
Delayed bleeding, 3 | Acute G5: Post-op bleeding, 2; Post-op mortality before BRT completion, 2 | Late G1–2: -- | Late G3–4: Fistula, 19
ORN, 5
STN, 3
Dysphagia, 3
Fibrosis, 3
Nerve damage, 3
Fistula, 2
STN, 2
[ , , ] | 34, 85 | 65 | NPx, 31
OC, 14
Ethmoid, 10
Lx, 10
Other, 35
>65 | -- | 100 | 94 | 0 | 0 | 30 | QUANTEC | 3–5 | 25 | Acute G1–2: Cranial neuropathy, 3
Graft failure, 3
0 | 0 | 0 | 0 | 0
[ ] | 60, 70 | 66 | OPx, 25
OC, 23
Neck, 23
Other, 29
60 | -- | 55 | -- | 12 | 30–50 | 30 | -- | 2–5 | 22 | Acute G1–2: Pain, 25; Mucositis, 22
Xerostomia, 15 Dysphagia, 13
Hypogeusia, 8
Hyposmia, 3
Bleeding, 3
Dysphagia, 20
Pain, 17
Xerostomia, 10
Hyposmia, 3
Local infection, 3
Respiratory infection, 3
Hypogeusia, 2
Mucositis, 2
0Xerostomia, 32
Pain, 18
Dysphagia, 17 Hypogeusia, 15
Mucositis, 10
Hyposmia, 3
Xerostomia, 13
Dysphagia, 10
Pain, 8
Hyposmia, 5 Mucositis, 5
Hypogeusia, 3
ORN, 2
STN, 2
0
[ ] | 94, ~67 | ≥60 | OPx/NPx, 28
OC, 26
Neck, 9
HPx/Lx, 9
Other, 32
64 | 24 | 73 | -- | 26 | 49 | 26 | GTV boost up to 200% allowed; OAR doses less than reference isodose | -- | 13 | -- | -- | 0 | STN, 0
ORN, 0
STN, 0
ORN, 0
0
[ ] | 35, ~37 | 60 | PNS, 54; NC, 46 | -- | -- | 50.4 | -- | 20 | 28 | 20 | -- | 2–14 | 28 | Acute G1–2: Mucosal crusting, 11
Peri-orbital edema, 9
Allodynia, 6 Wound complication, 6
Alopecia, 3
Dysesthesia, 3
Epiphora, 3
Fatigue, 3 Flushing, 3
Wound complication, 3 | Acute G5: 0 | Late G1–2: Mucosal crusting, 17
Dysgeusia due to Hyposmia, 14
Wound complication, 14
Allodynia, 6
Epiphora, 6 Peri-orbital edema, 6
Eustachian tube dysfunction, 3
Late G3–4: 0 | Late G5: 0
[ ] | 30, 43 | 59 | Neck, 44
OC, 27
OPx, 13
Other, 16
66
~12 | -- | -- | 0 | 0 | 30 | -- | -- | 16 | Fibrosis, 6; Wound complication, 3
Bleeding, 0
0 Dysphagia, 3
Hoarseness, 3
ORN, 3 0
[ ] | 21, 71 | 54 | Pharynx, 47
OC, 29
Other, 24
52 | ~32 | 100 | -- | 100 | 30 | 24 | Dmax ≤ 135%
Skin dose <60%
4–12 | 36 | Acute G1–2: -- | Acute G3–4: Wound dehiscence, 14; Subcutaneous infection, 5 | Acute G5: 0 | Late G1–2: -- | Late G3–4: Local ulcer, 14
Neck fibrosis, 5
STN, 0
ORN, 0
0
a. Percentage of the cohort (N) that received POIRT in the re-irradiation setting; b. for the entire cohort (n); c. time to recurrence; d. time to salvage therapy; e. reported overall grade 1–2 and grade 3 toxicity rates of 17% and 10%, chronicity not specified.
BOT, base of tongue; cc, cubic centimeter; Dmax, maximum dose; EBRT, external beam radiotherapy; FOM, floor of mouth; FU, follow up; GTR, gross total resection; GTV, gross tumor volume; Gy, Gray; Lx, larynx; Mdn, median; N, nodal stage; NC, nasal cavity; NPx, nasopharynx; OC, oral cavity; OAR, organ at risk; OPx, oropharynx; ORN, osteoradionecrosis; OS, overall survival; OT, oral tongue; PNS, paranasal sinus; PO, post-op; POIRT, peri-operative interventional radiotherapy; QUANTEC, Quantitative Analysis of Normal Tissue Effects in the Clinic; reRT, re-irradiation; RFS, recurrence-free survival; SCC, squamous cell carcinoma; STN, soft tissue necrosis; T, primary tumor stage; Vn, volume receiving n% of the prescribed dose

Share and Cite

Bacorro, W.; Fionda, B.; Soror, T.; Bussu, F.; Kovács, G.; Tagliaferri, L. Local Control, Survival, and Toxicity Outcomes with High-Dose-Rate Peri-Operative Interventional Radiotherapy (Brachytherapy) in Head and Neck Cancers: A Systematic Review. J. Pers. Med. 2024, 14, 853. https://doi.org/10.3390/jpm14080853


Supplementary Material

ZIP-Document (ZIP, 110 KiB)

