Systematic Reviews and Meta-Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy
  • Software and Tools
  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review. This framework allows iterative searching over a reduced number of data sources and imposes no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets), but it may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. In addition, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening alone can consume one hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
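The screening-rate figure above lends itself to a quick back-of-envelope calculation. The sketch below uses the 100-200 records per screener-hour range from the text; the 3,000-record search is a hypothetical example, not a figure from this guide:

```python
# Back-of-envelope estimate of title/abstract screening workload.
# The 100-200 records per screener-hour rate comes from the guide text;
# the record counts below are hypothetical.

def screening_hours(n_records, records_per_hour=150, n_screeners=2):
    """Total person-hours for the first round of independent dual screening."""
    return n_screeners * n_records / records_per_hour

# A search returning 3,000 records, screened independently by two reviewers:
print(round(screening_hours(3000), 1))  # 40.0 person-hours, first round only
```

Note that this covers only the first screening pass; full-text review, data extraction, and analysis add substantially more time.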

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit your results to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or by appending AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
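If you want to compare the sizes of the two result sets programmatically, the queries above can be sent to NCBI's documented E-utilities esearch endpoint. This is a minimal sketch, not part of the guide itself; the topic string is just the example query from the text:

```python
# Sketch: checking PubMed result counts for the two queries above via
# NCBI's E-utilities esearch API (https://eutils.ncbi.nlm.nih.gov/).
# The topic is the example from the text; swap in your own.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query):
    """Return the number of PubMed records matching `query`."""
    params = urlencode({"db": "pubmed", "term": query, "retmode": "json"})
    with urlopen(f"{EUTILS}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

topic = '"neoadjuvant chemotherapy"'
broad = f"{topic} AND systematic[sb]"   # noisy, high-recall subset
narrow = f"{topic} AND systematic[ti]"  # quick title-word filter
# pubmed_count(broad) and pubmed_count(narrow) return the two result sizes
```

Comparing the two counts gives a feel for how much noise the systematic[sb] subset carries for your topic.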

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators register their protocols in PROSPERO, a registry of review protocols. Other published protocols, as well as Cochrane Review protocols, appear in the Cochrane Methodology Register, a part of the Cochrane Library.

  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis

The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by its co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses
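The "meta-analysis of effect estimates" defined in the glossary above can be made concrete with a small numerical sketch. This is a fixed-effect inverse-variance pooling under hypothetical data, not the method of any particular review; real analyses should use a vetted package and consider random-effects models:

```python
# Minimal illustration of meta-analysis of effect estimates as defined
# in the glossary: fixed-effect inverse-variance pooling. The study
# estimates and variances below are hypothetical.
import math

def inverse_variance_pool(estimates, variances):
    """Pool effect estimates weighted by 1/variance.

    Returns the pooled estimate and its 95% confidence interval,
    i.e. a "result" in the glossary's sense (point estimate + precision).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical studies reporting mean differences with variances:
pooled, ci = inverse_variance_pool([0.30, 0.10, 0.25], [0.04, 0.02, 0.05])
```

Each study's weight is the reciprocal of its variance, so more precise studies pull the pooled estimate toward their results.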

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items ( table 1 ). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 ( table 2 ). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated ( fig 1 ).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2 ).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Table 1 | PRISMA 2020 item checklist


Table 2 | PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.


We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, designing interventions that address the identified barriers, and evaluating those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items where there is varied interpretation of the items.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. 
We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal of Public Health, for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre, and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .



Understanding the influence of different proxy perspectives in explaining the difference between self-rated and proxy-rated quality of life in people living with dementia: a systematic literature review and meta-analysis

  • Open access
  • Published: 24 April 2024


  • Lidia Engel (ORCID: orcid.org/0000-0002-7959-3149),
  • Valeriia Sokolova,
  • Ekaterina Bogatyreva &
  • Anna Leuenberger

Proxy assessment can be elicited via the proxy-patient perspective (i.e., asking proxies to assess the patient’s quality of life (QoL) as they think the patient would respond) or proxy-proxy perspective (i.e., asking proxies to provide their own perspective on the patient’s QoL). This review aimed to identify the role of the proxy perspective in explaining the differences between self-rated and proxy-rated QoL in people living with dementia.

A systematic literature review was conducted by sourcing articles from a previously published review, supplemented by an updated search in four bibliographic databases. Peer-reviewed studies were included if they reported both self-reported and proxy-reported mean QoL estimates using the same standardized QoL instrument, were published in English, and focused on the QoL of people with dementia. A meta-analysis was conducted to synthesize the mean differences between self- and proxy-report across different proxy perspectives.

The review included 96 articles, from which 635 observations were extracted. Most extracted observations used the proxy-proxy perspective (79%), compared with the proxy-patient perspective (10%); the perspective was not stated for the remaining 11%. The QOL-AD was the most commonly used measure, followed by the EQ-5D and DEMQOL. The standardized mean difference (SMD) between self- and proxy-report was lower for the proxy-patient perspective (SMD: 0.250; 95% CI 0.116; 0.384) than for the proxy-proxy perspective (SMD: 0.532; 95% CI 0.456; 0.609).

Different proxy perspectives affect QoL ratings: adopting the proxy-proxy perspective yields a larger inter-rater gap than the proxy-patient perspective.


Quality of life (QoL) has become an important outcome for research and practice, but obtaining reliable and valid estimates remains a challenge in people living with dementia [ 1 ]. According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria [ 2 ], dementia, termed Major Neurocognitive Disorder (MND), involves a significant decline in at least one cognitive domain (executive function, complex attention, language, learning, memory, perceptual-motor, or social cognition), where the decline represents a change from the person's prior level of cognitive ability, is persistent and progressive over time, is not associated exclusively with an episode of delirium, and reduces the person’s ability to perform everyday activities. Since dementia is one of the most pressing challenges for healthcare systems today [ 3 ], it is critical to study its impact on QoL. The World Health Organization defines the concept of QoL as “individuals' perceptions of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards, and concerns” [ 4 ]. It is a broad-ranging concept incorporating in a complex way the person's physical health, psychological state, level of independence, social relationships, personal beliefs, and their relationships to salient features of the environment.

Although there is evidence that people with mild to moderate dementia can reliably rate their own QoL [ 5 ], as the disease progresses, there is typically a decline in memory, attention, judgment, insight, and communication that may compromise self-reporting of QoL [ 6 ]. Additionally, behavioral symptoms, such as agitation, and affective symptoms, such as depression, may present another challenge in obtaining self-reported QoL ratings due to emotional shifts and unwillingness to complete the assessment [ 7 ]. Although QoL is subjective and should ideally be assessed from an individual’s own perspective [ 8 ], the decline in cognitive function emphasizes the need for proxy-reporting by family members, health professionals, or care staff who are asked to report on behalf of the person with dementia. However, proxy-reports are not substitutable for self-reports from people with dementia, as they offer supplementary insights, reflecting the perceptions and viewpoints of people surrounding the person with dementia [ 9 ].

Previous research has consistently highlighted disagreement between self-rated and proxy-rated QoL in people living with dementia, with proxies generally providing lower ratings (indicating poorer QoL) than the person’s own ratings [ 8 , 10 , 11 , 12 ]. Cognitive impairment associated with greater dementia severity has been found to be associated with a larger difference between self-ratings and proxy-ratings obtained from family caregivers, as it becomes increasingly difficult for severely cognitively impaired individuals to respond to questions that require contemplation, introspection, and sustained attention [ 13 , 14 ]. Moreover, non-cognitive factors, such as awareness of disease and depressive symptoms, play an important role when comparing QoL ratings between individuals with dementia and their proxies [ 15 ]. Qualitative evidence has also shown that people with dementia tend to compare themselves with their peers, whereas carers make comparisons with how the person used to be in the past [ 9 ]. The disagreement between self-reported and carer proxy-rated QoL may be modulated by personal, cognitive, or relational factors, for example, the type of relationship, the frequency of contact maintained, the person’s cognitive status, the carer’s own feelings about dementia, the carer’s mood, and the perceived burden of caregiving [ 14 , 16 ]. Disagreement may also arise from the person with dementia’s difficulty communicating symptoms and proxies’ inability to recognize certain symptoms, like pain [ 17 ], or be influenced by the amount of time spent with the person with dementia [ 18 ]. These factors may also prevent proxies from accurately rating certain domains of QoL, with previous evidence showing a higher level of agreement for observable domains, such as mobility, than for less observable domains like emotional wellbeing [ 8 ].
Finally, agreement also depends on the type of proxy (i.e., informal/family carers or professional staff) and the nature of their relationship: for instance, proxy QoL scores provided by formal carers tend to be higher (reflecting better QoL) than those supplied by family members [ 19 , 20 ]. Staff members might associate residents’ QoL with the quality of care delivered or the stage of their cognitive impairment, whereas relatives often compare with the person’s QoL when they were younger, lived in their own home, and did not have dementia [ 20 ].

What has not been fully examined to date is the role of the different proxy perspectives employed in QoL questionnaires in explaining disagreement between self-rated and proxy-rated scores in people with dementia. Pickard et al. (2005) proposed a conceptual framework for proxy assessments that distinguishes between the proxy-patient perspective (i.e., asking proxies to assess the patient’s QoL as they think the patient would respond) and the proxy-proxy perspective (i.e., asking proxies to provide their own perspective on the patient’s QoL) [ 21 ]. In this context, the intra-proxy gap describes the difference between the proxy-patient and proxy-proxy perspectives, whereas the inter-rater gap is the difference between self-report and proxy-report [ 21 ].

Existing generic and dementia-specific QoL instruments either specify the perspective explicitly in their instructions or imply it indirectly in their wording. For example, the instructions of the Dementia Quality of Life Measure (DEMQOL) ask proxies to give the answer they think their relative would give (i.e., proxy-patient perspective) [ 22 ], whereas the family version of the Quality of Life in Alzheimer’s Disease (QOL-AD) instructs proxies to rate their relative’s current situation as they (the proxy) see it (i.e., proxy-proxy perspective) [ 7 ]. Some instruments, like the EQ-5D measures, have two proxy versions, one for each perspective [ 23 , 24 ]. The Adult Social Care Outcomes Toolkit (ASCOT) proxy version, on the other hand, asks proxies to complete the questions from both perspectives: their own opinion and how they think the person would answer [ 25 ].

QoL scores generated using different perspectives are expected to differ, with qualitative evidence showing that carers rate the person with dementia’s QoL lower (worse) when instructed to comment from their own perspective than from the perspective of the person with dementia [ 26 ]. However, to our knowledge, no previous review has fully synthesized existing evidence in this area. Therefore, we undertook a systematic literature review to examine the role of different proxy-assessment perspectives in explaining differences between self-rated and proxy-rated QoL in people living with dementia. The review was conducted under the hypothesis that the difference in QoL estimates would be larger when adopting the proxy-proxy perspective than the proxy-patient perspective.

The review was registered with the International Prospective Register of Systematic Reviews (PROSPERO; CRD42022333542) and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see Appendix 1 ) [ 27 ].

Search strategy

This review used two approaches to obtain literature. First, primary articles from an existing review by Roydhouse et al. were retrieved [ 28 ]. That review included studies published from inception to February 2018 that compared self- and proxy-reports; studies that focused explicitly on Alzheimer’s disease or dementia were retrieved for the current review. Two reviewers conducted a full-text review to assess whether the eligibility criteria listed below were met. Second, an update of the Roydhouse et al. search was undertaken to capture more recent studies: the original search strategy was amended to cover studies published after January 1, 2018, and limited to studies within the context of dementia. The original search was undertaken over a three-week period (17/11/2021–9/12/2021) and then updated on July 3, 2023. Peer-reviewed literature was sourced from the MEDLINE, CINAHL, and PsycINFO databases via EBSCOhost, as well as EMBASE. Four main search term categories were used: (1) proxy terms (e.g., care*-report*), (2) QoL/outcome terms (e.g., ‘quality of life’), (3) disease terms (e.g., ‘dementia’), and (4) pediatric terms (e.g., ‘pediatric*’) for exclusion. Keywords were limited to titles and abstracts only, and MeSH terms were included for all databases. The full search strategy can be found in Appendix 2 . The first three search term categories were combined with AND, and the NOT operator was used to exclude pediatric terms. A limiter was applied in all database searches to include only studies with human participants and articles published in English.
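The boolean structure described above (three term categories combined with AND, pediatric terms excluded with NOT, keywords limited to titles and abstracts) can be sketched as a small query builder. This is an illustration only: the term lists below are placeholders, not the authors' full Appendix 2 strategy, and the TI/AB field tags follow common EBSCOhost-style syntax.

```python
# Illustrative term lists (NOT the authors' full search strategy from Appendix 2).
proxy_terms = ["proxy*", "care*-report*"]
qol_terms = ['"quality of life"', "QoL"]
disease_terms = ["dementia", '"Alzheimer*"']
pediatric_terms = ["pediatric*", "paediatric*"]

def or_block(terms):
    """Join synonyms with OR inside parentheses, restricted to title/abstract fields."""
    return "(" + " OR ".join(f"TI {t} OR AB {t}" for t in terms) + ")"

# Categories 1-3 combined with AND; category 4 excluded with NOT.
query = (
    " AND ".join(or_block(t) for t in (proxy_terms, qol_terms, disease_terms))
    + " NOT " + or_block(pediatric_terms)
)
print(query)
```

The same builder could be reused per database, swapping field tags (e.g., `ti,ab` for Ovid-style interfaces) while keeping the category structure fixed.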

Selection criteria

Studies from all geographical locations were included in the review if they (1) were published in English in a peer-reviewed journal (conference abstracts, dissertations, and gray literature were excluded); (2) were primary studies (reviews were excluded); (3) clearly defined the disease of participants, which was limited to Alzheimer’s disease or dementia; (4) reported separate QoL scores for people with dementia (studies that included mixed populations had to report a separate QoL score for people with dementia to be considered); (5) used a standardized, existing QoL instrument for assessment; and (6) provided mean self-reported and proxy-reported QoL scores for the same dyad sample (studies that reported means for non-matched samples were excluded) using the same QoL instrument.

Four reviewers (LE, VS, KB, AL) formed two pairs that independently screened the 179 full texts from the Roydhouse et al. (2022) review that included Alzheimer’s disease or dementia patients. If a discrepancy in inclusion decisions occurred, articles were discussed among all reviewers until a consensus was reached. Studies identified from the database search were imported into EndNote [ 29 ]; duplicates were removed in EndNote and the remaining records were uploaded to Rayyan [ 30 ]. Each abstract was reviewed by two independent reviewers (any two of the four), with disagreements regarding inclusion discussed among all reviewers until a consensus was reached. Full-text screening of each eligible article was likewise completed by two independent reviewers, with disagreements again resolved by discussion among all reviewers.

Data extraction

A data extraction template was created in Microsoft Excel. The following information was extracted where available: country, study design, study sample, study setting, dementia type, disease severity, Mini-Mental State Examination (MMSE) score details, proxy type, perspective, living arrangements, QoL assessment measure/instrument, self-reported scores (mean, SD), proxy-reported scores (mean, SD), and agreement statistics. If a study reported the mean (SD) for the total score as well as for specific QoL domains of the measure, we extracted both. If studies reported multiple scores across different time points or subgroups, we extracted all scores. For interventional studies, scores from both the intervention group and the control group were recorded. In determining the proxy perspective, we relied on the authors’ description in the article. If the perspective was not explicitly stated, we adopted the perspective of the instrument developers; where more than one perspective was possible (e.g., for the EQ-5D measures) and the perspective was not explicitly stated, it was categorized as ‘undefined.’ For agreement, we extracted the intraclass correlation coefficient (ICC), a reliability index that reflects both the degree of correlation and the agreement between measurements of continuous variables. Although there are different forms of ICC based on the model (1-way random effects, 2-way random effects, or 2-way fixed effects), the type (single rater/measurement or the mean of k raters/measurements), and the definition of the relationship [ 31 ], this level of information was not extracted because it was insufficiently reported in the original studies. Values for the ICC range between 0 and 1, with values interpreted as poor (less than 0.5), moderate (0.5–0.75), good (0.75–0.9), and excellent (greater than 0.9) reliability between raters [ 31 ].
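The ICC interpretation bands above can be expressed as a small helper. This is a minimal sketch of the cut-offs cited in [31]; the function name is ours, and the shared 0.75 boundary is assigned here to the 'good' band as an assumption.

```python
def interpret_icc(icc: float) -> str:
    """Map an intraclass correlation coefficient to the reliability bands
    described in the text: <0.5 poor, 0.5-0.75 moderate, 0.75-0.9 good,
    >0.9 excellent. Boundary 0.75 is assigned to 'good' by assumption."""
    if not 0.0 <= icc <= 1.0:
        raise ValueError("ICC expected in [0, 1]")
    if icc < 0.5:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.9:
        return "good"
    return "excellent"
```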

Data synthesis and analysis

Characteristics of studies were summarized descriptively. Self-reported and proxy-reported means and SDs were extracted from the full texts, and the mean difference was calculated (or extracted if available) for each pair. For studies that reported median values instead of means, values were converted using the approach outlined by Wan et al. (2014) [ 32 ]. Missing SDs (5 studies, 20 observations) were derived from reported standard errors or confidence intervals following the Cochrane guidelines [ 33 ]. Missing SDs (6 studies, 29 observations) in studies that presented only the mean value without any additional summary statistics were imputed using the prognostic method [ 34 ]: we predicted the missing SDs by averaging the SDs of observed studies with full information for the respective measure and source (self-report versus proxy-report).
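The two SD-recovery routes described above can be sketched as follows. This is a minimal illustration under the standard Cochrane formulas (SD from a standard error, SD from a 95% confidence interval of a mean) plus a simple average for the prognostic-style imputation; the function names are ours, not the authors' code.

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover SD from a standard error of the mean: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """Recover SD from a 95% CI of a mean: SD = sqrt(n) * (upper - lower) / (2 * z)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def impute_sd(observed_sds: list[float]) -> float:
    """Prognostic-style imputation as described in the text: average the SDs
    observed in comparable studies (same measure and same source)."""
    if not observed_sds:
        raise ValueError("need at least one observed SD")
    return sum(observed_sds) / len(observed_sds)
```

For example, a reported SE of 1.0 with n = 25 recovers an SD of 5.0, and a 95% CI of (8, 12) with n = 25 recovers an SD of about 5.1.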

A meta-analysis was performed in Stata (17.1 Stata Corp LLC, College Station, TX) to synthesize mean differences between self- and proxy-reported scores across different proxy perspectives. First, the pooled raw mean differences were calculated for each QoL measure separately, given the differences in scales between measures. Second, we calculated the pooled standardized mean difference (SMD) for all studies, stratified by proxy type (family carers, formal carers, mixed), dementia severity (mild, moderate, severe), and living arrangement (residential/institutional care, mixed). The SMD accounts for the use of different measurement scales; effect sizes were estimated using Cohen’s d. Random-effects models, based on the restricted maximum-likelihood (REML) estimator, were used to allow for unexplained between-study variability. The percentage of variability attributed to heterogeneity between studies was assessed using the I² statistic; an I² of 0%–40% represents possibly unimportant heterogeneity, 30%–60% moderate heterogeneity, 50%–90% substantial heterogeneity, and 75%–100% considerable heterogeneity [ 35 ]. The chi-squared statistic (χ²) provided evidence of heterogeneity, with a p-value of 0.1 used as the significance level. For studies that reported agreement statistics based on the ICC, we also produced a forest plot stratified by the study perspective. We also calculated Cochran’s Q statistic (test of homogeneity), which assesses whether observed differences in results are compatible with chance alone.
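The pooling mechanics described above can be sketched in a few lines. The authors used REML in Stata; for transparency this sketch uses the simpler DerSimonian-Laird estimator of between-study variance (a stated substitution, not the paper's method) to show how Cohen's d values are combined and how I² falls out of Cochran's Q.

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) between two groups and its
    approximate sampling variance."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def pool_random_effects(effects):
    """Random-effects pooling of (d, var) pairs via DerSimonian-Laird
    (the paper itself used REML in Stata). Returns the pooled SMD and the
    I^2 heterogeneity percentage derived from Cochran's Q."""
    w = [1 / v for _, v in effects]
    d = [e for e, _ in effects]
    fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, d))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0           # between-study variance
    w_star = [1 / (v + tau2) for _, v in effects]
    pooled = sum(wi * di for wi, di in zip(w_star, d)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # Higgins I^2
    return pooled, i2
```

With identical effects the estimator collapses to the common value and I² is 0%; increasing spread between studies inflates Q, τ², and I².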

Risk of bias and quality assessment

The quality of studies was assessed using a checklist for assessing the quality of quantitative studies developed by Kmet et al. (2004) [ 36 ]. The checklist consists of 14 items, each scored as ‘2’ (yes, item sufficiently addressed), ‘1’ (item partially addressed), ‘0’ (no, not addressed), or ‘not applicable.’ A summary score was calculated for each study by summing the scores obtained across relevant items and dividing by the total possible score, with items scored ‘not applicable’ excluded from the total. Quality assessment was undertaken by one reviewer, with 25% of the papers assessed independently by a second reviewer.
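The adjusted summary score described above can be sketched as a one-line computation: sum the applicable item scores and divide by the maximum possible over applicable items. A minimal illustration (function name ours), assuming `None` marks 'not applicable':

```python
def kmet_summary_score(item_scores):
    """Summary score for the 14-item Kmet et al. (2004) checklist as described
    in the text: each item is 2 (yes), 1 (partial), 0 (no), or None (not
    applicable); the total is divided by 2 * (number of applicable items)."""
    applicable = [s for s in item_scores if s is not None]
    if not applicable:
        raise ValueError("no applicable items")
    return sum(applicable) / (2 * len(applicable))
```

For instance, a study scoring 2 on every item gets 1.0 (100%), while scores of [2, 1, 0] with one item not applicable give 3/6 = 0.5 (50%).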

The PRISMA diagram in Fig.  1 shows that after the abstract and full-text screening, 38 studies from the database search and 58 studies from the Roydhouse et al. (2022) review were included in this review—a total of 96 studies. A list of all studies included and their characteristics can be found in Appendix 3.

figure 1

PRISMA 2020 flow diagram

General study characteristics

The 96 articles included in the review were published between 1999 and 2023 and came from across the globe; the largest share of studies (36%) was conducted in Europe. People with dementia in these studies were living in the community (67%), in residential/institutional care (15%), or in mixed dwelling settings (18%). Most proxy-reports were provided by family carers (85%), and only 8 studies (8%) included formal carers. The mean MMSE score for dementia and Alzheimer’s participants was 18.77 (SD = 4.34; N = 85 studies), which corresponds to moderate cognitive impairment [ 37 ]. Further characteristics of the included studies are provided in Table  1 . The quality of the included studies (see Appendix 4) was generally very good, scoring on average 91% (SD: 9.1), with scores ranging from 50% to 100%.

Quality of life measure and proxy perspective used

A total of 635 observations were recorded from the 96 studies. The majority of studies and observations assumed the proxy-proxy perspective (77 studies, 501 observations), followed by the proxy-patient perspective (18 studies, 62 observations), with 18 studies (72 observations) not clearly defining the perspective. Table 2 provides a detailed overview of the number of studies and observations across the respective QoL measures and proxy perspectives. Two studies (14 observations) adopted both perspectives within the same study design: one using the QOL-AD measure [ 5 ] and the other exploring the EQ-5D-3L and EQ VAS [ 38 ]. Overall, the QOL-AD was the most often used QoL measure, followed by the EQ-5D and DEMQOL. Mean scores for specific QoL domains were available for the DEMQOL and QOL-AD; however, only the QOL-AD provided domain-specific mean scores from both proxy perspectives.

Mean scores and mean differences by proxy perspective and QoL measure

The raw mean scores for self-reported and proxy-reported QoL are provided in Supplementary file 2. The pooled raw mean difference by proxy perspective and measure is shown in Table  3 . Regardless of the perspective adopted and the QoL instrument used, self-reported scores were higher (indicating better QoL) than proxy-reported scores, except for the DEMQOL, where proxies reported better QoL than people with dementia themselves. Most instruments were explored from one perspective only; for the EQ-5D-3L, EQ VAS, and QOL-AD, mean differences were available for both perspectives. For these three measures, mean differences were smaller when adopting the proxy-patient perspective than the proxy-proxy perspective, although mean scores for the QOL-AD were slightly lower from the proxy-proxy perspective. I² statistics indicate considerable heterogeneity (I² > 75%) between studies. Mean differences by specific QoL domains are provided in Appendix 5, but only for the QOL-AD, which was explored from both perspectives. Generally, mean differences appeared to be smaller for the proxy-proxy perspective than the proxy-patient perspective across all domains, except for ‘physical health’ and ‘doing chores around the house.’ However, these results need to be interpreted carefully, as the proxy-patient perspective scores were derived from only one study.

Standardized mean differences by proxy perspective, stratified by proxy type, dementia severity, and living arrangement

Table 4 provides the SMDs by proxy perspective, which adjust for the different QoL measurement scales. Findings suggest that adopting the proxy-patient perspective results in a lower SMD (SMD: 0.250; 95% CI 0.116; 0.384) than the proxy-proxy perspective (SMD: 0.532; 95% CI 0.456; 0.609). The largest SMD was recorded for studies that did not define the study perspective (SMD: 0.594; 95% CI 0.469; 0.718). A comparison by proxy type (formal carers, family carers, and mixed proxies) revealed mixed results. When adopting the proxy-proxy perspective, the largest SMD was found for family carers (SMD: 0.556; 95% CI 0.465; 0.646) compared with formal carers (SMD: 0.446; 95% CI 0.305; 0.586) or mixed proxies (SMD: 0.335; 95% CI 0.211; 0.459). However, the opposite relationship was found when the proxy-patient perspective was used, where the smallest SMD was found for family carers compared with formal carers and mixed proxies. The SMD increased with greater levels of dementia severity, suggesting greater disagreement. However, whereas under the proxy-proxy perspective self-reported scores were greater (i.e., better QoL) than proxy-reported scores across all dementia severity levels, the opposite was found under the proxy-patient perspective, where proxies reported better QoL than people with dementia themselves, except in the severe subgroup. No clear trend was observed for different living settings, although the SMD appeared to be smaller for people with dementia living in residential care than for those living in the community.
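An SMD of this kind is the raw mean difference divided by a pooled standard deviation, which is what makes scores from different instruments comparable. As an illustration only, here is a Hedges' g calculation using the QOL-AD self-report and proxy-proxy means from Bosboom et al. (2012); the group size of 50 is a hypothetical value, not the study's actual sample size:

```python
# Hedges' g: a standardized mean difference with small-sample correction.
# Means/SDs are the QOL-AD self vs. proxy-proxy scores from Bosboom et al.
# (2012); the group size n=50 is a HYPOTHETICAL value for illustration.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d: mean diff / pooled SD
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # Hedges' small-sample correction
    return j * d

g = hedges_g(34.7, 5.3, 50, 29.5, 5.4, 50)    # self-report vs. proxy-proxy
```

A positive g here means self-reported QoL exceeds proxy-reported QoL, matching the direction of the SMDs reported in Table 4.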

Direct proxy perspectives comparison studies

Two studies assessed both proxy perspectives within the same study design. Bosboom et al. (2012), using the QOL-AD, found that proxy scores from the proxy-patient perspective (mean: 32.1; SD: 6.1) were closer to self-reported scores (mean: 34.7; SD: 5.3) than were proxy scores from the proxy-proxy perspective (mean: 29.5; SD: 5.4) [ 5 ]. Similar findings were reported by Leontjevas et al. (2016) using the EQ-5D-3L and EQ VAS: the gap between self-report (EQ-5D-3L: 0.609; EQ VAS: 65.37) and proxy-report was smaller when adopting the proxy-patient perspective (EQ-5D-3L: 0.555; EQ VAS: 65.15) than the proxy-proxy perspective (EQ-5D-3L: 0.492; EQ VAS: 64.42) [ 38 ].

Inter-rater agreement (ICC) statistics

Six studies reported agreement statistics based on the ICC, from which we extracted 17 observations for the meta-analysis. Figure 2 shows the study-specific and overall estimates of the ICC by study perspective. Heterogeneity between studies was high (I² = 88.20%), with a Q test score of 135.49 (p < 0.001). While the overall ICC for the 17 observations was 0.3 (95% CI 0.22; 0.38), indicating low agreement, the level of agreement was slightly better when adopting a proxy-patient perspective (ICC: 0.36, 95% CI 0.23; 0.49) than a proxy-proxy perspective (ICC: 0.26, 95% CI 0.17; 0.35).
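One standard way to pool correlation-type statistics such as the ICC is inverse-variance weighting on the Fisher z scale. The sketch below is illustrative only: the ICC values and sample sizes are hypothetical, and the review's exact pooling method may differ.

```python
# Pool study-level ICCs via the Fisher z transform with inverse-variance
# weights (~ n - 3); hypothetical inputs, not the review's data.
import math

def pool_iccs(iccs, ns):
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in iccs]  # Fisher z transform
    ws = [n - 3 for n in ns]                                # approx. inverse variances
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    back = math.tanh                                        # inverse Fisher z
    return back(z_bar), (back(z_bar - 1.96 * se), back(z_bar + 1.96 * se))

icc, ci = pool_iccs([0.36, 0.26, 0.30], [50, 80, 60])
```

Working on the z scale stabilizes the variance of the correlation estimates before back-transforming the pooled value and its confidence limits.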

Figure 2: Forest plot depicting study-specific and overall ICC estimates by study perspective

Discussion

While previous studies highlighted a disagreement between self-rated and proxy-rated QoL in people living with dementia, this review assessed, for the first time, the role of different proxy perspectives in explaining the inter-rater gap. Our findings align with the baseline hypothesis: QoL scores reported from the proxy-patient perspective are closer to self-reported QoL scores than those from the proxy-proxy perspective, suggesting that the proxy perspective does affect the inter-rater gap and should not be ignored. This finding was observed across the different analyses conducted in this review (i.e., pooled raw mean difference, SMD, and ICC analysis) and confirms the results of two previous primary studies that adopted both proxy perspectives within the same study design [ 5 , 38 ]. Our findings emphasize the need for transparency in reporting the proxy perspective used in future studies, as it can affect results and interpretation. This was also noted by the recent ISOQOL Proxy Task Force, which developed a checklist of considerations for proxy-reporting [ 39 ].

While consistency in proxy-reports is desirable, each proxy perspective holds significance for future research, depending on the study objectives. The two perspectives offer distinct insights: one encapsulates the perspective of the person with dementia, the other reflects the viewpoint of the proxy. Therefore, where self-report is unattainable due to advanced disease severity and the person's own view of their QoL is sought, the proxy-patient perspective is recommended. Conversely, if the objective is to capture the viewpoints of proxies, the proxy-proxy perspective is advisable. However, proxies may deviate from instructed perspectives, and future qualitative research is needed to examine adherence to proxy perspectives. Additionally, others have argued that proxy-reports should not substitute for self-reports and should serve only as supplementary sources alongside patient self-reports whenever possible [ 9 ].

This review considered various QoL instruments, but most instruments adopted one specific proxy perspective, limiting detailed analyses. QoL instruments differ in their scope (generic versus disease-specific) as well as their coverage of QoL domains. The QOL-AD, an Alzheimer's disease-specific measure, was commonly used. Surprisingly, for this measure the mean differences between self-reported and proxy-reported scores were smaller under the proxy-proxy perspective, contrary to the patterns observed with all other instruments. This may be due to the lack of studies reporting QOL-AD proxy scores from the proxy-patient perspective, as the study by Bosboom et al. (2012) found the opposite [ 5 ]. Previous research has also suggested that the inter-rater gap depends on the QoL domain, and that the risk of bias is greater for more ‘subjective’ (less observable) domains such as emotions, feelings, and moods than for observable, objective areas such as physical domains [ 8 , 40 ]. However, this review lacks sufficient observations for definitive results on QoL dimensions and their impact on self-proxy differences, emphasizing the need for future research in this area.

With regard to proxy type, there is an observable trend suggesting a wider inter-rater gap when family proxies are employed under the proxy-proxy perspective, in contrast to formal proxies. This variance might be attributed to the use of distinct anchoring points: family proxies tend to assess the individual's QoL relative to their past self before dementia, while formal caregivers may draw comparisons with other individuals with dementia under their care [ 41 ]. However, the opposite was found when the proxy-patient perspective was used, where family proxies' scores seemed to align more closely with self-reported scores, resulting in lower SMDs. This suggests that family proxies might be better able to empathize with the perspective of the person with dementia than formal proxies. Nonetheless, these findings should be interpreted cautiously, given the relatively small number of observations for formal caregiver reports. Additionally, other factors such as emotional connection, caregiver burden, and caregiver QoL may also influence proxy-reports by family proxies [ 14 , 16 ]; these were not explored in this review.

Our review found that the SMD between proxy- and self-report increased with greater levels of dementia severity, in contrast to a previous study showing that cognitive impairment was not the primary factor accounting for differences in QoL assessments between family proxies and the person with dementia [ 15 ]. It is noteworthy, however, that different interpretations and classifications were used across studies to define mild, moderate, and severe dementia. Most studies used the MMSE to define dementia severity levels. Given the MMSE's role as a standard measure of cognitive function, the study findings are considered generalizable and clinically relevant for people with dementia across severity levels. When examining the role of the proxy perspective by level of severity, we found that whereas under the proxy-proxy perspective self-reported scores were greater than proxy-reported scores across all dementia severity levels, the proxy-patient perspective yielded the opposite result: proxies reported better QoL than people with dementia themselves, except in the severe subgroup. It is possible that in the early stages of dementia, the person has greater awareness of increasing deficits, coupled with denial and lack of acceptance, leading to a more critical view of their own QoL than proxies expect them to hold. However, future studies are warranted, given the small number of observations adopting the proxy-patient perspective in our review.

The heterogeneity observed in the studies included was high, supporting the use of random-effects meta-analysis. This is not surprising given the diverse nature of studies included (i.e., RCTs, cross-sectional studies), differences in the population (i.e., people living in residential care versus community-dwelling people), mixed levels of dementia severity, and differences between instruments. While similar heterogeneity was observed in another review on a similar topic [ 42 ], our presentation of findings stratified by proxy type, dementia severity, and living arrangement attempted to account for such differences across studies.

Limitations and recommendations for future studies

Our review has some limitations. First, proxy perspectives were categorized based on the authors' descriptions, but many papers did not explicitly state the perspective, requiring assumptions based on the instrument developers' intended wording. Some studies may have modified the perspective's wording without reporting it. Due to a lack of resources, we did not contact the authors of the original studies to seek clarification about the proxy perspective adopted. For studies using the EQ-5D, which has two proxy versions, some did not specify which version was used, suggesting the potential use of the self-report version by proxies; in such cases, the proxy perspective was categorized as undefined. Despite accounting for factors such as QoL measure, proxy type, setting, and dementia severity, we could not assess the impact of proxy characteristics (e.g., carer burden) or dementia type, due to limited information in the included studies. For the same reason, we faced limitations in exploring the proxy perspective by QoL domain.

Further, not all studies described the data collection process in full detail. For example, it is possible that the proxy also assisted the person with dementia with their self-report, which could have biased estimates; future studies should apply blinding. Although we assessed the risk of bias of included studies, the checklist did not directly reflect the purpose of our study, which examined inter-rater agreement; no checklist for this purpose currently exists. Finally, quality appraisal by a second reviewer was conducted only for the first 25% of the studies, due to resource constraints and a low rate of disagreement between the two assessors. Moreover, an agreement index between reviewers, covering concordance in selecting full texts for inclusion and in risk-of-bias assessments, was not calculated.

Conclusion

This review demonstrates that the choice of proxy perspective affects the inter-rater gap: QoL scores from the proxy-patient perspective align more closely with self-reported scores than those from the proxy-proxy perspective. These findings contribute to the broader literature on factors influencing differences in QoL scores between proxies and people with dementia. While self-reported QoL remains the gold standard, proxy-reports should be viewed as complements rather than substitutes. Both proxy perspectives offer unique insights, yet QoL assessment in people with dementia is complex; the difference between self- and proxy-reports can be influenced by various factors, necessitating further research before definitive results can inform care provision and policy.

Data availability

All data associated with the systematic literature review are available in the supplementary file.

Moyle, W., & Murfield, J. E. (2013). Health-related quality of life in older people with severe dementia: Challenges for measurement and management. Expert Review of Pharmacoeconomics & Outcomes, 13 (1), 109–122. https://doi.org/10.1586/erp.12.84


Sachdev, P. S., Blacker, D., Blazer, D. G., Ganguli, M., Jeste, D. V., Paulsen, J. S., & Petersen, R. C. (2014). Classifying neurocognitive disorders: The DSM-5 approach. Nature reviews Neurology, 10 (11), 634–642. https://doi.org/10.1038/nrneurol.2014.181


The Lancet Regional Health – Europe. (2022). Challenges for addressing dementia. The Lancet Regional Health – Europe. https://doi.org/10.1016/j.lanepe.2022.100504

The WHOQOL Group. (1995). The World Health Organization quality of life assessment (WHOQOL): Position paper from the World Health Organization. Social Science & Medicine, 41 (10), 1403–1409. https://doi.org/10.1016/0277-9536(95)00112-K

Bosboom, P. R., Alfonso, H., Eaton, J., & Almeida, O. P. (2012). Quality of life in Alzheimer’s disease: Different factors associated with complementary ratings by patients and family carers. International Psychogeriatrics, 24 (5), 708–721. https://doi.org/10.1017/S1041610211002493

Scholzel-Dorenbos, C. J., Rikkert, M. G., Adang, E. M., & Krabbe, P. F. (2009). The challenges of accurate measurement of health-related quality of life in frail elderly people and dementia. Journal of the American Geriatrics Society, 57 (12), 2356–2357. https://doi.org/10.1111/j.1532-5415.2009.02586.x

Logsdon, R. G., Gibbons, L. E., McCurry, S. M., & Teri, L. (2002). Assessing quality of life in older adults with cognitive impairment. Psychosomatic Medicine, 64 (3), 510–519. https://doi.org/10.1097/00006842-200205000-00016

Hutchinson, C., Worley, A., Khadka, J., Milte, R., Cleland, J., & Ratcliffe, J. (2022). Do we agree or disagree? A systematic review of the application of preference-based instruments in self and proxy reporting of quality of life in older people. Social Science & Medicine, 305 , 115046. https://doi.org/10.1016/j.socscimed.2022.115046

Smith, S. C., Hendriks, A. A. J., Cano, S. J., & Black, N. (2020). Proxy reporting of health-related quality of life for people with dementia: A psychometric solution. Health and Quality of Life Outcomes, 18 (1), 148. https://doi.org/10.1186/s12955-020-01396-y


Andrieu, S., Coley, N., Rolland, Y., Cantet, C., Arnaud, C., Guyonnet, S., Nourhashemi, F., Grand, A., Vellas, B., & the PLASA Group. (2016). Assessing Alzheimer’s disease patients’ quality of life: Discrepancies between patient and caregiver perspectives. Alzheimer’s & Dementia, 12 (4), 427–437. https://doi.org/10.1016/j.jalz.2015.09.003

Jönsson, L., Andreasen, N., Kilander, L., Soininen, H., Waldemar, G., Nygaard, H., Winblad, B., Jonhagen, M. E., Hallikainen, M., & Wimo, A. (2006). Patient- and proxy-reported utility in Alzheimer disease using the EuroQoL. Alzheimer Disease & Associated Disorders, 20 (1), 49–55. https://doi.org/10.1097/01.wad.0000201851.52707.c9

Zucchella, C., Bartolo, M., Bernini, S., Picascia, M., & Sinforiani, E. (2015). Quality of life in Alzheimer disease: A comparison of patients’ and caregivers’ points of view. Alzheimer Disease & Associated Disorders, 29 (1), 50–54. https://doi.org/10.1097/WAD.0000000000000050


Buckley, T., Fauth, E. B., Morrison, A., Tschanz, J., Rabins, P. V., Piercy, K. W., Norton, M., & Lyketsos, C. G. (2012). Predictors of quality of life ratings for persons with dementia simultaneously reported by patients and their caregivers: The Cache County (Utah) study. International Psychogeriatrics, 24 (7), 1094–1102. https://doi.org/10.1017/S1041610212000063


Schiffczyk, C., Romero, B., Jonas, C., Lahmeyer, C., Muller, F., & Riepe, M. W. (2010). Generic quality of life assessment in dementia patients: A prospective cohort study. BMC Neurology, 10 , 48. https://doi.org/10.1186/1471-2377-10-48

Sousa, M. F., Santos, R. L., Arcoverde, C., Simoes, P., Belfort, T., Adler, I., Leal, C., & Dourado, M. C. (2013). Quality of life in dementia: The role of non-cognitive factors in the ratings of people with dementia and family caregivers. International Psychogeriatrics, 25 (7), 1097–1105. https://doi.org/10.1017/S1041610213000410

Arons, A. M., Krabbe, P. F., Scholzel-Dorenbos, C. J., van der Wilt, G. J., & Rikkert, M. G. (2013). Quality of life in dementia: A study on proxy bias. BMC Medical Research Methodology, 13 , 110. https://doi.org/10.1186/1471-2288-13-110

Gomez-Gallego, M., Gomez-Garcia, J., & Ato-Lozano, E. (2015). Addressing the bias problem in the assessment of the quality of life of patients with dementia: Determinants of the accuracy and precision of the proxy ratings. The Journal of Nutrition, Health & Aging, 19 (3), 365–372. https://doi.org/10.1007/s12603-014-0564-7

Moon, H., Townsend, A. L., Dilworth-Anderson, P., & Whitlatch, C. J. (2016). Predictors of discrepancy between care recipients with mild-to-moderate dementia and their caregivers on perceptions of the care recipients’ quality of life. American Journal of Alzheimer’s Disease & Other Dementias, 31 (6), 508–515. https://doi.org/10.1177/1533317516653819

Crespo, M., Bernaldo de Quiros, M., Gomez, M. M., & Hornillos, C. (2012). Quality of life of nursing home residents with dementia: A comparison of perspectives of residents, family, and staff. The Gerontologist, 52 (1), 56–65. https://doi.org/10.1093/geront/gnr080

Griffiths, A. W., Smith, S. J., Martin, A., Meads, D., Kelley, R., & Surr, C. A. (2020). Exploring self-report and proxy-report quality-of-life measures for people living with dementia in care homes. Quality of Life Research, 29 (2), 463–472. https://doi.org/10.1007/s11136-019-02333-3

Pickard, A. S., & Knight, S. J. (2005). Proxy evaluation of health-related quality of life: A conceptual framework for understanding multiple proxy perspectives. Medical Care, 43 (5), 493–499. https://doi.org/10.1097/01.mlr.0000160419.27642.a8

Smith, S. C., Lamping, D. L., Banerjee, S., Harwood, R. H., Foley, B., Smith, P., Cook, J. C., Murray, J., Prince, M., Levin, E., Mann, A., & Knapp, M. (2007). Development of a new measure of health-related quality of life for people with dementia: DEMQOL. Psychological Medicine, 37 (5), 737–746. https://doi.org/10.1017/S0033291706009469


Brooks, R. (1996). EuroQol: The current state of play. Health Policy, 37 (1), 53–72. https://doi.org/10.1016/0168-8510(96)00822-6

Herdman, M., Gudex, C., Lloyd, A., Janssen, M., Kind, P., Parkin, D., Bonsel, G., & Badia, X. (2011). Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Quality of Life Research, 20 (10), 1727–1736. https://doi.org/10.1007/s11136-011-9903-x

Rand, S., Caiels, J., Collins, G., & Forder, J. (2017). Developing a proxy version of the adult social care outcome toolkit (ASCOT). Health and Quality of Life Outcomes, 15 (1), 108. https://doi.org/10.1186/s12955-017-0682-0

Engel, L., Bucholc, J., Mihalopoulos, C., Mulhern, B., Ratcliffe, J., Yates, M., & Hanna, L. (2020). A qualitative exploration of the content and face validity of preference-based measures within the context of dementia. Health and Quality of Life Outcomes, 18 (1), 178. https://doi.org/10.1186/s12955-020-01425-w

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hrobjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal, 372 , n71. https://doi.org/10.1136/bmj.n71

Roydhouse, J. K., Cohen, M. L., Eshoj, H. R., Corsini, N., Yucel, E., Rutherford, C., Wac, K., Berrocal, A., Lanzi, A., Nowinski, C., Roberts, N., Kassianos, A. P., Sebille, V., King, M. T., Mercieca-Bebber, R., the ISOQOL Proxy Task Force, & the ISOQOL Board of Directors. (2022). The use of proxies and proxy-reported measures: A report of the International Society for Quality of Life Research (ISOQOL) Proxy Task Force. Quality of Life Research, 31 (2), 317–327. https://doi.org/10.1007/s11136-021-02937-8

The EndNote Team. (2013). EndNote (Version EndNote X9) [64 bit]. Philadelphia, PA: Clarivate.

Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan—A web and mobile app for systematic reviews. Systematic Reviews, 5 (1), 210. https://doi.org/10.1186/s13643-016-0384-4

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15 (2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012

Wan, X., Wang, W., Liu, J., & Tong, T. (2014). Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Medical Research Methodology, 14 , 135. https://doi.org/10.1186/1471-2288-14-135

Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. Retrieved 20 Jan 2023, from https://handbook-5-1.cochrane.org/chapter_7/7_7_3_2_obtaining_standard_deviations_from_standard_errors_and.htm

Ma, J., Liu, W., Hunter, A., & Zhang, W. (2008). Performing meta-analysis with incomplete statistical information in clinical trials. BMC Medical Research Methodology, 8 , 56. https://doi.org/10.1186/1471-2288-8-56

Deeks, J. J., Higgins, J. P. T., Altman, D. G., & on behalf of the Cochrane Statistical Methods Group. (2023). Chapter 10: Analysing data and undertaking meta-analyses. In J. Higgins & J. Thomas (Eds.), Cochrane Handbook for Systematic Reviews of Interventions. Version 6.4.

Kmet, L. M., Cook, L. S., & Lee, R. C. (2004). Standard quality assessment criteria for evaluating primary research papers from a variety of fields: Health and technology assessment unit. Alberta Heritage Foundation for Medical Research.

Lewis, T. J., & Trempe, C. L. (2017). Diagnosis of Alzheimer’s: Standard-of-care . USA: Elsevier Science & Technology.


Leontjevas, R., Teerenstra, S., Smalbrugge, M., Koopmans, R. T., & Gerritsen, D. L. (2016). Quality of life assessments in nursing homes revealed a tendency of proxies to moderate patients’ self-reports. Journal of Clinical Epidemiology, 80 , 123–133. https://doi.org/10.1016/j.jclinepi.2016.07.009

Lapin, B., Cohen, M. L., Corsini, N., Lanzi, A., Smith, S. C., Bennett, A. V., Mayo, N., Mercieca-Bebber, R., Mitchell, S. A., Rutherford, C., & Roydhouse, J. (2023). Development of consensus-based considerations for use of adult proxy reporting: An ISOQOL task force initiative. Journal of Patient-Reported Outcomes, 7 (1), 52. https://doi.org/10.1186/s41687-023-00588-6

Li, M., Harris, I., & Lu, Z. K. (2015). Differences in proxy-reported and patient-reported outcomes: assessing health and functional status among medicare beneficiaries. BMC Medical Research Methodology . https://doi.org/10.1186/s12874-015-0053-7

Robertson, S., Cooper, C., Hoe, J., Lord, K., Rapaport, P., Marston, L., Cousins, S., Lyketsos, C. G., & Livingston, G. (2020). Comparing proxy rated quality of life of people living with dementia in care homes. Psychological Medicine, 50 (1), 86–95. https://doi.org/10.1017/S0033291718003987

Khanna, D., Khadka, J., Mpundu-Kaambwa, C., Lay, K., Russo, R., Ratcliffe, J., & the Quality of Life in Kids: Key Evidence to Strengthen Decisions in Australia Project Team. (2022). Are we agreed? Self- versus proxy-reporting of paediatric health-related quality of life (HRQoL) using generic preference-based measures: A systematic review and meta-analysis. PharmacoEconomics, 40 (11), 1043–1067. https://doi.org/10.1007/s40273-022-01177-z


Funding

Open Access funding enabled and organized by CAUL and its Member Institutions. This study was conducted without financial support.

Author information

Authors and affiliations

Monash University Health Economics Group, School of Public Health and Preventive Medicine, Monash University, Level 4, 553 St. Kilda Road, Melbourne, VIC, 3004, Australia

Lidia Engel & Valeriia Sokolova

School of Health and Social Development, Deakin University, Burwood, VIC, Australia

Ekaterina Bogatyreva & Anna Leuenberger


Contributions

LE contributed to the study conception and design. The original database search was performed by AL and later updated by VS. All authors were involved in the screening process, data extraction, and data analyses. Quality assessment was conducted by VS and LE. The first draft of the manuscript was written by LE and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lidia Engel .

Ethics declarations

Competing interests

Lidia Engel is a member of the EuroQol Group.

Ethical approval

Not applicable.

Consent to participate

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (XLSX 67 KB)

Supplementary file 2 (DOCX 234 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Engel, L., Sokolova, V., Bogatyreva, E. et al. Understanding the influence of different proxy perspectives in explaining the difference between self-rated and proxy-rated quality of life in people living with dementia: a systematic literature review and meta-analysis. Qual Life Res (2024). https://doi.org/10.1007/s11136-024-03660-w


Accepted: 27 March 2024

Published: 24 April 2024

DOI: https://doi.org/10.1007/s11136-024-03660-w


  • Quality of Life
  • Outcome measurement


Open Access

Study Protocol

Frequency, complications, and mortality of inhalation injury in burn patients: A systematic review and meta-analysis protocol

Roles Conceptualization, Project administration, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation Faculdade de Ciências da Saúde, Universidade de Brasília (UnB), Programa de Pós-Graduação em Ciências da Saúde, Brasília (DF), Brazil


Roles Writing – review & editing

Affiliation Programa de Pós-Graduação em Ciências da Saúde, Escola Superior de Ciências da Saúde (ESCS), Brasilia (DF), Brazil

Roles Investigation, Writing – review & editing

Affiliation Programa de Pós-Graduação em Ciências da Saúde, Coordenação de Cursos Pós-Graduação Stricto Sensu, Escola Superior de Ciências da Saúde (ESCS), Brasilia (DF), Brazil

Affiliation Universidade de Brasília, Brasilia (DF), Brazil and Programa de Pós Graduação em Ciências do Movimento Humano e Reabilitação, Universidade Evangélica de Goiás, Goiás, Brazil

Roles Conceptualization, Data curation, Writing – review & editing

Affiliation Radiology Professor of Universidade de Ribeirão Preto, Campus Guarujá, Guarujá-SP, Brazil

Roles Data curation, Writing – review & editing

Roles Conceptualization, Methodology, Project administration, Writing – review & editing

Affiliation Programa de Pós-Graduação em Ciências da Saúde, Coordenação de Pesquisa e Comunicação Científica, Escola Superior de Ciências da Saúde (ESCS), Brasilia (DF), Brazil

  • Juliana Elvira Herdy Guerra Avila, 
  • Levy Aniceto Santana, 
  • Denise Rabelo Suzuki, 
  • Vinícius Zacarias Maldaner da Silva, 
  • Marcio Luís Duarte, 
  • Aline Mizusaki Imoto, 
  • Fábio Ferreira Amorim


  • Published: April 23, 2024
  • https://doi.org/10.1371/journal.pone.0295318

Table 1

Introduction

Burns are tissue traumas caused by energy transfer and occur with a variable inflammatory response. The consequences of burns represent a public health problem worldwide. Inhalation injury (II) is a severity factor when associated with burns, leading to a worse prognosis. Its treatment is complex and often involves invasive mechanical ventilation (IMV). The primary purpose of this study will be to assess the evidence regarding the frequency and mortality of II in burn patients. The secondary purposes will be to assess the evidence regarding the association between II and respiratory complications (pneumonia, airway obstruction, acute respiratory failure, acute respiratory distress syndrome), the need for IMV, and complications in other organ systems, and to highlight factors associated with II in burn patients as well as prognostic factors for acute respiratory failure, need for IMV, and mortality.

This is a systematic literature review and meta-analysis conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The PubMed/MEDLINE, Embase, LILACS/VHL, Scopus, Web of Science, and CINAHL databases will be consulted without restrictions on language or publication date. Studies presenting incomplete data and studies of patients under 19 years of age will be excluded. Data will be synthesized using continuous variables (mean and standard deviation), dichotomous variables (relative risk), and the total number of participants. The means, sample sizes, standard deviations, and relative risks will be entered into the Review Manager web analysis software (The Cochrane Collaboration).
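For the dichotomous outcomes, the relative risk entered into the meta-analysis can be derived from each study's 2×2 table. A minimal sketch with entirely hypothetical counts (e.g., pneumonia in burn patients with vs. without II):

```python
# Relative risk with a 95% CI from a 2x2 table; counts are hypothetical.
import math

def relative_risk(e1, n1, e2, n2):
    """RR for e1/n1 events (exposed) vs. e2/n2 events (unexposed)."""
    rr = (e1 / n1) / (e2 / n2)
    se_log = math.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

rr, (lo, hi) = relative_risk(30, 100, 10, 100)
```

The confidence interval is computed on the log scale, where the sampling distribution of the RR is approximately normal, and then exponentiated back.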

Despite extensive experience in managing II in burn patients, it still represents an important cause of morbidity and mortality. Diagnosis and accurate measurement of the resulting damage are complex, and therapies are essentially based on supportive measures. Considering this challenge, its impact, and its potential severity, II represents a promising area for research, and further studies are needed to better understand it and improve its outcomes.

The protocol of this review is registered on PROSPERO, the international prospective register of systematic reviews maintained by the Centre for Reviews and Dissemination of the University of York, United Kingdom ( https://www.crd.york.ac.uk/prospero ), under number CRD42022343944.

Citation: Herdy Guerra Avila JE, Aniceto Santana L, Rabelo Suzuki D, Maldaner da Silva VZ, Duarte ML, Mizusaki Imoto A, et al. (2024) Frequency, complications, and mortality of inhalation injury in burn patients: A systematic review and meta-analysis protocol. PLoS ONE 19(4): e0295318. https://doi.org/10.1371/journal.pone.0295318

Editor: Mohamed Boussarsar, Centre Hospitalier Universitaire Farhat Hached de Sousse, TUNISIA

Received: July 19, 2023; Accepted: November 19, 2023; Published: April 23, 2024

Copyright: © 2024 Herdy Guerra Avila et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The identified research data will be made publicly available when the study is completed and published.

Funding: The authors received funding from Fundação de Ensino e Pesquisa em Ciências da Saúde - FEPECS, Address: SMHN 03 - conjunto A - bloco 1 - Edifício FEPECS CEP: 70701-907.

Competing interests: The authors have declared that no competing interests exist.

Burns are tissue traumas caused by energy transfer (thermal, chemical, electrical, radiation) [ 1 , 2 ] and occur with variable local and systemic inflammatory responses according to the intensity, location, and depth of the affected area [ 3 ]. Due to the severity of their conditions, most patients require treatment in specialized units with intensive support and monitoring [ 4 ]. The consequences of burns represent a public health problem, ranging from physical incapacity and psychological and social damage to death [ 4 ]. According to a World Health Organization fact sheet dated October 2023, there are over 11,000,000 cases worldwide annually, resulting in 180,000 deaths [ 5 ]. In Brazil alone, from 2015 to 2020, there were 19,772 deaths from burns, according to data from the Brazilian Ministry of Health [ 6 ]. According to US statistics from the National Inpatient Sample and the National Burn Repository, an estimated 40,000 hospitalizations occur yearly due to burns in the United States, with about 5% of patients presenting inhalation injuries (IIs) [ 6 , 7 ]. Approximately 33% of all burn patients will require invasive mechanical ventilation (IMV), a proportion that increases significantly with II [ 8 ].

The diagnosis of respiratory system involvement is essentially clinical and can be complemented by bronchoscopy and other radiological and laboratory tests [ 9 ]. Under ideal conditions, bronchoscopy should be performed in the first 24 hours in all patients with a history of smoke inhalation and is considered the gold standard for this type of evaluation [ 10 , 11 ]. When present, IIs significantly impact patient outcomes, increasing fluid needs during resuscitation, pulmonary complications, and mortality [ 12 – 14 ], serving as a marker of severity and an independent risk factor for death [ 13 , 14 ], especially in patients with over 20% of body surface area burned [ 15 , 16 ]. Moreover, in contrast to recent advancements in the treatment of cutaneous burn injuries, the complex treatment of IIs remains a challenging frontier: the pathophysiology is not fully understood, the diagnostic criteria remain unclear, the interventions are often ineffective, and the mortality remains high [ 17 – 20 ].

The treatment of IIs is traditionally performed through respiratory support with 100% oxygen, hyperbaric oxygen therapy, and/or protective IMV [ 9 , 21 , 22 ]. However, questions regarding the best way to identify and classify respiratory tract involvement, whether all patients should be intubated and receive IMV, which IMV mode is best indicated, and issues related to systemic toxicity are essential points that must be better elucidated [ 21 ].

In this context, the primary purpose of this study will be to assess the evidence regarding the frequency and mortality of II in burn patients. The secondary purposes will be to assess the evidence regarding the association between IIs and respiratory complications (pneumonia, airway obstruction, acute respiratory failure, and acute respiratory distress syndrome, ARDS), the need for IMV, and complications in other organ systems, and to highlight factors associated with IIs in burn patients as well as prognostic factors associated with acute respiratory failure, need for IMV, and mortality of II in burn patients.

Materials and methods

Study design.

This systematic literature review will be guided and reported according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 23 ] ( S1 Checklist ). The protocol of this review is registered on PROSPERO, the international prospective register of systematic reviews maintained by the Centre for Reviews and Dissemination of the University of York, United Kingdom ( https://www.crd.york.ac.uk/prospero ), under number CRD42022343944.

Research question

The question guiding this study will be: what are the frequency and mortality of inhalation injuries in burn patients?

The PICOS framework was followed, where:

  • P (population) = burn patients;
  • I (exposure) = smoke inhalation;
  • C (comparison/control) = no smoke inhalation;
  • O (outcomes) = frequency, mortality, need for IMV, complications;
  • S (study design) = observational studies, clinical trials.

Inclusion criteria

Population of interest.

Adult patients of both sexes who are victims of acute burns, regardless of magnitude or cause.

Exposure type.

Inhalation injury associated with the burn event. Inhalation injury will be defined as the damage inflicted on the respiratory tract or lung tissue by smoke, heat, and/or chemical irritants introduced into the airway during a burn event [ 24 ]. Although bronchoscopy may be performed to confirm the diagnosis of II and is considered the gold standard for this type of evaluation, studies that applied only clinical criteria or used imaging or laboratory findings for II diagnosis will also be included in the review [ 10 , 11 ].

Control group.

Adult burn patients who have not been exposed to smoke inhalation.

Outcomes evaluated.

The primary outcomes will be:

  • Frequency of inhalation injury in burn patients;
  • Mortality of inhalation injury in burn patients.

Secondary outcomes will be:

  • Respiratory complications: pneumonia, airway obstruction, acute respiratory failure, and ARDS;
  • Need for IMV;
  • Complications in other organ systems;
  • Factors associated with IIs, need for IMV, complications in other organ systems, and mortality.

The Berlin definition will be used to diagnose ARDS [ 25 ].
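For reference, the Berlin definition grades ARDS severity by the PaO2/FiO2 ratio (in mmHg, measured with PEEP ≥ 5 cmH2O). The sketch below is illustrative only and is not part of the protocol:

```python
# Illustrative sketch of the Berlin definition's oxygenation strata:
# mild 200 < P/F <= 300, moderate 100 < P/F <= 200, severe P/F <= 100.
def berlin_severity(pf_ratio: float) -> str:
    """Return the Berlin severity category for a PaO2/FiO2 ratio."""
    if pf_ratio > 300:
        return "not ARDS (by oxygenation criterion)"
    if pf_ratio > 200:
        return "mild"
    if pf_ratio > 100:
        return "moderate"
    return "severe"

print(berlin_severity(250))  # mild
print(berlin_severity(80))   # severe
```

Note that the full definition also requires timing, imaging, and origin-of-edema criteria; only the oxygenation tiers are shown here.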

Type of study included.

Observational studies and clinical trials that evaluated the frequency and mortality of inhalation injury in burn patients exposed to smoke inhalation.

Exclusion criteria

Studies in patients under 19 years of age will be excluded.

Studies presenting incomplete data, reviews, case series, case reports, and editorials will be excluded. Letters to the editor that do not report results from original data will also be excluded.

The inclusion and exclusion criteria are summarized in Table 1.


https://doi.org/10.1371/journal.pone.0295318.t001

Methods for identification of studies

The search for studies will be performed without restrictions on language or publication date in the following databases:

  • PubMed/MEDLINE;
  • Embase;
  • LILACS/VHL;
  • Scopus;
  • Web of Science;
  • CINAHL.

Search strategy.

In the search, descriptors previously identified in DeCS (Health Sciences Descriptors, http://decs.bvs.br/ ), MeSH (Medical Subject Headings, https://www.nlm.nih.gov/mesh/meshhome.html ), and Emtree terms ( https://www.embase.com ) will be used, together with their respective synonyms, to include the largest possible number of relevant studies.

In this context, the search terms used will be:

  • (1) Burns, inhalation; inhalation burns; smoke inhalation injury; burn, inhalation; inhalation burn; smoke inhalation injury; inhalation injury, smoke; injury, smoke inhalation; inhalation injuries, smoke; injuries, smoke inhalation; smoke inhalation injuries; lung burn; queimaduras por inalação; quemaduras por inhalación; brûlures par inhalation; lesão por inalação de fumaça; smoke; lesión por inhalación de humo; lésion par inhalation de fumée;
  • (2) Epidemiology; epidemiology or incidence or prevalence or occurrence; social epidemiology; epidemiologies, social; epidemiology, social; social epidemiologies; epidemiologia; epidemiology; epidemiología; épidémiologie; epidemiologia social.
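In practice, synonym blocks like these are combined into a single Boolean search string: synonyms are OR-ed within each block, and the blocks are AND-ed together. A minimal sketch of that assembly, with abbreviated example term lists rather than the full strategy:

```python
# Sketch: combine synonym blocks into one Boolean query of the form
# ("t1" OR "t2" ...) AND ("u1" OR "u2" ...). Term lists are abbreviated.
def or_block(terms):
    """Join synonyms with OR and wrap the block in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

burns_block = ["smoke inhalation injury", "inhalation burns", "lung burn"]
epi_block = ["epidemiology", "incidence", "prevalence", "occurrence"]

query = or_block(burns_block) + " AND " + or_block(epi_block)
print(query)
```

The same shape (quoted phrases, OR within a concept, AND between concepts) adapts to the field tags and syntax of each database.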

The complete search strategy for all databases is shown in Table 2 .


https://doi.org/10.1371/journal.pone.0295318.t002

In addition, grey literature reports will be sourced through simplified searches on Google Scholar and worldwidescience.org.

Finally, forward and backward reference searches will be performed to identify any other potential studies that might have been missed in the search process (backward and forward snowballing).
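Conceptually, one round of snowballing is a set operation over citation links: backward snowballing collects what the seed papers cite, forward snowballing collects what cites them. The `references` and `cited_by` maps below are invented toy data standing in for a real citation index:

```python
# Toy sketch of one round of backward + forward snowballing.
references = {"A": ["B", "C"], "B": ["C"]}   # backward: what each paper cites
cited_by = {"A": ["D"], "C": ["E"]}          # forward: what cites each paper

def snowball(seeds, references, cited_by):
    """Return candidate studies found from the seed set in one round."""
    found = set()
    for s in seeds:
        found.update(references.get(s, []))  # backward snowballing
        found.update(cited_by.get(s, []))    # forward snowballing
    return found - set(seeds)                # exclude the seeds themselves

print(sorted(snowball({"A"}, references, cited_by)))  # ['B', 'C', 'D']
```

In a real review the round is repeated on the newly found studies until no new eligible records appear.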

Selection and data analysis

Selection of studies and evaluation of methodological quality.

All references found by the searches will be organized on the Rayyan systematic review platform ( https://rayyan.qcri.org/ ), which will be used as a tool for removing duplicates and for selecting and screening studies. Data extraction from the selected studies, including information on the participants and the analyzed outcomes, will be performed manually using Microsoft Word.

Two reviewers (JA and DS in the authors’ list) will independently perform the selection of studies. The Rayyan platform provides a separate interface for each reviewer and indicates the studies on which their analyses disagree so that a third reviewer (AI in the authors’ list) can resolve the conflicts.

Titles and abstracts will be analyzed first, with the third reviewer resolving disagreements about the inclusion or exclusion of a particular study. The full texts will then be evaluated, and the studies composing the review will be defined; again, the third reviewer will resolve any conflicts between the two main reviewers. Studies not meeting the inclusion criteria will be excluded, and the reasons for this decision will be recorded.

Eligible nonrandomized studies will undergo risk of bias assessment with the Newcastle-Ottawa Scale [ 26 ], and randomized controlled trials with the Cochrane Risk of Bias 2 (RoB 2) tool, 2019 version [ 27 ].
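The Newcastle-Ottawa Scale awards stars per domain, up to 4 for selection, 2 for comparability, and 3 for outcome/exposure, for a maximum of 9. A hedged sketch of the tally; the example star counts are invented:

```python
# Sketch of a Newcastle-Ottawa Scale tally. Domain caps follow the NOS
# coding manual (Selection max 4, Comparability max 2, Outcome max 3).
NOS_MAX = {"selection": 4, "comparability": 2, "outcome": 3}

def nos_total(stars: dict) -> int:
    """Sum awarded stars after capping each domain at its NOS maximum."""
    return sum(min(stars.get(d, 0), cap) for d, cap in NOS_MAX.items())

example = {"selection": 3, "comparability": 2, "outcome": 2}  # invented
print(nos_total(example))  # 7 of a possible 9 stars
```

How total stars map to "low/high risk of bias" categories varies between reviews and should be pre-specified in the protocol.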

The study selection process will be documented using the PRISMA flow diagram [ 23 ].

Data extraction process.

Data extraction will be performed according to criteria related to the following protocols:

  • general characteristics of the studies (author, year, title, journal, country and language of publication, study design);
  • information on participants (age, sex, ethnicity, diagnosis or specific characteristics, sample size);
  • exposure data (description of inhaled material, duration of exposure);
  • data on control;
  • characteristics of inhalation injuries;
  • data related to outcomes (frequency, mortality, need for IMV, development of complications in other organ systems).

Summary of results and statistical analysis.

Outcome data will be extracted from the included studies and collected as continuous (mean and standard deviation) and dichotomous (relative risk) measures, together with the total number of participants. When numerical data are missing, the study authors will be contacted via e-mail to request additional data for analysis.

Means, sample sizes, standard deviations, and relative risks will be entered into the Review Manager analysis software, version 5.3 (The Cochrane Collaboration), which will be used to quantify the results. Statistical significance will be defined as p < 0.05. Since the outcomes of interest will be evaluated with different scales and units, standardized measures will be used to calculate effect sizes: standardized mean differences and 95% confidence intervals (95% CI).
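As a worked illustration of the standardized effect measure for continuous data, the sketch below computes a standardized mean difference (Cohen's d with a pooled SD) and an approximate normal 95% CI from toy summary statistics; it mirrors the kind of calculation such software performs internally:

```python
import math

# Sketch: standardized mean difference (Cohen's d) with approximate 95% CI.
# All summary statistics below are invented toy numbers.
def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Return (d, lower, upper) for a standardized mean difference."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

d, lo, hi = smd_with_ci(m1=14.0, sd1=4.0, n1=30, m2=12.0, sd2=4.0, n2=30)
print(round(d, 2), round(lo, 2), round(hi, 2))  # 0.5 -0.01 1.01
```

Meta-analysis software usually applies a small-sample correction (Hedges' g); the uncorrected d is shown here for brevity.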

For further comparisons concerning the extent of burn injury size, shock, or presence of infections, subgroup analysis will be performed if feasible.

Assessment of risk of bias

For this evaluation, the Newcastle Ottawa Scale tool will be used for the nonrandomized studies [ 26 ] and the RoB 2 (2019) version for the randomized controlled trials [ 27 ]. Two reviewers will independently evaluate the risk of bias of the included studies (JA and DS in the authors’ list). The third reviewer (AI in the authors’ list) will resolve the disagreements.

Quality of the evidence

For this evaluation, the criteria of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group will be used [ 28 ]. GRADE assesses the quality of the evidence based on the assessment of five domains: risk of bias, imprecision, inconsistency, indirectness, and publication bias [ 28 ]. Two reviewers will independently evaluate the quality of evidence of the included studies (JA and DS in the authors’ list). The third reviewer (AI in the authors’ list) will resolve the disagreements.
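GRADE's core downgrading logic can be sketched simply: evidence from randomized trials starts at "high" and observational evidence at "low", and each serious concern in one of the five domains lowers the rating by one level. This is a simplification (it ignores very serious concerns and GRADE's upgrading criteria):

```python
# Simplified sketch of GRADE downgrading across the five domains.
LEVELS = ["very low", "low", "moderate", "high"]
DOMAINS = {"risk of bias", "imprecision", "inconsistency",
           "indirectness", "publication bias"}

def grade(randomized: bool, concerns: set) -> str:
    """Rate certainty of evidence: start level minus one per serious concern."""
    start = LEVELS.index("high") if randomized else LEVELS.index("low")
    downgrades = len(concerns & DOMAINS)  # count only the five GRADE domains
    return LEVELS[max(0, start - downgrades)]

print(grade(randomized=True, concerns={"imprecision"}))                    # moderate
print(grade(randomized=False, concerns={"risk of bias", "imprecision"}))   # very low
```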

Ethical considerations

The research will be performed with information from studies published in electronic databases, respecting ethical principles at all stages. When processing the data collected, the principles of fidelity to the authors and respect for textual integrity will be protected.

The reviewers will not have any connection with the authors of the articles; therefore, there will be no conflicts of interest.

Inhalation injury is a frequent condition following burn injury, and its frequency increases notably with burn size and patient age [ 29 , 30 ]. Although there is already extensive experience in managing IIs in burn patients, they still represent a great challenge, mainly because their complex pathophysiological process, in which several inflammatory cells, mediators, and cytokines are involved, has not yet been fully clarified [ 19 ]. Diagnosis and accurate measurement of the damage are also complex, and therapies are based essentially on supportive measures [ 17 ].

In IIs, the magnitude and location of the injury vary considerably according to the environment and host factors [ 31 ]. In this respect, several factors should be considered, such as the ignition source, the concentration and solubility of inhaled substances, the diameter and size of the particles in the smoke, exposure duration, temperature, and the patient's immune response [ 20 , 31 , 32 ]. Individuals aged 65 and older exhibit a mortality rate from burns exceeding the average by a factor of six [ 33 ]. Due to diminished physiological reserves and comorbidities, managing this demographic poses a distinctive and formidable challenge. Older adults present multiple preexisting risk factors, including an elevated susceptibility to infections, pulmonary diseases, and comorbidities [ 34 ].

Although most patients exposed to smoke inhalation recover well, the development of respiratory injury significantly worsens their outcome, with a significant increase in mortality and complications, including long-term sequelae [ 17 , 34 – 36 ]. Indeed, pulmonary complications following burns and II cause or directly contribute to 77% of deaths [ 37 , 38 ]. Among the pulmonary complications, ARDS may develop early or several days after the exposure [ 39 ]. Although ARDS may also occur in burns without II, the clinical course tends to be worse following IIs: ARDS usually starts earlier, progresses with greater severity, and requires IMV for longer [ 13 ]. Furthermore, sepsis and acute respiratory failure are frequent causes of morbidity and mortality in patients with exclusively thermal burns and may be even more prevalent in patients with IIs [ 13 ].

It is already known that II is an independent risk factor for mortality in patients with small and moderate burns [ 13 ]. In this respect, the management of IIs is essential and can vary from a conservative approach to more elaborate options involving drugs [ 23 ]. Specific treatments have been tested to prevent IMV, complications, and poor outcomes. Some studies have observed that N-acetylcysteine and inhaled anticoagulants (such as heparin) may effectively treat inhalation injury, significantly improving lung compliance and airway obstruction, reducing reintubation rates, increasing the number of ventilator-free days, and decreasing hospital length of stay and mortality [ 40 – 43 ].

Respiratory impairment is still a major challenge in clinical practice and a promising area for research, and further studies are needed to understand and act on this potentially severe condition. In this systematic review, we aim to clarify the principal gaps in the existing literature regarding fire-related II in order to guide future studies. Furthermore, the findings can contribute to diagnostic and management protocols for II in burn patients, which may improve health care and prognosis. In particular, identifying factors associated with acute respiratory failure, the need for IMV, and mortality may help define the phenotype of inhalation injury associated with poor prognosis and the clinical approaches that lead to better outcomes, supporting stricter monitoring of these patients and the institution of earlier therapeutic approaches to improve their outcomes.

Supporting information

S1 Checklist. PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) 2015 checklist: recommended items to address in a systematic review protocol.

https://doi.org/10.1371/journal.pone.0295318.s001

  • 1. Campos EV. Cuidados intensivos ao paciente grande queimado [Intensive care for the major burn patient]. In: Azevedo CPA, Taniguchi LU, Ladeira JP (editors). Medicina intensiva: abordagem prática [Intensive care medicine: practical approach]. 3rd ed. Barueri: Manole, 2017. p. 899–922.
  • 2. Piccolo NS, Serra MCVF, Leonardi DF, Lima EM Jr, Novaes FN, Correa MD, et al. Queimaduras–parte II: tratamento da lesão [Burns–part II: treatment of the injury]. In: Brazilian Medical Association, Brazilian Federal Council of Medicine. Projetos diretrizes [Project Guidelines]. São Paulo: Brazilian Medical Association; 2008. p. 1–14.
  • 5. World Health Organization. Burns. WHO: Washington, 2023 [cited 2022 Oct 01]. www.who.int/mediacentre/factsheets/fs365/en/ .
  • 6. Ministério da Saúde. Brasil. Óbitos por queimaduras no Brasil: análise inicial dos dados do Sistema de Informações sobre Mortalidade, 2015 a 2020 [Burn deaths in Brazil: initial analysis of data from the Mortality Information System, 2015 to 2020]. In: Brazilian Ministry of Health. Boletim Epidemiológico Volume 47 [Epidemiological Bulletin Volume 47]. Brazilian Ministry of Health: Brasília, 2022 [cited 2022 Oct 01]. https://www.gov.br/saude/pt-br/centrais-de-conteudo/publicacoes/boletins/epidemiologicos/edicoes/2022/boletim-epidemiologico-vol-53-no47 . p. 40–48.
  • 24. Woodson CL. Diagnosis and treatment of inhalation injury. In: Herndon DN (editor). Total Burn Care, 5th ed. Elsevier: Amsterdam, 2017. p. 184–194.
  • 26. Wells GA, Shea B, O’Connell Da, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomized studies in meta-analyses [Internet]. Oxford: The Ottawa Hospital Research Institute; 2000. [cited 2022 Oct 01]. http://www.ohri.ca/programs/clinical_epidemiology/oxford.as .
  • 31. Traber DL. The pathophysiology of inhalation injury. In: Herndon DN (editor). Total Burn Care, 5th ed. Elsevier: Amsterdam, 2017. p. 174–183.
  • 33. Porro LJ, Demling RH, Pierira CT, Herndon DN. Care of the geriatric patient. In: Herndon DN (editor). Total Burn Care, 5th ed. Elsevier: Amsterdam, 2017. p. 381–385.

J Family Med Prim Care. 2013 Jan–Mar; 2(1).

Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare

S. Gopalakrishnan

Department of Community Medicine, SRM Medical College, Hospital and Research Centre, Kattankulathur, Tamil Nadu, India

P. Ganeshkumar

Healthcare decisions for individual patients and for public health policies should be informed by the best available research evidence. The practice of evidence-based medicine is the integration of individual clinical expertise with the best available external clinical evidence from systematic research and the patient's values and expectations. Primary care physicians need evidence both for clinical practice and for public health decision making. The evidence comes from good reviews, which provide a state-of-the-art synthesis of current evidence on a given research question. Given the explosion of medical literature, and the fact that time is always scarce, review articles play a vital role in decision making in evidence-based medical practice. Given that most clinicians and public health professionals do not have the time to track down all the original articles, critically read them, and obtain the evidence they need for their questions, systematic reviews and clinical practice guidelines may be their best source of evidence. Systematic reviews aim to identify, evaluate, and summarize the findings of all relevant individual studies on a health-related issue, thereby making the available evidence more accessible to decision makers. The objective of this article is to introduce primary care physicians to the concept of systematic reviews and meta-analysis, outlining why they are important, describing their methods and the terminologies used, and thereby equipping them with the skills to recognize and understand a reliable review that will be helpful for their day-to-day clinical practice and research activities.

Introduction

Evidence-based healthcare is the integration of the best research evidence with clinical expertise and patient values. Green notes, “Using evidence from reliable research, to inform healthcare decisions, has the potential to ensure best practice and reduce variations in healthcare delivery.” However, incorporating research into practice is time consuming, so we need methods of facilitating easy access to evidence for busy clinicians.[ 1 ] Ganeshkumar et al. reported that nearly half of the private practitioners in India consulted for more than 4 h per day in a locality,[ 2 ] which explains how difficult it is for them to spend time searching for evidence during consultations. Ideally, clinical decision making ought to be based on the latest evidence available. However, to keep abreast of the continuously increasing number of publications in health research, a primary healthcare professional would need to read an insurmountable number of articles every day, covered in more than 13 million references and over 4800 biomedical and health journals in Medline alone. To address this challenge, the systematic review method was developed. Systematic reviews aim to inform and facilitate this process through research synthesis of multiple studies, enabling increased and efficient access to evidence.[ 1 , 3 , 4 ]

Systematic reviews and meta-analyses have become increasingly important in healthcare settings. Clinicians read them to keep up-to-date with their field and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research and some healthcare journals are moving in this direction.[ 5 ]

This article is intended to provide an easy guide to understand the concept of systematic reviews and meta-analysis, which has been prepared with the aim of capacity building for general practitioners and other primary healthcare professionals in research methodology and day-to-day clinical practice.

The purpose of this article is to:

  • Introduce the two approaches to evaluating all the available evidence on an issue, i.e., systematic reviews and meta-analysis;
  • Discuss the steps in doing a systematic review;
  • Introduce the terms used in systematic reviews and meta-analysis;
  • Explain how to interpret the results of a meta-analysis; and
  • Outline the advantages and disadvantages of systematic reviews and meta-analysis.

Application

What is the effect of antiviral treatment in dengue fever? A primary care physician often needs convincing answers to questions like this in a primary care setting.

To find the answers to a clinical question like this, one has to refer to textbooks, ask a colleague, or search an electronic database for reports of clinical trials. Doctors need reliable information on such problems and on the effectiveness of a large number of therapeutic interventions, but the information sources are too many (nearly 20,000 journals publishing 2 million articles per year), often with unclear or confusing results. Because no study, regardless of its type, should be interpreted in isolation, a systematic review is generally the best form of evidence.[ 6 ] So, the preferred method is a good summary of research reports, i.e., systematic reviews and meta-analysis, which will give evidence-based answers to clinical situations.

There are two fundamental categories of research: primary research and secondary research. Primary research involves collecting data directly from patients or a population, while secondary research is the analysis of data already collected through primary research. A review is an article that summarizes a number of primary studies and may draw conclusions about the topic of interest; it can be traditional (unsystematic) or systematic.

Terminologies

Systematic review.

A systematic review is a summary of the medical literature that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize the literature on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and random errors.[ 7 ] To this end, systematic reviews may or may not include a statistical synthesis called meta-analysis, depending on whether the studies are similar enough that combining their results is meaningful.[ 8 ] Systematic reviews are often called overviews.

The evidence-based practitioner, David Sackett, defines the following terminologies.[ 3 ]

  • Review: The general term for all attempts to synthesize the results and conclusions of two or more publications on a given topic.
  • Overview: When a review strives to comprehensively identify and track down all the literature on a given topic (also called “systematic literature review”).
  • Meta-analysis: A specific statistical strategy for assembling the results of several studies into a single estimate.

Systematic reviews adhere to a strict scientific design based on explicit, pre-specified, and reproducible methods. Because of this, when carried out well, they provide reliable estimates about the effects of interventions so that conclusions are defensible. Systematic reviews can also demonstrate where knowledge is lacking. This can then be used to guide future research. Systematic reviews are usually carried out in the areas of clinical tests (diagnostic, screening, and prognostic), public health interventions, adverse (harm) effects, economic (cost) evaluations, and how and why interventions work.[ 9 ]

Cochrane reviews

Cochrane reviews are systematic reviews undertaken by members of the Cochrane Collaboration, an international not-for-profit organization that aims to help people make well-informed decisions about healthcare by preparing, maintaining, and promoting the accessibility of systematic reviews of the effects of healthcare interventions.

The Cochrane Primary Health Care Field focuses on systematic reviews of primary healthcare research on prevention, treatment, rehabilitation, and diagnostic test accuracy. The overall aim and mission of the Primary Health Care Field is to promote the quality, quantity, dissemination, accessibility, applicability, and impact of Cochrane systematic reviews relevant to people who work in primary care, and to ensure that the interests of primary care clinicians and consumers are properly represented in Cochrane reviews, review groups, and other entities. The Field serves to coordinate and promote the mission of the Cochrane Collaboration within the primary healthcare disciplines, as well as ensuring that primary care perspectives are adequately represented within the Collaboration.[ 10 ]

Meta-analysis

A meta-analysis is the combination of data from several independent primary studies that address the same question to produce a single estimate, such as the effect of a treatment or risk factor. It is the statistical analysis of a large collection of analyses and results from individual studies for the purpose of integrating the findings.[ 11 ] The term meta-analysis has been used to denote the full range of quantitative methods for research reviews.[ 12 ] Meta-analyses are studies of studies.[ 13 ] Meta-analysis provides a logical framework for a research review in which similar measures from comparable studies are listed systematically and the available effect measures are combined wherever possible.[ 14 ]

The fundamental rationale of meta-analysis is that it reduces the quantity of data by summarizing data from multiple sources and helps to plan research as well as to frame guidelines. It also helps to make efficient use of existing data, ensure generalizability, check the consistency of relationships, explain inconsistencies in the data, and quantify the data. By using explicit methods, it improves the precision of estimates of risk.

Therefore, “systematic review” will refer to the entire process of collecting, reviewing, and presenting all available evidence, while the term “meta-analysis” will refer to the statistical technique involved in extracting and combining data to produce a summary result.[ 15 ]
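The combining step can be made concrete with a minimal sketch of the most common approach, inverse-variance weighting under a fixed-effect model: each study's effect estimate is weighted by the inverse of its variance. The effect sizes and standard errors below are invented:

```python
import math

# Sketch: fixed-effect meta-analysis by inverse-variance weighting.
def pool_fixed(effects, ses):
    """Return (pooled effect, pooled SE) by inverse-variance weighting."""
    weights = [1 / se**2 for se in ses]                 # weight = 1/variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Three invented study results: effect estimates and their standard errors.
effect, se = pool_fixed([0.4, 0.6, 0.5], [0.2, 0.1, 0.3])
print(round(effect, 3), round(se, 3))  # 0.555 0.086
```

Note how the pooled estimate sits closest to the most precise study (SE = 0.1). Random-effects models extend this by adding a between-study variance component to each weight.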

Steps in doing systematic reviews/meta-analysis

The following are the six fundamental steps in doing a systematic review and meta-analysis.[ 16 ]

Define the question

This is the most important part of a systematic review/meta-analysis, since the remaining steps will be based on it. The research question may relate to a major public health problem or to a controversial clinical situation that requires an acceptable intervention as a possible solution to the present healthcare need of the community.

Reviewing the literature

This can be done by going through scientific resources such as electronic databases, controlled clinical trials registers, other biomedical databases, non-English literature, “gray literature” (theses, internal reports, non–peer-reviewed journals, pharmaceutical industry files), references listed in primary sources, raw data from published trials, and other unpublished sources known to experts in the field. Among the available electronic scientific databases, the popular ones are PubMed, MEDLINE, and EMBASE.

Sift the studies to select relevant ones

To select the relevant studies, we sift through the records identified by the searches. The first sift is pre-screening, i.e., deciding which studies to retrieve in full; the second sift is selection, i.e., looking again at the retrieved studies and deciding which to include in the review. Eligibility is then judged on criteria such as similarity of study design, year of publication, language, choice among multiple reports of the same study, sample size or follow-up issues, similarity of exposure and/or treatment, and completeness of information.

It is necessary to ensure that the sifting captures all relevant studies, including unpublished studies (the file drawer problem), studies with negative conclusions or published in non-English journals, and studies with small sample sizes.

Assess the quality of studies

Evaluating study quality involves defining quality and its criteria early, setting up a sound scoring system, developing a standard assessment form, calculating a quality score for each study, and finally using these scores in a sensitivity analysis.

For example, the quality of a randomized controlled trial can be assessed by finding out the answers to the following questions:

  • Was the assignment to the treatment groups really random?
  • Was the treatment allocation concealed?
  • Were the groups similar at baseline in terms of prognostic factors?
  • Were the eligibility criteria specified?
  • Were the assessors, the care provider, and the patient blinded?
  • Were the point estimates and measure of variability presented for the primary outcome measure?
  • Did the analyses include intention-to-treat analysis?
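
A checklist like the one above can be turned into a crude per-study score for later use in sensitivity analysis. The sketch below is illustrative only: the item names and the equal weighting are assumptions, not a validated quality instrument.

```python
# Illustrative only: the item keys and equal weighting are assumptions,
# not a validated quality instrument.
QUALITY_ITEMS = [
    "random_assignment", "allocation_concealed", "baseline_similarity",
    "eligibility_specified", "blinding", "variability_reported",
    "intention_to_treat",
]

def quality_score(answers):
    """Fraction of checklist items answered 'yes' for one study."""
    return sum(answers.get(item, False) for item in QUALITY_ITEMS) / len(QUALITY_ITEMS)

# Example: a trial meeting 3 of the 7 items scores 3/7 ~= 0.43
study = {"random_assignment": True, "allocation_concealed": True,
         "blinding": False, "intention_to_treat": True}
print(round(quality_score(study), 2))
```

In practice such scores are used to check whether excluding low-scoring studies changes the pooled result, rather than as weights in the meta-analysis itself.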

Calculate the outcome measures of each study and combine them

We need a standard measure of outcome that can be applied to each study, based on its effect size. The appropriate measure depends on the type of outcome: studies with binary outcomes (e.g., cured/not cured) use the odds ratio or risk ratio; studies with continuous outcomes (e.g., blood pressure) use means, differences in means, or standardized differences in means (effect sizes); and survival or time-to-event data use hazard ratios.
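
For binary outcomes, these effect measures can be computed directly from each study's 2×2 table. A minimal sketch (the counts are invented for illustration):

```python
import math

def binary_effects(a, b, c, d):
    """Effect measures from a 2x2 table:
       treated:  a events, b non-events
       control:  c events, d non-events
    """
    odds_ratio = (a * d) / (b * c)
    risk_ratio = (a / (a + b)) / (c / (c + d))
    # SE of the log odds ratio, needed later for inverse-variance weighting
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return odds_ratio, risk_ratio, se_log_or

# Hypothetical trial: 10/100 events on treatment vs 20/100 on control
or_, rr, se = binary_effects(10, 90, 20, 80)
print(f"OR = {or_:.3f}, RR = {rr:.3f}, SE(log OR) = {se:.3f}")
```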

Combining studies

The homogeneity of different studies can be judged at a glance from a forest plot (explained below). For example, if the lower confidence limit of every trial lies below the upper limit of all the others, i.e., the lines all overlap to some extent, the trials may be considered homogeneous. If some lines do not overlap at all, those trials may be said to be heterogeneous.

The formal test for assessing the heterogeneity of studies is a variant of the Chi-square test (the Mantel–Haenszel test). The final step is calculating the common estimate and its confidence interval from the original data or from the summary statistics of all the studies. The best estimate of treatment effect is derived from the weighted summary statistics of all studies, with weights based on sample sizes, standard errors, and other summary statistics. Ratio measures are combined on the log scale when the weighted estimate is calculated.
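
As a sketch of this step, the following pools invented log odds ratios with fixed-effect inverse-variance weights on the log scale, and computes Cochran's Q with the derived I² statistic for heterogeneity. Real reviews typically use dedicated software (e.g., RevMan or the R `meta`/`metafor` packages); this is only a hand-worked illustration of the arithmetic.

```python
import math

# Invented (log odds ratio, standard error) pairs for three studies
studies = [
    (math.log(0.80), 0.20),
    (math.log(0.70), 0.25),
    (math.log(0.95), 0.15),
]

# Fixed-effect inverse-variance pooling on the log scale
weights = [1 / se ** 2 for _, se in studies]
pooled_log = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform the pooled estimate and its 95% CI to the odds-ratio scale
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))

# Cochran's Q and I^2 (I^2 is truncated at 0 when Q < degrees of freedom)
Q = sum(w * (y - pooled_log) ** 2 for (y, _), w in zip(studies, weights))
I2 = max(0.0, (Q - (len(studies) - 1)) / Q) * 100

print(f"pooled OR = {pooled_or:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], I^2 = {I2:.0f}%")
```

Note how the study with the smallest standard error receives the largest weight, and how I² falls to zero here because the three (invented) estimates are mutually consistent.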

Interpret results: Graph

The results of a meta-analysis are usually presented in a graph called a forest plot, so named because the typical plot appears as a forest of lines. It provides, at a glance, a simple visual presentation of the individual studies that went into the meta-analysis, showing both the variation between the studies and an estimate of the overall result of all the studies together.

Forest plot

Meta-analysis graphs can principally be divided into six columns [Figure 1]. Individual study results are displayed in rows. The first column (“study”) lists the individual study IDs included in the meta-analysis; usually the first author and year are displayed. The second column relates to the intervention groups and the third column to the control groups. The fourth column visually displays the study results. The line in the middle is called “the line of no effect.” The weight (in %) in the fifth column indicates the weighting or influence of the study on the overall results of the meta-analysis of all included studies. The higher the percentage weight, the bigger the box and the more influence the study has on the overall results. The sixth column gives the numerical results for each study (e.g., odds ratio or relative risk and 95% confidence interval), which are identical to the graphical display in the fourth column. The diamond in the last row of the graph illustrates the overall result of the meta-analysis.[4]

[Figure 1: Interpretation of meta-analysis.[4]]

Thus, the horizontal lines represent individual studies: the length of each line is the confidence interval (usually 95%), the square on the line marks the effect size (risk ratio) of the study, the area of the square is proportional to the study's size (and hence the weight it is given), and its position gives the point estimate (relative risk) of the study.[7]

For example, the forest plot of the effectiveness of dexamethasone compared with placebo in preventing the recurrence of acute severe migraine headache in adults is shown in Figure 2.[17]

[Figure 2: Forest plot of the effectiveness of dexamethasone compared with placebo in preventing the recurrence of acute severe migraine headache in adults.[17]]

The overall effect is shown as a diamond: its center marks the pooled point estimate, its width represents the estimated 95% confidence interval for all studies, and the solid vertical line in the middle of the plot is the “line of no effect” (e.g., relative risk = 1).
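
The layout described above can be sketched as a plain-text rendering: one row per study on a log-scaled axis, with dashes spanning the confidence interval, a marker at the point estimate, and a vertical line of no effect at RR = 1. The study names and numbers below are invented for demonstration.

```python
import math

def forest_row(name, est, lo, hi, xmin=0.2, xmax=5.0, width=40):
    """Render one study on a log-scaled axis: '-' spans the CI, '#' marks the
    point estimate, and '|' marks the line of no effect (RR = 1)."""
    def pos(x):
        span = math.log(xmax) - math.log(xmin)
        return round((math.log(x) - math.log(xmin)) / span * (width - 1))
    row = [" "] * width
    for i in range(pos(lo), pos(hi) + 1):
        row[i] = "-"
    row[pos(est)] = "#"
    row[pos(1.0)] = "|"
    return f"{name:<10}{''.join(row)}  {est:.2f} [{lo:.2f}, {hi:.2f}]"

# Invented studies for demonstration
for args in [("Study A", 0.80, 0.55, 1.16),
             ("Study B", 0.70, 0.40, 1.22),
             ("Study C", 0.95, 0.71, 1.27)]:
    print(forest_row(*args))
```

Because each interval here crosses the line of no effect, none of these (invented) individual studies would be statistically significant on its own.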

Therefore, when examining the results of a systematic review/meta-analysis, the following points should be kept in mind:

  • Heterogeneity among studies may make any pooled estimate meaningless.
  • The quality of a meta-analysis cannot be any better than the quality of the studies it is summarizing.
  • An incomplete search of the literature can bias the findings of a meta-analysis.
  • Make sure that the meta-analysis quantifies the size of the effect in units that you can understand.

Subgroup analysis and sensitivity analysis

Subgroup analysis examines the results of different subgroups of trials, e.g., by considering trials on adults and children separately. Subgroup analyses should be planned at the protocol stage, based on sound scientific reasoning, and kept to a minimum.

Sensitivity analysis determines how the results of a systematic review/meta-analysis change when the analytic choices are varied: for example, when the exclusion criteria are altered, unpublished studies are excluded or included, or the weightings are assigned differently. If such changes make little or no difference to the overall results, the reviewer's conclusions are robust; if the key findings disappear, the conclusions need to be expressed more cautiously.
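
A common form of sensitivity analysis is leave-one-out: re-pool the meta-analysis with each study omitted in turn and observe how far the summary estimate moves. A minimal fixed-effect sketch with invented data (study 4 is deliberately a small outlier):

```python
import math

def pool(studies):
    """Fixed-effect inverse-variance pooled log effect from (log effect, SE) pairs."""
    weights = [1 / se ** 2 for _, se in studies]
    return sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)

# Invented (log odds ratio, standard error) pairs; study 4 is a small outlier
studies = [(math.log(0.80), 0.20), (math.log(0.70), 0.25),
           (math.log(0.95), 0.15), (math.log(0.45), 0.30)]

overall = math.exp(pool(studies))
print(f"all studies:     pooled OR = {overall:.3f}")
for i in range(len(studies)):
    rest = studies[:i] + studies[i + 1:]
    print(f"without study {i + 1}: pooled OR = {math.exp(pool(rest)):.3f}")
```

If the pooled estimate shifts noticeably only when one particular study is dropped, the conclusions are leaning heavily on that study and should be worded accordingly.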

Advantages of Systematic Reviews

Because they use explicit methods, systematic reviews have specific advantages: they limit bias, draw reliable and accurate conclusions, deliver the required information efficiently to healthcare providers, researchers, and policymakers, help reduce the delay between research discovery and implementation, improve the generalizability and consistency of results, generate new hypotheses about subgroups of the study population, and increase the overall precision of the results.[18]

Limitations in Systematic Reviews/Meta-analysis

As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers’ ability to assess the strengths and weaknesses of those reviews.[5]

Even though systematic reviews and meta-analyses are considered the best evidence for getting a definitive answer to a research question, they have certain inherent limitations, such as the location and selection of studies, heterogeneity, loss of information on important outcomes, inappropriate subgroup analyses, conflict with new experimental data, and duplication of publication.

Publication Bias

Publication bias makes it easier to find studies with a “positive” result.[19] It arises particularly from inappropriate sifting of the studies, where there is a tendency to favor studies with positive (significant) outcomes. Because this effect is especially consequential in systematic reviews/meta-analyses, it needs to be guarded against.

The quality of reporting of systematic reviews is still not optimal. In a recent review of 300 systematic reviews, few authors reported assessing possible publication bias even though there is overwhelming evidence both for its existence and its impact on the results of systematic reviews. Even when the possibility of publication bias is assessed, there is no guarantee that systematic reviewers have assessed or interpreted it appropriately.[20]
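
One common way to assess possible publication bias is Egger's regression test for funnel-plot asymmetry: the standardized effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero suggests small-study effects such as publication bias. A hand-rolled sketch with invented data (a real analysis would also test the intercept's statistical significance):

```python
# Egger's test intercept via ordinary least squares, implemented by hand.
# Each study is an invented (log effect, standard error) pair.

def egger_intercept(studies):
    """Intercept of the regression of standardized effect (y/SE) on precision (1/SE)."""
    xs = [1 / se for _, se in studies]      # precision
    ys = [y / se for y, se in studies]      # standardized effect
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

# Symmetric funnel: small and large studies estimate similar effects
symmetric = [(-0.20, 0.10), (-0.25, 0.20), (-0.15, 0.30), (-0.22, 0.40)]
# Asymmetric funnel: the smaller (higher-SE) studies show much larger effects
asymmetric = [(-0.10, 0.10), (-0.40, 0.20), (-0.80, 0.30), (-1.20, 0.40)]

print(round(egger_intercept(symmetric), 2))   # intercept near zero
print(round(egger_intercept(asymmetric), 2))  # intercept far from zero
```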

To overcome some of the limitations mentioned above, Cochrane reviews are now reported in a format in which the findings are summarized from the authors' point of view at the end of every review, and an overall picture of the outcome is given in a plain language summary. This makes it much easier for the reader to understand the existing evidence on the topic.

A systematic review is an overview of primary studies which contains an explicit statement of objectives, materials, and methods, and has been conducted according to explicit and reproducible methodology. A meta-analysis is a mathematical synthesis of the results of two or more primary studies that addressed the same hypothesis in the same way. Although meta-analysis can increase the precision of a result, it is important to ensure that the methods used for the reviews were valid and reliable.

High-quality systematic reviews and meta-analyses take great care to find all relevant studies, critically assess each study, synthesize the findings from individual studies in an unbiased manner, and present a balanced summary of the important findings with due consideration of any flaws in the evidence. Systematic reviews and meta-analyses summarize research evidence in a way that generally yields the best form of evidence, and hence they are positioned at the top of the hierarchy of evidence.

Systematic reviews can be very useful decision-making tools for primary care/family physicians. They objectively summarize large amounts of information, identify gaps in medical research, and identify beneficial or harmful interventions, which is useful for clinicians, researchers, the public, and policymakers.

Source of Support: Nil

Conflict of Interest: None declared.

