
5 software tools to support your systematic review processes

By Dr. Mina Kalantar on 19-Jan-2021 13:01:01


Systematic reviews are a methodical re-evaluation of scholarly literature that facilitates decision making. This approach to re-evaluating evidence was initially applied in healthcare to set policies, create guidelines and answer medical questions.

Systematic reviews are large, complex projects and, depending on the purpose, they can be quite expensive to conduct. A team of researchers, data analysts and experts from various fields may collaborate to review and examine very large numbers of research articles for evidence synthesis. Depending on the scope, systematic reviews often take at least 6 months, and sometimes upwards of 18 months, to complete.

The main principles of transparency and reproducibility require a pragmatic approach in the organisation of the required research activities and detailed documentation of the outcomes. As a result, many software tools have been developed to help researchers with some of the tedious tasks required as part of the systematic review process.


The first generation of these software tools were produced to accommodate and manage collaborations, but gradually developed to help with screening literature and reporting outcomes. Some of these software packages were initially designed for medical and healthcare studies and have specific protocols and customised steps integrated for various types of systematic reviews. However, some are designed for general processing, and by extending the application of the systematic review approach to other fields, they are being increasingly adopted and used in software engineering, health-related nutrition, agriculture, environmental science, social sciences and education.

Software tools

There are various free and subscription-based tools to help with conducting a systematic review. Many of these tools are designed to assist with the key stages of the process, including title and abstract screening, data synthesis, and critical appraisal. Some are designed to facilitate the entire process of review, including protocol development, reporting of the outcomes and help with fast project completion.

As time goes on, more functions are being integrated into such software tools. Technological advancement has allowed for more sophisticated and user-friendly features, including visual graphics for pattern recognition and linking multiple concepts. The idea is to digitalise the cumbersome parts of the process to increase efficiency, thus allowing researchers to focus their time and efforts on assessing the rigorousness and robustness of the research articles.

This article introduces commonly used systematic review tools that are relevant to food research and related disciplines, which can be used in a similar context to the process in healthcare disciplines.

These reviews are based on IFIS's internal research; they are unbiased and not affiliated with the companies mentioned.


Covidence

This online platform is a core component of the Cochrane toolkit, supporting parts of the systematic review process, including title/abstract and full-text screening, documentation, and reporting.

The Covidence platform enables collaboration of the entire systematic reviews team and is suitable for researchers and students at all levels of experience.

From a user perspective, the interface is intuitive, and citation screening is directed step by step through a well-defined workflow. Imports and exports are straightforward, with easy export options to Excel and CSV.

Access is free for Cochrane authors (a single reviewer), and Cochrane provides a free trial to other researchers in healthcare. Universities can also subscribe on an institutional basis.

Rayyan

Rayyan is a free, open-access, web-based platform funded by the Qatar Foundation, a non-profit organisation supporting education and community development initiatives. Rayyan is used to screen and code literature through a systematic review process.

Unlike Covidence, Rayyan does not follow a standard SR workflow and simply helps with citation screening. It is accessible through a mobile application with compatibility for offline screening. The web-based platform is known for its accessible user interface, with easy and clear export options.

Function comparison of 5 software tools to support the systematic review process

The comparison covers Covidence, Rayyan, EPPI-Reviewer, CADIMA and DistillerSR across the following functions: protocol development; database integration (direct search is limited to PubMed where offered); ease of import & export; duplicate removal; article screening (title & abstract only in Rayyan; including full text in the other four tools); critical appraisal; assistance with reporting; meta-analysis; and cost (Rayyan and CADIMA are free, while Covidence, EPPI-Reviewer and DistillerSR are subscription-based).

EPPI-Reviewer

EPPI-Reviewer is a web-based software programme developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the UCL Institute of Education, London.

It provides comprehensive functionalities for coding and screening. Users can create different levels of coding in a code set tool for clustering, screening, and administration of documents. EPPI-Reviewer allows direct search and import from PubMed. The import of search results from other databases is feasible in different formats. It stores, references, identifies and removes duplicates automatically. EPPI-Reviewer allows full-text screening, text mining, meta-analysis and the export of data into different types of reports.
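The automatic duplicate removal described above can be pictured with a small sketch. This is not EPPI-Reviewer's actual algorithm, just the general idea of key-based de-duplication; the `normalise` helper and the sample records are hypothetical:

```python
import re

def normalise(title: str) -> str:
    # Lower-case and collapse punctuation/whitespace so trivial formatting
    # differences do not hide duplicates.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(references):
    # Keep the first record seen for each (normalised title, year) key.
    seen, unique = set(), []
    for ref in references:
        key = (normalise(ref["title"]), ref.get("year"))
        if key not in seen:
            seen.add(key)
            unique.append(ref)
    return unique

refs = [
    {"title": "Dietary Fibre and Health.", "year": 2020},
    {"title": "dietary fibre and health", "year": 2020},  # duplicate
    {"title": "Vitamin D in Adults", "year": 2019},
]
unique_refs = deduplicate(refs)  # two unique records remain
```

Real tools also match on authors, DOI and journal, but the principle is the same: normalise, key, and keep the first occurrence.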

There is no limit on the number of concurrent users or on the number of articles being reviewed. Cochrane reviewers can access EPPI-Reviewer using their Cochrane subscription details.

EPPI-Centre has other tools for facilitating the systematic review process, including coding guidelines and data management tools.

CADIMA

CADIMA is a free, online, open-access review management tool, developed to facilitate research synthesis and structure documentation of the outcomes.

The Julius Kühn Institute and the Collaboration for Environmental Evidence established the software programme to support and guide users through the entire systematic review process, including protocol development, literature searching, study selection, critical appraisal, and documentation of the outcomes. The flexibility in choosing the steps also makes CADIMA suitable for conducting systematic mapping and rapid reviews.

CADIMA was initially developed for research questions in agriculture and the environment, but it is not limited to these fields and can be used to manage review processes in other disciplines. It enables users to export files and work offline.

The software allows for statistical analysis of the collated data using the R statistical software. Unlike EPPI-Reviewer, CADIMA does not have a built-in search engine to allow for searching in literature databases like PubMed.

DistillerSR

DistillerSR is an online software platform maintained by the Canadian company Evidence Partners, which specialises in literature review automation. DistillerSR provides a collaborative platform for every stage of literature review management. The framework is flexible: it can accommodate literature reviews of different sizes and is configurable to different data curation procedures, workflows and reporting standards. The platform integrates the necessary features for screening, quality assessment, data extraction and reporting.

The software uses artificial intelligence (AI)-enabled technologies for priority screening, which shortens the screening process by re-ranking the most relevant references nearer to the top. It can also use AI as a second reviewer in quality-control checks of studies screened by human reviewers.

DistillerSR is used to manage systematic reviews in various medical disciplines, surveillance, pharmacovigilance and public health reviews, including food and nutrition topics. The software does not support statistical analyses, but it provides configurable forms in standard formats for data extraction.
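The priority-screening idea, re-ranking unscreened references so the most relevant rise to the top, can be sketched in miniature. This toy uses plain keyword overlap rather than DistillerSR's actual AI model; the function names and sample data are invented for illustration:

```python
def tokens(text):
    # Crude tokenisation: lower-case words split on whitespace.
    return set(text.lower().split())

def rank_by_relevance(included_abstracts, unscreened):
    # Pool the vocabulary of everything already marked "include"...
    vocab = set()
    for text in included_abstracts:
        vocab |= tokens(text)
    # ...then sort unscreened records by how many words they share with it,
    # most relevant first.
    return sorted(unscreened, key=lambda t: len(tokens(t) & vocab), reverse=True)

included = ["randomised trial of vitamin d supplementation in adults"]
queue = [
    "survey of consumer attitudes to packaging",
    "vitamin d supplementation trial in older adults",
]
ranked = rank_by_relevance(included, queue)
```

Production systems use trained classifiers rather than raw word overlap, but both re-rank the screening queue from reviewers' earlier decisions.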

DistillerSR allows direct search and import of references from PubMed. It provides an add-on feature called LitConnect, which can be set to automatically import newly published references from data providers to keep reviews up to date while they are in progress.

The Systematic Review Toolbox

The Systematic Review Toolbox is a web-based catalogue of various tools, including software packages, which can assist with single or multiple tasks within the evidence synthesis process. Researchers can run a quick search or tailor a more sophisticated search by choosing their approach, budget, discipline, and preferred support features, to find the right tools for their research.

If you enjoyed this blog post, you may also be interested in our recently published blog post addressing the difference between a systematic review and a systematic literature review.



10 Best Literature Review Tools for Researchers


Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!

Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.


Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)

1. Semantic Scholar – helps researchers access and analyze scholarly literature, with a particular focus on leveraging AI and semantic analysis
2. Elicit – assists researchers in extracting, organizing, and synthesizing information from various sources, enabling efficient data analysis
3. Scite.Ai – helps determine the credibility and reliability of research articles, facilitating evidence-based decision-making
4. DistillerSR – streamlines and enhances the process of literature screening, study selection, and data extraction
5. Rayyan – facilitates efficient screening and selection of research outputs
6. Consensus – lets researchers work together, annotate, and discuss research papers in real time, fostering team collaboration and knowledge sharing
7. RAx – enables efficient literature search and analysis, helping identify relevant articles, saving time, and improving the quality of research
8. Lateral – helps discover relevant scientific articles and identify potential research collaborators based on user interests and preferences
9. Iris AI – supports exploring and mapping the existing literature, identifying knowledge gaps, and generating research questions
10. Scholarcy – extracts key information from research papers, aiding comprehension and saving time

#1. Semantic Scholar – A free, AI-powered research tool for scientific literature

Not all scholarly content may be indexed, and occasional false positives or inaccurate associations can occur. Furthermore, the tool primarily focuses on computer science and related fields, potentially limiting coverage in other disciplines. 

#2. Elicit – Research assistant using language models like GPT-3

Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort. 

However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications. 

Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.

#3. Scite.Ai – Your personal research assistant

Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work. 

However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources. 


#4. DistillerSR – Literature Review Software

Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience. 

#5. Rayyan – AI Powered Tool for Systematic Literature Reviews

However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities. 

#6. Consensus – Use AI to find you answers in scientific research

With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material. People consider Consensus for several reasons.

Consensus offers both free and paid plans.

#7. RAx – AI-powered reading assistant

#8. Lateral – Advance your research with AI

Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results. 

#9. Iris AI – Introducing the researcher workspace

Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines. 

#10. Scholarcy – Summarize your literature through AI

Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.

Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text. 

Final Thoughts

In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.

Q1. What are literature review tools for researchers?

Q2. What criteria should researchers consider when choosing literature review tools?

When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities. 

Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?

Meta-analysis support: Some literature review tools include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results.
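The effect-size pooling such features perform can be illustrated with a minimal fixed-effect (inverse-variance) sketch. This is the generic textbook calculation, not any particular tool's implementation, and the study numbers below are invented:

```python
def pooled_effect(effects, variances):
    # Fixed-effect meta-analysis: weight each study by the inverse of its
    # variance, so more precise studies count for more.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # The pooled estimate's variance is the reciprocal of the summed weights.
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical studies: effect sizes with their variances.
effects = [0.30, 0.10, 0.20]
variances = [0.04, 0.01, 0.02]
est, var = pooled_effect(effects, variances)
```

A forest plot is essentially this calculation drawn out: each study's effect with its confidence interval, plus the pooled estimate at the bottom.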

Q4. Can literature review tools help with organizing and annotating collected references?

Integration with citation managers: Some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.

By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.


University of Maryland Libraries

Systematic Review

  • What is a Systematic Review (SR)?

Steps of a Systematic Review

  • Framing a Research Question
  • Developing a Search Strategy
  • Searching the Literature
  • Managing the Process
  • Meta-analysis
  • Publishing your Systematic Review

Forms and templates


  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

  • PRISMA Flow Diagram - Record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.
  • PRISMA Checklist - Checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)


Key methods guidance documents include:

- Methods guide for effectiveness and comparative effectiveness reviews (2017)
- Finding what works in health care: Standards for systematic reviews (2011)
- Systematic reviews: CRD's guidance for undertaking reviews in health care (2008)

Identify your research question. Formulate a clear, well-defined research question of appropriate scope. Define your terminology. Find existing reviews on your topic to inform the development of your research question, identify gaps, and confirm that you are not duplicating the efforts of previous reviews. Consider using a framework such as PICO to define your question scope, and record search terms under each concept.

It is a good idea to register your protocol in a publicly accessible way. This helps prevent others from completing a review on your topic; similarly, before you start a systematic review, it is worth checking the registries to confirm that nobody else has already registered a protocol on the same topic.

Protocol registries and publishing options include:

- Systematic reviews of health care and clinical interventions
- Systematic reviews of the effects of social interventions
- Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies
- A venue where the protocol is published immediately and subjected to open peer review; when two reviewers approve it, the paper is sent to Medline, Embase and other databases for indexing
- Upload a protocol for your scoping review
- Systematic reviews of healthcare practices to assist in the improvement of healthcare outcomes globally
- OSF - Registering a protocol on OSF creates a frozen, time-stamped record of the protocol, ensuring a level of transparency and accountability for the research. There are no limits to the types of protocols that can be hosted on OSF.
- PROSPERO - International prospective register of systematic reviews. This is the primary database for registering systematic review protocols and searching for published protocols. PROSPERO accepts protocols from all disciplines (e.g., psychology, nutrition) with the stipulation that they must include health-related outcomes.
- A service similar to PROSPERO: based in the UK, fee-based, with a quick turnaround time
- Submit a pre-print or a protocol for a scoping review
- Share your search strategy and research protocol, with no limit on format, size, access restrictions or license

The following steps outline the details and documentation necessary for conducting a systematic review:

Clearly state the criteria you will use to determine whether or not a study will be included in your review. Consider study populations, study design, intervention types, comparison groups, and measured outcomes. You can use database-supplied limits such as language, dates, humans, female/male, age groups, and publication/study types (randomized controlled trials, etc.).
Run your searches in the databases relevant to your topic. Work with a librarian to help you design comprehensive search strategies across a variety of databases. Approach the grey literature methodically and purposefully. Collect ALL of the retrieved records from each search into a citation manager, such as EndNote, Zotero or Mendeley, and remove duplicates prior to screening.
Export your EndNote results into screening software. Start with a title/abstract screening to remove studies that are clearly not related to your topic. Then use your inclusion/exclusion criteria to screen the full text of studies. It is highly recommended that two independent reviewers screen all studies, resolving areas of disagreement by consensus.
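When two independent reviewers screen all studies, their agreement is often quantified with Cohen's kappa before disagreements are resolved by consensus. A minimal sketch (the decision lists below are hypothetical, and the formula assumes the raters do not agree perfectly by chance):

```python
def cohens_kappa(rater_a, rater_b):
    # Observed agreement: fraction of records where both reviewers agree.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each reviewer's marginal label rates.
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    # Kappa: agreement beyond chance, scaled to the maximum possible.
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "include", "exclude", "include"]
b = ["include", "exclude", "exclude", "exclude", "include"]
kappa = cohens_kappa(a, b)  # moderate-to-substantial agreement
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.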
Use a predefined form, or systematic review software, to extract all relevant data from each included study. It is recommended that you pilot your data extraction tool to determine whether other fields should be included or existing fields clarified.
Risk of Bias (Quality) Assessment - Use a risk of bias tool (such as the Cochrane Tool for Assessing Risk of Bias in Included Studies) to assess the potential biases of studies with regard to study design and other factors. You can adapt the tool to best meet the needs of your review, depending on the types of studies included.


Clearly present your findings, including detailed methodology (such as search strategies used, selection criteria, etc.) such that your review can be easily updated in the future with new research findings. Perform a meta-analysis, if the studies allow. Provide recommendations for practice and policy-making if sufficient, high quality evidence exists, or future directions for research to fill existing gaps in knowledge or to strengthen the body of evidence.

Writing-support tools can also help at this stage: some offer inspiration, terms and phrases for writing your manuscript, while others provide automated spelling, grammar and rephrasing corrections using artificial intelligence (AI) to improve the flow of your writing, with free and subscription plans available.

8. Find the best journal to publish your work. Identifying the best journal to submit your research to can be a difficult process. To help you decide where to submit, simply insert your title and abstract into any of the tools listed under the tab.

Adapted from  A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

This diagram illustrates in a visual way and in plain language what review authors actually do in the process of undertaking a systematic review.

This diagram illustrates what is actually in a published systematic review and gives examples from the relevant parts of a systematic review housed online on The Cochrane Library. It will help you to read or navigate a systematic review.

Source: Cochrane Consumers and Communications  (infographics are free to use and licensed under Creative Commons )

Check the following visual resources titled "What Are Systematic Reviews?"

  • Video  with closed captions available
  • Animated Storyboard

 


- The methods of the systematic review are generally decided before conducting it.
- Search for studies that match the preset criteria in a systematic manner.
- Sort all retrieved articles (included or excluded) and assess the risk of bias for each included study.
- Code each study with a preset form, and synthesize the data either qualitatively or quantitatively.
- Place the results of the synthesis into context, noting the strengths and weaknesses of the studies.
- The report provides a description of the methods and results in a clear and transparent manner.

 

Source: Foster, M. (2018). Systematic reviews service: Introduction to systematic reviews. Retrieved September 18, 2018, from

  • Last Updated: Jul 11, 2024 6:38 AM
  • URL: https://lib.guides.umd.edu/SR

♨️ A step-by-step process

Using the PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines involves a step-by-step process to ensure that your systematic review or meta-analysis is reported transparently and comprehensively. Below are the key steps to follow when using PRISMA 2020:

1. Understand the PRISMA 2020 Checklist: Familiarize yourself with the PRISMA 2020 Checklist and its 27 essential items. You can access the checklist and an explanation of each item from the official PRISMA website or publication.

2. Plan Your Systematic Review: Before starting your review, clearly define your research question, objectives, and inclusion/exclusion criteria for selecting studies. Ensure that your research question aligns with the PRISMA 2020 framework.

3. Develop a Protocol: Create a systematic review protocol that outlines the methodology you'll use, including search strategies, data extraction methods, and the approach to assessing risk of bias (if applicable). Register your protocol on a relevant platform like PROSPERO.

4. Conduct the Literature Search: Search for relevant studies using a systematic and comprehensive approach. Document the search strategy, databases used, search terms, and any filters applied. Ensure that your search covers the time period and study designs specified in your protocol.

5. Study Selection: Implement your inclusion/exclusion criteria to screen and select studies. Maintain detailed records of the screening process, including reasons for exclusion.

6. Data Extraction: Extract data from the selected studies using a predefined template. Include information on study characteristics, outcomes, and any other relevant data points. Ensure that your data extraction process is consistent and well-documented.

7. Risk of Bias Assessment: If applicable, assess the risk of bias in the included studies. Use appropriate tools or criteria and clearly report the results of the assessment.

8. Data Synthesis and Meta-Analysis: If relevant, conduct data synthesis and meta-analysis. Follow established statistical methods and guidelines for pooling data, calculating effect sizes, and assessing heterogeneity.
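The heterogeneity assessment mentioned in step 8 typically relies on Cochran's Q and the I² statistic. A minimal sketch of the standard calculation, with made-up study data:

```python
def heterogeneity(effects, variances):
    # Inverse-variance weights and the fixed-effect pooled estimate.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled effect.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of variability due to heterogeneity rather than chance.
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Three hypothetical studies with widely spread effect sizes.
effects = [0.80, 0.10, 0.40]
variances = [0.04, 0.01, 0.02]
q, i2 = heterogeneity(effects, variances)  # substantial heterogeneity
```

High I² (conventionally above about 75%) suggests a random-effects model or an exploration of why studies disagree, rather than naive pooling.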

9. Report According to PRISMA 2020: When writing your systematic review or meta-analysis manuscript, ensure that you follow the PRISMA 2020 Checklist. Address each of the 27 items in the checklist in your manuscript. This includes providing clear information on your research question, search strategy, inclusion/exclusion criteria, data extraction process, risk of bias assessment, and results.

10. Transparency and Supplementary Materials: Provide supplementary materials such as a PRISMA flow diagram showing the study selection process and a summary table of included studies. These add to the transparency of your review.
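The PRISMA flow diagram in step 10 is essentially bookkeeping: the counts at each stage must reconcile. A tiny sketch with hypothetical numbers shows the arithmetic a reviewer (or a script) can check before drawing the diagram:

```python
# Hypothetical counts for a PRISMA 2020 flow diagram.
flow = {
    "records_identified": 1200,
    "duplicates_removed": 200,
    "records_screened": 1000,
    "records_excluded": 850,
    "full_text_assessed": 150,
    "full_text_excluded": 120,
    "studies_included": 30,
}

# Each stage should account for everything from the previous one.
assert flow["records_screened"] == flow["records_identified"] - flow["duplicates_removed"]
assert flow["full_text_assessed"] == flow["records_screened"] - flow["records_excluded"]
assert flow["studies_included"] == flow["full_text_assessed"] - flow["full_text_excluded"]
```

If any assertion fails, some records are unaccounted for, which is exactly the kind of discrepancy peer reviewers look for in the flow diagram.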

11. Peer Review and Revision: Submit your manuscript to a peer-reviewed journal that accepts systematic reviews and meta-analyses. Be prepared to respond to reviewers' comments and make necessary revisions to adhere to PRISMA 2020.

12. Publish and Share: Once your systematic review or meta-analysis is accepted and published, consider sharing it on platforms like PROSPERO or other relevant databases for greater visibility.

Throughout the process, maintaining transparency, consistency, and adherence to the PRISMA 2020 guidelines will help ensure that your systematic review or meta-analysis is of high quality and can be effectively used by researchers, policymakers, and practitioners in your field.

Last updated 11 months ago



Rayyan

COLLABORATE ON YOUR REVIEWS WITH ANYONE, ANYWHERE, ANYTIME

Rayyan for students

Save precious time and maximize your productivity with a Rayyan membership. Receive training, priority support, and access features to complete your systematic reviews efficiently.

Rayyan for Librarians

Rayyan Teams+ makes your job easier. It includes VIP Support, AI-powered in-app help, and powerful tools to create, share and organize systematic reviews, review teams, searches, and full-texts.

Rayyan for Researchers

RESEARCHERS

Rayyan makes collaborative systematic reviews faster, easier, and more convenient. Training, VIP support, and access to new features maximize your productivity. Get started now!

Over 1 billion reference articles reviewed by research teams, and counting...

Intelligent, scalable and intuitive.

Rayyan understands language, learns from your decisions and helps you work quickly through even your largest systematic literature reviews.

WATCH A TUTORIAL NOW

Solutions for Organizations and Businesses


Rayyan Enterprise and Rayyan Teams+ make it faster, easier and more convenient for you to manage your research process across your organization.

  • Accelerate your research across your team or organization and save valuable researcher time.
  • Build and preserve institutional assets, including literature searches, systematic reviews, and full-text articles.
  • Onboard team members quickly with access to group trainings for beginners and experts.
  • Receive priority support to stay productive when questions arise.
  • SCHEDULE A DEMO
  • LEARN MORE ABOUT RAYYAN TEAMS+

RAYYAN SYSTEMATIC LITERATURE REVIEW OVERVIEW


LEARN ABOUT RAYYAN’S PICO HIGHLIGHTS AND FILTERS


Join now to learn why Rayyan is already trusted by more than 500,000 researchers

Individual plans and team plans

For early career researchers just getting started with research.

Free forever

  • 3 Active Reviews
  • Invite Unlimited Reviewers
  • Import Directly from Mendeley
  • Industry Leading De-Duplication
  • 5-Star Relevance Ranking
  • Advanced Filtration Facets
  • Mobile App Access
  • 100 Decisions on Mobile App
  • Standard Support
  • Revoke Reviewer
  • Online Training
  • PICO Highlights & Filters
  • PRISMA (Beta)
  • Auto-Resolver 
  • Multiple Teams & Management Roles
  • Monitor & Manage Users, Searches, Reviews, Full Texts
  • Onboarding and Regular Training

Professional

For researchers who want more tools for research acceleration.

per month, billed annually

  • Unlimited Active Reviews
  • Unlimited Decisions on Mobile App
  • Priority Support
  • Auto-Resolver

For currently enrolled students with valid student ID.

per month, billed quarterly

For a team that wants professional licenses for all members.

per month, per user, billed annually

  • Single Team
  • High Priority Support

For teams that want support and advanced tools for members.

  • Multiple Teams
  • Management Roles

For organizations who want access to all of their members.

Annual Subscription

Contact Sales

  • Organizational Ownership
  • For an organization or a company
  • Access to all the premium features such as PICO Filters, Auto-Resolver, PRISMA and Mobile App
  • Store and Reuse Searches and Full Texts
  • A management console to view, organize and manage users, teams, review projects, searches and full texts
  • Highest tier of support – Support via email, chat and AI-powered in-app help
  • GDPR Compliant
  • Single Sign-On
  • API Integration
  • Training for Experts
  • Training sessions for students each semester
  • More options for secure access control


Great usability and functionality. Rayyan has saved me countless hours. I even received timely feedback from staff when I did not understand the capabilities of the system, and was pleasantly surprised with the time they dedicated to my problem. Thanks again!

This is a great piece of software. It has made the independent viewing process so much quicker. The whole thing is very intuitive.

Rayyan makes ordering articles and extracting data very easy. A great tool for undertaking literature and systematic reviews!

Excellent interface to do title and abstract screening. Also helps to keep track of the reasons for exclusion from the review. That too in a blinded manner.

Rayyan is a fantastic tool to save time and improve systematic reviews!!! It has changed my life as a researcher!!! thanks

Easy to use, friendly, has everything you need for cooperative work on the systematic review.

Rayyan makes life easy in every way when conducting a systematic review and it is easy to use.

  • Open access
  • Published: 01 December 2022

The Systematic Review Toolbox: keeping up to date with tools to support evidence synthesis

  • Eugenie Evelynne Johnson (ORCID: orcid.org/0000-0003-3324-7141),
  • Hannah O’Keefe,
  • Anthea Sutton &
  • Christopher Marshall

Systematic Reviews, volume 11, Article number: 258 (2022)


Background

The Systematic Review (SR) Toolbox was developed in 2014 to collate tools that can be used to support the systematic review process. Since its inception, the breadth of evidence synthesis methodologies has expanded greatly. This work describes the process of updating the SR Toolbox in 2022 to reflect these changes in evidence synthesis methodology. We also briefly analysed the included tools and guidance to identify potential gaps in what is currently available to researchers.

Methods

We manually extracted all guidance and software tools contained within the SR Toolbox in February 2022. A single reviewer, with a second checking a proportion, extracted and analysed information from records contained within the SR Toolbox using Microsoft Excel. Using this spreadsheet and Microsoft Access, the SR Toolbox was updated to reflect the expansion of evidence synthesis methodologies, and a brief analysis was conducted.

Results

The updated version of the SR Toolbox was launched on 13 May 2022, with 235 software tools and 112 guidance documents included. Regarding review families, most software tools (N = 223) and guidance documents (N = 78) were applicable to systematic reviews. However, there were fewer tools and guidance documents applicable to reviews of reviews (N = 66 and N = 22, respectively), while qualitative reviews were less well served by guidance documents (N = 19). In terms of review production stages, most guidance documents related to quality assessment (N = 70), while most software tools related to searching and synthesis (N = 84 and N = 82, respectively). There appears to be a paucity of tools and guidance relating to stakeholder engagement (N = 2 and N = 3, respectively).

Conclusions

The SR Toolbox provides a platform for those undertaking evidence syntheses to locate guidance and software tools to support different aspects of the review process across multiple review types. However, this work has also identified potential gaps in guidance and software that could inform future research.


Introduction

The Systematic Review Toolbox (SR Toolbox) was developed in 2014 by Christopher Marshall (CM) as part of his PhD on tools that can be used to support the systematic review process within software engineering [ 1 ]. Whilst originally developed for the field of computer science, the methodologies for conducting systematic reviews and evidence synthesis are applicable across disciplines. Therefore, the scope of the SR Toolbox was expanded to include health topics. Its aim is to assist researchers by providing an open, free and searchable web-based catalogue of tools and guidance papers that assist with various tasks within the systematic review and wider evidence synthesis process. The SR Toolbox is regularly maintained by conducting a specialised search on MEDLINE, the results of which are screened against defined inclusion and exclusion criteria by a single editor and checked by a second editor (see Additional file 1 : Supplementary Material). Guidance and software tools that meet the eligibility criteria are added to the SR Toolbox on a rolling basis.

In January 2022, the SR Toolbox website received approximately 28,500 hits and 6100 visits from around 4500 unique visitors, showing the popularity of the platform and its potential reach to researchers looking for tools and guidance for use within evidence syntheses. However, since the initial launch of the SR Toolbox in 2014, there has been an increase in the number and types of evidence syntheses being produced. Many systematic review typologies and taxonomies have been developed since the SR Toolbox’s inception, encompassing large numbers of review types. For example, Booth et al. (2016) identified 22 review types [ 2 ], Cook et al. (2017) identified 9 [ 3 ], while the typology by Munn et al. (2018) suggested there were 10 different review types [ 4 ].

More recently, a taxonomy proposed by Sutton et al. (2019), incorporating research from several previously published works, suggests that 48 review types exist [ 5 ], which can be broadly categorised into seven review “families”:

Traditional reviews (that tend to use a purposive sampling approach as opposed to a systematic approach);

Systematic reviews;

Review of reviews;

Rapid reviews;

Qualitative systematic reviews;

Mixed-methods reviews; and

Purpose-specific reviews (i.e. reviews that are tailored to individual needs, such as Health Technology Assessment).

In the version of the SR Toolbox prior to 2022, the ability to search by review type was limited and not reflective of the expanding evidence synthesis landscape. The SR Toolbox’s ability to suggest support for the varying demands of different review types was therefore limited.

Additionally, although there is now a large array of tools available to support the process of conducting systematic reviews and other forms of evidence synthesis, a potential barrier to adoption is unfamiliarity with some of the principles underlying these tools, such as machine learning [ 6 ]. In the iteration of the SR Toolbox maintained until 2022, software tools were searchable according to their underlying approach (e.g. text mining, machine learning, visualisation), discipline (healthcare, social sciences, software engineering or multidisciplinary), and their financial cost (e.g. completely free or payment required). “Other” tools were only searchable by discipline and type (e.g. guideline, reporting standards). As such, for those with less experience or knowledge of the processes underpinning software tools, effective searching of the SR Toolbox could be challenging.

We therefore set out to update the SR Toolbox interface, so it continues to be able to respond to the needs of users within a changing and continually developing evidence synthesis landscape, as well as being more accessible to a wide variety of researchers. In this paper, we describe our methods for reconstructing the platform by conducting a mapping exercise of all tools within the SR Toolbox to re-categorise them and check their validity. In addition, we also describe a brief analysis based on the mapping exercise to identify review types and processes that are both well-served and underserved by the tools currently contained within the platform.

SR Toolbox update methods

In February 2022, we embarked on a mapping exercise of all software and other tools indexed within the SR Toolbox to inform the restructuring of the platform. A coding tool was developed in Excel to extract data relevant to each tool indexed within the SR Toolbox to that point. Domains were either completed using free text or ticked using a check box. Details of domains assessed and how they were coded are detailed in Table 1 .
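The coding framework described above amounts to one record per tool: free-text fields plus a boolean "check box" per domain. A minimal sketch, with hypothetical field names (not the authors' actual column headings):

```python
# Hypothetical sketch of one row in the Excel coding framework:
# free-text fields plus one boolean "check box" per coded domain.
record = {
    "tool_name": "Example Tool",   # free text (made-up example)
    "url": "https://example.org",  # free text
    "cost": "free",                # free text
    # Check-box domains: review families the tool supports
    "systematic_review": True,
    "review_of_reviews": False,
    "qualitative_review": True,
    # Check-box domains: review stages the tool supports
    "searching": True,
    "synthesis": False,
}

# A tool can be coded to more than one family and more than one stage.
family_cols = ("systematic_review", "review_of_reviews", "qualitative_review")
families = [k for k in family_cols if record[k] is True]
print(families)  # -> ['systematic_review', 'qualitative_review']
```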

Part of the coding framework was adapted from the review family taxonomy proposed by Sutton et al. (2019) [ 5 ]. However, we did not include traditional reviews and purpose-specific reviews within the mapping exercise. This is because traditional reviews, as described by the Sutton taxonomy, were not considered systematic enough to be within scope for the SR Toolbox, while purpose-specific reviews were too broad and potentially too diverse to include in a systematic manner, as they span a wide variety of evidence syntheses, including scoping reviews, mapping reviews and Health Technology Assessment [ 5 ]. Although both scoping reviews and mapping reviews are part of the purpose-specific family within the Sutton taxonomy [ 5 ], we separated these into their own categories. This is because scoping reviews have been noted to be growing in number [ 7 ], while mapping reviews are increasingly conducted as a way of visually representing the breadth of a body of evidence, despite being rare until as recently as 2010 [ 8 ]. Mapping reviews can also be considered distinct from scoping reviews: although both present a broad overview of evidence relating to a topic, mapping reviews are highly visual in nature [ 9 ]. Furthermore, it has been posited that scoping reviews can act as a precursor to a predefined systematic review, whereas mapping reviews may aim to identify research areas for systematic review or gaps in the evidence base [ 5 ].

All records contained within the SR Toolbox up to February 2022 (n = 352) were manually extracted and coded according to the framework by a single reviewer (EEJ). The same reviewer checked all records to ensure that hyperlinks were not broken and that tools still appeared to be active. If links to software tools were no longer active and the tools could not be located elsewhere, they were excluded from the mapping exercise and, subsequently, the SR Toolbox (N = 5). Tools and guidance could be coded to more than one review family and more than one stage of a review, where appropriate. A second reviewer (HOK) checked a small percentage of the coded records for accuracy before the spreadsheet was imported into a Microsoft Access database.

Microsoft Access databases are relational, meaning that relationships can be built between tables. We included a table for tool details, tool type, review stage, review family, publications, and cost. The tool details table acted as the main reference point, with all other tables being related to it via interim linker tables (Fig. 1 ).

Figure 1. Diagram of Microsoft Access framework
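The linker-table (junction-table) design can be sketched in SQL. This is an illustrative sketch only: the table and column names are guesses rather than the SR Toolbox's actual schema, and SQLite stands in here for Access/MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# tool_details acts as the main reference point; category tables relate
# to it through many-to-many "linker" tables, so one tool can belong to
# several review families (and, analogously, several review stages).
cur.executescript("""
CREATE TABLE tool_details (tool_id INTEGER PRIMARY KEY, name TEXT, url TEXT);
CREATE TABLE review_family (family_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tool_family_link (
    tool_id   INTEGER REFERENCES tool_details(tool_id),
    family_id INTEGER REFERENCES review_family(family_id),
    PRIMARY KEY (tool_id, family_id)
);
""")

cur.execute("INSERT INTO tool_details VALUES (1, 'Example Tool', 'https://example.org')")
cur.executemany("INSERT INTO review_family VALUES (?, ?)",
                [(1, 'Systematic reviews'), (2, 'Rapid reviews')])
# Link one tool to two families via the linker table.
cur.executemany("INSERT INTO tool_family_link VALUES (?, ?)", [(1, 1), (1, 2)])

rows = cur.execute("""
    SELECT f.name FROM tool_details t
    JOIN tool_family_link l ON l.tool_id = t.tool_id
    JOIN review_family f ON f.family_id = l.family_id
    WHERE t.name = 'Example Tool' ORDER BY f.family_id
""").fetchall()
print([r[0] for r in rows])  # -> ['Systematic reviews', 'Rapid reviews']
```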

The tables contained in the local Access database were exported as separate CSV files, then imported using phpMyAdmin to recreate the same database online in MySQL. Custom structured query language (SQL) statements, accounting for any combination of user query parameters, were hard-coded into the website’s hypertext preprocessor (PHP) scripts. Furthermore, the graphical user interface that allows users to run advanced searches was updated to reflect the updated database and new tool categories.
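The authors hard-coded a statement per filter combination; an alternative way to cover any combination, shown here purely as a sketch (the real PHP scripts may differ, and the table/column names are hypothetical), is to assemble one parameterized statement from whichever filters the user supplies:

```python
def build_tool_query(family=None, stage=None, free_only=False):
    """Assemble one SELECT covering any combination of user filters.
    Table/column names are illustrative, not the SR Toolbox's real schema."""
    sql = "SELECT t.name FROM tool_details t"
    where, params = [], []
    if family is not None:
        sql += (" JOIN tool_family_link fl ON fl.tool_id = t.tool_id"
                " JOIN review_family f ON f.family_id = fl.family_id")
        where.append("f.name = ?")
        params.append(family)
    if stage is not None:
        sql += (" JOIN tool_stage_link sl ON sl.tool_id = t.tool_id"
                " JOIN review_stage s ON s.stage_id = sl.stage_id")
        where.append("s.name = ?")
        params.append(stage)
    if free_only:
        where.append("t.cost = 'free'")
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql, params  # params keep user input out of the SQL string

sql, params = build_tool_query(family="Systematic reviews", free_only=True)
print(sql)
print(params)  # -> ['Systematic reviews']
```

Parameter placeholders (`?`) keep user-supplied filter values separate from the SQL text, which also guards against injection.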

Analysis methods

We undertook a basic analysis of the different software tools and guidance documents included within the SR Toolbox up to February 2022 in order to assess: what review families were being covered by the included tools; what review stages and aspects were being covered by the included tools; how up to date included software tools are; and the trajectory of research for guidance and reporting documents relating to evidence syntheses.

Using the same coding document developed in Excel for the mapping exercise described above, we filtered the spreadsheet to contain either relevant software tools or relevant guidance so they could be analysed as separate entities. From here, we tabulated the number of times tools or guidance documents were checked against each review family or review stage. We also added an additional column to the spreadsheet to indicate where tools or guidance documents could be applicable to multiple review families or multiple review stages; these were manually coded within the spreadsheet. The numbers tabulated from each of these exercises were used to create tables and graphs demonstrating the volume of tools in each category.
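The tabulation step reduces to counting, per review family (or stage), how many records were checked, plus how many records span multiple categories. A minimal stdlib sketch with made-up records (field names are hypothetical):

```python
from collections import Counter

# Made-up coded records: each maps a record to the review families
# it was checked against in the spreadsheet.
records = [
    {"name": "Tool A", "families": {"Systematic reviews", "Rapid reviews"}},
    {"name": "Tool B", "families": {"Systematic reviews"}},
    {"name": "Tool C", "families": {"Systematic reviews", "Mixed-methods reviews"}},
]

# Count checks per family, and how many records cover multiple families.
per_family = Counter(f for r in records for f in r["families"])
multi = sum(1 for r in records if len(r["families"]) > 1)

print(per_family["Systematic reviews"])  # -> 3
print(multi)                             # -> 2
```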

SR Toolbox update

At the time of updating the SR Toolbox interface, there were 235 software tools and 112 guidance or reporting documents included within the platform. The new SR Toolbox interface was launched on 13 May 2022.

Analysis results

Table 2 documents the relevance of guidance documents and software tools contained within the SR Toolbox to different review families. Of the 235 software tools and 112 guidance documents currently contained within the SR Toolbox, 215 software tools (91.5%) and 61 guidance documents (54.5%) are applicable to multiple review families. Most software tools (N = 223) and guidance documents (N = 78) are applicable to systematic reviews, though far fewer are applicable to reviews of reviews (N = 66, 28.1% and N = 22, 19.6%, respectively). Qualitative reviews were slightly better served in terms of software tools (N = 108, 46%), but were the most under-served review family in terms of guidance documents (N = 19, 17%).
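The percentages quoted here are simple proportions of the 235 software tools and 112 guidance documents, e.g.:

```python
# Recompute the reported proportions from the counts in the text.
tools_total, guidance_total = 235, 112
print(round(100 * 215 / tools_total, 1))    # tools in multiple families   -> 91.5
print(round(100 * 61 / guidance_total, 1))  # guidance in multiple families -> 54.5
print(round(100 * 66 / tools_total, 1))     # tools for reviews of reviews -> 28.1
```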

Table 3 shows the number of software tools and guidance documents contained within the SR Toolbox at the time of update, in relation to the stage of the review production process they assist with. Seventy-five (32%) of the software tools were applicable to more than one review production stage, while only 16 (14.3%) guidance documents were applicable to multiple stages of the process. Guidance documents within the SR Toolbox are currently dominated by research relating to quality assessment (N = 70; 62.5%), followed by guidelines for reporting reviews (N = 26; 23.2%). There appears to be a paucity of software tools (N = 2; 0.9%) and guidance (N = 3; 2.7%) relating to stakeholder engagement within the review process.

Figure 2 shows how up to date the software tools included within the SR Toolbox are. Most of the tools for which an update could be identified had been updated within the past 4 years, up to and including the first quarter of 2022 (N = 115), with the most updates occurring in 2021 (N = 51). However, although this suggests that most of the tools included in the SR Toolbox could be considered up to date, there were 71 software tools for which we could not identify the date of the latest update (30.2% of all included software tools). We therefore cannot be certain that a relatively large proportion of software tools within the SR Toolbox are up to date.

Figure 2. Number of updates for software tools included in the Toolbox by year (N = 164)

Similarly, Fig. 3 shows the number of guidance and reporting tools included within the SR Toolbox by the year in which they were published. Although it was not clear when four of the guidance documents were originally published or updated, these represent only a small proportion of the guidance included within the Toolbox (3.6%). The earliest guidance publication included within the SR Toolbox dates to 1998. However, of all the guidance and reporting documents included within the SR Toolbox, the majority have been published since 2015 (63.9%). The greatest numbers of guidance documents or reporting tools were published in 2019 and 2021 (11 per year). Before beginning the SR Toolbox updating exercise, we had already identified five new eligible guidance and reporting documents published in 2022. These data suggest a steady increase in the number of publications offering guidance and reporting standards relating to systematic reviews and wider evidence syntheses since 1998, with the trajectory of publications particularly high since 2015.

Figure 3. Number of guidelines and reporting frameworks included in the Toolbox published by year (N = 108)

Most of the included software tools are free to use (181/235, 77%). Of the 21 software tools that required payment, 12 had a free trial available and 3 had a free version available. Similarly, most of the guidance documents are open access (96/112, 85.7%).

Summary of main results

The update of the SR Toolbox aims to provide a simple and easily navigable interface for researchers to discover guidance and software tools to help conduct systematic reviews and wider evidence synthesis projects. The new structure of the SR Toolbox, which incorporates the ability to search by review family and review stage, has been developed and implemented to make the platform easier to use for researchers and other stakeholders with less familiarity with the computational concepts underlying the tools. Stakeholders should be better able to identify and access software and guidance that may assist them with their evidence synthesis projects.

Our brief analysis of tools included in the platform up to February 2022 suggests that many software tools and guidance documents currently within the SR Toolbox can potentially be applicable to multiple review families, though reviews of reviews and qualitative reviews may currently be less well served. Guidance documents largely focus on methods for critical appraisal, followed by reporting guidelines, with far fewer publications surrounding other aspects of the review production process. Additionally, software tools to support the systematic review process may be mostly well-maintained and up to date, though there is some uncertainty surrounding this. The trajectory of guidance and reporting frameworks for evidence syntheses being published has been steadily increasing and has seen a particular increase since 2015.

Strengths and limitations of this work

Well-defined categories were used to map the guidance and software tools, based on widely accepted published standards [ 5 ]. These categories were agreed upon by highly experienced systematic reviewers (EEJ and CM) and information specialists (HOK and AS). Two editors with considerable expertise in computational and data science (CM and HOK) were responsible for the construction of the updated SR Toolbox.

However, there are some limitations of this work. The initial mapping exercise was conducted by a single reviewer, with a second checking only some records for accuracy. This may be considered a source of bias, as it is possible that there are some minor inaccuracies in the coding and charting of the tools and guidelines.

Potential areas for future research

As part of the mapping exercise for this work, we added a column in our Excel sheet to identify when the software tool or guideline was added to the SR Toolbox. This will allow us to determine the trajectory of publications and the rate at which new software tools are being added in the future more accurately.

This column may be one way of identifying areas for expansion or refinement within future iterations of the SR Toolbox. For example, there may also be an argument to further refine the ‘Other’ category in the SR Toolbox in future updates, particularly to highlight software tools and guidance relating to network meta-analyses and prognostic reviews. A 2016 review identified 456 network meta-analyses including at least four interventions [ 10 ], suggesting that the review type is increasing in number. Prognostic reviews have been formally adopted by Cochrane, with the first two Cochrane prognostic reviews published in 2018 [ 11 , 12 ], while there have also been calls for more prognostic reviews to be conducted in response to a growing amount of primary prognostic research [ 13 ].

Living systematic reviews have also been proposed as a contribution to evidence synthesis by providing high-quality reviews that are updated as new research in the area becomes available [ 14 ]. We discussed the inclusion of living systematic reviews as a standalone review category within the new iteration of the SR Toolbox, as there has been some evidence that machine learning has been used to support the production of these reviews [ 15 ], but currently the SR Toolbox does not contain any specific guidance or software tools relating to living systematic reviews. If software tools and guidelines become available for living systematic reviews, we will consider adding this review category to the Toolbox in the future.

More generally, the mapping exercise and subsequent analysis have highlighted some areas for further research and tool production. Tools and guidance to support reviews other than systematic reviews of intervention effectiveness may be needed, particularly for reviews of reviews and qualitative reviews. Additionally, there are very few tools or guidelines relating to stakeholder engagement in the review production process. While general guidance on how to report patient and public involvement in research exists in the form of GRIPP2 [ 16 ], and the ACTIVE framework has been developed to describe stakeholder involvement in systematic reviews [ 17 ], there are currently few other frameworks or tools specifically designed to help researchers undertaking evidence syntheses to involve wider stakeholders in the process.

The updated version of the SR Toolbox is designed to be an easily-navigable interface to aid researchers in finding guidance and software tools to help conduct varying forms of evidence synthesis, informed by the evolution in evidence synthesis methodologies since its inception. Our analysis of the contents of the SR Toolbox has revealed that there are specific review families and stages of the review process that are currently well-served by guidance and software but that gaps remain surrounding others. Further investigation into these gaps may help researchers to conduct other types of review in future.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

DTA: Diagnostic test accuracy

MEDLINE: Medical Literature Analysis and Retrieval System Online

PHP: Hypertext preprocessor

SQL: Structured Query Language

SR: Systematic review

Marshall C, Sutton A, O'Keefe H, Johnson E. The Systematic Review Toolbox. 2022. Available from: http://www.systematicreviewtools.com/ . Accessed Feb 2022.

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, et al. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. 2016. Available from: https://www.integrate-hta.eu/wp-content/uploads/2016/02/Guidance-on-choosing-qualitative-evidence-synthesis-methods-for-use-in-HTA-of-complex-interventions.pdf . Accessed Feb 2022.

Cook CN, Nichols SJ, Webb JA, Fuller RA, Richards RM. Simplifying the selection of evidence synthesis methods to inform environmental decisions: a guide for decision makers and scientists. Biol Conserv. 2017;213:135–45.


Munn Z, Stern C, Aromataris E, et al. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol. 2018;18(5). https://doi.org/10.1186/s12874-017-0468-4 .

Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Inf Libr J. 2019;36:202–22.

Arno A, Elliott J, Wallace B, Turner T, Thomas J. The views of health guideline developers on the use of automation in health evidence synthesis. BMC Syst Rev. 2021;10(16).

Tricco AC, Lillie E, Zarin W, O'Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(15).

Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. BMC Syst Rev. 2016;5(28).

Snilstveit B, Vojtkova M, Bhavsar A, Stevenson J, Gaarder M. Evidence & gap maps: a tool for promoting evidence informed policy and strategic research agendas. J Clin Epidemiol. 2016;79:120–9.


Petropoulou M, Nikolakopoulou A, Veroniki A-A, Rios P, Vafaei A, Zarin W, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol. 2017;82:20–8.

Westby MJ, Dumville JC, Stubbs N, Norman G, Wong JKF, Cullum N, et al. Protease activity as a prognostic factor for wound healing in venous leg ulcers. Cochrane Database Syst Rev. 2018;(9):Art. No. CD012841. https://doi.org/10.1002/14651858.CD012841.pub2 .

Richter B, Hemmingsen B, Metzendorf MI, Takwoingi Y. Development of type 2 diabetes mellitus in people with intermediate hyperglycaemia. Cochrane Database Syst Rev. 2018;(10).

Damen JAAG, Hooft L. The increasing need for systematic reviews of prognosis studies: strategies to facilitate review production and improve quality of primary research. Diagnostic and Prognostic Research. 2019;3(2).

Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JPT, Mavergames C, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014;11(2):e1001603.


Millard T, Synnot A, Elliott J, Green S, McDonald S, Turner T. Feasibility and acceptability of living systematic reviews: results from a mixed-methods evaluation. BMC Syst Rev. 2019;8(325).

Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453.


Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, et al. Development of the ACTIVE framework to describe stakeholder involvement in systematic reviews. J Health Serv Res Policy. 2019;24(4):245–55.


Acknowledgements

We did not receive any funding for this work.

Author information

Authors and Affiliations

Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK

Eugenie Evelynne Johnson & Hannah O’Keefe

NIHR Innovation Observatory, Newcastle University, Newcastle upon Tyne, UK

School of Health and Related Research (ScHARR), The University of Sheffield, Sheffield, UK

Anthea Sutton

York Health Economics Consortium, University of York, York, UK

Christopher Marshall


Contributions

EEJ undertook the initial mapping of existing tools to new domains and contributed to writing and editing the manuscript. CM transferred the initial mapping into database format, undertook the design and backend and frontend development of the new SR Toolbox, and helped in editing the manuscript. AS helped in editing the manuscript. HOK assisted with the initial mapping of existing tools, building the Access database, and editing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Eugenie Evelynne Johnson .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Material.

Eligibility criteria for SR Toolbox.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Johnson, E.E., O’Keefe, H., Sutton, A. et al. The Systematic Review Toolbox: keeping up to date with tools to support evidence synthesis. Syst Rev 11 , 258 (2022). https://doi.org/10.1186/s13643-022-02122-z


Received : 01 July 2022

Accepted : 05 November 2022

Published : 01 December 2022

DOI : https://doi.org/10.1186/s13643-022-02122-z


  • Evidence synthesis

Systematic Reviews

ISSN: 2046-4053



Literature Review Tips & Tools

  • Tips & Examples

Organizational Tools

Tools for Systematic Reviews

  • Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
  • Coggle Another free online mindmapping tool.
  • Organization & Structure tips from Purdue University Online Writing Lab
  • Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.
  • Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. "
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
  • PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided) more... less... Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
  • Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
  • Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). UMass Amherst doesn't subscribe, but Covidence offers a free trial for 1 review of no more than 500 records. It is also set up for researchers to pay for each review.
  • PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites registers systematic reviews of human studies and systematic reviews of animal studies."
  • Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
  • Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
  • Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
  • SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University.
  • RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials
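Several of the tools above, the PRISMA Flow Diagram Generator in particular, work from simple counts of records at each stage of the review. As a minimal sketch, with hypothetical figures and names (this is not the generator's actual CSV template), the arithmetic behind a PRISMA 2020 flow diagram looks like this:

```python
# Illustrative tally of the record counts that feed a PRISMA 2020 flow
# diagram. All figures and names below are hypothetical examples, not the
# CSV template used by the PRISMA Flow Diagram Generator.
records_by_database = {"Database A": 412, "Database B": 388, "Database C": 525}

identified = sum(records_by_database.values())              # 1325 records found
duplicates_removed = 310
screened = identified - duplicates_removed                  # 1015 titles/abstracts screened
excluded_on_title_abstract = 840
full_text_assessed = screened - excluded_on_title_abstract  # 175 full texts read
full_text_excluded = 131
included = full_text_assessed - full_text_excluded          # 44 studies in the review

print(f"Identified: {identified}, screened: {screened}, "
      f"assessed: {full_text_assessed}, included: {included}")
```

Screening tools such as Rayyan, Covidence, or Abstrackr track these counts as you work; the flow diagram generator then turns them into a publication-ready figure.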
  • Last Updated: Jul 30, 2024 9:23 AM
  • URL: https://guides.library.umass.edu/litreviews

© 2022 University of Massachusetts Amherst

University of Texas Libraries

Literature Reviews

Steps in the literature review process.

  • What is a literature review?
  • Define your research question
  • Determine inclusion and exclusion criteria
  • Choose databases and search
  • Review Results
  • Synthesize Results
  • Analyze Results
  • Librarian Support
  • Artificial Intelligence (AI) Tools
  • You may need to do some exploratory searching of the literature to get a sense of scope and to determine whether you need to narrow or broaden your focus
  • Identify databases that provide the most relevant sources, and identify relevant terms (controlled vocabularies) to add to your search strategy
  • Finalize your research question
  • Think about relevant dates, geographies (and languages), methods, and conflicting points of view
  • Conduct searches in the published literature via the identified databases
  • Check to see if this topic has been covered in other disciplines' databases
  • Examine the citations of on-point articles for keywords, authors, and previous research (via references) and cited reference searching.
  • Save your search results in a citation management tool (such as Zotero, Mendeley or EndNote)
  • De-duplicate your search results
  • Make sure that you've found the seminal pieces -- they have been cited many times, and their work is considered foundational 
  • Check with your professor or a librarian to make sure your search has been comprehensive
  • Evaluate the strengths and weaknesses of individual sources and evaluate for bias, methodologies, and thoroughness
  • Group your results into an organizational structure that will support why your research needs to be done, or that provides the answer to your research question
  • Develop your conclusions
  • Are there gaps in the literature?
  • Where has significant research taken place, and who has done it?
  • Is there consensus or debate on this topic?
  • Which methodological approaches work best?
  • For example: Background, Current Practices, Critics and Proponents, Where/How this study will fit in 
  • Organize your citations and focus on your research question and pertinent studies
  • Compile your bibliography

Note: The first four steps are the best points at which to contact a librarian. Your librarian can help you determine the best databases to use for your topic, assess scope, and formulate a search strategy.
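The de-duplication step in the list above is usually handled by a citation manager, but the underlying idea is easy to sketch. A minimal illustration, assuming each exported record is a dict with hypothetical "title" and "doi" fields:

```python
# Minimal sketch of de-duplicating search results pooled from several
# databases. Field names ("title", "doi") are hypothetical; in practice a
# citation manager such as Zotero, Mendeley or EndNote does this for you.
import re

def record_key(record):
    """Prefer a normalised DOI; fall back to a punctuation-insensitive title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return ("title", title)

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = record_key(rec)
        if key not in seen:  # keep only the first copy of each record
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"title": "Systematic Reviews in Food Science", "doi": "10.1000/XYZ1"},
    {"title": "Systematic reviews in food science.", "doi": "10.1000/xyz1"},
    {"title": "A Different Paper", "doi": ""},
]
print(len(deduplicate(hits)))  # 2 unique records remain
```

Matching on a normalised DOI first, and only then on a cleaned-up title, mirrors what citation managers do: DOIs are the most reliable key, while titles vary in capitalisation and punctuation across databases.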

Video Tutorials about Literature Reviews

This 4.5-minute video from Academic Education Materials has a Creative Commons License and a British narrator.


  • Last Updated: Aug 20, 2024 1:59 PM
  • URL: https://guides.lib.utexas.edu/literaturereviews


An SLR-tool: search process in practice: a tool to conduct and manage systematic literature review (SLR)


Cited By

  • Wu, C., Chakravorti, T., Carroll, J., & Rajtmajer, S. (2024). Integrating measures of replicability into scholarly search: Challenges and opportunities. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3643043
  • Sina, L., Secco, C., Blazevic, M., & Nazemi, K. (2024). Guided Visual Analytics: A Visual Analytics Guidance Approach for Systematic Reviews in Research. Artificial Intelligence and Visualization: Advancing Visual Knowledge Discovery, 319–343. https://doi.org/10.1007/978-3-031-46549-9_11
  • De Felice, F., Petrillo, A., Iovine, G., Salzano, C., & Baffo, I. (2023). How Does the Metaverse Shape Education? A Systematic Literature Review. Applied Sciences, 13(9), 5682. https://doi.org/10.3390/app13095682

Index Terms

Information systems; Information retrieval; Information systems applications; Software and its engineering; Software creation and management; Software post-development issues; Software verification and validation; Software notations and tools; Software configuration management and version control systems

Recommendations

Sesra: a web-based automated tool to support the systematic literature review process.

Systematic Literature Review (SLR) is a key tool for evidence-based practice as it combines results from multiple studies of a specific topic of research. Due to its characteristics, it is a time-consuming, hard process that requires a properly documented ...

Decision support tools for SLR search string construction

Systematic literature reviews (SLRs) have gained popularity in recent years as a way of providing the state of the art of previous research. As part of the SLR tasks, devising the search strategy and particularly finding the right keywords to be ...

Guidelines for snowballing in systematic literature studies and a replication in software engineering

Background: Systematic literature studies have become common in software engineering, and hence it is important to understand how to conduct them efficiently and reliably.

Objective: This paper presents guidelines for conducting literature reviews using ...

Information

Published in

  • General Chairs: North Carolina State University; KAIST, South Korea
  • Sponsor: SIGSOFT (ACM Special Interest Group on Software Engineering)
  • In cooperation with KIISE (Korean Institute of Information Scientists and Engineers)
  • Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

  • systematic literature review
  • Demonstration

Metrics

  • 7 total citations
  • 327 total downloads (68 in the last 12 months; 3 in the last 6 weeks)
  • Yahaya, H., & Nadarajah, G. (2023). Determining key factors influencing SMEs’ performance: A systematic literature review and experts’ verification. Cogent Business & Management, 10(3). https://doi.org/10.1080/23311975.2023.2251195
  • Jabar, T., & Mahinderjit Singh, M. (2022). Exploration of Mobile Device Behavior for Mitigating Advanced Persistent Threats (APT): A Systematic Literature Review and Conceptual Framework. Sensors, 22(13), 4662. https://doi.org/10.3390/s22134662
  • Duzen, Z., Riveni, M., & Aktas, M. (2022). Misinformation Detection in Social Networks: A Systematic Literature Review. Computational Science and Its Applications – ICCSA 2022 Workshops, 57–74. https://doi.org/10.1007/978-3-031-10545-6_5
  • Bahaa, A., Abdelaziz, A., Sayed, A., Elfangary, L., & Fahmy, H. (2021). Monitoring Real Time Security Attacks for IoT Systems Using DevSecOps: A Systematic Literature Review. Information, 12(4), 154. https://doi.org/10.3390/info12040154
  • Napoleao, B., Petrillo, F., & Halle, S. (2021). Automated Support for Searching and Selecting Evidence in Software Engineering: A Cross-domain Systematic Mapping. 2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 45–53. https://doi.org/10.1109/SEAA53835.2021.00015


How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions

  • Review Paper
  • Open access
  • Published: 12 May 2023
  • Volume 17 , pages 1899–1933, ( 2023 )


  • Philipp C. Sauer   ORCID: orcid.org/0000-0002-1823-0723 1 &
  • Stefan Seuring   ORCID: orcid.org/0000-0003-4204-9948 2  

29k Accesses • 64 Citations • 6 Altmetric

Systematic literature reviews (SLRs) have become a standard tool in many fields of management research but are often considerably less stringently presented than other pieces of research. The resulting lack of replicability of the research and conclusions has spurred a vital debate on the SLR process, but related guidance is scattered across a number of core references and is overly centered on the design and conduct of the SLR, while failing to guide researchers in crafting and presenting their findings in an impactful way. This paper offers an integrative review of the widely applied and most recent SLR guidelines in the management domain. The paper adopts a well-established six-step SLR process and refines it by sub-dividing the steps into 14 distinct decisions: (1) from the research question, via (2) characteristics of the primary studies, (3) to retrieving a sample of relevant literature, which is then (4) selected and (5) synthesized so that, finally (6), the results can be reported. Guided by these steps and decisions, prior SLR guidelines are critically reviewed, gaps are identified, and a synthesis is offered. This synthesis elaborates mainly on the gaps while pointing the reader toward the available guidelines. The paper thereby avoids reproducing existing guidance but critically enriches it. The 6 steps and 14 decisions provide methodological, theoretical, and practical guidelines along the SLR process, exemplifying them via best-practice examples and revealing their temporal sequence and main interrelations. The paper guides researchers in the process of designing, executing, and publishing a theory-based and impact-oriented SLR.

Similar content being viewed by others

  • The burgeoning role of literature review articles in management research: an introduction and outlook
  • On being ‘systematic’ in literature reviews
  • On being ‘systematic’ in literature reviews in IS

1 Introduction

The application of systematic or structured literature reviews (SLRs) has developed into an established approach in the management domain (Kraus et al. 2020 ), with 90% of management-related SLRs published within the last 10 years (Clark et al. 2021 ). Such reviews help to condense knowledge in the field and point to future research directions, thereby enabling theory development (Fink 2010 ; Koufteros et al. 2018 ). SLRs have become an established method by now (e.g., Durach et al. 2017 ; Koufteros et al. 2018 ). However, many SLR authors struggle to efficiently synthesize and apply review protocols and justify their decisions throughout the review process (Paul et al. 2021 ) since only a few studies address and explain the respective research process and the decisions to be taken in this process. Moreover, the available guidelines do not form a coherent body of literature but focus on the different details of an SLR, while a comprehensive and detailed SLR process model is lacking. For example, Seuring and Gold ( 2012 ) provide some insights into the overall process, focusing on content analysis for data analysis without covering the practicalities of the research process in detail. Similarly, Durach et al. ( 2017 ) address SLRs from a paradigmatic perspective, offering a more foundational view covering ontological and epistemological positions. Durach et al. ( 2017 ) emphasize the philosophy of science foundations of an SLR. Although somewhat similar guidelines for SLRs might be found in the wider body of literature (Denyer and Tranfield 2009 ; Fink 2010 ; Snyder 2019 ), they often take a particular focus and are less geared toward explaining and reflecting on the single choices being made during the research process. The current body of SLR guidelines leaves it to the reader to find the right links among the guidelines and to justify their inconsistencies. 
This is critical since a vast number of SLRs are conducted by early-stage researchers who likely struggle to synthesize the existing guidance and best practices (Fisch and Block 2018 ; Kraus et al. 2020 ), leading to the frustration of authors, reviewers, editors, and readers alike.

Filling these gaps is critical in our eyes since researchers conducting literature reviews form the foundation of any kind of further analysis to position their research into the respective field (Fink 2010 ). So-called “systematic literature reviews” (e.g., Davis and Crombie 2001 ; Denyer and Tranfield 2009 ; Durach et al. 2017 ) or “structured literature reviews” (e.g., Koufteros et al. 2018 ; Miemczyk et al. 2012 ) differ from nonsystematic literature reviews in that the analysis of a certain body of literature becomes a means in itself (Kraus et al. 2020 ; Seuring et al. 2021 ). Although two different terms are used for this approach, the related studies refer to the same core methodological references that are also cited in this paper. Therefore, we see them as identical and abbreviate them as SLR.

There are several guidelines on such reviews already, which have been developed outside the management area (e.g. Fink 2010 ) or with a particular focus on one management domain (e.g., Kraus et al. 2020 ). SLRs aim at capturing the content of the field at a point in time but should also aim at informing future research (Denyer and Tranfield 2009 ), making follow-up research more efficient and productive (Kraus et al. 2021 ). Such standalone literature reviews would and should also prepare subsequent empirical or modeling research, but usually, they require far more effort and time (Fisch and Block 2018 ; Lim et al. 2022 ). To achieve this preparation, SLRs can essentially a) describe the state of the literature, b) test a hypothesis based on the available literature, c) extend the literature, and d) critique the literature (Xiao and Watson 2019 ). Beyond guiding the next incremental step in research, SLRs “may challenge established assumptions and norms of a given field or topic, recognize critical problems and factual errors, and stimulate future scientific conversations around that topic” (Kraus et al. 2022 , p. 2578). Moreover, they have the power to answer research questions that are beyond the scope of individual empirical or modeling studies (Snyder 2019 ) and to build, elaborate, and test theories beyond this single study scope (Seuring et al. 2021 ). These contributions of an SLR may be highly influential and therefore underline the need for high-quality planning, execution, and reporting of their process and details.

Regardless of the individual aims of standalone SLRs, their numbers have exponentially risen in the last two decades (Kraus et al. 2022 ) and almost all PhD or large research project proposals in the management domain include such a standalone SLR to build a solid foundation for their subsequent work packages. Standalone SLRs have thus become a key part of management research (Kraus et al. 2021 ; Seuring et al. 2021 ), which is also underlined by the fact that there are journals and special issues exclusively accepting standalone SLRs (Kraus et al. 2022 ; Lim et al. 2022 ).

However, SLRs require a commitment that is often comparable to an additional research process or project. Hence, SLRs should not be taken as a quick solution, as a simplistic, descriptive approach would usually not yield a publishable paper (see also Denyer and Tranfield 2009 ; Kraus et al. 2020 ).

Furthermore, as with other research techniques, SLRs are based on the rigorous application of rules and procedures, as well as on ensuring the validity and reliability of the method (Fisch and Block 2018 ; Seuring et al. 2021 ). In effect, there is a need to ensure “the same level of rigour to reviewing research evidence as should be used in producing that research evidence in the first place” (Davis and Crombie 2001 , p.1). This rigor holds for all steps of the research process, such as establishing the research question, collecting data, analyzing it, and making sense of the findings (Durach et al. 2017 ; Fink 2010 ; Seuring and Gold 2012 ). However, reporting practices show a high degree of diversity: some diversity would be justified, but many papers do not report the full details of the research process. This lack of detail contrasts with an SLR’s aim of creating a valid map of the currently available research in the reviewed field, as critical information on the review’s completeness and potential reviewer biases cannot be judged by the reader or reviewer. This further impedes later replications or extensions of such reviews, which could provide longitudinal evidence of the development of a field (Denyer and Tranfield 2009 ; Durach et al. 2017 ). Against this observation, this paper addresses the following question:

Which decisions need to be made in an SLR process, and what practical guidelines can be put forward for making these decisions?

Answering this question, the key contributions of this paper are fourfold: (1) identifying the gaps in existing SLR guidelines, (2) refining the SLR process model by Durach et al. ( 2017 ) through 14 decisions, (3) synthesizing and enriching guidelines for these decisions, exemplifying the key decisions by means of best practice SLRs, and (4) presenting and discussing a refined SLR process model.

In some cases, we point to examples from operations and supply chain management. However, they illustrate the purposes discussed in the respective sections. We carefully checked that the arguments held for all fields of management-related research, and multiple examples from other fields of management were also included.

2 Identification of the need for an enriched process model, including a set of sequential decisions and their interrelations

In line with the exponential increase in SLR papers (Kraus et al. 2022 ), multiple SLR guidelines have recently been published. Since 2020, we have found a total of 10 papers offering guidelines on SLRs and other reviews for the field of management in general or some of its sub-fields. These guidelines are of double interest to this paper since we aim to complement them to fill the gap identified in the introduction while minimizing the doubling of efforts. Table 1 lists the 10 most recent guidelines and highlights their characteristics, research objectives, contributions, and how our paper aims to complement these previous contributions.

The sheer number and diversity of guideline papers, as well as the relevance expressed in them, underline the need for a comprehensive and exhaustive process model. At the same time, the guidelines take specific foci on, for example, updating earlier guidelines to new technological potentials (Kraus et al. 2020 ), clarifying the foundational elements of SLRs (Kraus et al. 2022 ) and proposing a review protocol (Paul et al. 2021 ) or the application and development of theory in SLRs (Seuring et al. 2021 ). Each of these foci fills an entire paper, while the authors acknowledge that much more needs to be considered in an SLR. Working through these most recent guidelines, it becomes obvious that the common paper formats in the management domain create a tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of individual process steps.

Our analysis in Table 1 evidences that there are a number of rich contributions on aspect b), while the aspect a) of SLR process models has not received the same attention despite the substantial confusion of authors toward them (Paul et al. 2021 ). In fact, only two of the most recent guidelines approach SLR process models. First, Kraus et al. ( 2020 ) incrementally extended the 20-year-old Tranfield et al. ( 2003 ) three-stage model into four stages. A little later, Paul et al. ( 2021 ) proposed a three-stage (including six sub-stages) SPAR-4-SLR review protocol. It integrates the PRISMA reporting items (Moher et al. 2009 ; Page et al. 2021 ) that originate from clinical research to define 14 actions stating what items an SLR in management needs to report for reasons of validity, reliability, and replicability. Almost naturally, these 14 reporting-oriented actions mainly relate to the first SLR stage of “assembling the literature,” which accounts for nine of the 14 actions. Since this protocol is published in a special issue editorial, its presentation and elaboration are somewhat limited by the already mentioned word count limit. Nevertheless, the SPAR-4-SLR protocol provides a very useful checklist for researchers that enables them to include all data required to document the SLR and to avoid confusion from editors, reviewers, and readers regarding SLR characteristics.

Beyond Table 1 , Durach et al. ( 2017 ) synthesized six common SLR “steps” that differ only marginally in the delimitation of one step to another from the sub-stages of the previously mentioned SLR processes. In addition, Snyder ( 2019 ) proposed a process comprising four “phases” that take more of a bird’s perspective in addressing (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Moreover, Xiao and Watson ( 2019 ) proposed only three “stages” of (1) planning, (2) conducting, and (3) reporting the review that combines the previously mentioned conduct and the analysis and defines eight steps within them. Much in line with the other process models, the final reporting stage contains only one of the eight steps, leaving the reader somewhat alone in how to effectively craft a manuscript that contributes to the further development of the field.

In effect, the mentioned SLR processes differ only marginally, while the systematic nature of actions in the SPAR-4-SLR protocol (Paul et al. 2021 ) can be seen as a reporting must-have within any of the mentioned SLR processes. The similarity of the SLR processes is, however, also evident in the fact that they leave open how the SLR analysis can be executed, enriched, and reflected to make a contribution to the reviewed field. In contrast, this aspect is richly described in the other guidelines that do not offer an SLR process, leading us again toward the tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of each process step.

To help (prospective) SLR authors successfully navigate this tension of existing guidelines, it is thus the ambition of this paper to adopt a comprehensive SLR process model along which an SLR project can be planned, executed, and written up in a coherent way. To enable this coherence, 14 distinct decisions are defined, reflected, and interlinked, which have to be taken across the different steps of the SLR process. At the same time, our process model aims to actively direct researchers to the best practices, tips, and guidance that previous guidelines have provided for individual decisions. We aim to achieve this by means of an integrative review of the relevant SLR guidelines, as outlined in the following section.

3 Methodology: an integrative literature review of guidelines for systematic literature reviews in management

It might seem intuitive to contribute to the debate on the “gold standard” of systematic literature reviews (Davis et al. 2014 ) by conducting a systematic review ourselves. However, there are different types of reviews aiming for distinctive contributions. Snyder ( 2019 ) distinguished between a) systematic, b) semi-systematic, and c) integrative (or critical) reviews, which aim for i) (mostly quantitative) synthesis and comparison of prior (primary) evidence, ii) an overview of the development of a field over time, and iii) a critique and synthesis of prior perspectives to reconceptualize or advance them. Each review team needs to position itself in such a typology of reviews to define the aims and scope of the review. To do so and structure the related research process, we adopted the four generic steps for an (integrative) literature review by Snyder ( 2019 )—(1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review—on which we report in the remainder of this section. Since the last step is a very practical one that, for example, asks, “Is the contribution of the review clearly communicated?” (Snyder 2019 ), we will focus on the presentation of the method applied to the initial three steps:

(1) Regarding the design, we see the need for this study emerging from our experience in reviewing SLR manuscripts, supervising PhD students who, almost by default, need to prepare an SLR, and recurring discussions on certain decisions in the process of both. These discussions regularly left some blank or blurry spaces (see Table 1 ) that induced substantial uncertainty regarding critical decisions in the SLR process (Paul et al. 2021 ). To address this gap, we aim to synthesize prior guidance and critically enrich it, thus adopting an integrative approach for reviewing existing SLR guidance in the management domain (Snyder 2019 ).

(2) To conduct the review, we started collecting the literature that provided guidance on the individual SLR parts. We built on a sample of 13 regularly cited or very recent papers in the management domain. We started with core articles that we successfully used to publish SLRs in top-tier OSCM journals, such as Tranfield et al. ( 2003 ) and Durach et al. ( 2017 ), and we checked their references and papers that cited these publications. The search focus was defined by the following criteria: the articles needed to a) provide original methodological guidance for SLRs by providing new aspects of the guideline or synthesizing existing ones into more valid guidelines and b) focus on the management domain. Building on the nature of a critical or integrative review that does not require a full or representative sample (Snyder 2019 ), we limited the sample to the papers displayed in Table 2 that built the core of the currently applied SLR guidelines. In effect, we found 11 technical papers and two SLRs of SLRs (Carter and Washispack 2018 ; Seuring and Gold 2012 ). From the latter, we mainly analyzed the discussion and conclusion parts that explicitly developed guidance on conducting SLRs.

(3) For analyzing these papers, we first adopted the six-step SLR process proposed by Durach et al. ( 2017 , p.70), which they define as applicable to any “field, discipline or philosophical perspective”. The contrast between the six-step SLR process used for the analysis and the four-step process applied by ourselves may seem surprising but is justified by the use of an integrative approach. This approach differs mainly in retrieving and selecting pertinent literature that is key to SLRs and thus needs to be part of the analysis framework.

While deductively coding the sample papers against Durach et al.’s ( 2017 ) guidance in the six steps, we inductively built a set of 14 decisions presented in the right columns of Table 2 that are required to be made in any SLR. These decisions built a second and more detailed level of analysis, for which the single guidelines were coded as giving low, medium, or high levels of detail (see Table 3 ), which helped us identify the gaps in the current guidance papers and led our way in presenting, critically discussing, and enriching the literature. In effect, we see that almost all guidelines touch on the same issues and try to give a comprehensive overview. However, this results in multiple guidelines that all lack the space to go into detail, while only a few guidelines focus on filling a gap in the process. It is our ambition with this analysis to identify the gaps in the guidelines, thereby identifying a precise need for refinement, and to offer a first step into this refinement. Adopting advice from the literature sample, the coding was conducted by the entire author team (Snyder 2019 ; Tranfield et al. 2003 ), including discursive alignments of interpretation (Seuring and Gold 2012 ). This enabled a certain reliability and validity of the analysis by reducing the within-study and expectancy bias (Durach et al. 2017 ), while the replicability was supported by reporting the review sample and the coding results in Table 3 (Carter and Washispack 2018 ).
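The coding procedure just described relies on discursive alignment among coders rather than a reported agreement statistic. As a purely hypothetical illustration (neither the values nor the measure come from the paper), simple percent agreement between two coders assigning low/medium/high detail levels could be computed like this:

```python
# Purely hypothetical illustration: measuring how often two coders agree
# when assigning "low"/"medium"/"high" levels of detail to guideline papers.
# Neither the values nor the percent-agreement measure are from the paper.
coder_a = ["high", "low", "medium", "low", "high", "medium", "low"]
coder_b = ["high", "low", "low",    "low", "high", "medium", "medium"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))  # 5 of 7 codes agree
agreement = matches / len(coder_a)

print(f"{agreement:.0%} agreement across {len(coder_a)} coded items")
```

The disagreements (the "medium" vs. "low" cases here) are exactly the items that a discursive alignment of interpretation would then resolve.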

(4) For the writing of the review, we only pointed to the unusual structure of presenting the method without a theory section and then the findings in the following section. However, this was motivated by the nature of the integrative review so that the review findings at the same time represent the “state of the art,” “literature review,” or “conceptualization” sections of a paper.

4 Findings of the integrative review: presentation, critical discussion, and enrichment of prior guidance

4.1 The overall research process for a systematic literature review

Even within our sample of only 13 guidelines, there are four distinct suggestions for structuring the SLR process. One of the earliest SLR process models was proposed by Tranfield et al. (2003), encompassing the three stages of (1) planning the review, (2) conducting a review, and (3) reporting and dissemination. Snyder (2019) proposed the four steps employed in this study: (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Borrowing from content analysis guidelines, Seuring and Gold (2012) defined four steps: (1) material collection, (2) descriptive analysis, (3) category selection, and (4) material evaluation. Most recently, Kraus et al. (2020) proposed four steps: (1) planning the review, (2) identifying and evaluating studies, (3) extracting and synthesizing data, and (4) disseminating the review findings. Most comprehensively, Durach et al. (2017) condensed prior process models into their generic six steps for an SLR. Adding the process models reviewed by Snyder (2019) and Seuring and Gold (2012) to the four papers covered by Durach et al.'s (2017) SLR process review, we support their conclusion that the six steps defined are generally applicable. Consequently, these six steps form the backbone of our coding scheme, as shown in the left column of Table 2 and described in the middle column.

As stated in Sect. 3, we synthesized the review papers against these six steps but found that the papers took substantially different foci, providing rich details for some steps while largely bypassing others. To capture this heterogeneity and better operationalize the SLR process, we inductively introduced the right column, identifying 14 decisions to be made. These decisions are all elaborated in the reviewed papers, but to substantially different extents, as the detailed coding results in Table 3 underline.

Mapping Table 3 for potential gaps in the existing guidelines, we identified six decisions for which only low- to medium-level details were given, while high-detail elaboration was missing. These six decisions, illustrated in Fig. 1, belong to three steps: 1: defining the research question, 5: synthesizing the literature, and 6: reporting the results. This result underscores our critique of currently unbalanced guidance, which is, on the one hand, detailed on determining the required characteristics of primary studies (step 2), retrieving a sample of potentially relevant literature (step 3), and selecting the pertinent literature (step 4). On the other hand, authors, especially PhD students, are left without substantial guidance on the steps critical to publication. Instead, they are called “to go one step further … and derive meaningful conclusions” (Fisch and Block 2018, p. 105) without further operationalization of how this can be achieved; consider, for example, how “meet the editor” conference sessions regularly cause frustration among PhD students when editors call for “new,” “bold,” and “relevant” research. Filling the gaps in the six decisions with best practice examples and practical experience is the main focus of this study's contribution. The other eight decisions are synthesized with references to the guidelines that, in our eyes, are most helpful and relevant for the respective step.

Fig. 1 The 6 steps and 14 decisions of the SLR process

4.2 Step 1: defining the research question

When initiating a research project, researchers make three key decisions.

Decision 1 considers the essential task of establishing a relevant and timely research question. Despite the importance of this decision, which determines large parts of the further decisions (Snyder 2019; Tranfield et al. 2003), we only find scattered guidance in the literature. Hence, how can a research topic be specified to allow a strong literature review that is neither too narrow nor too broad? The latter is the danger in meta-reviews (i.e., reviews of reviews) (Aguinis et al. 2020; Carter and Washispack 2018; Kache and Seuring 2014): even though the method would be robust, the findings would not be novel. In line with Carter and Washispack (2018), there should always be room for new reviews, yet over time, they must move from a descriptive overview of a field further into depth and provide detailed analyses of constructs. Clark et al. (2021) provided a detailed but very specific reflection on how they crafted a research question for an SLR and noted that revisiting the research question multiple times throughout the SLR process helps to coherently and efficiently move forward with the research. More generically, Kraus et al. (2020) listed six key contributions of an SLR that can guide the definition of the research question. Finally, Snyder (2019) suggested moving into more detail from existing SLRs and specified two main avenues for crafting an SLR research question: either investigating the relationship among multiple effects or the effect of (a) specific variable(s), or mapping the evidence regarding a certain research area. For the latter, we see three possible alternative approaches, starting with a focus on certain industries. Examples are analyses of the food industry (Beske et al. 2014), retailing (Wiese et al. 2012), mining and minerals (Sauer and Seuring 2017), or perishable product supply chains (Lusiantoro et al. 2018) and traceability using the example of the apparel industry (Garcia-Torres et al. 2019).
A second opportunity would be to assess the status of research in a geographical area that constitutes an interesting context from a research perspective, such as sustainable supply chain management (SSCM) in Latin America (Fritz and Silva 2018); yet this has to be justified explicitly, so that the geographical focus is not taken as the reason per se (e.g., Crane et al. 2016). A third variant addresses emerging issues, such as SCM in a base-of-the-pyramid setting (Khalid and Seuring 2019), the use of blockchain technology (Wang et al. 2019), or digital transformation (Hanelt et al. 2021). These approaches limit the reviewed field to enable a more contextualized analysis in which the novelty, continued relevance, or unjustified underrepresentation of the context can be used to specify a research gap and related research question(s). This also impacts the following decisions, as shown below.

Decision 2 concerns the choice of a theoretical approach (i.e., the adoption of an inductive, abductive, or deductive approach) to theory building through the literature review. The review of previous guidance on this delivers an interesting observation. On the one hand, there are early elaborations on systematic reviews, realist synthesis, meta-synthesis, and meta-analysis by Tranfield et al. (2003) that borrow from the origins of systematic reviews in medical research. On the other hand, recent management-related guidelines largely neglect details of the related decisions but point out that SLRs are a suitable tool for theory building (Kraus et al. 2020). Seuring et al. (2021) set out to fill this gap and provided substantial guidance on how to use theory in SLRs to advance the field. To date, the choice of a theoretical approach is only rarely made explicit, often leaving the reader puzzled about how the advancement in theory has been crafted and impeding a review's replicability (Seuring et al. 2021). Many papers still leave the related choices in the dark (e.g., Rhaiem and Amara 2021; Rojas-Córdova et al. 2022) and move directly from the introduction to the method section.

In Decision 3, researchers need to adopt a theoretical framework (Durach et al. 2017) or at least a theoretical starting point, depending on the most appropriate theoretical approach (Seuring et al. 2021). Here, we find substantial guidance by Durach et al. (2017), who underline the value of adopting a theoretical lens to investigate SCM phenomena and the literature. Moreover, the choice of a theoretical anchor enables a consistent definition and operationalization of the constructs that are used to analyze the reviewed literature (Durach et al. 2017; Seuring et al. 2021). Hence, it is beneficial to provide some upfront definitions clarifying the key terminology used in the subsequent paper, as Devece et al. (2019) do in introducing their terminology on coopetition. As a practical hint beyond the elaborations of prior guidance papers: when taking up established constructs for a deductive analysis (Decision 2), the question arises of whether these can yield interesting findings.

Here, it would be relevant to specify what kind of analysis the SLR aims for, where three approaches might be distinguished (i.e., bibliometric analysis, meta-analysis, and content analysis–based studies). Briefly distinguishing them, the core difference is how many papers can be analyzed with the respective method. Bibliometric analysis (Donthu et al. 2021) usually relies on software, such as Biblioshiny, allowing the creation of figures on citations and co-citations. These figures enable the interpretation of large datasets in which several hundred papers can be analyzed in an automated manner. This allows for distinguishing among different research clusters, thereby following a more inductive approach. This contrasts with meta-analysis (e.g., Leuschner et al. 2013), where often a comparatively smaller number of papers is analyzed (86 in the respective case) but with a high number of observations (more than 17,000). The aim is to test for statistically significant correlations among single constructs, which requires that the related constructs and items be precisely defined (i.e., a clearly deductive approach to the analysis).

Content analysis is the third instrument frequently applied to data analysis, where an inductive or deductive approach might be taken (Seuring et al. 2021). Content-based analysis (see Decision 9 in Sect. 4.6; Seuring and Gold 2012) is a labor-intensive step and can hardly be changed ex post. This also implies that only a certain number of papers can be analyzed (see Decision 7 in Sect. 4.5). It is advisable to adopt a wider set of constructs for the analysis, stemming even from multiple established frameworks, since it is difficult to predict which constructs and items might yield interesting insights. Hence, coding a more comprehensive set of items and dropping some in the process is less problematic than starting the analysis all over again for additional constructs and items. However, in the process of content analysis, such an iterative process might be required to improve the meaningfulness of the data and findings (Seuring and Gold 2012). A recent example of such an approach can be found in Khalid and Seuring (2019), who build on the conceptual frameworks for SSCM of Carter and Rogers (2008), Seuring and Müller (2008), and Pagell and Wu (2009). This allows for an in-depth analysis of how SSCM constructs are inherently referred to in base-of-the-pyramid-related research. The core criticism and limitation of such an approach is the random and subjectively biased selection of the frameworks used for the analysis.

Beyond the aforementioned SLR methods, some reviews, similar to the one used here, apply a critical review approach. This is, however, nonsystematic, and not an SLR; thus, it is beyond the scope of this paper. Interested readers can nevertheless find some guidance on critical reviews in the available literature (e.g., Kraus et al. 2022 ; Snyder 2019 ).

4.3 Step 2: determining the required characteristics of primary studies

After setting the stage for the review, it is essential to determine which literature is to be reviewed in Decision 4. This topic is discussed by almost all existing guidelines and will thus only briefly be discussed here. Durach et al. ( 2017 ) elaborated in great detail on defining strict inclusion and exclusion criteria that need to be aligned with the chosen theoretical framework. The relevant units of analysis need to be specified (often a single paper, but other approaches might be possible) along with suitable research methods, particularly if exclusively empirical studies are reviewed or if other methods are applied. Beyond that, they elaborated on potential quality criteria that should be applied. The same is considered by a number of guidelines that especially draw on medical research, in which systematic reviews aim to pool prior studies to infer findings from their total population. Here, it is essential to ensure the exclusion of poor-quality evidence that would lower the quality of the review findings (Mulrow 1987 ; Tranfield et al. 2003 ). This could be ensured by, for example, only taking papers from journals listed on the Web of Science or Scopus or journals listed in quartile 1 of Scimago ( https://www.scimagojr.com/ ), a database providing citation and reference data for journals.

The selection of relevant publication years should again follow the purpose of the study defined in Step 1. As such, there might be a justified interest in the wide coverage of publication years if a historical perspective is taken. Alternatively, more contemporary developments or the analysis of very recent issues can justify the selection of very few years of publication (e.g., Kraus et al. 2022 ). Again, it is hard to specify a certain time period covered, but if developments of a field should be analyzed, a five-year period might be a typical lower threshold. On current topics, there is often a trend of rising publishing numbers. This scenario implies the rising relevance of a topic; however, this should be treated with caution. The total number of papers published per annum has increased substantially in recent years, which might account for the recently heightened number of papers on a certain topic.
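The caveat above can be made concrete with a small calculation: relating yearly topic hits to the database's total yearly output shows whether a topic is genuinely gaining share or merely tracking overall publication growth. All numbers below are hypothetical; real counts would come from the database used for the search.

```python
# Sketch: normalizing per-year topic counts by total publication output.
# All counts are hypothetical.
topic_hits = {2019: 40, 2020: 55, 2021: 80, 2022: 110}              # hits on the topic
total_pubs = {2019: 2.0e6, 2020: 2.4e6, 2021: 3.0e6, 2022: 4.0e6}   # all papers indexed

share = {year: topic_hits[year] / total_pubs[year] for year in topic_hits}
for year in sorted(share):
    print(year, f"{share[year] * 1e6:.1f} topic papers per million publications")
# A flat per-million share would mean the apparent surge simply reflects
# overall publication growth rather than rising topic relevance.
```

Only if the normalized share rises can the trend be read as a signal of growing topic relevance.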

4.4 Step 3: retrieving a sample of potentially relevant literature

After defining the required characteristics of the literature to be reviewed, the literature needs to be retrieved based on two decisions. Decision 5 concerns suitable literature sources and databases that need to be defined. Turning to Web of Science or Scopus would be two typical options found in many of the examples mentioned already (see also detailed guidance by Paul and Criado ( 2020 ) as well as Paul et al. ( 2021 )). These databases aggregate many management journals, and a typical argument for turning to the Web of Science database is the inclusion of impact factors, as they indicate a certain minimum quality of the journal (Sauer and Seuring 2017 ). Additionally, Google Scholar is increasingly mentioned as a usable search engine, often providing higher numbers of search results than the mentioned databases (e.g., Pearce 2018 ). These results often entail duplicates of articles from multiple sources or versions of the same article, as well as articles in predatory journals (Paul et al. 2021 ). Therefore, we concur with Paul et al. ( 2021 ) who underline the quality assurance mechanisms in Web of Science and Scopus, making them preferred databases for the literature search. From a practical perspective, it needs to be mentioned that SLRs in management mainly rely on databases that are not free to use. Against this limitation, Pearce ( 2018 ) provided a list of 20 search engines that are free of charge and elaborated on their advantages and disadvantages. Due to the individual limitations of the databases, it is advisable to use a combination of them (Kraus et al. 2020 , 2022 ) and build a consolidated sample by screening the papers found for duplicates, as regularly done in SLRs.
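The consolidation of results from several databases into a duplicate-free sample, as recommended above, can be sketched as follows. The records and the crude title-based matching rule are hypothetical; real exports from Web of Science or Scopus carry richer metadata, and matching would typically be done on DOIs first.

```python
# Sketch: merging search results from two databases and removing duplicates.
# Records and the normalization rule are hypothetical illustrations.

def normalize(title):
    """Crude title normalization for duplicate matching."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

wos = [{"title": "Multi-tier supply chains: A review"},
       {"title": "Blockchain in SCM"}]
scopus = [{"title": "Multi-Tier Supply Chains: a review"},   # duplicate of a WoS hit
          {"title": "Digital transformation and SCM"}]

seen, sample = set(), []
for rec in wos + scopus:
    key = normalize(rec["title"])
    if key not in seen:           # keep only the first occurrence
        seen.add(key)
        sample.append(rec)

print(len(sample))  # consolidated, duplicate-free sample
```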

This decision also includes the choice of the types of literature to be analyzed. Typically, journal papers are selected, ensuring that the collected papers are peer-reviewed and have thus undergone an academic quality management process. Meanwhile, conference papers are usually avoided since they are often less mature and not checked for quality (e.g., Seuring et al. 2021 ). Nevertheless, for emerging topics, it might be too restrictive to consider only peer-reviewed journal articles and limit the literature to only a few references. Analyzing such rapidly emerging topics is relevant for timely and impact-oriented research and might justify the selection of different sources. Kraus et al. ( 2020 ) provided a discussion on the use of gray literature (i.e., nonacademic sources), and Sauer ( 2021 ) provided an example of a review of sustainability standards from a management perspective to derive implications for their application by managers on the one hand and for enhancing their applicability on the other hand.

Another popular way to limit the review sample is the restriction to a certain list of journals (Kraus et al. 2020; Snyder 2019). While this is sometimes favored by highly ranked journals, Carter and Washispack (2018), for example, found that many pertinent papers are not necessarily published in journals within the field. Webster and Watson (2002) quite tellingly cited a reviewer labeling the selection of top journals as an unjustified excuse for not investigating the full body of relevant literature. Both aforementioned guidelines thus discourage the restriction to particular journals, a guidance that we fully support.

However, there is an argument to be made for excluding certain lower-ranked journals. This can be done, for example, by using Scimago journal quartiles (www.scimagojr.com, last accessed 13 April 2023) and restricting the sample to journals in the first quartile (e.g., Yavaprabhas et al. 2022). Other papers (e.g., Kraus et al. 2021; Rojas-Córdova et al. 2022) use certain journal quality lists to limit their sample. However, we argue for a careful check by the authors, against the topic reviewed, of what would be included and excluded.

Decision 6 entails the definition of search terms and a search string to be applied in the database just chosen. The search terms should reflect the aims of the review and the exclusion criteria that might be derived from the unit of analysis and the theoretical framework (Durach et al. 2017 ; Snyder 2019 ). Overall, two approaches to keywords can be observed. First, some guides suggest using synonyms of the key terms of interest (e.g., Durach et al. 2017 ; Kraus et al. 2020 ) in order to build a wide baseline sample that will be condensed in the next step. This is, of course, especially helpful if multiple terms delimitate a field together or different synonymous terms are used in parallel in different fields or journals. Empirical journals in supply chain management, for example, use the term “multiple supplier tiers ” (e.g., Tachizawa and Wong 2014 ), while modeling journals in the same field label this as “multiple supplier echelons ” (e.g., Brandenburg and Rebs 2015 ). Second, in some cases, single keywords are appropriate for capturing a central aspect or construct of a field if the single keyword has a global meaning tying this field together. This approach is especially relevant to the study of relatively broad terms, such as “social media” (Lim and Rasul 2022 ). However, this might result in very high numbers of publications found and therefore requires a purposeful combination with other search criteria, such as specific journals (Kraus et al. 2021 ; Lim et al. 2021 ), publication dates, article types, research methods, or the combination with keywords covering domains to which the search is aimed to be specified.

Since SLRs are often required to move into detail or review the intersections of relevant fields, we recommend building groups of keywords (single terms or multiple synonyms) for each field to be connected, coupled via Boolean operators. To determine when a point of saturation for a keyword group is reached, one can monitor the increase in papers found in a database when adding another synonym. Once the marginal increase diminishes markedly or even drops to zero, saturation is reached (Sauer and Seuring 2017). The keywords themselves can be derived from the keyword lists of influential publications in the field, while attention should be paid to potential synonyms in neighboring fields (Carter and Washispack 2018; Durach et al. 2017; Kraus et al. 2020).
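The keyword-group logic and the saturation check described above can be sketched in a few lines of Python. The synonym groups and hit counts are hypothetical; real counts would come from querying a database such as Scopus or Web of Science.

```python
# Sketch: joining synonym groups with Boolean operators and monitoring
# saturation while adding synonyms. All terms and counts are hypothetical.

def build_query(groups):
    """OR the synonyms within each group, then AND the groups together."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in groups
    )

groups = [
    ["supply chain management", "SCM"],                        # field 1
    ["multiple supplier tiers", "multiple supplier echelons"]  # field 2
]
print(build_query(groups))

# Hypothetical cumulative hit counts after adding each further synonym:
hits = [120, 155, 162, 163]
marginal = [b - a for a, b in zip(hits, hits[1:])]
print(marginal)  # once the marginal gain nears zero, the group is saturated
```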

4.5 Step 4: selecting the pertinent literature

The inclusion and exclusion criteria (Decision 6) are typically applied in Decision 7 in a two-stage process: first to the title, abstract, and keywords of an article, and then to the full text of the remaining articles (see also Kraus et al. 2020; Snyder 2019). Beyond this, Durach et al. (2017) underlined that the pertinence of each publication regarding the units of analysis and the theoretical framework needs to be critically evaluated in this step to avoid bias in the review analysis. Moreover, Carter and Washispack (2018) requested the publication of the included and excluded sources to ensure the replicability of Steps 3 and 4. This can easily be done as an online supplement to the eventually published review article.

Nevertheless, the question remains: How many papers justify a literature review? While it is hard to specify how many papers comprise a body of literature, there might be certain thresholds for which Kraus et al. ( 2020 ) provide a useful discussion. As a rough guide, more than 50 papers would usually make a sound starting point (see also Paul and Criado 2020 ), while there are SLRs on emergent topics, such as multitier supply chain management, where 39 studies were included (Tachizawa and Wong 2014 ). An SLR on “learning from innovation failures” builds on 36 papers (Rhaiem and Amara 2021 ), which we would see as the lower threshold. However, such a low number should be an exception, and anything lower would certainly trigger the following question: Why is a review needed? Meanwhile, there are also limits on how many papers should be reviewed. While there are cases with 191 (Seuring and Müller 2008 ), 235 (Rojas-Córdova et al. 2022 ), or up to nearly 400 papers reviewed (Spens and Kovács 2006 ), these can be regarded as upper thresholds. Over time, similar topics seem to address larger datasets.

4.6 Step 5: synthesizing the literature

Before synthesizing the literature, Decision 8 considers the selection of a data extraction tool, for which we found surprisingly little guidance. Some guidance is given on the use of cloud storage to enable remote teamwork (Clark et al. 2021). Beyond this, we found that SLRs have often been compiled with marked-up and commented PDFs or printed papers, accompanied by tables (Kraus et al. 2020) or Excel sheets (see also the process tips by Clark et al. 2021). Such a sheet tabulates the single codes derived from the theoretical framework (Decision 3) against the single papers to be reviewed (Decision 7), with marks in individual cells signaling the presence of a particular code in a particular paper. While the frequency distribution of the codes is easily compiled from this data tool, the related content needs to be looked up in the papers in a tedious back-and-forth process. Beyond that, we would strongly recommend using data analysis software, such as MAXQDA or NVivo. Such programs enable the import of literature in PDF format and the automatic or manual coding of text passages, their comparison, and tabulation. Moreover, there is a permanent and editable reference from the coded text to a code. This enables a very quick compilation of content summaries or statistics for single codes and the identification of qualitative and quantitative links between codes and papers.
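The spreadsheet-style tabulation described above can be reproduced with a few lines of pandas; the papers and codes below are invented for illustration.

```python
# Sketch: a paper-by-code matrix and code frequency distribution, mirroring
# the Excel sheet described in the text. Papers and codes are hypothetical.
import pandas as pd

codings = [  # (paper, code) pairs recorded during full-text coding
    ("Paper A", "risk"), ("Paper A", "traceability"),
    ("Paper B", "risk"),
    ("Paper C", "traceability"), ("Paper C", "collaboration"),
]
df = pd.DataFrame(codings, columns=["paper", "code"])

# Paper x code matrix (1 = code present in the paper):
matrix = pd.crosstab(df["paper"], df["code"])
print(matrix)

# Frequency distribution of codes across the sample:
print(matrix.sum().sort_values(ascending=False))
```

Dedicated qualitative data analysis software goes beyond such a matrix by keeping the coded text passages attached to each cell, avoiding the back-and-forth lookup described above.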

All the mentioned data extraction or data processing tools require a license and are therefore not free of cost. While many researchers may benefit from national or institutional subscriptions to these services, others may not. As a potential alternative, Pearce (2018) proposed a set of free and open-source software (FOSS), including an elaboration on how the tools can be combined to perform an SLR. He also highlighted that both free and proprietary solutions have advantages and disadvantages that are worth weighing for those who do not have the required tools provided by their employers or other institutions they are members of. The same may apply to the literature databases used for literature acquisition in Decision 5 (Pearce 2018).

Moreover, there is a link to Step 1, Decision 3, where bibliometric reviews and meta-analyses were mentioned. These methods, which are alternatives to content analysis–based approaches, have specific demands, so specific tools would be appropriate, such as the Biblioshiny software or VOSviewer. As we will point out for all decisions, there is a high degree of interdependence among the steps and decisions made.

Decision 9 looks at conducting the data analysis, such as coding against (pre-defined) constructs, in SLRs that rely, in most cases, on content analysis. Seuring and Gold (2012) elaborated in detail on its characteristics and application in SLRs. As that paper also explains the process of qualitative content analysis in detail, repetition is avoided here, but a summary is offered. Since different ways exist to conduct a content analysis, it is even more important to explain and justify, for example, the choice of an inductive or deductive approach (see Decision 2). In several cases, analytic variables are applied on the go, so there is no theory-based introduction of the related constructs. However, to ensure the validity and replicability of the review (see Decision 11), it is necessary to explicitly define all the variables and codes used to analyze and synthesize the reviewed material (Durach et al. 2017; Seuring and Gold 2012). To build a valid framework as the SLR outcome, it is vital to ensure that the constructs used for the data analysis are sufficiently defined, mutually exclusive, and collectively exhaustive. For meta-analysis, the predefined constructs and items demand quantitative coding so that the resulting data can be analyzed using statistical software tools such as SPSS or R (e.g., Xiao and Watson 2019). Pointing to bibliometric analysis again, the respective software would be used for data analysis, yielding various figures and paper clusters, which then require interpretation (e.g., Donthu et al. 2021; Xiao and Watson 2019).

Decision 10, on conducting subsequent statistical analysis, considers follow-up analysis of the coding results. Again, this is linked to the chosen SLR method, and a bibliometric analysis will require a different statistical analysis than a content analysis–based SLR (e.g., Lim et al. 2022; Xiao and Watson 2019). Beyond the use of content analysis and the qualitative interpretation of its results, applying contingency analysis offers the opportunity to quantitatively assess the links among constructs and items. It provides insights into which items are correlated with each other without implying causality. Thus, the interpretation of the findings must explain the causality behind the correlations between the constructs and the items, based on sound reasoning and on linking the findings to theoretical arguments. For SLRs, there have recently been two kinds of applications of contingency analysis, differentiated by the unit of analysis. De Lima et al. (2021) used the entire paper as the unit of analysis, deriving correlations for two constructs that were used together in one paper. This is, of course, subject to the critique of whether the constructs really represent correlated content. Moving a level deeper, Tröster and Hiete (2018) used single text passages on one aspect, argument, or thought as the unit of analysis. Such an approach is immune to the critique raised before and can yield more valid statistical support for thematic analysis. Another recent methodological contribution employing the same contingency analysis–based approach was made by Siems et al. (2021). Their analysis employs constructs from SSCM and dynamic capabilities. Employing four subsets of data (i.e., two time periods each in the food and automotive industries), they showed that the method allows distinguishing among time frames as well as among industries.

However, the unit of analysis must be precisely explained so that the reader can comprehend it. Both examples use contingency analysis to identify under-researched topics and develop them into research directions whose formulation represents the particular aim of an SLR (Paul and Criado 2020 ; Snyder 2019 ). Other statistical tools might also be applied, such as cluster analysis. Interestingly, Brandenburg and Rebs ( 2015 ) applied both contingency and cluster analyses. However, the authors stated that the contingency analysis did not yield usable results, so they opted for cluster analysis. In effect, Brandenburg and Rebs ( 2015 ) added analytical depth to their analysis of model types in SSCM by clustering them against the main analytical categories of content analysis. In any case, the application of statistical tools needs to fit the study purpose (Decision 1) and the literature sample (Decision 7), just as in their more conventional applications (e.g., in empirical research processes).
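A minimal contingency analysis of coding results, in the spirit of the studies cited above, can be sketched with scipy; the 2×2 counts are hypothetical.

```python
# Sketch: chi-square contingency test on the co-occurrence of two coded
# constructs across papers (or text passages). The counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: construct A coded / not coded; columns: construct B coded / not coded
table = [[30, 10],
         [15, 45]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
# A small p-value indicates association, not causality: the interpretation
# must still supply the theoretical reasoning behind the correlation.
```

The choice of unit of analysis (whole papers vs. single text passages) determines how the cell counts are obtained and, as argued above, how defensible the resulting correlations are.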

Decision 11 regards the additional consideration of validity and reliability criteria and emphasizes the need to explain and justify the single steps of the research process (Seuring and Gold 2012), much in line with other types of research (Davis and Crombie 2001). This is critical to underlining the quality of the review but is often neglected in submitted manuscripts. In our review, we find rich guidance on this decision, to which we want to direct readers (see Table 3). In particular, Durach et al. (2017) provide an entire section on biases and on what needs to be considered and reported about them. Moreover, Snyder (2019) regularly reflects on these issues in her elaborations. This rich guidance elaborates on how to ensure the quality of the individual steps of the review process, such as sampling, study inclusion and exclusion, coding, and synthesizing, as well as more practical issues, including team composition and teamwork organization, which are discussed in some guidelines (e.g., Clark et al. 2021; Kraus et al. 2020). We only want to underline that the potential biases are, of course, to be seen in conjunction with Decisions 2, 3, 4, 5, 6, 7, 9, and 10. These decisions and the elaboration by Durach et al. (2017) should provide ample points of reflection that, however, many SLR manuscripts fail to address.

4.7 Step 6: reporting the results

In the final step, there are three decisions on which there is surprisingly little guidance, although reviews often fail in this critical part of the process (Kraus et al. 2020). The reviewed guidelines almost exclusively discuss the presentation of the results, while almost no guidance is given on the overall paper structure or the key content to be reported.

Consequently, the first choice to be made, in Decision 12, regards the paper structure. We suggest following the five-step logic of typical research papers (see also Fisch and Block 2018) and explain only the few points in which SLR papers differ from other papers.

(1) Introduction: While the introduction would follow a conventional logic of problem statement, research question, contribution, and outline of the paper (see also Webster and Watson 2002 ), the next parts might depend on the theoretical choices made in Decision 2.

(2) Literature review section: If a deductive logic is taken, the paper usually has a conventional flow. After the introduction, the literature review section covers the theoretical background and the choice of constructs and variables for the analysis (De Lima et al. 2021; Dieste et al. 2022). To avoid confusing this section with the review itself, it can also be labeled more closely after the reviewed object.

If an inductive approach is applied, it might be challenging to present the theoretical basis up front, as the codes emerge only from analyzing the material. In this case, the theory section might be rather short, concentrating on defining the core concepts or terms used, for example, in the keyword-based search for papers. The latter approach is exemplified by the study at hand, which presents a short review of the available literature in the introduction and the first part of the findings. However, we perform not a systematic but an integrative review, which allows for more freedom and creativity (Snyder 2019).

(3) Method section: This section should cover the steps and follow the logic presented in this paper or any of the reviewed guidelines so that the choices made during the research process are transparently disclosed (Denyer and Tranfield 2009; Paul et al. 2021; Xiao and Watson 2019). In particular, the search for papers and their selection require a sound explanation of each step taken, including the reasons for the delimitation of the final paper sample. A stage that is often not covered in sufficient detail is data analysis (Seuring and Gold 2012). This also needs to be outlined so that the reader can comprehend how sense has been made of the material collected. Overall, the demands on SLR papers are similar to those on case studies, survey papers, or almost any piece of empirical research; thus, each step of the research process needs to be comprehensively described, including Decisions 4–10. This comprehensiveness must also include addressing measures of validity and reliability (see Decision 11) or other suitable measures of rigor in the research process, since these are a critical issue in literature reviews (Durach et al. 2017). In particular, inductively conducted reviews are prone to subjective influences and thus require sound reporting of design choices and their justification.

(4) Findings: The findings typically start with a descriptive analysis of the literature covered, such as the journals, the distribution across years, or the (empirical) methods applied (Tranfield et al. 2003). For modeling-related reviews, classifying papers by the modeling approach chosen is standard, but this can often also serve as an analytic category that provides detailed insights. The descriptive analysis should be kept short, since a paper presenting only descriptive findings will not be of great interest to other researchers due to its missing contribution (Snyder 2019). Nevertheless, there are opportunities to provide interesting findings in the descriptive analysis. Beyond a mere description of the distributions of single results, such as the distribution of methods used in the sample, authors should combine analytical categories to derive more detailed insights (see also Tranfield et al. 2003). The distribution of methods used might well be combined with the years of publication to identify and characterize different phases in the development of a field of research or its maturity. Moreover, there could be value in analyzing the theories applied in the review sample (e.g., Touboulic and Walker 2015; Zhu et al. 2022) and in reflecting on the interplay of different qualitative and quantitative methods in spurring the theoretical development of the reviewed field. This could yield detailed insights into methodological as well as theoretical gaps, and we would suggest explicitly linking the findings of such analyses to the research directions that an SLR typically provides. This link could make the research directions much more tangible by giving researchers a clear indication of how to follow up on the findings, as done, for example, by Maestrini et al. (2017) or Dieste et al. (2022).
In contrast to these examples of an actionable research agenda, a typical weakness of premature SLR manuscripts is that they rather superficially call for more research on the various aspects reviewed but remain silent on how exactly this can be achieved.

We would thus like to encourage future SLR authors to systematically investigate the potential of combining two categories of descriptive analysis to lift this part of the findings to a higher level of quality, interest, and relevance. The same can, of course, be done with the thematic findings, which comprise the second part of this section.
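The suggestion to combine two descriptive categories can be sketched in a few lines of pandas. Note that the sample papers, phase boundaries, and method labels below are purely illustrative assumptions, not data from this study:

```python
import pandas as pd

# Hypothetical coding results: one row per reviewed paper (illustrative only).
sample = pd.DataFrame({
    "year":   [2008, 2012, 2015, 2018, 2019, 2021, 2021, 2022],
    "method": ["case study", "survey", "case study", "modeling",
               "survey", "modeling", "case study", "modeling"],
})

# Combine two descriptive categories: methods used per publication phase,
# which can hint at different phases in the development of a field.
sample["phase"] = pd.cut(sample["year"],
                         bins=[2007, 2014, 2019, 2022],
                         labels=["2008-2014", "2015-2019", "2020-2022"])
crosstab = pd.crosstab(sample["phase"], sample["method"])
print(crosstab)
```

A rising share of one method in the later phase, for instance, could then be discussed as a sign of growing maturity of the reviewed field.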

Moving into the thematic analysis, we have already reached Decision 13 on the presentation of the refined theoretical framework and the discussion of its contents. A first step might present the frequencies of the codes or constructs applied in the analysis, allowing the reader to understand which topics are relevant. If a rather small body of literature is analyzed, tables showing which paper has been coded for which construct might help improve the transparency of the research process, and tables or other forms of visualization can help organize the many codes soundly (see also Durach et al. 2017; Paul and Criado 2020; Webster and Watson 2002). These findings might then lead to interpretation, for which it is necessary to extract meaning from the body of literature and present it accordingly (Snyder 2019). To do so, it should go without saying that the researchers must refer back to Decisions 1, 2, and 3 taken in Step 1 and their justifications. These typically identify the research gap to be filled, but after the lengthy SLR process, authors often fail to step back from the coding results and put them into a larger perspective against the research gap defined in Decision 1 (see also Clark et al. 2021). To support this, it is certainly helpful to illustrate the findings in a figure or graph that presents the links among the constructs and items and adds causal reasoning to them (Durach et al. 2017; Paul and Criado 2020), such as the three figures by Seuring and Müller (2008) or other examples by De Lima et al. (2021) or Tipu (2022). This presentation should condense the arguments made in the assessed literature but should also chart the course for future research. It is these parts of the paper that are decisive for a strong SLR paper.
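The code-frequency and paper-by-construct tables described above can be derived mechanically from the coding outcome; a minimal sketch, with hypothetical papers and constructs standing in for real coding data:

```python
import pandas as pd

# Hypothetical coding outcome: which construct was coded in which paper.
codings = [
    ("Paper A", "collaboration"), ("Paper A", "risk"),
    ("Paper B", "risk"), ("Paper C", "collaboration"),
    ("Paper C", "performance"), ("Paper C", "risk"),
]
df = pd.DataFrame(codings, columns=["paper", "construct"])

# Frequency of each construct across the sample (which topics are relevant)...
freq = df["construct"].value_counts()

# ...and a paper-by-construct incidence table for transparency.
incidence = pd.crosstab(df["paper"], df["construct"])
print(freq)
print(incidence)
```

For small samples, the incidence table can be reported directly; for larger ones, only the frequencies or an aggregated visualization would typically be shown.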

Moreover, some guidelines define concept-centric synthesis as the most fruitful way of synthesizing the findings (Clark et al. 2021; Fisch and Block 2018; Webster and Watson 2002). In such a synthesis, as in the previous sentence, the presentation of the review findings is centered on the content or concept (here: "concept-centric synthesis") and accompanied by a reference to all or the most relevant literature in which the concept is evident. In contrast, Webster and Watson (2002) found that author-centric synthesis discusses individual papers and what they have done and found (just like this sentence here), adding that this approach fails to synthesize larger samples. We want to note that we used the latter approach in some places in this paper; however, this aims to actively refer the reader to those studies, as they stand out from our relatively small sample. Beyond this, we want to link back to Decision 3, the selection of a theoretical framework and constructs. These constructs, or the parts of a framework, can also serve to structure the findings section by using them as headlines for subsections (Seuring et al. 2021).

Last but not least, there might even be cases in which core findings and relationships are opposed and alternative perspectives are presented. This is certainly challenging to argue for but worthwhile in order to drive the reviewed field forward. A related example is the paper by Zhu et al. (2022), who challenged the current debate at the intersection of blockchain applications and supply chain management and pointed to the limited use of theoretical foundations in related analyses.

(5) Discussion and Conclusion: The discussion needs to explain the contribution the paper makes to the extant literature, that is, which previous findings or hypotheses are supported or contradicted and which aspects of the findings are particularly interesting for the future development of the reviewed field. This is in line with the content required in the discussion sections of any other paper type. A typical structure might point to the contribution and put it into perspective with existing research. Further, limitations should be addressed on both the theoretical and methodological sides. This elaboration of the limitations can be coupled with the considerations of the validity and reliability of the study in Decision 11. The implications for future research are a core aim of an SLR (Clark et al. 2021; Mulrow 1987; Snyder 2019) and should be addressed in a further part of the discussion section. Recently, a growing number of literature reviews have also provided research questions for future research, offering a very concrete and actionable output of the SLR (e.g., Dieste et al. 2022; Maestrini et al. 2017). Moreover, we would like to reiterate our call to clearly link the research implications to the SLR findings, which helps the authors craft more tangible research directions and helps the reader follow the authors' interpretation. Literature review papers are usually not strongly positioned toward managerial implications, but even these might be included.

As is standard, the conclusion should answer the research question put forward in the introduction, thereby closing the cycle of arguments made in the paper.

Although all the work seems to be done once the paper is written and the contribution is fleshed out, there is still one major decision to be made. Decision 14 concerns the identification of an appropriate journal for submission. Despite the popularity of the SLR method, a rising number of journals explicitly limit the number of SLRs they publish. Moreover, only two of the reviewed guidelines elaborate on this decision, underlining the need for the following considerations.

Although it might seem most attractive to submit the paper to the highest-ranking journal for the reviewed topic, we argue for two critical and review-related decisions to be made during the research process that influence whether the paper fits a certain outlet:

The theoretical foundation of the SLR (Decision 3) usually relates to certain journals in which it is published or discussed. If a deductive approach was taken, the journals in which the foundational papers were published might be suitable, since the review potentially contributes to the further validation or refinement of the frameworks. Overall, we need to keep in mind that a paper needs to add to an ongoing discussion in the journal, and this can be based on the theoretical framework or the reviewed papers, as shown below.

Appropriate journals for publication can also be derived from the analyzed journal papers (Decision 7) (see also Paul and Criado 2020). Submitting to a journal that is strongly represented in the sample allows for an easy link to the theoretical debate in that journal. This choice is identifiable in most of the papers mentioned in this paper and is often illustrated in the descriptive analysis.

If the journal chosen for submission is neither related to the theoretical foundation nor strongly represented in the body of literature analyzed, an explicit justification might be needed in the paper itself. Alternatively, an explanation might be provided in the letter to the editor upon submission. If no such statement is presented, the likelihood of the paper entering and passing the review process is rather low. Finally, we refer readers interested in the specificities of the publication-related review process of SLRs to Webster and Watson (2002), who elaborated on this for Management Information Systems Quarterly.

5 Discussion and conclusion

Critically reviewing the currently available SLR guidelines in the management domain, this paper synthesizes 14 key decisions to be made and reported across the SLR research process. Guidance is presented for each decision, including tasks that assist in making sound choices to complete the research process and make meaningful contributions. Applying these guidelines should improve the rigor and robustness of many review papers and thus enhance their contributions. Moreover, practical hints and best-practice examples are provided on issues that inexperienced authors regularly struggle to present in a manuscript (Fisch and Block 2018) and that thus frustrate reviewers, readers, editors, and authors alike.

Strikingly, the review of prior guidelines reported in Table 3 revealed their focus on the technical details that need to be reported in any SLR. Consequently, our discipline has come a long way in crafting search strings and inclusion and exclusion criteria and in elaborating on the validity and reliability of an SLR. Nevertheless, critical areas have been left underdeveloped, such as the identification of relevant research gaps and questions, data extraction tools, the analysis of the findings, and a meaningful and interesting reporting of the results. Our study contributes to filling these gaps by providing operationalized guidance to SLR authors, especially early-stage researchers who craft SLRs at the outset of their research journeys. At the same time, we need to underline that our paper is, of course, not the only useful reference for SLR authors. Instead, readers are invited to find further guidance on the many aspects of an SLR in the references we provide within the individual decisions, as well as in Tables 1 and 2. The tables also identify the strengths of other guidelines, which our paper does not aim to replace but to connect and extend on selected occasions, especially in SLR Steps 5 and 6.

The findings regularly underline the interconnection of the 14 decisions identified and discussed in this paper. We thus support Tranfield et al. (2003), who called for a flexible approach to the SLR that clearly reports all design decisions and reflects on their impacts. In line with the guidance synthesized in this review, and especially Durach et al. (2017), we also present a refined framework in Figs. 1 and 2. It refines the original six-step SLR process by Durach et al. (2017) in three ways:

Fig. 2: Enriched six-step process including the core interrelations of the 14 decisions

First, we subdivided the six steps into 14 decisions to enhance the operationalization of the process and enable closer guidance (see Fig. 1). Second, we added a temporal sequence to Fig. 2 by positioning the decisions from left to right accordingly. This is based on systematically reflecting on whether one decision needs to be finished before the next: if so, the following decision moves to the right; if not, the decisions are positioned below each other. Turning to Fig. 2, it becomes evident that Step 2, "determining the required characteristics of primary studies," and Step 3, "retrieving a sample of potentially relevant literature," including their Decisions 4–6, can be conducted iteratively. While this contrasts with the strict division of the six steps by Durach et al. (2017), it supports other guidance that suggests running pilot studies to iteratively define the literature sample, its sources, and its characteristics (Snyder 2019; Tranfield et al. 2003; Xiao and Watson 2019). While this insight might suggest merging Steps 2 and 3, we refrain from this superficial change and from building yet another SLR process model. Instead, we prefer to add detail and depth to Durach et al.'s (2017) model.

(Decisions: D1: specifying the research gap and related research question, D2: opting for a theoretical approach, D3: defining the core theoretical framework and constructs, D4: specifying inclusion and exclusion criteria, D5: defining sources and databases, D6: defining search terms and crafting a search string, D7: including and excluding literature for detailed analysis and synthesis, D8: selecting data extraction tool(s), D9: coding against (pre-defined) constructs, D10: conducting a subsequent (statistical) analysis (optional), D11: ensuring validity and reliability, D12: deciding on the structure of the paper, D13: presenting a refined theoretical framework and discussing its contents, and D14: deriving an appropriate journal from the analyzed papers).

This is also done through the third refinement, which underlines which previous or later decisions need to be considered within each decision. Such a consideration moves beyond the mere temporal sequence of steps and decisions, which does not reflect the full complexity of the SLR process. Instead, its focus is on the need to align, for example, the conduct of the data analysis (Decision 9) with the theoretical approach (Decision 2) and consequently to ensure that the chosen theoretical framework and constructs (Decision 3) are sufficiently defined for the data analysis (i.e., mutually exclusive and collectively exhaustive). The mentioned interrelations are displayed in Fig. 2 by means of directed arrows from one decision to another; the underlying explanations can be found in the earlier sections by searching the text for the impacted decisions. Overall, it is unsurprising that the vast majority of interrelations are directed from the earlier to the later steps and decisions (displayed through arrows below the diagonal of decisions), while only a few are inverse.
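The forward-versus-inverse pattern of such interrelations can be checked mechanically once the decisions are numbered; the edge list below is a hypothetical subset for illustration, not the full set of arrows in Fig. 2:

```python
# Hypothetical subset of directed interrelations among the 14 decisions,
# encoded as (influencing decision, influenced decision) pairs.
edges = [(1, 2), (2, 3), (2, 9), (3, 9), (3, 14), (7, 14), (11, 4), (11, 7)]

# Arrows from earlier to later decisions lie "below the diagonal";
# the remainder are inverse (feedback) arrows, e.g., the iterative
# revisiting of earlier steps through Decision 11.
forward = sum(1 for a, b in edges if a < b)
inverse = len(edges) - forward
print(forward, inverse)
```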

Combining the first refinement of the original framework (defining the 14 decisions) and the third refinement (revealing the main interrelations among the decisions) underlines the contribution of this study in two main ways. First, the centrality of ensuring validity and reliability (Decision 11) is underlined. It becomes evident that considerations of validity and reliability are central to the overall SLR process, since all steps before the writing of the paper need to be revisited in iterative cycles through Decision 11. Any lack of related considerations will most likely lead to reviewer critique, putting the SLR's publication at risk. On the positive side of this centrality, we also found substantial guidance on this issue. In contrast, as evidenced in Table 3, there is a lack of prior guidance on Decisions 1, 8, 10, 12, 13, and 14, which this study helps to fill. At the same time, these underexplained decisions receive 14 of the 44 (32%) incoming arrows in Fig. 2 and influence the other decisions in 6 of the 44 (14%) instances. These interrelations among decisions to be considered when crafting an SLR were scattered across prior guidelines, lacked in-depth elaboration, and were hardly ever explicitly related to each other. Thus, we hope that our study and the refined SLR process model will help enhance the quality and contribution of future SLRs.

Data availability

The data generated during this research are summarized in Table 3, and the analyzed papers are publicly available. They are clearly identified in Table 3 and the reference list.

Aguinis H, Ramani RS, Alabduljader N (2020) Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organ Res Methods. https://doi.org/10.1177/1094428120943281


Beske P, Land A, Seuring S (2014) Sustainable supply chain management practices and dynamic capabilities in the food industry: a critical analysis of the literature. Int J Prod Econ 152:131–143. https://doi.org/10.1016/j.ijpe.2013.12.026

Brandenburg M, Rebs T (2015) Sustainable supply chain management: a modeling perspective. Ann Oper Res 229:213–252. https://doi.org/10.1007/s10479-015-1853-1

Carter CR, Rogers DS (2008) A framework of sustainable supply chain management: moving toward new theory. Int Jnl Phys Dist Logist Manage 38:360–387. https://doi.org/10.1108/09600030810882816

Carter CR, Washispack S (2018) Mapping the path forward for sustainable supply chain management: a review of reviews. J Bus Logist 39:242–247. https://doi.org/10.1111/jbl.12196

Clark WR, Clark LA, Raffo DM, Williams RI (2021) Extending Fisch and Block's (2018) tips for a systematic review in management and business literature. Manag Rev Q 71:215–231. https://doi.org/10.1007/s11301-020-00184-8

Crane A, Henriques I, Husted BW, Matten D (2016) What constitutes a theoretical contribution in the business and society field? Bus Soc 55:783–791. https://doi.org/10.1177/0007650316651343

Davis J, Mengersen K, Bennett S, Mazerolle L (2014) Viewing systematic reviews and meta-analysis in social research through different lenses. Springerplus 3:511. https://doi.org/10.1186/2193-1801-3-511

Davis HTO, Crombie IK (2001) What is a systematic review? http://vivrolfe.com/ProfDoc/Assets/Davis%20What%20is%20a%20systematic%20review.pdf . Accessed 22 February 2019

De Lima FA, Seuring S, Sauer PC (2021) A systematic literature review exploring uncertainty management and sustainability outcomes in circular supply chains. Int J Prod Res. https://doi.org/10.1080/00207543.2021.1976859

Denyer D, Tranfield D (2009) Producing a systematic review. In: Buchanan DA, Bryman A (eds) The Sage handbook of organizational research methods. Sage Publications Ltd, Thousand Oaks, CA, pp 671–689


Devece C, Ribeiro-Soriano DE, Palacios-Marqués D (2019) Coopetition as the new trend in inter-firm alliances: literature review and research patterns. Rev Manag Sci 13:207–226. https://doi.org/10.1007/s11846-017-0245-0

Dieste M, Sauer PC, Orzes G (2022) Organizational tensions in industry 4.0 implementation: a paradox theory approach. Int J Prod Econ 251:108532. https://doi.org/10.1016/j.ijpe.2022.108532

Donthu N, Kumar S, Mukherjee D, Pandey N, Lim WM (2021) How to conduct a bibliometric analysis: an overview and guidelines. J Bus Res 133:285–296. https://doi.org/10.1016/j.jbusres.2021.04.070

Durach CF, Kembro J, Wieland A (2017) A new paradigm for systematic literature reviews in supply chain management. J Supply Chain Manag 53:67–85. https://doi.org/10.1111/jscm.12145

Fink A (2010) Conducting research literature reviews: from the internet to paper, 3rd edn. SAGE, Los Angeles

Fisch C, Block J (2018) Six tips for your (systematic) literature review in business and management research. Manag Rev Q 68:103–106. https://doi.org/10.1007/s11301-018-0142-x

Fritz MMC, Silva ME (2018) Exploring supply chain sustainability research in Latin America. Int Jnl Phys Dist Logist Manag 48:818–841. https://doi.org/10.1108/IJPDLM-01-2017-0023

Garcia-Torres S, Albareda L, Rey-Garcia M, Seuring S (2019) Traceability for sustainability: literature review and conceptual framework. Supp Chain Manag 24:85–106. https://doi.org/10.1108/SCM-04-2018-0152

Hanelt A, Bohnsack R, Marz D, Antunes Marante C (2021) A systematic review of the literature on digital transformation: insights and implications for strategy and organizational change. J Manag Stud 58:1159–1197. https://doi.org/10.1111/joms.12639

Kache F, Seuring S (2014) Linking collaboration and integration to risk and performance in supply chains via a review of literature reviews. Supp Chain Mnagmnt 19:664–682. https://doi.org/10.1108/SCM-12-2013-0478

Khalid RU, Seuring S (2019) Analyzing base-of-the-pyramid research from a (sustainable) supply chain perspective. J Bus Ethics 155:663–686. https://doi.org/10.1007/s10551-017-3474-x

Koufteros X, Mackelprang A, Hazen B, Huo B (2018) Structured literature reviews on strategic issues in SCM and logistics: part 2. Int Jnl Phys Dist Logist Manage 48:742–744. https://doi.org/10.1108/IJPDLM-09-2018-363

Kraus S, Breier M, Dasí-Rodríguez S (2020) The art of crafting a systematic literature review in entrepreneurship research. Int Entrep Manag J 16:1023–1042. https://doi.org/10.1007/s11365-020-00635-4

Kraus S, Mahto RV, Walsh ST (2021) The importance of literature reviews in small business and entrepreneurship research. J Small Bus Manag. https://doi.org/10.1080/00472778.2021.1955128

Kraus S, Breier M, Lim WM, Dabić M, Kumar S, Kanbach D, Mukherjee D, Corvello V, Piñeiro-Chousa J, Liguori E, Palacios-Marqués D, Schiavone F, Ferraris A, Fernandes C, Ferreira JJ (2022) Literature reviews as independent studies: guidelines for academic practice. Rev Manag Sci 16:2577–2595. https://doi.org/10.1007/s11846-022-00588-8

Leuschner R, Rogers DS, Charvet FF (2013) A meta-analysis of supply chain integration and firm performance. J Supply Chain Manag 49:34–57. https://doi.org/10.1111/jscm.12013

Lim WM, Rasul T (2022) Customer engagement and social media: revisiting the past to inform the future. J Bus Res 148:325–342. https://doi.org/10.1016/j.jbusres.2022.04.068

Lim WM, Yap S-F, Makkar M (2021) Home sharing in marketing and tourism at a tipping point: what do we know, how do we know, and where should we be heading? J Bus Res 122:534–566. https://doi.org/10.1016/j.jbusres.2020.08.051

Lim WM, Kumar S, Ali F (2022) Advancing knowledge through literature reviews: ‘what’, ‘why’, and ‘how to contribute.’ Serv Ind J 42:481–513. https://doi.org/10.1080/02642069.2022.2047941

Lusiantoro L, Yates N, Mena C, Varga L (2018) A refined framework of information sharing in perishable product supply chains. Int J Phys Distrib Logist Manag 48:254–283. https://doi.org/10.1108/IJPDLM-08-2017-0250

Maestrini V, Luzzini D, Maccarrone P, Caniato F (2017) Supply chain performance measurement systems: a systematic review and research agenda. Int J Prod Econ 183:299–315. https://doi.org/10.1016/j.ijpe.2016.11.005

Miemczyk J, Johnsen TE, Macquet M (2012) Sustainable purchasing and supply management: a structured literature review of definitions and measures at the dyad, chain and network levels. Supp Chain Mnagmnt 17:478–496. https://doi.org/10.1108/13598541211258564

Moher D, Liberati A, Tetzlaff J, Altman DG (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6:e1000097. https://doi.org/10.1371/journal.pmed.1000097

Mukherjee D, Lim WM, Kumar S, Donthu N (2022) Guidelines for advancing theory and practice through bibliometric research. J Bus Res 148:101–115. https://doi.org/10.1016/j.jbusres.2022.04.042

Mulrow CD (1987) The medical review article: state of the science. Ann Intern Med 106:485–488. https://doi.org/10.7326/0003-4819-106-3-485

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. J Clin Epidemiol 134:178–189. https://doi.org/10.1016/j.jclinepi.2021.03.001

Pagell M, Wu Z (2009) Building a more complete theory of sustainable supply chain management using case studies of 10 exemplars. J Supply Chain Manag 45:37–56. https://doi.org/10.1111/j.1745-493X.2009.03162.x

Paul J, Criado AR (2020) The art of writing literature review: What do we know and what do we need to know? Int Bus Rev 29:101717. https://doi.org/10.1016/j.ibusrev.2020.101717

Paul J, Lim WM, O’Cass A, Hao AW, Bresciani S (2021) Scientific procedures and rationales for systematic literature reviews (SPAR-4-SLR). Int J Consum Stud. https://doi.org/10.1111/ijcs.12695

Pearce JM (2018) How to perform a literature review with free and open source software. Pract Assess Res Eval 23:1–13

Rhaiem K, Amara N (2021) Learning from innovation failures: a systematic review of the literature and research agenda. Rev Manag Sci 15:189–234. https://doi.org/10.1007/s11846-019-00339-2

Rojas-Córdova C, Williamson AJ, Pertuze JA, Calvo G (2022) Why one strategy does not fit all: a systematic review on exploration–exploitation in different organizational archetypes. Rev Manag Sci. https://doi.org/10.1007/s11846-022-00577-x

Sauer PC (2021) The complementing role of sustainability standards in managing international and multi-tiered mineral supply chains. Resour Conserv Recycl 174:105747. https://doi.org/10.1016/j.resconrec.2021.105747

Sauer PC, Seuring S (2017) Sustainable supply chain management for minerals. J Clean Prod 151:235–249. https://doi.org/10.1016/j.jclepro.2017.03.049

Seuring S, Gold S (2012) Conducting content-analysis based literature reviews in supply chain management. Supp Chain Mnagmnt 17:544–555. https://doi.org/10.1108/13598541211258609

Seuring S, Müller M (2008) From a literature review to a conceptual framework for sustainable supply chain management. J Clean Prod 16:1699–1710. https://doi.org/10.1016/j.jclepro.2008.04.020

Seuring S, Yawar SA, Land A, Khalid RU, Sauer PC (2021) The application of theory in literature reviews: illustrated with examples from supply chain management. Int J Oper Prod Manag 41:1–20. https://doi.org/10.1108/IJOPM-04-2020-0247

Siems E, Land A, Seuring S (2021) Dynamic capabilities in sustainable supply chain management: an inter-temporal comparison of the food and automotive industries. Int J Prod Econ 236:108128. https://doi.org/10.1016/j.ijpe.2021.108128

Snyder H (2019) Literature review as a research methodology: an overview and guidelines. J Bus Res 104:333–339. https://doi.org/10.1016/j.jbusres.2019.07.039

Spens KM, Kovács G (2006) A content analysis of research approaches in logistics research. Int Jnl Phys Dist Logist Manage 36:374–390. https://doi.org/10.1108/09600030610676259

Tachizawa EM, Wong CY (2014) Towards a theory of multi-tier sustainable supply chains: a systematic literature review. Supp Chain Mnagmnt 19:643–663. https://doi.org/10.1108/SCM-02-2014-0070

Tipu SAA (2022) Organizational change for environmental, social, and financial sustainability: a systematic literature review. Rev Manag Sci 16:1697–1742. https://doi.org/10.1007/s11846-021-00494-5

Touboulic A, Walker H (2015) Theories in sustainable supply chain management: a structured literature review. Int Jnl Phys Dist Logist Manage 45:16–42. https://doi.org/10.1108/IJPDLM-05-2013-0106

Tranfield D, Denyer D, Smart P (2003) Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br J Manag 14:207–222. https://doi.org/10.1111/1467-8551.00375

Tröster R, Hiete M (2018) Success of voluntary sustainability certification schemes: a comprehensive review. J Clean Prod 196:1034–1043. https://doi.org/10.1016/j.jclepro.2018.05.240

Wang Y, Han JH, Beynon-Davies P (2019) Understanding blockchain technology for future supply chains: a systematic literature review and research agenda. Supp Chain Mnagmnt 24:62–84. https://doi.org/10.1108/SCM-03-2018-0148

Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Q 26:xiii–xxiii

Wiese A, Kellner J, Lietke B, Toporowski W, Zielke S (2012) Sustainability in retailing: a summative content analysis. Int J Retail Distrib Manag 40:318–335. https://doi.org/10.1108/09590551211211792

Xiao Y, Watson M (2019) Guidance on conducting a systematic literature review. J Plan Educ Res 39:93–112. https://doi.org/10.1177/0739456X17723971

Yavaprabhas K, Pournader M, Seuring S (2022) Blockchain as the “trust-building machine” for supply chain management. Ann Oper Res. https://doi.org/10.1007/s10479-022-04868-0

Zhu Q, Bai C, Sarkis J (2022) Blockchain technology and supply chains: the paradox of the atheoretical research discourse. Transp Res Part E Logist Transp Rev 164:102824. https://doi.org/10.1016/j.tre.2022.102824


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations

EM Strasbourg Business School, Université de Strasbourg, HuManiS UR 7308, 67000, Strasbourg, France

Philipp C. Sauer

Chair of Supply Chain Management, Faculty of Economics and Management, The University of Kassel, Kassel, Germany

Stefan Seuring


Contributions

The article is based on the idea and extensive experience of SS. The literature search and data analysis were mainly performed by PCS, supported by SS, before the manuscript was written and revised in a joint effort by both authors.

Corresponding author

Correspondence to Stefan Seuring.

Ethics declarations

Conflict of interest.

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Sauer, P.C., Seuring, S. How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions. Rev Manag Sci 17, 1899–1933 (2023). https://doi.org/10.1007/s11846-023-00668-3


Received: 29 September 2022

Accepted: 17 April 2023

Published: 12 May 2023

Issue Date: July 2023

Keywords

  • Methodology
  • Replicability
  • Research process
  • Structured literature review
  • Systematic literature review

Rapid reviews methods series: Guidance on literature search

Volume 28, Issue 6

  • http://orcid.org/0000-0001-6644-9845 Irma Klerings 1 ,
  • Shannon Robalino 2 ,
  • http://orcid.org/0000-0003-4808-3880 Andrew Booth 3 ,
  • http://orcid.org/0000-0002-2903-6870 Camila Micaela Escobar-Liquitay 4 ,
  • Isolde Sommer 1 ,
  • http://orcid.org/0000-0001-5531-3678 Gerald Gartlehner 1 , 5 ,
  • Declan Devane 6 , 7 ,
  • Siw Waffenschmidt 8
  • On behalf of the Cochrane Rapid Reviews Methods Group
  • 1 Department for Evidence-Based Medicine and Evaluation , University of Krems (Danube University Krems) , Krems , Niederösterreich , Austria
  • 2 Center for Evidence-based Policy , Oregon Health & Science University , Portland , Oregon , USA
  • 3 School of Health and Related Research (ScHARR) , The University of Sheffield , Sheffield , UK
  • 4 Research Department, Associate Cochrane Centre , Instituto Universitario Escuela de Medicina del Hospital Italiano de Buenos Aires , Buenos Aires , Argentina
  • 5 RTI-UNC Evidence-based Practice Center , RTI International , Research Triangle Park , North Carolina , USA
  • 6 School of Nursing & Midwifery, HRB TMRN , National University of Ireland Galway , Galway , Ireland
  • 7 Evidence Synthesis Ireland & Cochrane Ireland , University of Galway , Galway , Ireland
  • 8 Information Management Department , Institute for Quality and Efficiency in Healthcare , Cologne , Germany
  • Correspondence to Irma Klerings, Department for Evidence-based Medicine and Evaluation, Danube University Krems, Krems, Niederösterreich, Austria; irma.klerings@donau-uni.ac.at

This paper is part of a series of methodological guidance from the Cochrane Rapid Reviews Methods Group. Rapid reviews (RR) use modified systematic review methods to accelerate the review process while maintaining systematic, transparent and reproducible methods. In this paper, we address considerations for RR searches. We cover the main areas relevant to the search process: preparation and planning, information sources and search methods, search strategy development, quality assurance, reporting, and record management. Two options exist for abbreviating the search process: (1) reducing time spent on conducting searches and (2) reducing the size of the search result. Because screening search results is usually more resource-intensive than conducting the search, we suggest investing time upfront in planning and optimising the search to save time by reducing the literature screening workload. To achieve this goal, RR teams should work with an information specialist. They should select a small number of relevant information sources (eg, databases) and use search methods that are highly likely to identify relevant literature for their topic. Database search strategies should aim to optimise both precision and sensitivity, and quality assurance measures (peer review and validation of search strategies) should be applied to minimise errors.

  • Evidence-Based Practice
  • Systematic Reviews as Topic
  • Information Science

Data availability statement

No data are available.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjebm-2022-112079


WHAT IS ALREADY KNOWN ON THIS TOPIC

Compared with systematic reviews, rapid reviews (RR) often abbreviate or limit the literature search in some way to accelerate review production. However, RR guidance rarely specifies how to select topic-appropriate search approaches.

WHAT THIS STUDY ADDS

This paper presents an overview of considerations and recommendations for RR searching, covering the complete search process from the planning stage to record management. We also provide extensive appendices with practical examples, useful sources and a glossary of terms.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

There is no one-size-fits-all solution for RR literature searching: review teams should consider what search approaches best fit their RR project.

Introduction

This paper is part of a series from the Cochrane Rapid Reviews Methods Group (RRMG) providing methodological guidance for rapid reviews (RRs). 1–3 While the RRMG’s guidance 4 5 on Cochrane RR production includes brief advice on literature searching, we aim to provide in-depth recommendations for the entire search process.

Literature searching is the foundation for all reviews; therefore, it is important to understand the goals of a specific RR. The scope of RRs varies considerably (from focused questions to overviews of broad topics). 6 As with conventional systematic reviews (SRs), there is not a one-size-fits-all approach for RR literature searches. We aim to support RR teams in choosing methods that best fit their project while understanding the limitations of modified search methods. Our recommendations derive from current systematic search guidance, evidence on modified search methods and practical experience conducting RRs.

This paper presents considerations and recommendations, described briefly in table 1. The table also includes a comparison to the SR search process based on common recommendations. 7–10 We provide supplemental materials, including a list of additional resources, further details of our recommendations, practical examples, and a glossary (explaining the terms written in italics) in online supplemental appendices A–C.

Table 1 Recommendations for rapid review literature searching

Preparation and planning

Given that the results of systematic literature searches underpin a review, planning the searches is integral to the overall RR preparation. The RR search process follows the same steps as an SR search; therefore, RR teams must be familiar with the general standards of systematic searching. Templates (see online supplemental appendix B) and reporting guidance 11 for SR searches can also be adapted to structure the RR search process.

Developing a plan for the literature search forms part of protocol development and should involve an information specialist (eg, librarian). Information specialists can assist in refining the research question, selecting appropriate search methods and resources, designing and executing search strategies, and reporting the search methods. At minimum, specialist input should include assessing information sources and methods and providing feedback on the primary database search strategy.

Two options exist for abbreviating the search process: (1) reducing time spent on conducting searches (eg, using automation tools, reusing existing search strategies, omitting planning or quality assurance steps) and (2) reducing the size of the search result (eg, limiting the number of information sources, increasing the precision of search strategies, using study design filters). Study selection (ie, screening search results) is usually more resource-intensive than searching, 12 particularly for topics with complex or broad concepts or diffuse terminology; thus, the second option may be more efficient for the entire RR. Investing time upfront in optimising search sensitivity (ie, completeness) and precision (ie, positive predictive value) can save time in the long run by reducing the screening and selection workload.
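The trade-off between sensitivity and precision can be made concrete with a toy calculation (all counts below are hypothetical, for illustration only):

```python
def sensitivity(retrieved_relevant: int, total_relevant: int) -> float:
    """Share of all relevant records that the search retrieved (completeness)."""
    return retrieved_relevant / total_relevant

def precision(retrieved_relevant: int, total_retrieved: int) -> float:
    """Share of retrieved records that are relevant (positive predictive value)."""
    return retrieved_relevant / total_retrieved

# A search retrieving 2,000 records, 90 of them relevant, out of 100 relevant overall:
print(sensitivity(90, 100))   # 0.9
print(precision(90, 2000))    # 0.045
```

In this hypothetical case, a more precise strategy that kept sensitivity at 0.9 while halving the retrieved set would halve the screening workload without losing relevant studies.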

Preliminary or scoping searches are critical to this process. They inform the choice of search methods and identify potentially relevant literature. Texts identified through preliminary searching serve as known relevant records that can be used throughout the search development process (see sections on database selection, development and validation of search strategies).

In addition to planning the search itself, the review team should factor in time for quality assurance steps (eg, search strategy peer review) and the management of search results (eg, deduplication, full-text retrieval).

Information sources and methods

To optimise the balance of search sensitivity and precision, RR teams should prioritise the most relevant information sources for the topic and the type of evidence required. These can include bibliographic databases (eg, MEDLINE/PubMed), grey literature sources and targeted supplementary search methods. Note that this approach differs from the Methodological Expectations of Cochrane Intervention Reviews Standards 9 where the same core set of information sources is required for every review and further supplemented by additional topic-specific and evidence-specific sources.

Choosing bibliographic databases

For many review topics, most evidence is found in peer-reviewed journal articles, making bibliographic databases the main resource of systematic searching. Limiting the number of databases searched can be a viable option in RRs, but it is important to prioritise topic-appropriate databases.

MEDLINE has been found to have high coverage for studies included in SRs 13 14 and is an appealing database choice because access is free via PubMed. However, coverage varies depending on topics and relevant study designs. 15 16 Additionally, even if all eligible studies for a topic were available in MEDLINE, search strategies will usually miss some eligible studies because search sensitivity is lower than database coverage. 13 17 This means searching MEDLINE alone is not a viable option, and additional information sources or search methods are required. Known relevant records can be used to help assess the coverage of selected databases (see also online supplemental appendix C ).

Further information sources and search techniques

Supplementary systematic search methods have three main goals: to identify (1) grey literature, (2) published literature not covered by the selected bibliographic databases and (3) database-covered literature that was not retrieved by the database searches.

When RRs search only a small number of databases, supplementary searches can be particularly important to pick up eligible studies not identified via database searching. While supplementary methods might increase the time spent on searching, they sometimes better optimise search sensitivity and precision, saving time in the long run. 18 Depending on the topic and relevant evidence, such methods can offer an alternative to adding additional specialised database searches. To decide if and what supplementary searches are helpful, it is important to evaluate what literature might be missed by the database searches and how this might affect the specific RR.

Study registries and other grey literature

Some studies indicate that the omission of grey literature searches rarely affects review conclusions. 17 19 However, the relevance of study registries and other grey literature sources is topic-dependent. 16 19–21 For example, randomised controlled trials (RCTs) on newly approved drugs are typically identified in ClinicalTrials.gov. 20 For rapidly evolving topics such as COVID-19, preprints are an important source. 21 For public health interventions, various types of grey literature may be important (eg, evaluations conducted by local public health agencies). 22

Further supplementary search methods

Other supplementary techniques (eg, checking reference lists, reviewing specific websites or electronic table of contents, contacting experts) may identify additional studies not retrieved by database searches. 23 One of the most common approaches involves checking reference lists of included studies and relevant reviews. This method may identify studies missed by limited database searches. 12 Another promising citation-based approach is using the ‘similar articles’ option in PubMed, although research has focused on updating existing SRs. 24 25

Considerations for RRs of RCTs

Databases and search methods to identify RCTs have been particularly well researched. 17 20 24 26 27 For this reason, it is possible to give more precise recommendations for RRs based on RCTs than for other types of review. Table 2 provides an overview of the most important considerations; additional information can be found in online supplemental appendix C .

Table 2 Information sources for identification of randomised controlled trials (RCTs)

Search strategies

We define ‘search strategy’ as a Boolean search query in a specific database (eg, MEDLINE) using a specific interface (eg, Ovid). When several databases are searched, this query is usually developed in a primary database and interface (eg, Ovid MEDLINE) and translated to other databases.

Developing search strategies

Optimising search strategy precision while aiming for high sensitivity is critical in reducing the number of records retrieved. Preliminary searches provide crucial information to aid efficient search strategy development. Reviewing the abstracts and subject headings used in known relevant records will assist in identifying appropriate search terms. Text analysis tools can also be used to support this process, 28 29 for example, to develop ‘objectively derived’ search strategies. 30

Reusing or adapting complete search strategies (eg, from SRs identified by the preliminary searches) or selecting elements of search strategies for reuse can accelerate search strategy development. Additionally, validated search filters (eg, for study design) can be used to reduce the size of the search result without compromising the sensitivity of a search strategy. 31 However, quality assurance measures are necessary whether the search strategy is purpose-built, reused or adapted (see the ‘Quality assurance’ section).

Database-specific and interface-specific functionalities can also be used to improve searches’ precision and reduce the search result’s size. Some options are: restricting to records where subject terms have been assigned as the major focus of an article (eg, major descriptors in MeSH), using proximity operators (ie, terms adjacent or within a set number of words), frequency operators (ie, terms have to appear a minimum number of times in an abstract) or restricting search terms to the article title. 32–34

Automated syntax translation can save time and reduce errors when translating a primary search strategy to different databases. 35 36 However, manual adjustments will usually be necessary.
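A minimal sketch of what such translation tools do, assuming a hypothetical two-rule converter from Ovid MEDLINE syntax to PubMed syntax (real tools handle many more constructs, and the need for manual adjustment remains):

```python
import re

def ovid_to_pubmed(line: str) -> str:
    """Toy translation of two common Ovid MEDLINE constructs to PubMed syntax."""
    line = line.strip()
    m = re.fullmatch(r"exp (.+)/", line)        # exploded MeSH heading
    if m:
        return f"{m.group(1)}[MeSH Terms]"
    m = re.fullmatch(r"(.+)\.ti,ab\.", line)    # title/abstract field search
    if m:
        return f"{m.group(1)}[tiab]"
    return line                                 # unknown construct: leave for manual review

print(ovid_to_pubmed("exp Hypertension/"))          # Hypertension[MeSH Terms]
print(ovid_to_pubmed('"blood pressure".ti,ab.'))    # "blood pressure"[tiab]
```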

The time taken to learn how to use supporting technologies (eg, text analysis, syntax translation) proficiently should not be underestimated. The time investment is most likely to pay off for frequent searchers. A later paper in this series will address supporting software for the entire review process.

Limits and restrictions

Limits and restrictions (eg, publication dates, language) are another way to reduce the number of records retrieved but should be tailored to the topic and applied with caution. For example, if most studies about an intervention were published 10 years ago, then an arbitrary cut-off of ‘the last 5 years’ will miss many relevant studies. 37 Similarly, limiting to ‘English only’ is acceptable for most cases, but early in the COVID-19 pandemic, a quarter of available research articles were written in Chinese. 38 Depending on the RR topic, certain document types (eg, conference abstracts, dissertations) might be excluded if not considered relevant to the research question.

Note also that preset limiting functions in search interfaces (eg, limit to humans) often rely on subject headings (eg, MeSH) alone. They will miss eligible studies that lack or have incomplete subject indexing. Using (validated) search filters 31 is preferable.

Updating existing reviews

One approach to RR production involves updating an existing SR. In this case, preliminary searches should be used to check if new evidence is available. If the review team decide to update the review, they should assess the original search methods and adapt these as necessary.

One option is to identify the minimum set of databases required to retrieve all the original included studies. 39 Any reused search strategies should be validated and peer-reviewed (see below) and optimised for precision and/or sensitivity.

Additionally, it is important to assess whether the topic terminology or the relevant databases have changed since the original SR search.

In some cases, designing a new search process may be more efficient than reproducing the original search.

Quality assurance and search strategy peer review

Errors in search strategies are common and can impact the sensitivity and comprehensiveness of the search result. 40 If an RR search uses a small number of information sources, such errors could affect the identification of relevant studies.

Validation of search strategies

The primary database search strategy should be validated using known relevant records (if available). This means testing if the primary search strategy retrieves eligible studies found through preliminary searching. If some known studies are not identified, the searcher assesses the reasons and decides if revisions are necessary. Even a precision-focused systematic search should identify the majority—we suggest at least 80%–90%—of known studies. This is based on benchmarks for sensitivity-precision-maximising search filters 41 and assumes that the set of known studies is representative of the whole of relevant studies.
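This validation step can be sketched as a simple recall check against the set of known relevant records; the identifiers below are hypothetical PMIDs, and the 80% threshold reflects the lower bound suggested above:

```python
def validate_search(known_ids: set[str], retrieved_ids: set[str]) -> tuple[float, set[str]]:
    """Return recall against the known relevant set and any records the search missed."""
    missed = known_ids - retrieved_ids
    recall = len(known_ids & retrieved_ids) / len(known_ids)
    return recall, missed

known = {"31234567", "29876543", "33445566", "30111222", "32999888"}
retrieved = {"31234567", "29876543", "33445566", "30111222", "12121212"}

recall, missed = validate_search(known, retrieved)
print(f"{recall:.0%} of known records retrieved; missed: {missed}")
# 80% of known records retrieved; missed: {'32999888'}
```

A result at or below the threshold would prompt the searcher to examine why the missed record was not retrieved (eg, missing synonym, indexing gap) and revise the strategy if needed.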

Peer review of search strategies

Ideally, an information specialist should review the planned information sources and search methods and use the PRESS (Peer Review of Electronic Search Strategies) checklist 42 to assess the primary search strategy. Turnaround time has to be factored into the process from the outset (eg, waiting for feedback, revising the search strategy). PRESS recommends a maximum turnaround time of five working days for feedback, but in-house peer review often takes only a few hours.

If the overall RR time plan does not allow for a full peer review of the search strategy, a review team member with search experience should check the search strategy for spelling errors and correct use of Boolean operators (AND, OR, NOT) at a minimum.

Reporting and record management

Record management requirements of RRs are largely identical to SRs and have to be factored into the time plan. Teams should develop a data management plan and review the relevant reporting standards at the project’s outset. PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses literature search extension) 11 is a reporting standard for SR searches that can be adapted for RRs.

Reference management software (eg, EndNote, 43 Zotero 44 ) should be used to track search results, including deduplication. Note that record management for database searches is less time-consuming than for many supplementary or grey literature searches, which often require manual entry into reference management software. 12
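The core of automated deduplication can be illustrated with a hypothetical sketch that keys records on DOI where present and on a normalised title otherwise; dedicated tools use considerably more robust matching:

```python
import re

def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each record, matching on DOI or normalised title."""
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        title = re.sub(r"[^a-z0-9]", "", (rec.get("title") or "").lower())
        key = doi or title
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Rapid review methods", "doi": "10.1000/xyz1"},
    {"title": "Rapid Review Methods.", "doi": "10.1000/XYZ1"},  # same record, different case
    {"title": "A different study", "doi": ""},
]
print(len(dedupe(records)))  # 2
```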

Additionally, software platforms for SR production (eg, Covidence, 45 EPPI-Reviewer, 46 Systematic Review Data Repository Plus 47 ) can provide a unified way to keep track of records throughout the whole review process, which can improve management and save time. These platforms and other dedicated tools (eg, SRA Deduplicator) 48 also offer automated deduplication. However, the time and cost investment necessary to appropriately use these tools have to be considered.

Decisions about search methods for an RR need to consider where time can be most usefully invested and processes accelerated. The literature search should be considered in the context of the entire review process, for example, protocol development and literature screening: Findings of preliminary searches often affect the development and refinement of the research question and the review’s eligibility criteria . In turn, they affect the number of records retrieved by the searches and therefore the time needed for literature selection.

For this reason, focusing only on reducing time spent on designing and conducting searches can be a false economy when seeking to speed up review production. While some approaches (eg, text analysis, automated syntax translation) may save time without negatively affecting search validity, others (eg, skipping quality assurance steps, using convenient information sources without considering their topic appropriateness) may harm the entire review. Information specialists can provide crucial aid concerning the appropriate design of search strategies, choice of methods and information sources.

For this reason, we consider that investing time at the outset of the review to carefully choose a small number of highly appropriate search methods and optimise search sensitivity and precision likely leads to better and more manageable results.

Ethics statements

Patient consent for publication.

Not applicable.


Supplementary materials

  • Data supplement 1


Collaborators On behalf of the Cochrane Rapid Reviews Methods Group: Declan Devane, Gerald Gartlehner, Isolde Sommer.

Contributors IK, SR, AB, CME-L and SW contributed to the conceptualisation of this paper. IK, AB and CME-L wrote the first draft of the manuscript. All authors critically reviewed and revised the manuscript. IK is responsible for the overall content.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests AB is co-convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, he received royalties from Systematic Approaches To a Successful Literature Review (Sage 3rd edn), payment or honoraria from the Agency for Healthcare Research and Quality, and travel support from the WHO. DD works part time for Cochrane Ireland and Evidence Synthesis Ireland, which are funded within the University of Ireland Galway (Ireland) by the Health Research Board (HRB) and the Health and Social Care, Research and Development (HSC R&D) Division of the Public Health Agency in Northern Ireland.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Linked Articles

  • Research methods and reporting Rapid reviews methods series: Guidance on team considerations, study selection, data extraction and risk of bias assessment Barbara Nussbaumer-Streit Isolde Sommer Candyce Hamel Declan Devane Anna Noel-Storr Livia Puljak Marialena Trivella Gerald Gartlehner BMJ Evidence-Based Medicine 2023; 28 418-423 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112185
  • Research methods and reporting Rapid reviews methods series: Guidance on assessing the certainty of evidence Gerald Gartlehner Barbara Nussbaumer-Streit Declan Devane Leila Kahwati Meera Viswanathan Valerie J King Amir Qaseem Elie Akl Holger J Schuenemann BMJ Evidence-Based Medicine 2023; 29 50-54 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112111
  • Research methods and reporting Rapid Reviews Methods Series: Involving patient and public partners, healthcare providers and policymakers as knowledge users Chantelle Garritty Andrea C Tricco Maureen Smith Danielle Pollock Chris Kamel Valerie J King BMJ Evidence-Based Medicine 2023; 29 55-61 Published Online First: 19 Apr 2023. doi: 10.1136/bmjebm-2022-112070


  • Research article
  • Open access
  • Published: 15 August 2024

The impact of adverse childhood experiences on multimorbidity: a systematic review and meta-analysis

  • Dhaneesha N. S. Senaratne 1 ,
  • Bhushan Thakkar 1 ,
  • Blair H. Smith 1 ,
  • Tim G. Hales 2 ,
  • Louise Marryat 3 &
  • Lesley A. Colvin 1  

BMC Medicine volume 22, Article number: 315 (2024)

Background

Adverse childhood experiences (ACEs) have been implicated in the aetiology of a range of health outcomes, including multimorbidity. In this systematic review and meta-analysis, we aimed to identify, synthesise, and quantify the current evidence linking ACEs and multimorbidity.

Methods

We searched seven databases from inception to 20 July 2023: APA PsycNET, CINAHL Plus, Cochrane CENTRAL, Embase, MEDLINE, Scopus, and Web of Science. We selected studies investigating adverse events occurring during childhood (< 18 years) and an assessment of multimorbidity in adulthood (≥ 18 years). Studies that only assessed adverse events in adulthood or health outcomes in children were excluded. Risk of bias was assessed using the ROBINS-E tool. Meta-analysis of prevalence and dose–response meta-analysis methods were used for quantitative data synthesis. This review was pre-registered with PROSPERO (CRD42023389528).

Results

From 15,586 records, 25 studies were eligible for inclusion (total participants = 372,162). The prevalence of exposure to ≥ 1 ACEs was 48.1% (95% CI 33.4 to 63.1%). The prevalence of multimorbidity was 34.5% (95% CI 23.4 to 47.5%). Eight studies provided sufficient data for dose–response meta-analysis (total participants = 197,981). There was a significant dose-dependent relationship between ACE exposure and multimorbidity (p < 0.001), with every additional ACE exposure contributing to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity. However, there was heterogeneity among the included studies (I² = 76.9%, Cochran Q = 102, p < 0.001).
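On the odds scale, a per-ACE increase of 12.9% compounds multiplicatively, which the following illustration makes explicit (using only the point estimate reported above, not its confidence interval):

```python
# Each additional ACE multiplies the odds of multimorbidity by about 1.129
# (the reported 12.9% per-ACE increase), so cumulative exposure compounds.
per_ace_or = 1.129

for n_aces in (1, 2, 4):
    cumulative = per_ace_or ** n_aces
    print(f"{n_aces} ACE(s): cumulative odds ratio ≈ {cumulative:.2f}")
# 1 ACE(s): cumulative odds ratio ≈ 1.13
# 2 ACE(s): cumulative odds ratio ≈ 1.27
# 4 ACE(s): cumulative odds ratio ≈ 1.62
```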

Conclusions

This is the first systematic review and meta-analysis to synthesise the literature on ACEs and multimorbidity, showing a dose-dependent relationship across a large number of participants. It consolidates and enhances an extensive body of literature that shows an association between ACEs and individual long-term health conditions, risky health behaviours, and other poor health outcomes.

Peer Review reports

In recent years, adverse childhood experiences (ACEs) have been identified as factors of interest in the aetiology of many conditions [ 1 ]. ACEs are potentially stressful events or environments that occur before the age of 18. They have typically been considered in terms of abuse (e.g. physical, emotional, sexual), neglect (e.g. physical, emotional), and household dysfunction (e.g. parental separation, household member incarceration, household member mental illness) but could also include other forms of stress, such as bullying, famine, and war. ACEs are common: estimates suggest that 47% of the UK population have experienced at least one form, with 12% experiencing four or more [ 2 ]. ACEs are associated with poor outcomes in a range of physical health, mental health, and social parameters in adulthood, with greater ACE burden being associated with worse outcomes [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 ].

Over a similar timescale, multimorbidity has emerged as a significant health challenge. It is commonly defined as the co-occurrence of two or more long-term conditions (LTCs), with a long-term condition defined as any physical or mental health condition lasting, or expected to last, longer than 1 year [ 9 ]. Multimorbidity is both common and age-dependent, with a global adult prevalence of 37% that rises to 51% in adults over 60 [ 10 , 11 ]. Individuals living with multimorbidity face additional challenges in managing their health, such as multiple appointments, polypharmacy, and the lack of continuity of care [ 12 , 13 , 14 ]. Meanwhile, many healthcare systems struggle to manage the additional cost and complexity of people with multimorbidity, as they have often evolved to address the single disease model [ 15 , 16 ]. As global populations continue to age, with an estimated 2.1 billion adults over 60 by 2050, the pressures facing already strained healthcare systems will continue to grow [ 17 ]. Identifying factors early in the aetiology of multimorbidity may help to mitigate the consequences of this developing healthcare crisis.

Many mechanisms have been suggested for how ACEs might influence later life health outcomes, including the risk of developing individual LTCs. Collectively, they contribute to the idea of ‘toxic stress’; cumulative stress during key developmental phases may affect development [ 18 ]. ACEs are associated with measures of accelerated cellular ageing, including changes in DNA methylation and telomere length [ 19 , 20 ]. ACEs may lead to alterations in stress-signalling pathways, including changes to the immune, endocrine, and cardiovascular systems [ 21 , 22 , 23 ]. ACEs are also associated with both structural and functional differences in the brain [ 24 , 25 , 26 , 27 ]. These diverse biological changes underpin psychological and behavioural changes, predisposing individuals to poorer self-esteem and risky health behaviours, which may in turn lead to increased risk of developing individual LTCs [ 1 , 2 , 28 , 29 , 30 , 31 , 32 ]. A growing body of evidence has therefore led to an increased focus on developing trauma-informed models of healthcare, in which the impact of negative life experiences is incorporated into the assessment and management of LTCs [ 33 ].

Given the contributory role of ACEs in the aetiology of individual LTCs, it is reasonable to suspect that ACEs may also be an important factor in the development of multimorbidity. Several studies have implicated ACEs in the aetiology of multimorbidity, across different cohorts and populations, but to date no meta-analyses have been performed to aggregate this evidence. In this review, we aim to summarise the state of the evidence linking ACEs and multimorbidity, to quantify the strength of any associations through meta-analysis, and to highlight the challenges of research in this area.

Search strategy and selection criteria

We conducted a systematic review and meta-analysis that was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO) on 25 January 2023 (ID: CRD42023389528) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

We developed a search strategy based on previously published literature reviews and refined it following input from subject experts, an academic librarian, and patient and public partners (Additional File 1: Table S1). We searched the following seven databases from inception to 20 July 2023: APA PsycNET, CINAHL Plus, Cochrane CENTRAL, Embase, MEDLINE, Scopus, and Web of Science. The search results were imported into Covidence (Veritas Health Innovation, Melbourne, Australia), which automatically identified and removed duplicate entries. Two reviewers (DS and BT) independently performed title and abstract screening and full text review. Discrepancies were resolved by a third reviewer (LC).

Reports were eligible for review if they included adults (≥ 18 years), adverse events occurring during childhood (< 18 years), and an assessment of multimorbidity or health status based on LTCs. Reports that only assessed adverse events in adulthood or health outcomes in children were excluded.

The following study designs were eligible for review: randomised controlled trials, cohort studies, case–control studies, cross-sectional studies, and review articles with meta-analysis. Editorials, case reports, and conference abstracts were excluded. Systematic reviews without a meta-analysis and narrative synthesis review articles were also excluded; however, their reference lists were screened for relevant citations.

Data analysis

Two reviewers (DS and BT) independently performed data extraction into Microsoft Excel (Microsoft Corporation, Redmond, USA) using a pre-agreed template. Discrepancies were resolved by consensus discussion with a third reviewer (LC). Data extracted from each report included study details (author, year, study design, sample cohort, sample size, sample country of origin), patient characteristics (age, sex), ACE information (definition, childhood cut-off age, ACE assessment tool, number of ACEs, list of ACEs, prevalence), multimorbidity information (definition, multimorbidity assessment tool, number of LTCs, list of LTCs, prevalence), and analysis parameters (effect size, model adjustments). For meta-analysis, we extracted ACE groups, number of ACE cases, number of multimorbidity cases, number of participants, odds ratios or regression beta coefficients, and 95% confidence intervals (95% CI). Where data were partially reported or missing, we contacted the study authors directly for further information.

Two reviewers (DS and BT) independently performed risk of bias assessments of each included study using the Risk Of Bias In Non-randomized Studies of Exposures (ROBINS-E) tool [ 34 ]. The ROBINS-E tool assesses the risk of bias for the study outcome relevant to the systematic review question, which may not be the primary study outcome. It assesses risk of bias across seven domains: confounding, measurement of the exposure, participant selection, post-exposure interventions, missing data, measurement of the outcome, and selection of the reported result. The overall risk of bias for each study was determined using the ROBINS-E algorithm. Discrepancies were resolved by consensus discussion.

All statistical analyses were performed in R version 4.2.2 using the RStudio integrated development environment (RStudio Team, Boston, USA). To avoid duplication of participant data where multiple studies analysed the same patient cohort, we selected for meta-analysis the study with the largest sample size and the most complete reporting of raw data. Meta-analysis of prevalence was performed with the meta package [ 35 ], using logit transformations within a generalised linear mixed model, and reporting the random-effects model [ 36 ]. Inter-study heterogeneity was assessed and reported using the I² statistic, the Cochran Q statistic, and the Cochran Q p-value. Dose–response meta-analysis was performed using the dosresmeta package [ 37 ] following the method outlined by Greenland and Longnecker (1992) [ 38 , 39 ]. Log-linear and non-linear (restricted cubic spline, with knots at 5%, 35%, 65%, and 95%) random-effects models were generated, and goodness of fit was evaluated using a Wald-type test (denoted by χ²) and the Akaike information criterion (AIC) [ 39 ].
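The prevalence pooling described above can be sketched in miniature. The review itself used R's meta package with logit transformations inside a generalised linear mixed model; the sketch below substitutes the simpler DerSimonian–Laird random-effects estimator on the logit scale, with hypothetical study counts, purely to illustrate the transform, pool, and back-transform steps.

```python
import math

def pool_prevalence_dl(events, totals):
    """Random-effects pooling of proportions on the logit scale using the
    DerSimonian-Laird estimator (a simplified stand-in for the GLMM used
    via R's meta package). Returns (pooled prevalence, I^2 fraction)."""
    # Logit-transform each study's proportion; the approximate within-study
    # variance of a logit proportion is 1/events + 1/(total - events).
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]
    v = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]
    w = [1 / vi for vi in v]
    # Fixed-effect estimate and Cochran Q
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    # Between-study variance (tau^2) by the DL moment estimator
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights and pooled estimate
    w_re = [1 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    # Back-transform the pooled logit to a prevalence
    return 1 / (1 + math.exp(-y_re)), i2

# Hypothetical study data: (exposed participants, total participants)
prev, i2 = pool_prevalence_dl([120, 300, 45], [400, 550, 200])
```

The GLMM approach used in the review avoids the normal approximation this inverse-variance method relies on, which matters for studies with prevalence near 0% or 100%.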

Patient and public involvement

The Consortium Against Pain Inequality (CAPE) Chronic Pain Advisory Group (CPAG) consists of individuals with lived experiences of ACEs, chronic pain, and multimorbidity. CPAG was involved in developing the research question. The group has experience in systematic review co-production (in progress).

Results

The search identified 15,586 records, of which 25 met inclusion criteria for the systematic review (Fig.  1 ) [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ]. The summary characteristics can be found in Additional File 1: Table S2. Most studies examined European ( n  = 11) or North American ( n  = 9) populations, with a few looking at Asian ( n  = 3) or South American ( n  = 1) populations and one study examining a mixed cohort (European and North American populations). The total participant count (excluding studies performed on the same cohort) was 372,162. Most studies had a female predominance (median 53.8%, interquartile range (IQR) 50.9 to 57.4%).

Fig. 1 Flow chart of selection of studies into the systematic review and meta-analysis. ACE, adverse childhood experience; MM, multimorbidity; DRMA, dose–response meta-analysis

All studies were observational in design, and so risk of bias assessments were performed using the ROBINS-E tool (Additional File 1: Table S3) [ 34 ]. There were some consistent risks observed across the studies, especially in domain 1 (risk of bias due to confounding) and domain 3 (risk of bias due to participant selection). In domain 1, most studies were ‘high risk’ ( n  = 24) as they controlled for variables that could have been affected by ACE exposure (e.g. smoking status) [ 40 , 41 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ]. In domain 3, some studies were ‘high risk’ ( n  = 7) as participant selection was based on participant characteristics that could have been influenced by ACE exposure (e.g. through recruitment at an outpatient clinic) [ 45 , 48 , 49 , 51 , 53 , 54 , 58 ]. The remaining studies were deemed as having ‘some concerns’ ( n  = 18) as participant selection occurred at a time after ACE exposure, introducing a risk of survivorship bias [ 40 , 41 , 42 , 43 , 44 , 46 , 47 , 50 , 52 , 55 , 56 , 57 , 59 , 60 , 61 , 62 , 63 , 64 ].

Key differences in risk of bias were seen in domain 2 (risk of bias due to exposure measurement) and domain 5 (risk of bias due to missing data). In domain 2, some studies were ‘high risk’ as they used a narrow or atypical measure of ACEs ( n  = 8) [ 40 , 42 , 44 , 46 , 55 , 56 , 60 , 64 ]; others were graded as having ‘some concerns’ as they used a broader but still incomplete measure of ACEs ( n  = 8) [ 43 , 45 , 48 , 49 , 50 , 52 , 54 , 62 ]; the remainder were ‘low risk’ as they used an established or comprehensive list of ACE questions [ 41 , 47 , 51 , 53 , 57 , 58 , 59 , 61 , 63 ]. In domain 5, some studies were ‘high risk’ as they failed to acknowledge or appropriately address missing data ( n  = 7) [ 40 , 42 , 43 , 45 , 51 , 53 , 60 ]; others were graded as having ‘some concerns’ as they had a significant amount of missing data (> 10% for exposure, outcome, or confounders) but mitigated for this with appropriate strategies ( n  = 6) [ 41 , 50 , 56 , 57 , 62 , 64 ]; the remainder were ‘low risk’ as they reported low levels of missing data ( n  = 12) [ 44 , 46 , 47 , 48 , 49 , 52 , 54 , 55 , 58 , 59 , 61 , 63 ].

Most studies assessed an exposure that was ‘adverse childhood experiences’ ( n  = 10) [ 41 , 42 , 50 , 51 , 53 , 57 , 58 , 61 , 63 , 64 ], ‘childhood maltreatment’ ( n  = 6) [ 44 , 45 , 46 , 48 , 49 , 59 ], or ‘childhood adversity’ ( n  = 3) [ 47 , 54 , 62 ]. The other exposures studied were ‘birth phase relative to World War Two’ [ 40 ], ‘childhood abuse’ [ 43 ], ‘childhood disadvantage’ [ 56 ], ‘childhood racial discrimination’ [ 55 ], ‘childhood trauma’ [ 52 ], and ‘quality of childhood’ (all n  = 1) [ 60 ]. More than half of studies ( n  = 13) did not provide a formal definition of their exposure of choice [ 42 , 43 , 44 , 45 , 49 , 52 , 53 , 54 , 57 , 58 , 60 , 61 , 64 ]. The upper age limit for childhood ranged from < 15 to < 18 years with the most common cut-off being < 18 years ( n  = 9). The median number of ACEs measured in each study was 7 (IQR 4–10). In total, 58 different ACEs were reported; 17 ACEs were reported by at least three studies, whilst 33 ACEs were reported by only one study. The most frequently reported ACEs were physical abuse ( n  = 19) and sexual abuse ( n  = 16) (Table  1 ). The exposure details for each study can be found in Additional File 1: Table S4.

Thirteen studies provided sufficient data to allow for a meta-analysis of the prevalence of exposure to ≥ 1 ACE; the pooled prevalence was 48.1% (95% CI 33.4 to 63.1%, I² = 99.9%, Cochran Q = 18,092, p < 0.001) (Fig. 2) [ 41 , 43 , 44 , 46 , 47 , 49 , 50 , 52 , 53 , 57 , 59 , 61 , 63 ]. Six studies provided sufficient data to allow for a meta-analysis of the prevalence of exposure to ≥ 4 ACEs; the pooled prevalence was 12.3% (95% CI 3.5 to 35.4%, I² = 99.9%, Cochran Q = 9071, p < 0.001) (Additional File 1: Fig. S1) [ 46 , 50 , 51 , 53 , 59 , 63 ].
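The I² values accompanying these pooled prevalences follow directly from the Cochran Q statistic and its degrees of freedom, I² = (Q − df)/Q, where df is one fewer than the number of studies. As a quick consistency check, the reported figures can be reproduced:

```python
def i_squared(q, n_studies):
    """I^2 heterogeneity statistic (%) from Cochran Q and the study count."""
    df = n_studies - 1
    return max(0.0, (q - df) / q) * 100

# 13 studies, Q = 18,092 (prevalence of >= 1 ACE)
print(round(i_squared(18092, 13), 1))  # → 99.9
# 6 studies, Q = 9,071 (prevalence of >= 4 ACEs)
print(round(i_squared(9071, 6), 1))    # → 99.9
```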

Fig. 2 Meta-analysis of prevalence of exposure to ≥ 1 adverse childhood experience. ACE, adverse childhood experience; CI, confidence interval

Thirteen studies explicitly assessed multimorbidity as an outcome, and all of these defined the threshold for multimorbidity as the presence of two or more LTCs [ 40 , 41 , 42 , 44 , 46 , 47 , 50 , 55 , 57 , 60 , 61 , 62 , 64 ]. The remaining studies assessed comorbidities, morbidity, or disease counts [ 43 , 45 , 48 , 49 , 51 , 52 , 53 , 54 , 56 , 58 , 59 , 63 ]. The median number of LTCs measured in each study was 14 (IQR 12–21). In total, 115 different LTCs were reported; 36 LTCs were reported by at least three studies, whilst 63 LTCs were reported by only one study. Two studies did not report the specific LTCs that they measured [ 51 , 53 ]. The most frequently reported LTCs were hypertension ( n  = 22) and diabetes ( n  = 19) (Table  2 ). Fourteen studies included at least one mental health LTC. The outcome details for each study can be found in Additional File 1: Table S5.

Fifteen studies provided sufficient data to allow for a meta-analysis of the prevalence of multimorbidity; the pooled prevalence was 34.5% (95% CI 23.4 to 47.5%, I² = 99.9%, Cochran Q = 24,072, p < 0.001) (Fig. 3) [ 40 , 41 , 44 , 46 , 47 , 49 , 50 , 51 , 52 , 55 , 57 , 58 , 59 , 60 , 63 ].

Fig. 3 Meta-analysis of prevalence of multimorbidity. CI, confidence interval; LTC, long-term condition; MM, multimorbidity

All studies reported significant positive associations between measures of ACE and multimorbidity, though they varied in their means of analysis and reporting of the relationship. Nine studies reported an association between the number of ACEs (variably considered as a continuous or categorical parameter) and multimorbidity [ 41 , 43 , 46 , 47 , 50 , 56 , 57 , 61 , 64 ]. Eight studies reported an association between the number of ACEs and comorbidity counts in specific patient populations [ 45 , 48 , 49 , 51 , 53 , 58 , 59 , 63 ]. Six studies reported an association between individual ACEs or ACE subgroups and multimorbidity [ 42 , 43 , 44 , 47 , 55 , 62 ]. Two studies incorporated a measure of frequency within their ACE measurement tool and reported an association between this ACE score and multimorbidity [ 52 , 54 ]. Two studies reported an association between proxy measures for ACEs and multimorbidity; one reported ‘birth phase relative to World War Two’, and the other reported a self-report on the overall quality of childhood [ 40 , 60 ].

Eight studies, involving a total of 197,981 participants, provided sufficient data (either in the primary text, or following author correspondence) for quantitative synthesis [ 41 , 46 , 47 , 49 , 50 , 51 , 57 , 58 ]. Log-linear (Fig. 4) and non-linear (Additional File 1: Fig. S2) random-effects models were compared for goodness of fit: the Wald-type test for linearity was non-significant (χ² = 3.7, p = 0.16) and the AIC was lower for the linear model (− 7.82 vs 15.86), indicating that the log-linear assumption was valid. There was a significant dose-dependent relationship between ACE exposure and multimorbidity (p < 0.001), with every additional ACE exposure contributing to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity (I² = 76.9%, Cochran Q = 102, p < 0.001).
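The 12.9% per-ACE increase corresponds to a per-ACE odds ratio of about 1.129, i.e. a slope of roughly 0.121 on the log-odds scale. Because the fitted model is log-linear, per-ACE odds ratios compound multiplicatively; the four-ACE figure below is our own derived illustration under that model, not a result reported in the review.

```python
import math

# Per-ACE increase in the odds of multimorbidity from the dose-response
# meta-analysis: 12.9% (95% CI 7.9 to 17.9%)
per_ace_or = 1.129            # odds ratio per additional ACE
beta = math.log(per_ace_or)   # slope on the log-odds scale

# Under the log-linear model, the odds ratio for exposure to k ACEs
# (versus none) is exp(beta * k), i.e. per-ACE odds ratios multiply.
or_4_aces = math.exp(beta * 4)

print(round(beta, 3))       # → 0.121
print(round(or_4_aces, 2))  # → 1.62
```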

Fig. 4 Dose–response meta-analysis of the relationship between adverse childhood experiences and multimorbidity. Solid black line represents the estimated relationship; dotted black lines represent the 95% confidence intervals for this estimate. ACE, adverse childhood experience

Discussion

This systematic review and meta-analysis synthesised the literature on ACEs and multimorbidity and showed a dose-dependent relationship across a large number of participants. Each additional ACE exposure contributed to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity. This adds to previous meta-analyses that have shown an association between ACEs and individual LTCs, health behaviours, and other health outcomes [ 1 , 28 , 31 , 65 , 66 ]. However, we also identified substantial inter-study heterogeneity that is likely to have arisen due to variation in the definitions, methodology, and analysis of the included studies, and so our results should be interpreted with these limitations in mind.

Although 25 years have passed since the landmark Adverse Childhood Experiences Study by Felitti et al. [ 3 ], there is still no consistent approach to determining what constitutes an ACE. This is reflected in this review, where fewer than half of the 58 different ACEs ( n  = 25, 43.1%) were reported by more than one study and no study reported more than 15 ACEs. Even ACE types that are commonly included are not always assessed in the same way [ 67 ], and furthermore, the same question can be interpreted differently in different contexts (e.g. physical punishment for bad behaviour was socially acceptable 50 years ago but is now considered physical abuse in the UK). Although a few validated questionnaires exist, they often focus on a narrow range of ACEs; for example, the childhood trauma questionnaire demonstrates good reliability and validity but focuses on interpersonal ACEs, missing out on household factors (e.g. parental separation), and community factors (e.g. bullying) [ 68 ]. Many studies were performed on pre-existing research cohorts or historic healthcare data, where the study authors had limited or no influence on the data collected. As a result, very few individual studies reported on the full breadth of potential ACEs.

ACE research is often based on ACE counts, where the types of ACEs experienced are summed into a single score that is taken as a proxy measure of the burden of childhood stress. The original Adverse Childhood Experiences Study by Felitti et al. took this approach [ 3 ], as did 17 of the studies included in this review and our own quantitative synthesis. At the population level, there are benefits to this: ACE counts provide quantifiable and comparable metrics, they are easy to collect and analyse, and in many datasets, they are the only means by which an assessment of childhood stress can be derived. However, there are clear limitations to this method when considering experiences at the individual level, not least the inherent assumptions that different ACEs in the same person are of equal weight or that the same ACE in different people carries the same burden of childhood stress. This limitation was strongly reinforced by our patient and public involvement group (CPAG). Two studies in this review incorporated frequency within their ACE scoring system [ 52 , 54 ], which adds another dimension to the assessment, but this is insufficient to understand and quantify the ‘impact’ of an ACE within an epidemiological framework.

The definitions of multimorbidity were consistent across the relevant studies but the contributory long-term conditions varied. Fewer than half of the 115 different LTCs ( n  = 52, 45.2%) were reported by more than one study. Part of the challenge is the classification of healthcare conditions. For example, myocardial infarction is commonly caused by coronary heart disease, and both are a form of heart disease. All three were reported as LTCs in the included studies, but which level of pathology should be reported? Mental health LTCs were under-represented within the condition list, with just over half of the included studies assessing at least one ( n  = 14, 56.0%). Given the strong links between ACEs and mental health, and the impact of mental health on quality of life, this is an area for improvement in future research [ 31 , 32 ]. A recent Delphi consensus study by Ho et al. may help to address these issues: following input from professionals and members of the public they identified 24 LTCs to ‘always include’ and 35 LTCs to ‘usually include’ in multimorbidity research, including nine mental health conditions [ 9 ].

As outlined in the introduction, there is a strong evidence base supporting the link between ACEs and long-term health outcomes, including specific LTCs. It is not unreasonable to extrapolate this association to ACEs and multimorbidity, though to our knowledge, the pathophysiological processes that link the two have not been precisely identified. However, similar lines of research are being independently followed in both fields and these areas of overlap may suggest possible mechanisms for a relationship. For example, both ACEs and multimorbidity have been associated with markers of accelerated epigenetic ageing [ 69 , 70 ], mitochondrial dysfunction [ 71 , 72 ], and inflammation [ 22 , 73 ]. More work is required to better understand how these concepts might be linked.

This review used data from a large participant base, with information from 372,162 people contributing to the systematic review and information from 197,981 people contributing to the dose–response meta-analysis. Data from the included studies originated from a range of sources, including healthcare settings and dedicated research cohorts. We believe this is of a sufficient scale and variety to demonstrate the nature and magnitude of the association between ACEs and multimorbidity in these populations.

However, there are some limitations. Firstly, although data came from 11 different countries, only two of those were from outside Europe and North America, and all were from either high- or middle-income countries. Data on ACEs from low-income countries have indicated a higher prevalence of any ACE exposure (consistently > 70%) [ 74 , 75 ], though how well this predicts health outcomes in these populations is unknown.

Secondly, studies in this review utilised retrospective participant-reported ACE data and so are at risk of recall and reporting bias. Studies utilising prospective assessments are rare and much of the wider ACE literature is open to a similar risk of bias. To date, two studies have compared prospective and retrospective ACE measurements, demonstrating inconsistent results [ 76 , 77 ]. However, these studies were performed in New Zealand and South Africa, two countries not represented by studies in our review, and had relatively small sample sizes (1037 and 1595 respectively). It is unclear whether these are generalisable to other population groups.

Thirdly, previous research has indicated a close relationship between ACEs and childhood socio-economic status (SES) [ 78 ] and between SES and multimorbidity [ 10 , 79 ]. However, the limitations of the included studies meant we were unable to separate the effect of ACEs from the effect of childhood SES on multimorbidity in this review. Whilst two studies included childhood SES as covariates in their models, others used measures from adulthood (such as adulthood SES, income level, and education level) that are potentially influenced by ACEs and therefore increase the risk of bias due to confounding (Additional File 1: Table S3). Furthermore, as for ACEs and multimorbidity, there is no consistently applied definition of SES and different measures of SES may produce different apparent effects [ 80 ]. The complex relationships between ACEs, childhood SES, and multimorbidity remain a challenge for research in this field.

Fourthly, there was a high degree of heterogeneity within the included studies, especially relating to the definition and measurement of ACEs and multimorbidity. Whilst this suggests that our results should be interpreted with caution, it is reassuring that our meta-analytic prevalence estimates for exposure to any ACE (48.1%) and multimorbidity (34.5%) are in line with previous estimates in similar populations [ 2 , 11 ]. Furthermore, we believe that the quantitative synthesis of these relatively heterogeneous studies provides important benefit by demonstrating a strong dose–response relationship across a range of contexts.

Our results strengthen the evidence supporting the lasting influence of childhood conditions on adult health and wellbeing. How this understanding is best incorporated into routine practice is still not clear. Currently, the lack of consistency in assessing ACEs limits our ability to understand their impact at both the individual and population level and poses challenges for those looking to incorporate a formalised assessment. Whilst most risk factors for disease (e.g. blood pressure) are usually only relevant within healthcare settings, ACEs are relevant to many other sectors (e.g. social care, education, policing) [ 81 , 82 , 83 , 84 ], and so consistency of assessment across society is both more important and more challenging to achieve.

Some have suggested that the evidence for the impact of ACEs is strong enough to warrant screening, which would allow early identification of potential harms to children and interventions to prevent them. This approach has been implemented in California, USA [ 85 , 86 , 87 ]. However, this is controversial, and others argue that screening is premature with the current evidence base [ 88 , 89 , 90 ]. Firstly, not everyone who is exposed to ACEs develops poor health outcomes, and it is not clear how to identify those who are at highest risk. Many people appear to be vulnerable, experiencing more adverse health outcomes following ACE exposure than those who are not exposed, whilst others appear to be more resilient, remaining in good health in later life despite multiple ACE exposures [ 91 ]. It may be that supportive environments can mitigate the long-term effects of ACE exposure and promote resilience [ 92 , 93 ]. Secondly, there are no accepted interventions for managing the impact of an identified ACE. As identified above, different ACEs may require input from different sectors (e.g. healthcare, social care, education, police), and so collating this evidence may be challenging. At present, ACEs screening does not meet the Wilson–Jungner criteria for a screening programme [ 94 ].

Existing healthcare systems are poorly designed to deal with the complexities of addressing ACEs and multimorbidity. Possible ways to improve this include allocating more time per patient, prioritising continuity of care to foster long-term relationships, and greater integration between different healthcare providers (most notably between primary and secondary care teams, or between physical and mental health teams). However, such changes often demand additional resources (e.g. staff, infrastructure, processes), which are challenging to source when existing healthcare systems are already stretched [ 95 , 96 ]. Nevertheless, increasing the spotlight on ACEs and multimorbidity may help to focus attention and ultimately bring improvements to patient care and experience.

Conclusions

ACEs are associated with a range of poor long-term health outcomes, including harmful health behaviours and individual long-term conditions. Multimorbidity is becoming more common as global populations age, and it increases the complexity and cost of healthcare provision. This is the first systematic review and meta-analysis to synthesise the literature on ACEs and multimorbidity, showing a statistically significant dose-dependent relationship across a large number of participants, albeit with a high degree of inter-study heterogeneity. This consolidates and enhances an increasing body of data supporting the role of ACEs in determining long-term health outcomes. Whilst these observational studies do not confirm causality, the weight and consistency of evidence is such that we can be confident in the link. The challenge for healthcare practitioners, managers, policymakers, and governments is incorporating this body of evidence into routine practice to improve the health and wellbeing of our societies.

Availability of data and materials

No additional data were generated for this review. The data used were found in the referenced papers or provided through correspondence with the study authors.

Abbreviations

ACE: Adverse childhood experience

AIC: Akaike information criterion

CAPE: Consortium Against Pain Inequality

CI: Confidence interval

CPAG: Chronic Pain Advisory Group

IQR: Interquartile range

LTC: Long-term condition

PROSPERO: International Prospective Register of Systematic Reviews

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

ROBINS-E: Risk Of Bias In Non-randomized Studies of Exposures

SES: Socio-economic status

Hughes K, Bellis MA, Hardcastle KA, Sethi D, Butchart A, Mikton C, et al. The effect of multiple adverse childhood experiences on health: a systematic review and meta-analysis. Lancet Public Health. 2017;2:e356–66.

Bellis MA, Lowey H, Leckenby N, Hughes K, Harrison D. Adverse childhood experiences: retrospective study to determine their impact on adult health behaviours and health outcomes in a UK population. J Public Health Oxf Engl. 2014;36:81–91.

Felitti VJ, Anda RF, Nordenberg D, Williamson DF, Spitz AM, Edwards V, et al. Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults. The Adverse Childhood Experiences (ACE) Study. Am J Prev Med. 1998;14:245–58.

Maniglio R. The impact of child sexual abuse on health: a systematic review of reviews. Clin Psychol Rev. 2009;29:647–57.

Yu J, Patel RA, Haynie DL, Vidal-Ribas P, Govender T, Sundaram R, et al. Adverse childhood experiences and premature mortality through mid-adulthood: a five-decade prospective study. Lancet Reg Health - Am. 2022;15:100349.

Wang Y-X, Sun Y, Missmer SA, Rexrode KM, Roberts AL, Chavarro JE, et al. Association of early life physical and sexual abuse with premature mortality among female nurses: prospective cohort study. BMJ. 2023;381: e073613.

Rogers NT, Power C, Pereira SMP. Child maltreatment, early life socioeconomic disadvantage and all-cause mortality in mid-adulthood: findings from a prospective British birth cohort. BMJ Open. 2021;11: e050914.

Hardcastle K, Bellis MA, Sharp CA, Hughes K. Exploring the health and service utilisation of general practice patients with a history of adverse childhood experiences (ACEs): an observational study using electronic health records. BMJ Open. 2020;10: e036239.

Ho ISS, Azcoaga-Lorenzo A, Akbari A, Davies J, Khunti K, Kadam UT, et al. Measuring multimorbidity in research: Delphi consensus study. BMJ Med. 2022;1:e000247.

Barnett K, Mercer SW, Norbury M, Watt G, Wyke S, Guthrie B. Epidemiology of multimorbidity and implications for health care, research, and medical education: a cross-sectional study. Lancet Lond Engl. 2012;380:37–43.

Chowdhury SR, Das DC, Sunna TC, Beyene J, Hossain A. Global and regional prevalence of multimorbidity in the adult population in community settings: a systematic review and meta-analysis. eClinicalMedicine. 2023;57:101860.

Noël PH, Chris Frueh B, Larme AC, Pugh JA. Collaborative care needs and preferences of primary care patients with multimorbidity. Health Expect. 2005;8:54–63.

Chau E, Rosella LC, Mondor L, Wodchis WP. Association between continuity of care and subsequent diagnosis of multimorbidity in Ontario, Canada from 2001–2015: a retrospective cohort study. PLoS ONE. 2021;16: e0245193.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Nicholson K, Liu W, Fitzpatrick D, Hardacre KA, Roberts S, Salerno J, et al. Prevalence of multimorbidity and polypharmacy among adults and older adults: a systematic review. Lancet Healthy Longev. 2024;5:e287–96.

Albreht T, Dyakova M, Schellevis FG, Van den Broucke S. Many diseases, one model of care? J Comorbidity. 2016;6:12–20.

Soley-Bori M, Ashworth M, Bisquera A, Dodhia H, Lynch R, Wang Y, et al. Impact of multimorbidity on healthcare costs and utilisation: a systematic review of the UK literature. Br J Gen Pract. 2020;71:e39-46.

World Health Organization (WHO). Ageing and health. 2022. https://www.who.int/news-room/fact-sheets/detail/ageing-and-health . Accessed 23 Apr 2024.

Franke HA. Toxic stress: effects, prevention and treatment. Children. 2014;1:390–402.

Parade SH, Huffhines L, Daniels TE, Stroud LR, Nugent NR, Tyrka AR. A systematic review of childhood maltreatment and DNA methylation: candidate gene and epigenome-wide approaches. Transl Psychiatry. 2021;11:1–33.

Ridout KK, Levandowski M, Ridout SJ, Gantz L, Goonan K, Palermo D, et al. Early life adversity and telomere length: a meta-analysis. Mol Psychiatry. 2018;23:858–71.

Elwenspoek MMC, Kuehn A, Muller CP, Turner JD. The effects of early life adversity on the immune system. Psychoneuroendocrinology. 2017;82:140–54.

Danese A, Baldwin JR. Hidden wounds? Inflammatory links between childhood trauma and psychopathology. Annu Rev Psychol. 2017;68:517–44.

Brindle RC, Pearson A, Ginty AT. Adverse childhood experiences (ACEs) relate to blunted cardiovascular and cortisol reactivity to acute laboratory stress: a systematic review and meta-analysis. Neurosci Biobehav Rev. 2022;134: 104530.

Teicher MH, Samson JA, Anderson CM, Ohashi K. The effects of childhood maltreatment on brain structure, function and connectivity. Nat Rev Neurosci. 2016;17:652–66.

McLaughlin KA, Weissman D, Bitrán D. Childhood adversity and neural development: a systematic review. Annu Rev Dev Psychol. 2019;1:277–312.

Koyama Y, Fujiwara T, Murayama H, Machida M, Inoue S, Shobugawa Y. Association between adverse childhood experiences and brain volumes among Japanese community-dwelling older people: findings from the NEIGE study. Child Abuse Negl. 2022;124: 105456.

Antoniou G, Lambourg E, Steele JD, Colvin LA. The effect of adverse childhood experiences on chronic pain and major depression in adulthood: a systematic review and meta-analysis. Br J Anaesth. 2023;130:729–46.

Huang H, Yan P, Shan Z, Chen S, Li M, Luo C, et al. Adverse childhood experiences and risk of type 2 diabetes: a systematic review and meta-analysis. Metabolism. 2015;64:1408–18.

Lopes S, Hallak JEC, de Machado Sousa JP, de Osório F L. Adverse childhood experiences and chronic lung diseases in adulthood: a systematic review and meta-analysis. Eur J Psychotraumatology. 2020;11:1720336.

Hu Z, Kaminga AC, Yang J, Liu J, Xu H. Adverse childhood experiences and risk of cancer during adulthood: a systematic review and meta-analysis. Child Abuse Negl. 2021;117: 105088.

Tan M, Mao P. Type and dose-response effect of adverse childhood experiences in predicting depression: a systematic review and meta-analysis. Child Abuse Negl. 2023;139: 106091.

Zhang L, Zhao N, Zhu M, Tang M, Liu W, Hong W. Adverse childhood experiences in patients with schizophrenia: related factors and clinical implications. Front Psychiatry. 2023;14:1247063.

Emsley E, Smith J, Martin D, Lewis NV. Trauma-informed care in the UK: where are we? A qualitative study of health policies and professional perspectives. BMC Health Serv Res. 2022;22:1164.

ROBINS-E Development Group (Higgins J, Morgan R, Rooney A, Taylor K, Thayer K, Silva R, Lemeris C, Akl A, Arroyave W, Bateson T, Berkman N, Demers P, Forastiere F, Glenn B, Hróbjartsson A, Kirrane E, LaKind J, Luben T, Lunn R, McAleenan A, McGuinness L, Meerpohl J, Mehta S, Nachman R, Obbagy J, O’Connor A, Radke E, Savović J, Schubauer-Berigan M, Schwingl P, Schunemann H, Shea B, Steenland K, Stewart T, Straif K, Tilling K, Verbeek V, Vermeulen R, Viswanathan M, Zahm S, Sterne J). Risk Of Bias In Non-randomized Studies - of Exposure (ROBINS-E). Launch version, 20 June 2023. https://www.riskofbias.info/welcome/robins-e-tool . Accessed 20 Jul 2023.

Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22:153–60.

Schwarzer G, Chemaitelly H, Abu-Raddad LJ, Rücker G. Seriously misleading results using inverse of Freeman-Tukey double arcsine transformation in meta-analysis of single proportions. Res Synth Methods. 2019;10:476–83.

Crippa A, Orsini N. Multivariate dose-response meta-analysis: the dosresmeta R Package. J Stat Softw. 2016;72:1–15.

Greenland S, Longnecker MP. Methods for trend estimation from summarized dose-response data, with applications to meta-analysis. Am J Epidemiol. 1992;135:1301–9.

Shim SR, Lee J. Dose-response meta-analysis: application and practice using the R software. Epidemiol Health. 2019;41: e2019006.

Arshadipour A, Thorand B, Linkohr B, Rospleszcz S, Ladwig K-H, Heier M, et al. Impact of prenatal and childhood adversity effects around World War II on multimorbidity: results from the KORA-Age study. BMC Geriatr. 2022;22:115.

Atkinson L, Joshi D, Raina P, Griffith LE, MacMillan H, Gonzalez A. Social engagement and allostatic load mediate between adverse childhood experiences and multimorbidity in mid to late adulthood: the Canadian Longitudinal Study on Aging. Psychol Med. 2021;53(4):1–11.

Chandrasekar R, Lacey RE, Chaturvedi N, Hughes AD, Patalay P, Khanolkar AR. Adverse childhood experiences and the development of multimorbidity across adulthood—a national 70-year cohort study. Age Ageing. 2023;52:afad062.

Cromer KR, Sachs-Ericsson N. The association between childhood abuse, PTSD, and the occurrence of adult health problems: moderation via current life stress. J Trauma Stress. 2006;19:967–71.

England-Mason G, Casey R, Ferro M, MacMillan HL, Tonmyr L, Gonzalez A. Child maltreatment and adult multimorbidity: results from the Canadian Community Health Survey. Can J Public Health. 2018;109:561–72.

Godin O, Leboyer M, Laroche DG, Aubin V, Belzeaux R, Courtet P, et al. Childhood maltreatment contributes to the medical morbidity of individuals with bipolar disorders. Psychol Med. 2023;53(15):1–9.

Hanlon P, McCallum M, Jani BD, McQueenie R, Lee D, Mair FS. Association between childhood maltreatment and the prevalence and complexity of multimorbidity: a cross-sectional analysis of 157,357 UK Biobank participants. J Comorbidity. 2020;10:2235042X1094434.

Henchoz Y, Seematter-Bagnoud L, Nanchen D, Büla C, von Gunten A, Démonet J-F, et al. Childhood adversity: a gateway to multimorbidity in older age? Arch Gerontol Geriatr. 2019;80:31–7.

Hosang GM, Fisher HL, Uher R, Cohen-Woods S, Maughan B, McGuffin P, et al. Childhood maltreatment and the medical morbidity in bipolar disorder: a case–control study. Int J Bipolar Disord. 2017;5:30.

Hosang GM, Fisher HL, Hodgson K, Maughan B, Farmer AE. Childhood maltreatment and adult medical morbidity in mood disorders: comparison of unipolar depression with bipolar disorder. Br J Psychiatry. 2018;213:645–53.

Lin L, Wang HH, Lu C, Chen W, Guo VY. Adverse childhood experiences and subsequent chronic diseases among middle-aged or older adults in China and associations with demographic and socioeconomic characteristics. JAMA Netw Open. 2021;4: e2130143.

Mendizabal A, Nathan CL, Khankhanian P, Anto M, Clyburn C, Acaba-Berrocal A, et al. Adverse childhood experiences in patients with neurologic disease. Neurol Clin Pract. 2022. https://doi.org/10.1212/CPJ.0000000000001134 .

Noteboom A, Have MT, De Graaf R, Beekman ATF, Penninx BWJH, Lamers F. The long-lasting impact of childhood trauma on adult chronic physical disorders. J Psychiatr Res. 2021;136:87–94.

Patterson ML, Moniruzzaman A, Somers JM. Setting the stage for chronic health problems: cumulative childhood adversity among homeless adults with mental illness in Vancouver. British Columbia BMC Public Health. 2014;14:350.

Post RM, Altshuler LL, Leverich GS, Frye MA, Suppes T, McElroy SL, et al. Role of childhood adversity in the development of medical co-morbidities associated with bipolar disorder. J Affect Disord. 2013;147:288–94.

Reyes-Ortiz CA. Racial discrimination and multimorbidity among older adults in Colombia: a national data analysis. Prev Chronic Dis. 2023;20:220360.

Sheikh MA. Coloring of the past via respondent’s current psychological state, mediation, and the association between childhood disadvantage and morbidity in adulthood. J Psychiatr Res. 2018;103:173–81.

Sinnott C, Mc Hugh S, Fitzgerald AP, Bradley CP, Kearney PM. Psychosocial complexity in multimorbidity: the legacy of adverse childhood experiences. Fam Pract. 2015;32:269–75.

Sosnowski DW, Feder KA, Astemborski J, Genberg BL, Letourneau EJ, Musci RJ, et al. Adverse childhood experiences and comorbidity in a cohort of people who have injected drugs. BMC Public Health. 2022;22:986.

Stapp EK, Williams SC, Kalb LG, Holingue CB, Van Eck K, Ballard ED, et al. Mood disorders, childhood maltreatment, and medical morbidity in US adults: an observational study. J Psychosom Res. 2020;137: 110207.

Tomasdottir MO, Sigurdsson JA, Petursson H, Kirkengen AL, Krokstad S, McEwen B, et al. Self reported childhood difficulties, adult multimorbidity and allostatic load. A cross-sectional analysis of the Norwegian HUNT study. PloS One. 2015;10:e0130591.

Vásquez E, Quiñones A, Ramirez S, Udo T. Association between adverse childhood events and multimorbidity in a racial and ethnic diverse sample of middle-aged and older adults. Innov Aging. 2019;3:igz016.

Yang L, Hu Y, Silventoinen K, Martikainen P. Childhood adversity and trajectories of multimorbidity in mid-late life: China health and longitudinal retirement study. J Epidemiol Community Health. 2021;75:593–600.

Zak-Hunter L, Carr CP, Tate A, Brustad A, Mulhern K, Berge JM. Associations between adverse childhood experiences and stressful life events and health outcomes in pregnant and breastfeeding women from diverse racial and ethnic groups. J Womens Health. 2023;32:702–14.

Zheng X, Cui Y, Xue Y, Shi L, Guo Y, Dong F, et al. Adverse childhood experiences in depression and the mediating role of multimorbidity in mid-late life: A nationwide longitudinal study. J Affect Disord. 2022;301:217–24.

Liu M, Luong L, Lachaud J, Edalati H, Reeves A, Hwang SW. Adverse childhood experiences and related outcomes among adults experiencing homelessness: a systematic review and meta-analysis. Lancet Public Health. 2021;6:e836–47.

Petruccelli K, Davis J, Berman T. Adverse childhood experiences and associated health outcomes: a systematic review and meta-analysis. Child Abuse Negl. 2019;97: 104127.

Bethell CD, Carle A, Hudziak J, Gombojav N, Powers K, Wade R, et al. Methods to assess adverse childhood experiences of children and families: toward approaches to promote child well-being in policy and practice. Acad Pediatr. 2017;17(7 Suppl):S51-69.

Bernstein DP, Stein JA, Newcomb MD, Walker E, Pogge D, Ahluvalia T, et al. Development and validation of a brief screening version of the Childhood Trauma Questionnaire. Child Abuse Negl. 2003;27:169–90.

Kim K, Yaffe K, Rehkopf DH, Zheng Y, Nannini DR, Perak AM, et al. Association of adverse childhood experiences with accelerated epigenetic aging in midlife. JAMA Network Open. 2023;6:e2317987.

Jain P, Binder A, Chen B, Parada H, Gallo LC, Alcaraz J, et al. The association of epigenetic age acceleration and multimorbidity at age 90 in the Women’s Health Initiative. J Gerontol A Biol Sci Med Sci. 2023;78:2274–81.

Zang JCS, May C, Hellwig B, Moser D, Hengstler JG, Cole S, et al. Proteome analysis of monocytes implicates altered mitochondrial biology in adults reporting adverse childhood experiences. Transl Psychiatry. 2023;13:31.

Mau T, Blackwell TL, Cawthon PM, Molina AJA, Coen PM, Distefano G, et al. Muscle mitochondrial bioenergetic capacities are associated with multimorbidity burden in older adults: the Study of Muscle, Mobility and Aging (SOMMA). J Gerontol A Biol Sci Med Sci. 2024;79(7):glae101.

Friedman E, Shorey C. Inflammation in multimorbidity and disability: an integrative review. Health Psychol Off J Div Health Psychol Am Psychol Assoc. 2019;38:791–801.

Google Scholar  

Satinsky EN, Kakuhikire B, Baguma C, Rasmussen JD, Ashaba S, Cooper-Vince CE, et al. Adverse childhood experiences, adult depression, and suicidal ideation in rural Uganda: a cross-sectional, population-based study. PLoS Med. 2021;18: e1003642.

Amene EW, Annor FB, Gilbert LK, McOwen J, Augusto A, Manuel P, et al. Prevalence of adverse childhood experiences in sub-Saharan Africa: a multicounty analysis of the Violence Against Children and Youth Surveys (VACS). Child Abuse Negl. 2023;150:106353.

Reuben A, Moffitt TE, Caspi A, Belsky DW, Harrington H, Schroeder F, et al. Lest we forget: comparing retrospective and prospective assessments of adverse childhood experiences in the prediction of adult health. J Child Psychol Psychiatry. 2016;57:1103–12.

Naicker SN, Norris SA, Mabaso M, Richter LM. An analysis of retrospective and repeat prospective reports of adverse childhood experiences from the South African Birth to Twenty Plus cohort. PLoS ONE. 2017;12: e0181522.

Walsh D, McCartney G, Smith M, Armour G. Relationship between childhood socioeconomic position and adverse childhood experiences (ACEs): a systematic review. J Epidemiol Community Health. 2019;73:1087–93.

Ingram E, Ledden S, Beardon S, Gomes M, Hogarth S, McDonald H, et al. Household and area-level social determinants of multimorbidity: a systematic review. J Epidemiol Community Health. 2021;75:232–41.

Darin-Mattsson A, Fors S, Kåreholt I. Different indicators of socioeconomic status and their relative importance as determinants of health in old age. Int J Equity Health. 2017;16:173.

Bateson K, McManus M, Johnson G. Understanding the use, and misuse, of Adverse Childhood Experiences (ACEs) in trauma-informed policing. Police J. 2020;93:131–45.

Webb NJ, Miller TL, Stockbridge EL. Potential effects of adverse childhood experiences on school engagement in youth: a dominance analysis. BMC Public Health. 2022;22:2096.

Stewart-Tufescu A, Struck S, Taillieu T, Salmon S, Fortier J, Brownell M, et al. Adverse childhood experiences and education outcomes among adolescents: linking survey and administrative data. Int J Environ Res Public Health. 2022;19:11564.

Frederick J, Spratt T, Devaney J. Adverse childhood experiences and social work: relationship-based practice responses. Br J Soc Work. 2021;51:3018–34.

University of California ACEs Aware Family Resilience Network (UCAAN). acesaware.org. ACEs Aware. https://www.acesaware.org/about/ . Accessed 6 Oct 2023.

Watson CR, Young-Wolff KC, Negriff S, Dumke K, DiGangi M. Implementation and evaluation of adverse childhood experiences screening in pediatrics and obstetrics settings. Perm J. 2024;28:180–7.

Gordon JB, Felitti VJ. The importance of screening for adverse childhood experiences (ACE) in all medical encounters. AJPM Focus. 2023;2: 100131.

Finkelhor D. Screening for adverse childhood experiences (ACEs): Cautions and suggestions. Child Abuse Negl. 2018;85:174–9.

Cibralic S, Alam M, Mendoza Diaz A, Woolfenden S, Katz I, Tzioumi D, et al. Utility of screening for adverse childhood experiences (ACE) in children and young people attending clinical and healthcare settings: a systematic review. BMJ Open. 2022;12: e060395.

Gentry SV, Paterson BA. Does screening or routine enquiry for adverse childhood experiences (ACEs) meet criteria for a screening programme? A rapid evidence summary. J Public Health Oxf Engl. 2022;44:810–22.

Article   CAS   Google Scholar  

Morgan CA, Chang Y-H, Choy O, Tsai M-C, Hsieh S. Adverse childhood experiences are associated with reduced psychological resilience in youth: a systematic review and meta-analysis. Child Basel Switz. 2021;9:27.

Narayan AJ, Lieberman AF, Masten AS. Intergenerational transmission and prevention of adverse childhood experiences (ACEs). Clin Psychol Rev. 2021;85: 101997.

VanBronkhorst SB, Abraham E, Dambreville R, Ramos-Olazagasti MA, Wall M, Saunders DC, et al. Sociocultural risk and resilience in the context of adverse childhood experiences. JAMA Psychiat. 2024;81:406–13.

Wilson JM, Jungner G. Principles and practice of screening for disease. World Health Organisation; 1968.

Huo Y, Couzner L, Windsor T, Laver K, Dissanayaka NN, Cations M. Barriers and enablers for the implementation of trauma-informed care in healthcare settings: a systematic review. Implement Sci Commun. 2023;4:49.

Foo KM, Sundram M, Legido-Quigley H. Facilitators and barriers of managing patients with multiple chronic conditions in the community: a qualitative study. BMC Public Health. 2020;20:273.

Download references

Acknowledgements

The authors thank the members of the CAPE CPAG patient and public involvement group for providing insights gained from relevant lived experiences.

The authors are members of the Advanced Pain Discovery Platform (APDP) supported by UK Research & Innovation (UKRI), Versus Arthritis, and Eli Lilly. DS is a fellow on the Multimorbidity Doctoral Training Programme for Health Professionals, which is supported by the Wellcome Trust [223499/Z/21/Z]. BT, BS, and LC are supported by an APDP grant as part of the Partnership for Assessment and Investigation of Neuropathic Pain: Studies Tracking Outcomes, Risks and Mechanisms (PAINSTORM) consortium [MR/W002388/1]. TH and LC are supported by an APDP grant as part of the Consortium Against Pain Inequality [MR/W002566/1]. The funding bodies had no role in study design, data collection/analysis/interpretation, report writing, or the decision to submit the manuscript for publication.

Author information

Authors and affiliations

Chronic Pain Research Group, Division of Population Health & Genomics, School of Medicine, University of Dundee, Ninewells Hospital, Dundee, DD1 9SY, UK

Dhaneesha N. S. Senaratne, Bhushan Thakkar, Blair H. Smith & Lesley A. Colvin

Institute of Academic Anaesthesia, Division of Systems Medicine, School of Medicine, University of Dundee, Dundee, UK

Tim G. Hales

School of Health Sciences, University of Dundee, Dundee, UK

Louise Marryat


Contributions

DS and LC contributed to review conception and design. DC, BT, BS, TH, LM, and LC contributed to search strategy design. DS and BT contributed to study selection and data extraction, with input from LC. DS and BT accessed and verified the underlying data. DS conducted the meta-analyses, with input from BT, BS, TH, LM, and LC. DS drafted the manuscript, with input from DC, BT, BS, TH, LM, and LC. DC, BT, BS, TH, LM, and LC read and approved the final manuscript.

Corresponding author

Correspondence to Dhaneesha N. S. Senaratne .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information


Additional File 1: Tables S1-S5 and Figures S1-S2. Table S1: Search strategy, Table S2: Characteristics of studies included in the systematic review, Table S3: Risk of bias assessment (ROBINS-E), Table S4: Exposure details (adverse childhood experiences), Table S5: Outcome details (multimorbidity), Figure S1: Meta-analysis of prevalence of exposure to ≥4 adverse childhood experiences, Figure S2: Dose-response meta-analysis of the relationship between adverse childhood experiences and multimorbidity (using a non-linear/restricted cubic spline model).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Senaratne, D.N.S., Thakkar, B., Smith, B.H. et al. The impact of adverse childhood experiences on multimorbidity: a systematic review and meta-analysis. BMC Med 22, 315 (2024). https://doi.org/10.1186/s12916-024-03505-w


Received : 01 December 2023

Accepted : 14 June 2024

Published : 15 August 2024

DOI : https://doi.org/10.1186/s12916-024-03505-w


Keywords

  • Adverse childhood experiences
  • Childhood adversity
  • Chronic disease
  • Long-term conditions
  • Multimorbidity

BMC Medicine

ISSN: 1741-7015


  • Open access
  • Published: 20 August 2024

Orbital floor fracture (blow out) and its repercussions on eye movement: a systematic review

  • Ilan Hudson Gomes de Santana 1 ,
  • Mayara Rebeca Martins Viana 2 ,
  • Julliana Cariry Palhano-Dias 3 ,
  • Osny Ferreira-Júnior 4 ,
  • Eduardo Sant’Ana 4 ,
  • Élio Hitoshi Shinohara 5 &
  • Eduardo Dias Ribeiro 6  

European Journal of Medical Research volume 29, Article number: 427 (2024)


The aim of this systematic review was to investigate the relationship between fractures of the floor of the orbit (blow-out fractures) and their repercussions on eye movement, based on the available scientific literature. A systematic review of the literature was carried out using a rigorous methodological approach, following a protocol previously registered on the PROSPERO platform. The risk of bias was assessed using version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2). The searches were carried out in the PubMed (National Library of Medicine), Scopus, ScienceDirect, SciELO, Web of Science, Cochrane Library and Embase databases, initially resulting in 553 studies. After removing duplicates, 515 articles remained; 7 were considered eligible, of which 3 were selected for detailed analysis. However, the results of the included studies did not provide conclusive evidence of a direct relationship between orbital floor fractures and eye movement.

Introduction

The orbit is a complex and vital structure, made up of seven distinct bones that define its boundaries [1]. Within this pyramid-shaped bone cavity, a variety of essential elements are present, including the eyeball, fat, extraocular muscles, nerves, blood vessels, lacrimal sac and lacrimal gland [2]. Its lateral and medial walls are outlined by a combination of bones, most notably the greater wing of the sphenoid bone and the zygomatic bone in the lateral wall, and the lacrimal bone, ethmoid bone, maxilla and lesser wing of the sphenoid in the medial wall [3, 4]. The orbital floor, formed mainly by the maxilla and the zygomatic bone, plays a fundamental role in maintaining the normal structure and function of the orbit. Its delicate curvature, which extends smoothly from the inferior orbital rim to the inferior orbital fissure, is important in preventing complications such as enophthalmos in cases of orbital fractures [5, 6].

Orbital fractures are injuries to the bones surrounding the orbit and represent the third most common type of facial fracture in adults and children [7, 8]. They are generally classified based on their anatomical location, including fractures of the orbital floor, orbital roof, lateral wall and medial wall [9, 10]. Blunt trauma to the ocular region is the main mechanism of injury, often resulting in fracture of the thin bones of the orbit, especially the floor and medial wall [11, 12]. These injuries occur due to the transmission of kinetic energy through the bones around the eye or due to increased pressure when the eyeball presses on the orbit. They are also known as blow-out fractures, as the fractured fragments tend to be displaced outward, away from the orbital cavity [6, 13].

Thus, the etiology of orbital floor fractures, as well as other types of maxillofacial trauma, includes traffic accidents, assaults, falls, sports injuries, firearm injuries and other incidents [14]. Industrial accidents have also been identified as a source of trauma [15]. In developing countries, such as India, traffic accidents are one of the main causes of trauma, while in studies carried out elsewhere, assaults are often cited as the main cause. Worldwide, men are significantly more affected by maxillofacial trauma than women, accounting for approximately 85% of cases [16, 17].

The diagnosis of these fractures is based on physical examination and imaging tests. On physical examination, signs and symptoms such as periorbital ecchymosis, limited eye movement, diplopia and enophthalmos may be present [18]. Computed tomography is the most efficient test for diagnosing these fractures. Treatment should be carried out by reconstructing the fractured orbital walls with autogenous, homogenous, heterogenous or alloplastic biomaterials [18, 19, 20].

Therefore, the aim of this systematic review was to determine, based on the available scientific literature, the relationship between the fracture of the floor of the orbit (known as blow out) and its consequences for eye movement.

Materials and methods

In order to obtain more reliable results, we opted for a methodology that could answer the guiding question of this research. To this end, a systematic review of the literature was carried out to assess the relationship between orbital floor fracture (blow-out) and its repercussions on eye movement. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were used in writing the study [21]. The process followed criteria predefined by a systematic review protocol registered with PROSPERO [22], guiding the selection and analysis of articles to provide a comprehensive overview of current knowledge on the subject. The methodological analysis included a clear protocol for selecting studies, extracting data and assessing methodological quality, maintaining transparency and rigor to guarantee the validity of the results. Strategies were adopted to evaluate and mitigate errors, including standardized training, initial testing, consensus meetings between evaluators and continuous monitoring. A double-blind review was carried out at all stages. When there was a minor conflict regarding the exclusion of an article, a third independent reviewer was asked to resolve the disagreement, ensuring clear and consistent criteria. Once this conflict was resolved, the third reviewer excluded the paper, as did the first, as the study did not answer the research question.
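As an illustrative sketch only (not the authors' code), the dual-reviewer screening with third-reviewer adjudication described above can be expressed as a simple decision rule; the `screening_decision` helper and its parameters are hypothetical:

```python
def screening_decision(reviewer_a: bool, reviewer_b: bool,
                       third_reviewer=None) -> bool:
    """Resolve a dual-reviewer screening vote.

    reviewer_a / reviewer_b: True = include, False = exclude.
    third_reviewer: a callable returning True/False, consulted only
    when the first two reviewers disagree (hypothetical helper).
    """
    if reviewer_a == reviewer_b:
        # Consensus: accept the shared vote.
        return reviewer_a
    if third_reviewer is None:
        raise ValueError("a conflict requires a third reviewer")
    # Independent tie-break by the third reviewer.
    return third_reviewer()


# Example mirroring the conflict described in the text: the first
# reviewer votes to exclude, the second to include, and the third
# reviewer resolves the disagreement by excluding the study.
decision = screening_decision(False, True, third_reviewer=lambda: False)
```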

Development and registration of the systematic review protocol

A meticulous protocol, covering all the essential elements of the methodology of a systematic review, was drawn up and submitted for approval on the PROSPERO (Prospective Register of Systematic Reviews) [22] platform prior to the start of this study. This protocol covered several aspects in detail, including defining the start and end date of the study, formulating the research question, the databases searched, structuring the acronym PICO (patient, intervention, comparison, outcome), designing a precise search strategy, stipulating inclusion and exclusion criteria, determining outcome measures, screening methods, data extraction and analysis, as well as the approach to data synthesis. The prior registration of this protocol in PROSPERO [22] was carried out in order to guarantee the transparency, integrity and methodological quality of this systematic review.

This systematic review was conducted in accordance with a systematic review protocol previously registered on the PROSPERO platform, identified by the number CRD42024497638.

PICO question

The use of the PICO components (Patient, Intervention, Comparison and Outcome) played a crucial role in defining the search strategy for evidence and the subsequent analysis of this systematic review. This specific approach was key to locating relevant studies and played a vital role in ensuring objectivity during the assessment of this work. Patient (P): Individuals diagnosed with an orbital floor fracture; Intervention (I): Exposure to orbital floor fracture; Comparison (C): Individuals without an orbital floor fracture; Outcome (O): The repercussions on eye movement, including changes in motility, diplopia and other related changes.

Guiding research question

The research question was formulated as follows: What is the relationship between orbital floor fracture (blow out) and repercussions on eye movement?

Search strategy and selection of articles

The electronic bibliographic searches were carried out through systematic searches in the PubMed (National Library of Medicine), Scopus, ScienceDirect, SciELO, Web of Science, Cochrane Library and Embase databases. Search terms and Boolean operators (AND and OR) were combined to better perform the searches in the databases, and the following search strategy was formulated. English strategy: (Relation AND Orbital Fractures AND Ocular Motility Disorders) OR (Oculomotor Nerve Injuries AND Ocular Motility AND Orbital Fractures AND Facial Trauma) OR (Ocular Trauma OR Orbital Fractures AND Ocular Motility AND Muscle Damage). Portuguese strategy: (Relation AND Orbital Fractures AND Ocular Motility Disorders) OR (Oculomotor Nerve Injuries AND Ocular Motility AND Orbital Fractures AND Facial Trauma) OR (Ocular Trauma AND Orbital Fractures AND Ocular Motility AND Muscle Damage).
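A hypothetical sketch of how such a Boolean strategy might be assembled programmatically (the term groups below paraphrase the strategy above, and the `build_query` helper is illustrative, not part of the authors' methods):

```python
# Each inner list is AND-combined; the groups are then OR-combined,
# mirroring the parenthesised structure of the search strategy.
TERM_GROUPS = [
    ["Orbital Fractures", "Ocular Motility Disorders"],
    ["Oculomotor Nerve Injuries", "Ocular Motility",
     "Orbital Fractures", "Facial Trauma"],
    ["Ocular Trauma", "Orbital Fractures",
     "Ocular Motility", "Muscle Damage"],
]


def build_query(groups):
    """OR-join the AND-joined term groups into one search string."""
    return " OR ".join("(" + " AND ".join(g) + ")" for g in groups)


query = build_query(TERM_GROUPS)
```

The same helper could be reused for the Portuguese strategy by swapping in the translated term groups.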

Two independent researchers screened the articles retrieved, considering both the title and the abstract (when available), to check whether they met the inclusion criteria; when the information in the abstract was insufficient to determine the inclusion of the study, the full text was read. After the individual assessments, the researchers reached a consensus on the studies to be included for full-text analysis.

Criteria for selection, inclusion and exclusion of studies

We included studies in English, Spanish, Japanese, Chinese, German and Portuguese that were randomized clinical trials, systematic reviews, cohort studies, case–control studies, cross-sectional studies or detailed case reports, with samples made up of patients of all ages and both sexes who had suffered a fracture of the floor of the orbit. Studies that did not meet these criteria were excluded, for example those involving patients with medical conditions that could significantly interfere with the association between orbital floor fracture and eye movement.

Selection of studies

The database search initially identified 553 studies. After duplicates were removed using the Rayyan software [23], 515 articles remained, as shown in Fig. 1. Of these, 7 were considered eligible according to the inclusion criteria and were selected for more detailed analysis. After thorough evaluation against the inclusion and exclusion criteria, 3 studies were identified as particularly relevant and included in this systematic review.
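The screening arithmetic above can be restated as a quick consistency check (the counts are from the text; the variable names are ours):

```python
# PRISMA-style flow counts reported in the text (variable names are ours).
identified = 553        # records returned by all database searches
after_dedup = 515       # remaining after duplicate removal in Rayyan
eligible = 7            # met the inclusion criteria at title/abstract stage
included = 3            # retained after full-text assessment

duplicates_removed = identified - after_dedup    # duplicates found by Rayyan
excluded_at_screening = after_dedup - eligible   # records excluded on screening
excluded_at_full_text = eligible - included      # studies excluded on full text
```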

Figure 1. Bibliographic search flowchart, adapted from PRISMA 2020. Source: authors (2024), adapted from PRISMA [22].

Risk of bias

In this study, the risk of bias assessment was carried out using version 2 of the Cochrane tool for risk of bias in randomized trials (RoB 2). Examining each study individually, all three included studies raised concerns regarding risk of bias, as illustrated in Fig. 2:

Figure 2. Individual analysis of bias for each included study. Source: authors (2024), adapted from the RoB 2 tool.

Given the nature of the interventions used to treat orbital floor (blow-out) fractures, the risk of bias cannot be eliminated. Orbital floor fractures have complex implications for eye movement, and the variability in surgical techniques, surgeon experience and individual patient characteristics contributes to the possibility of bias in the results of these studies. It is therefore not possible to draw a fully reliable conclusion from this systematic review: the presence of bias can distort the findings and compromise the validity of the conclusions drawn from this analysis. This highlights the importance of adequately addressing and mitigating bias in future studies to ensure an accurate and reliable understanding of the topic in question.

This study critically analyzed the existing literature on the relationship between orbital floor fractures and their consequences for eye movement. The results obtained from the included studies did not provide conclusive evidence of a direct relationship between orbital floor fractures and eye movement. Although some studies in the current literature have suggested a possible association, the lack of consensus and the heterogeneity of the results highlight the need for further research to clarify this complex relationship. This review underlines the importance of multidisciplinary approaches and high-quality studies for a more comprehensive understanding of the repercussions of orbital floor fractures on ocular function.

Some studies suggest that blow-out fractures are associated with limited ocular motility and can therefore result in ocular pathologies [24]. When a fracture occurs in the floor of the orbit, common repercussions of the trauma include longitudinal rupture of the rectus muscle, vertical diplopia, muscle contusion, scarring within and around the orbital fibrous sheath network, nerve contusion, incarceration within fracture lines, and fibrosis or incarceration involving the muscular fascial network [25]. These complications not only affect visual function but can also have a significant impact on the patient's quality of life. However, the methodologies used in such research must be examined extensively so that no inconsistencies appear in the reported results [26, 27, 28, 29].

In the context of trapdoor fractures of the orbital floor, those that do not involve muscle incarceration generally have a more favorable prognosis in terms of eye movements. However, when muscle incarceration occurs in trapdoor fractures, paralysis of the inferior oblique muscle can contribute to disturbances in ocular motility, in addition to the disturbances caused by connective tissue septa [ 30 ]. Most experts believe that the restriction of motility after blow-out fractures is caused by soft tissue edema and hemorrhage, or by damage to the muscles that control eye movements, such as the inferior rectus, inferior oblique and medial rectus, or even a combination of both, due to the bony fixation of the muscles and fascia [ 31 ]. However, the results of this review revealed a lack of robust evidence to support this claim. The limited methodology of the included studies raises concerns about the reliability of the results. Thus, late motility problems after orbital fractures with or without repair remain poorly understood and challenging to treat, as they resemble other eye movement restrictions, regardless of the underlying cause [ 32 , 33 , 34 , 35 ].

Imaging tests such as computed tomography (CT) can be used to analyze the relationship between the fracture and ocular motility before surgery in cases of blow-out orbital fractures. Although CT is a relevant way of assessing this type of trauma, its ability to predict the recovery of post-operative motility is limited. The interactions between bone fragments and soft tissues may not be fully represented by CT images, which can lead to inaccurate inferences about post-surgical ocular motility outcomes. Furthermore, the classification of injuries as blow-out fractures based on CT can be subjective and may not fully reflect the extent of tissue damage or the severity of subsequent fibrosis. Therefore, the relationship between the degree of soft tissue incarceration or displacement and motility outcomes may be more complex than this approach suggests [36, 37, 38, 39].

In short, there is no concrete evidence that blow-out fractures alone can affect the motor function of the ocular nerve, since other factors, such as the trauma itself and the surgical intervention, can also result in neurogenic diplopia. Syndromes can also influence this process. As a result, diplopia can be significantly affected by a number of factors [40, 41, 42, 43].

After a systematic analysis of the literature, the results gathered for this review are insufficient to establish a direct relationship between fracture of the floor of the orbit and repercussions on eye movement.

Data availability

No datasets were generated or analysed during the current study.

Tsyhykalo OV, Kuzniak NB, Dmytrenko RR, Perebyjnis PP, Oliinyk IY, Fedoniuk LY. Features of morphogenesis of the bones of the human orbit. Wiad Lek. 2023;76(1):189–97. https://doi.org/10.36740/WLek202301126 .


Damasceno RWF, Barbosa JAP, Cortez LRC, Belfort R. Orbital lymphatic vessels: immunohistochemical detection in the lacrimal gland, optic nerve, fat tissue, and extrinsic oculomotor muscles. Arq Bras Oftalmol. 2021;84(3):209–13. https://doi.org/10.5935/0004-2749.20210035 .

Turvey TA, Golden BA. Orbital anatomy for the surgeon. Oral Maxillofac Surg Clin North Am. 2012;24(4):525–36.


Villalonga JF, Sáenz A, Revuelta Barbero JM, Calandri I, Campero Á. Surgical anatomy of the orbit. A systematic and clear study of a complex structure. Neurocirugia. 2019;30(6):259–67. https://doi.org/10.1016/j.neucir.2019.04.003 . ( English, Spanish ).


Susarla S, Hopper RA, Mercan E. Intact periorbita can prevent post-traumatic enophthalmos following a large orbital blow-out fracture. Craniomaxillofac Trauma Reconstr. 2020;13(1):49–52. https://doi.org/10.1177/1943387520903545 .

Døving M, Lindal FP, Mjøen E, Galteland P. Orbital fractures. Tidsskr Nor Laegeforen. 2022. https://doi.org/10.4045/tidsskr.21.0586 . ( English, Norwegian ).

Oleck NC, Dobitsch AA, Liu FC, Halsey JN, Le TT, Hoppe IC, Lee ES, Granick MS. Traumatic falls in the pediatric population: facial fracture patterns observed in a leading cause of childhood injury. Ann Plast Surg. 2019;82(4S):S195–8. https://doi.org/10.1097/SAP.0000000000001861 .


Shivakotee S, Menon S, Sham ME, Kumar V, Archana S. Midface fracture pattern in a tertiary care hospital: a prospective study. Natl J Maxillofac Surg. 2022;13(2):238–42. https://doi.org/10.4103/njms.njms_378_21 .

Kono S, Yokota H, Naito M, Vaidya A, Kakizaki H, Kamei M, Takahashi Y. Pressure onto the orbital walls and orbital morphology in orbital floor or medial wall fracture: a 3-dimensional printer study. J Craniofac Surg. 2023;34(6):e608–12. https://doi.org/10.1097/SCS.0000000000009565 .

Kelishadi SS, Zeiderman MR, Chopra K, Kelamis JA, Mundinger GS, Rodriguez ED. Facial fracture patterns associated with traumatic optic neuropathy. Craniomaxillofac Trauma Reconstr. 2019;12(1):39–44. https://doi.org/10.1055/s-0038-1641172 .

Chung SY, Langer PD. Pediatric orbital blowout fractures. Curr Opin Ophthalmol. 2017;28(5):470–6. https://doi.org/10.1097/ICU.0000000000000407 .

Ramponi DR, Astorino T, Bessetti-Barrett CR. Orbital floor fractures. Adv Emerg Nurs J. 2017;39(4):240–7. https://doi.org/10.1097/TME.0000000000000163 .

Kansara A, Doshi H, Shah P, Bathla M, Agrawal N, Gajjar R, Shukla R, Chauhan V. A retrospective study on profile of patients with faciomaxilary fractures in a tertiary care center. Indian J Otolaryngol Head Neck Surg. 2023;75(3):1435–40. https://doi.org/10.1007/s12070-023-03574-y .

Franco VP, Gonçalves GM, Fração OC, Sungaila HYF, Cocco LF, Dobashi ET. Evaluation of the epidemiology of exposed fractures before and during the COVID-19 pandemic. Acta Ortop Bras. 2023;31(4): e268179. https://doi.org/10.1590/1413-785220233104e268179 .

Jain SM, Gehlot N, Kv A, Prasad P, Mehta P, Paul TR, Dupare A, Cvns CS, Rahman S. Ophthalmic complications in maxillofacial trauma: a prospective study. Cureus. 2022;14(8): e27608. https://doi.org/10.7759/cureus.27608 .

Septa D, Newaskar VP, Agrawal D, Tibra S. Etiology, incidence and patterns of mid-face fractures and associated ocular injuries. J Maxillofac Oral Surg. 2014;13(2):115–9. https://doi.org/10.1007/s12663-012-0452-9 .

Zamboni RA, Wagner JCB, Volkweis MR, Gerhardt EL, Buchmann EM, Bavaresco CS. Epidemiological study of facial fractures at the oral and maxillofacial surgery service, Santa Casa de Misericordia Hospital Complex, Porto Alegre-RS-Brazil. Rev Col Bras Cir. 2017;44(5):491–7. https://doi.org/10.1590/0100-69912017005011 .

Joganathan V, Gupta D, Beigi B. Monocular diplopia and nondisplaced inferior rectus muscle on computed tomography in a pediatric pure orbital-floor fracture. J Craniofac Surg. 2018;29(7):1832–3. https://doi.org/10.1097/SCS.0000000000004783 .

Scolozzi P. Reflections on a patient-centered approach to treatment of blow-out fractures: why the wisdom of the past must guide our decision-making. J Plast Reconstr Aesthet Surg. 2022;75(7):2268–76. https://doi.org/10.1016/j.bjps.2022.04.034.

Ellis E 3rd. Orbital trauma. Oral Maxillofac Surg Clin North Am. 2012;24(4):629–48. https://doi.org/10.1016/j.coms.2012.07.006 .

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160. https://doi.org/10.1136/bmj.n160.

PROSPERO. International prospective register of systematic reviews. Available at: https://www.crd.york.ac.uk/prospero . Accessed 20 Jun 2024.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5:210. https://doi.org/10.1186/s13643-016-0384-4.

Schneider M, Besmens IS, Luo Y, Giovanoli P, Lindenblatt N. Surgical management of isolated orbital floor and zygomaticomaxillary complex fractures with focus on surgical approaches and complications. J Plast Surg Hand Surg. 2020;54(4):200–6. https://doi.org/10.1080/2000656X.2020.1746664 .

Gowda AU, et al. Resolution of vertical gaze following a delayed presentation of orbital floor fracture with inferior rectus entrapment: the contributions. Craniomaxillofac Trauma Reconstr. 2020;13(4):253–9.

Alsaleh F, et al. Clinical correlations of extraocular motility limitation pattern in orbital fracture cases: a retrospective cohort study in a level 1 trauma centre. Orbit. 2023;42(5):487–95.

Iliff N, et al. Mechanisms of extraocular muscle injury in orbital fractures. Plastic Reconstr Surg. 1999;103(3):787–99.


Kashima T, Akiyama H, Kishi S. Longitudinal tear of the inferior rectus muscle in orbital floor fracture. Orbit. 2012;31(3):171–3.

Morax S, Pascal D. Surgical-treatment of oculomotor disturbance resulting from floor fractures. J Francais D Ophtalmol. 1984;7(10):633–47.


Guerra RC, Pulino BFB, Mendes BC, Pereira RDS, Pinheiro FL, Hochuli-Vieira E. Orbital trapdoor fracture in child: more predictable outcomes and less consequences. J Craniofac Surg. 2020;31(5):e469–71. https://doi.org/10.1097/SCS.0000000000006438.

Düzgün S, Kayahan Sirkeci B. Comparison of post-operative outcomes of graft materials used in reconstruction of blow-out fractures. Ulus Travma Acil Cerrahi Derg. 2020;26(4):538–44. https://doi.org/10.14744/tjtes.2020.80552 .

Kakizaki H, et al. Prognosis of orbital floor trapdoor fractures with or without muscle incarceration. Eur J Plastic Surg. 2007;30:53–6.

Kakizaki H, et al. Incarceration of the inferior oblique muscle branch of the oculomotor nerve in two cases of orbital floor trapdoor fracture. Jpn J Ophthalmol. 2005;49:246–52.

Helveston EM. The relationship of extraocular muscle problems to orbital floor fractures: early and late management. Trans Sect Ophthalmol Am Acad Ophthalmol Otolaryngol. 1977;83(4 Pt 1):660–2.


Reny A, Stricker M. Oculomotoric disturbances after orbital fractures (author’s transl). Klin Monatsbl Augenheilkd. 1973;162(6):750–60.


Harris GJ, et al. Correlation of preoperative computed tomography and postoperative ocular motility in orbital blowout fractures. Ophthal Plastic Reconstr Surg. 2000;16(3):179–87.

Harris GJ, et al. Orbital blow-out fractures: correlation of preoperative computed tomography and postoperative ocular motility. Transact Am Ophthalmol Soc. 1998;96:329.

Hong S, Kim J, Baek S. Blowout fracture assessment based on computed tomography and endoscopy: the effectiveness of endoscopy for fracture repair. J Craniofac Surg. 2022;33(4):1008–12. https://doi.org/10.1097/SCS.0000000000008170 .

Felding UNA. Blowout fractures—clinic, imaging and applied anatomy of the orbit. Dan Med J. 2018;65(3):B5459.


Ramphul A, Hoffman G. Does preoperative diplopia determine the incidence of postoperative diplopia after repair of orbital floor fracture? An institutional review. J Oral Maxillofac Surg. 2017;75(3):565–75.

Hong S, Choi KE, Kim J, Lee H, Lee H, Baek S. Analysis of patients with blowout fracture caused by baseball trauma. J Craniofac Surg. 2022;33(4):1190–2. https://doi.org/10.1097/SCS.0000000000008492 .

Silva JD, et al. Tratamento de fratura blowout com auxílio de vídeo-cirurgia. Rev Brasil Oftalmol. 2019;78:188–91.


Funding

There was no funding for this research.

Author information

Authors and affiliations

Health Sciences Center, Federal University of Paraíba (UFPB), João Pessoa, Paraíba, Brazil

Ilan Hudson Gomes de Santana

Centro Universitário de João Pessoa-UNIPÊ, João Pessoa, Paraíba, Brazil

Mayara Rebeca Martins Viana

Paraíba State Employees Health Care Institute - IASS, João Pessoa, Paraíba, Brazil

Julliana Cariry Palhano-Dias

Bauru School of Dentistry, University of São Paulo (FOB-USP), Bauru, São Paulo, Brazil

Osny Ferreira-Júnior & Eduardo Sant’Ana

Oral and Maxillofacial Surgeon, Department of Oral and Maxillofacial Surgery, Hospital Regional of Osasco “Dr. Vivaldo Martins Simões” SUS/SP, Osasco, São Paulo, Brazil

Élio Hitoshi Shinohara

Department of Clinical and Social Dentistry (DCOS), Health Sciences Center, Federal University of Paraíba (UFPB), João Pessoa, Paraíba, Brazil

Eduardo Dias Ribeiro


Contributions

Conception and planning of the study: Élio Hitoshi Shinohara; Data collection and analysis: Ilan Santana, Mayara Viana, Julliana Palhano-Dias; Interpretation of results: all the authors contributed to the interpretation of the results obtained from the data analysis, collaborating in the discussion of the findings and the drawing up of well-founded conclusions. Writing the manuscript: Ilan Hudson Gomes de Santana was responsible for the initial draft, while all the co-authors contributed to the writing of the materials and methods, results, discussion and conclusions, ensuring the clarity and cohesion of the text. Critical revision of the content: all the authors carried out critical revisions of the manuscript, incorporating feedback and suggestions and making the necessary adjustments to improve the quality and accuracy of the text. Approval of the final version: all the authors reviewed and approved the final version of the manuscript submitted for publication, ensuring its compliance with the ethical and scientific standards required by the journal.

Corresponding author

Correspondence to Ilan Hudson Gomes de Santana .

Ethics declarations

Ethics approval and consent to participate

As this is a systematic review, it was not necessary to obtain approval from the research ethics committee to carry out this study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

de Santana, I.H.G., Viana, M.R.M., Palhano-Dias, J.C. et al. Orbital floor fracture (blow out) and its repercussions on eye movement: a systematic review. Eur J Med Res 29, 427 (2024). https://doi.org/10.1186/s40001-024-02023-y


Received: 15 May 2024

Accepted: 09 August 2024

Published: 20 August 2024

DOI: https://doi.org/10.1186/s40001-024-02023-y


  • Orbital fractures
  • Oculomotor nerve trauma
  • Orbit and ocular motility disorders

European Journal of Medical Research

ISSN: 2047-783X


Computer Science > Artificial Intelligence

Title: Maintainability Challenges in ML: A Systematic Literature Review

Abstract: Background: As Machine Learning (ML) advances rapidly in many fields, it is being adopted by academics and businesses alike. However, ML has a number of different challenges in terms of maintenance not found in traditional software projects. Identifying what causes these maintainability challenges can help mitigate them early and continue delivering value in the long run without degrading ML performance. Aim: This study aims to identify and synthesise the maintainability challenges in different stages of the ML workflow and understand how these stages are interdependent and impact each other's maintainability. Method: Using a systematic literature review, we screened more than 13000 papers, then selected and qualitatively analysed 56 of them. Results: (i) a catalogue of maintainability challenges in different stages of Data Engineering, Model Engineering workflows and the current challenges when building ML systems are discussed; (ii) a map of 13 maintainability challenges to different interdependent stages of ML that impact the overall workflow; (iii) Provided insights to developers of ML tools and researchers. Conclusions: In this study, practitioners and organisations will learn about maintainability challenges and their impact at different stages of ML workflow. This will enable them to avoid pitfalls and help to build a maintainable ML system. The implications and challenges will also serve as a basis for future research to strengthen our understanding of the ML system's maintainability.
Subjects: Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
Journal reference: 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)


A systematic literature review on channel estimation in MIMO-OFDM system: performance analysis and future direction

  • Manasa, B. M. R.
  • Venugopal P.

Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) is a familiar modern wireless broadband technology due to its resistance to multipath fading, high data transmission rate, and spectral efficiency. This technology delivers dependable communication as well as a large range of coverage. The precise recovery of Channel State Information (CSI) and synchronization between the receiver and transmitter are two major challenges for MIMO-OFDM systems. Several estimation procedures, such as blind, pilot-aided, and semi-blind channel estimation, are used to recover channel state information, yet these approaches have flaws that cause them to perform poorly. Hence, this paper gives a basic introduction to the Channel Estimation (CE) process in the MIMO-OFDM system. The main goal of this survey is to analyze and categorize the channel estimation algorithms and simulation tools used in different contributions. Further, the performance analysis with different performance metrics from diverse contributions is pointed out. Thus, this review article presents a detailed overview of the various channel estimation schemes that have been exploited in the OFDM channel to enhance the estimation of the CSI in MIMO-OFDM systems. This work presents and discusses the relevant comparative results and computational complexity for all of these CE systems, and lists open study directions for further exploration.
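As a concrete illustration of the pilot-aided estimation mentioned above, the simplest scheme is the least-squares (LS) estimate, which divides each received pilot by the known transmitted pilot. The sketch below is ours (not from the survey), uses toy numbers, and omits noise:

```python
# Pilot-aided least-squares (LS) channel estimation per subcarrier:
# H_LS[k] = Y[k] / X[k]. Pilot values and channel taps are toy numbers.
X = [1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j]                     # known BPSK pilots
H_true = [0.8 + 0.2j, 0.7 - 0.1j, 0.9 + 0.0j, 0.6 + 0.3j]  # toy channel taps
Y = [h * x for h, x in zip(H_true, X)]                     # received pilots (noise-free)

# LS estimate: divide each received pilot by the known transmitted pilot.
H_ls = [y / x for y, x in zip(Y, X)]
```

With additive noise the LS estimate becomes noisy, which is what motivates refinements such as MMSE estimation and interpolation across subcarriers.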

  • channel estimation;
  • multiple input multiple output;
  • orthogonal frequency division multiplexing;
  • performance analysis;
  • simulation tools;
  • systematic literature review


Robust Portfolio Mean-Variance Optimization for Capital Allocation in Stock Investment Using the Genetic Algorithm: A Systematic Literature Review


1. Introduction
2. Materials and Methods
  2.1. Selection Method
    2.1.1. Identification Stage
    2.1.2. Screening Stage
    2.1.3. Eligibility Stage
    2.1.4. Inclusion Phase
  2.2. Bibliometric Analysis
3.1. Bibliometric Results
  3.1.1. The Most Globally Cited Documents in Dataset 1
  3.1.2. The Representation Network of Dataset 1
  3.1.3. Mapping the Themes in Dataset 1
  3.1.4. The Theme Evolution of Dataset 1
3.2. Results from SLR
  3.2.1. RQ1: Study Objectives
  3.2.2. RQ2: Study Methodologies Used to Obtain Maximum Portfolio Return
  3.2.3. RQ3: Study Methodologies for Portfolios under Uncertainty

1. Generate an initial population of multiple chromosomes.
2. Assess the fitness of each chromosome in the population.
3. Select “parents” from the population.
4. Form the next generation by combining parents through crossover and mutation.
5. Evaluate the fitness of the new generation.
6. Replace part or all of the current population with the new generation.
7. Repeat steps 3 to 6 until a satisfactory solution is achieved.
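The steps above can be sketched as a minimal genetic algorithm applied to the Markowitz mean-variance objective; the asset returns, covariance matrix, and all GA hyper-parameters below are toy assumptions of ours, not values from any reviewed study:

```python
import random

# Toy problem data (illustrative assumptions, not from the review).
MU = [0.10, 0.12, 0.07]                  # expected asset returns
COV = [[0.020, 0.004, 0.002],
       [0.004, 0.030, 0.003],
       [0.002, 0.003, 0.010]]            # covariance of asset returns
LAMB = 3.0                               # risk-aversion coefficient
N = len(MU)

def normalize(w):
    """Rescale weights so they sum to 1 (budget constraint)."""
    s = sum(w)
    return [x / s for x in w]

def fitness(w):
    """Mean-variance objective: expected return minus risk penalty."""
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(N) for j in range(N))
    return ret - LAMB * var

def crossover(a, b):
    """One-point crossover of two parent weight vectors (step 4)."""
    point = random.randrange(1, N)
    return normalize(a[:point] + b[point:])

def mutate(w, rate=0.2):
    """Randomly perturb weights, keeping them positive (step 4)."""
    w = [x + random.uniform(-0.05, 0.05) if random.random() < rate else x
         for x in w]
    return normalize([max(x, 1e-6) for x in w])

def run_ga(pop_size=30, generations=60):
    # Step 1: random initial population of long-only portfolios.
    pop = [normalize([random.random() for _ in range(N)]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # step 2/5: assess fitness
        parents = pop[: pop_size // 2]            # step 3: select parents
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                  # step 6: replacement
    return max(pop, key=fitness)                  # step 7 done: best portfolio

random.seed(0)
best = run_ga()
```

The reviewed studies typically extend a loop of this shape with practical constraints (cardinality limits, transaction lots, short-selling rules), which is where GA-style heuristics earn their keep over exact solvers.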

3.2.4. RQ4: Types of Stocks
3.2.5. RQ5: Role of GAs
4. Discussion
  4.1. Limitations in Handling Uncertainty
  4.2. Simple Assumptions on Robust Portfolio Parameters
  4.3. Limited Empirical Validation
5. Conclusions
Author Contributions
Data Availability Statement
Acknowledgments
Conflicts of Interest

  • Kim, W.C.; Fabozzi, F.J.; Cheridito, P.; Fox, C. Controlling Portfolio. Econ. Lett. 2014 , 122 , 1554–1558. [ Google Scholar ]
  • Beasley, J.E. OR-library: Distributing Test Problems by Electronic Mail. J. Oper. Res. Soc. 1990 , 41 , 1069–1072. [ Google Scholar ] [ CrossRef ]
  • Jegadeesh, N.; Titman, S. Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency. J. Financ. 1993 , 48 , 65–91. [ Google Scholar ] [ CrossRef ]
  • Bondt, W.F.M.; Thaler, R. Does the Stock Market Overreact? J. Financ. 1985 , 40 , 793–805. [ Google Scholar ] [ CrossRef ]
  • Faccini, D.; Maggioni, F.; Potra, F.A. Robust and Distributionally Robust Optimization Models for Linear Support Vector Machine. Comput. Oper. Res. 2022 , 147 , 105930. [ Google Scholar ] [ CrossRef ]
  • Sehgal, R.; Jagadesh, P. Data-Driven Robust Portfolio Optimization with Semi Mean Absolute Deviation via Support Vector Clustering. Expert Syst. Appl. 2023 , 224 , 1200000. [ Google Scholar ] [ CrossRef ]
  • Savaei, E.S.; Alinezhad, E.; Eghtesadifard, M. Stock Portfolio Optimization for Risk-Averse Investors: A Novel Hybrid Possibilistic and Flexible Robust Approach. Expert Syst. Appl. 2024 , 250 , 123754. [ Google Scholar ] [ CrossRef ]


No | Paper | Article Period
1 | [ ] | 1991–2021
2 | [ ] | 1995–2019
3 | [ ] | 1998–2016
4 | [ ] | 1998–2019
5 | [ ] | 2002–2015
6 | Present study | 1995–2024
Code | Keyword | Scopus * | Science Direct ** | Dimensions *** | Total
A | (“robust portfolio”) | 2825 | 433 | 324 | 3582
B | (“robust portfolio”) AND (“mean-variance” OR “Markowitz”) | 1338 | 226 | 66 | 1630
C | (“robust portfolio”) AND (“mean-variance” OR “Markowitz”) AND (“stocks”) | 814 | 142 | 20 | 976
D | (“robust portfolio”) AND (“mean-variance” OR “Markowitz”) AND (“stocks”) AND (“genetic algorithm”) | 137 | 13 | 0 | 150
Total | | 5114 | 814 | 410 | 6338
No | RQ1 | RQ2 | RQ3 | RQ4 | RQ5 | Description | Ref.
1 | Develop a novel portfolio modeling strategy considering data uncertainty using robust optimization methods. | New portfolio modeling with uncertain data and robust optimization methods. | GA. | Five indices from global capital markets (1992–1997). | To address the problem with a practical level of perturbation. | Reference Paper | [ ]
2 | Examine high- and low-return stocks, evaluate portfolio risk through fund standardization, and design a low-risk, stable-reward portfolio. | Fund standardization. | GA, Sharpe ratio. | Taiwan Economic Journal (2010–2016). | Precisely develop a portfolio that minimizes risk while maximizing rewards. | Not Suitable | [ ]
3 | Investigate portfolio problems with asymmetric distributions and uncertain parameters. | Robust multi-objective portfolio models with higher moments. | Multi-objective particle swarm optimization. | Ten Chinese stocks (2006–2010). | - | Not Suitable | [ ]
4 | Introduce a novel method for calculating relative-robust portfolios. | Relative-robust portfolios based on minimax regret. | GA. | DAX index (1992–2016). | Calculation of the proposed robust portfolios for the minimax regret solutions. | Reference Paper | [ ]
5 | Introduce a new decision-making framework for stock portfolio optimization using hybrid meta-heuristic algorithms. | The MV method with the following risk levels: mean absolute deviation (MAD), semi-variance (SV), and variance with skewness (VWS). | Electromagnetism-like Algorithm (EM), Particle Swarm Optimization (PSO), GA, Genetic Network Programming (GNP), and Simulated Annealing (SA). | Tehran Stock Exchange. | - | Not Suitable | [ ]
6 | Develop portfolio selection models offering limited assets to minimize costs and remain robust. | Sparse and robust portfolios. | L2-norm regularization and worst-case optimization. | Kenneth French's 49 industry portfolios (1975–2014). | - | Not Suitable | [ ]
7 | Enhance the efficiency of a diversified stock portfolio using a grouping GA. | MVPO with four fitness functions and a trading mechanism. | GA. | Taiwan Stock Exchange (2010–2014). | To address the GSP (Group Stock Portfolio) optimization problem. | Not Suitable | [ ]
8 | Introduce methods to optimize the variance and covariance of asset returns without expected return estimates. | Global minimum variance portfolio, robust optimization. | - | Euro Stoxx 50 index (1992–2016). | - | Not Suitable | [ ]
9 | Examine the MV portfolio optimization model under specific constraints in uncertain conditions. | Cardinality-constrained mean-variance (CCMV) and robust counterpart. | - | S&P 500 Communication Services. | - | Not Suitable | [ ]
10 | Develop Data Envelopment Analysis (DEA) models consistent with diversification and study parameter uncertainty effects. | DEA under the MV framework; parameter uncertainty. | - | Thirty American industry portfolios. | - | Not Suitable | [ ]
11 | Address potential estimation inaccuracies in MVPO. | Conventional multi-objective evolutionary algorithms. | - | Comprehensive financial indices (2006–2020). | - | Not Suitable | [ ]
12 | Analyze clustering outcomes to select top-performing stocks using a GA for portfolio weighting. | Self-Organizing Maps (SOMs), MV. | GA. | LQ45 shares (2018–2019). | To obtain the best offspring to produce the optimal solution for the problems at hand. | Not Suitable | [ ]
13 | Develop a more aggressive robust Omega portfolio. | Robust Omega portfolio. | GA. | 30 U.S. industry portfolios sourced from Kenneth R. French's website. | To solve the mixed-integer programming problem suggested in the preselection. | Not Suitable | [ ]
14 | Improve MVPO considering integer transaction lots and robust covariance matrix estimators. | Markowitz portfolio, transaction lots, robust estimation. | GA. | Six stocks on the Indonesia Stock Exchange; distribution with contamination. | To complete integer optimization. | Reference Paper | [ ]
Database | Data (Code D) | Duplicate I | Duplicate E | Abstract and Title I | Abstract and Title E | Full Text I | Full Text E
Scopus | 137 | 137 | 0 | 13 | 124 | 2 | 13
ScienceDirect | 13 | 7 | 6 | 1 | 0 | 1 | 0
Dimensions | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total | 150 | 144 | 6 | 14 * | 124 | 3 ** | 13
Ref. | Uncertainty Parameters | MV | Cardinality Constraint | Optimization Constraint | Risk-Aversion Parameter | Relative and Absolute Robustness | Robust Covariance Estimators | GA
[ ]----
[ ]--
[ ]----

Share and Cite

Fransisca, D.C.; Sukono; Chaerani, D.; Halim, N.A. Robust Portfolio Mean-Variance Optimization for Capital Allocation in Stock Investment Using the Genetic Algorithm: A Systematic Literature Review. Computation 2024, 12, 166. https://doi.org/10.3390/computation12080166




How-to conduct a systematic literature review: A quick guide for computer science research

Angela Carrera-Rivera

a Faculty of Engineering, Mondragon University

William Ochoa

Felix Larrinaga

b Design Innovation Center (DBZ), Mondragon University

Associated Data

  • No data was used for the research described in the article.

Performing a literature review is a critical first step in research to understand the state-of-the-art and identify gaps and challenges in the field. A systematic literature review is a method which sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers and in particular early-stage researchers in the computer-science field. The contribution of the article is the following:

  • Clearly defined strategies to follow for a systematic literature review in computer science research, and
  • Algorithmic method to tackle a systematic literature review.

Graphical abstract


Specifications table

Subject area: Computer science
More specific subject area: Software engineering
Name of your method: Systematic literature review
Name and reference of original method:
Resource availability: Resources referred to in this article:

Method details

A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with current literature about a subject [6]. The goal is to review critical points of current knowledge on a topic with respect to research questions and to suggest areas for further examination [5]. Defining an “Initial Idea” or interest in a subject to be studied is the first step before starting the SLR. An early search of the relevant literature can help determine whether the topic is too broad to adequately cover in the time frame and whether it is necessary to narrow the focus. Reading some articles can assist in setting the direction for a formal review, and formulating a potential research question (e.g., how is semantics involved in Industry 4.0?) can further facilitate this process. Once the focus has been established, an SLR can be undertaken to find more specific studies related to the variables in this question. Although there are multiple approaches for performing an SLR ([5], [26], [27]), this work aims to provide a step-by-step and practical guide while citing useful examples for computer-science research. The methodology presented in this paper comprises two main phases: “Planning”, described in section 2, and “Conducting”, described in section 3, following the depiction of the graphical abstract.

Defining the protocol is the first step of an SLR since it describes the procedures involved in the review and acts as a log of the activities to be performed. Obtaining opinions from peers while developing the protocol is encouraged to ensure the review's consistency and validity, and helps identify when modifications are necessary [20]. One final goal of the protocol is to ensure the replicability of the review.

Define PICOC and synonyms

The PICOC (Population, Intervention, Comparison, Outcome, and Context) criteria break down the SLR's objectives into searchable keywords and help formulate research questions [27]. PICOC is widely used in the medical and social sciences fields to encourage researchers to consider the components of the research questions [14]. Kitchenham & Charters [6] compiled the list of PICOC elements and their corresponding terms in computer science, as presented in Table 1, which includes keywords derived from the PICOC elements. From that point on, it is essential to think of synonyms or “alike” terms that can later be used for building queries in the selected digital libraries. For instance, the keyword “context awareness” can also be linked to “context-aware”.

Planning Step 1 “Defining PICOC keywords and synonyms”.

Element | Description | Example (PICOC) | Example (Synonyms)
Population | Can be a specific role, an application area, or an industry domain. | Smart Manufacturing | Digital Factory, Digital Manufacturing, Smart Factory
Intervention | The methodology, tool, or technology that addresses a specific issue. | Semantic Web | Ontology, Semantic Reasoning
Comparison | The methodology, tool, or technology with which the intervention is being compared (if appropriate). | Machine Learning | Supervised Learning, Unsupervised Learning
Outcome | Factors of importance to practitioners and/or the results that the intervention could produce. | Context-Awareness | Context-Aware, Context-Reasoning
Context | The context in which the comparison takes place. Some systematic reviews might choose to exclude this element. | Business Process Management | BPM, Business Process Modeling

Formulate research questions

Clearly defined research question(s) are the key elements which set the focus for study identification and data extraction [21] . These questions are formulated based on the PICOC criteria as presented in the example in Table 2 (PICOC keywords are underlined).

Research questions examples.

Research Questions examples
• RQ1: What are the current challenges of context-aware systems that support the decision-making of business processes in smart manufacturing?
• RQ2: Which technique is most appropriate to support decision-making for business process management in smart factories?
• RQ3: In which scenarios are semantic web and machine learning used to provide context-awareness in business process management for smart manufacturing?

Select digital library sources

The validity of a study will depend on the proper selection of a database since it must adequately cover the area under investigation [19] . The Web of Science (WoS) is an international and multidisciplinary tool for accessing literature in science, technology, biomedicine, and other disciplines. Scopus is a database that today indexes 40,562 peer-reviewed journals, compared to 24,831 for WoS. Thus, Scopus is currently the largest existing multidisciplinary database. However, it may also be necessary to include sources relevant to computer science, such as EI Compendex, IEEE Xplore, and ACM. Table 3 compares the area of expertise of a selection of databases.

Planning Step 3 “Select digital libraries”. Description of digital libraries in computer science and software engineering.

Database | Description | Area | Advanced Search (Y/N)
Scopus | From Elsevier. One of the largest databases. Very user-friendly interface. | Interdisciplinary | Y
Web of Science | From Clarivate. Multidisciplinary database with wide-ranging content. | Interdisciplinary | Y
EI Compendex | From Elsevier. Focused on engineering literature. | Engineering | Y (query view not available)
IEEE Digital Library | Contains scientific and technical articles published by IEEE and its publishing partners. | Engineering and Technology | Y
ACM Digital Library | Complete collection of ACM publications. | Computing and information technology | Y

Define inclusion and exclusion criteria

Authors should define the inclusion and exclusion criteria before conducting the review to prevent bias, although these can be adjusted later, if necessary. The selection of primary studies will depend on these criteria. Articles are included or excluded in this first selection based on abstract and primary bibliographic data. When unsure, the article is skimmed to further decide the relevance for the review. Table 4 sets out some criteria types with descriptions and examples.

Planning Step 4 “Define inclusion and exclusion criteria”. Examples of criteria type.

Criteria Type | Description | Example
Period | Articles can be selected based on the time period to review, e.g., reviewing the technology under study from the year it emerged, or reviewing progress in the field since the publication of a prior literature review. | Inclusion: From 2015 to 2021. Exclusion: Articles prior to 2015.
Language | Articles can be excluded based on language. | Exclusion: Articles not in English.
Type of Literature | Articles can be excluded if they fall into the category of grey literature. | Exclusion: Reports, policy literature, working papers, newsletters, government documents, speeches.
Type of source | Articles can be included or excluded by the type of origin, i.e., conference or journal articles or books. | Inclusion: Articles from conferences or journals. Exclusion: Articles from books.
Impact Source | Articles can be excluded if the author limits the impact factor or quartile of the source. | Inclusion: Articles from Q1 and Q2 sources. Exclusion: Articles with a Journal Impact Score (JIS) lower than
Accessibility | Not accessible in specific databases. | Exclusion: Not accessible.
Relevance to research questions | Articles can be excluded if they are not relevant to a particular question or to “n” number of research questions. | Exclusion: Not relevant to at least 2 research questions.

Define the Quality Assessment (QA) checklist

Assessing the quality of an article requires an artifact which describes how to perform a detailed assessment. A typical quality assessment is a checklist that contains multiple factors to evaluate. A numerical scale is used to assess the criteria and quantify the QA [22] . Zhou et al. [25] presented a detailed description of assessment criteria in software engineering, classified into four main aspects of study quality: Reporting, Rigor, Credibility, and Relevance. Each of these criteria can be evaluated using, for instance, a Likert-type scale [17] , as shown in Table 5 . It is essential to select the same scale for all criteria established on the quality assessment.

Planning Step 5 “Define QA assessment checklist”. Examples of QA scales and questions.


Do the researchers discuss any problems (limitations, threats) with the validity of their results (reliability)?

1 – No, and not considered (Score: 0)
2 – Partially (Score: 0.5)
3 – Yes (Score: 1)

Is there a clear definition/ description/ statement of the aims/ goals/ purposes/ motivations/ objectives/ questions of the research?

1 – Disagree (Score: 1)
2 – Somewhat disagree (Score: 2)
3 – Neither agree nor disagree (Score: 3)
4 – Somewhat agree (Score: 4)
5 – Agree (Score: 5)

Define the “Data Extraction” form

The data extraction form represents the information necessary to answer the research questions established for the review. Synthesizing the articles is a crucial step when conducting research. Ramesh et al. [15] presented a classification scheme for computer science research, based on topics, research methods, and levels of analysis that can be used to categorize the articles selected. Classification methods and fields to consider when conducting a review are presented in Table 6 .

Planning Step 6 “Define data extraction form”. Examples of fields.

Classification and fields to consider for data extraction | Description and examples
Research type | Theoretical research focuses on abstract ideas, concepts, and theories built on literature reviews. Empirical research uses scientific data or case studies for explorative, descriptive, explanatory, or measurable findings. Example: an SLR on context-awareness for S-PSS categorized the articles into theoretical and empirical research.
By process phases, stages | When analyzing a process or series of processes, an effective way to structure the data is to find a well-established framework of reference or architecture. Examples: an SLR on self-adaptive systems uses the MAPE-K model to understand how the authors tackle each module stage; a context-awareness survey uses the stages of the context-aware lifecycle to review different methods.
By technology, framework, or platform | When analyzing a computer science topic, it is important to know the technology currently employed to understand trends, benefits, or limitations. Example: an SLR on the big data ecosystem in the manufacturing field that includes frameworks, tools, and platforms for each stage of the big data ecosystem.
By application field and/or industry domain | If the review is not limited to a specific “Context” or “Population” (industry domain), it can be useful to identify the field of application. Example: an SLR on adaptive training using virtual reality (VR); the review presents an extensive description of multiple application domains and examines related work.
Gaps and challenges | Identifying gaps and challenges is important in reviews to determine the research needs and further establish research directions that can help scholars act on the topic.
Findings in research | Research in computer science can deliver multiple types of findings, e.g.:
Evaluation method | Case studies, experiments, surveys, mathematical demonstrations, and performance indicators.

The data extraction must be relevant to the research questions, and the relationship to each of the questions should be included in the form. Kitchenham & Charters [6] presented more pertinent data that can be captured, such as conclusions, recommendations, strengths, and weaknesses. Although the data extraction form can be updated if more information is needed, this should be treated with caution since it can be time-consuming. It can therefore be helpful to first have a general background in the research topic to determine better data extraction criteria.

After defining the protocol, conducting the review requires following each of the steps previously described. Using tools can help simplify the performance of this task. Standard tools such as Excel or Google Sheets allow multiple researchers to work collaboratively. Another online tool specifically designed for performing SLRs is Parsif.al 1 . This tool allows researchers, especially in the context of software engineering, to define goals and objectives, import articles using BibTeX files, eliminate duplicates, define selection criteria, and generate reports.

Build digital library search strings

Search strings are built considering the PICOC elements and synonyms to execute the search in each database library. A search string should separate the synonyms with the boolean operator OR. The PICOC elements, in turn, are separated with parentheses and the boolean operator AND. An example is presented next:

(“Smart Manufacturing” OR “Digital Manufacturing” OR “Smart Factory”) AND (“Business Process Management” OR “BPEL” OR “BPM” OR “BPMN”) AND (“Semantic Web” OR “Ontology” OR “Semantic” OR “Semantic Web Service”) AND (“Framework” OR “Extension” OR “Plugin” OR “Tool”)
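Assembling such strings by hand becomes error-prone once several synonym groups are involved. As a minimal sketch of the OR-within-group, AND-between-groups rule (the PICOC groups and terms below are illustrative, not a prescribed vocabulary):

```python
# Build a boolean search string from PICOC synonym groups:
# synonyms within a group are joined with OR, groups with AND.
picoc = {
    "Population": ["Smart Manufacturing", "Digital Manufacturing", "Smart Factory"],
    "Intervention": ["Semantic Web", "Ontology", "Semantic"],
    "Outcome": ["Context-Awareness", "Context-Aware"],
}

def build_query(groups: dict) -> str:
    clauses = []
    for synonyms in groups.values():
        quoted = " OR ".join(f'"{term}"' for term in synonyms)
        clauses.append(f"({quoted})")
    return " AND ".join(clauses)

print(build_query(picoc))
```

The resulting string can then be pasted into each database's advanced-search field, adapting the field syntax (for example, wrapping it in TITLE-ABS-KEY(...) for Scopus) as needed.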

Gather studies

Databases that feature advanced searches enable researchers to perform search queries based on titles, abstracts, and keywords, as well as for years or areas of research. Fig. 1 presents the example of an advanced search in Scopus, using titles, abstracts, and keywords (TITLE-ABS-KEY). Most of the databases allow the use of logical operators (i.e., AND, OR). In the example, the search is for “BIG DATA” and “USER EXPERIENCE” or “UX” as a synonym.

Fig 1

Example of Advanced search on Scopus.

In general, bibliometric data of articles can be exported from the databases as a comma-separated values (CSV) or BibTeX file, which is helpful for data extraction and quantitative and qualitative analysis. In addition, researchers should take advantage of reference-management software such as Zotero, Mendeley, EndNote, or JabRef, which import bibliographic information easily.
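One reason these export formats are convenient is that they can be processed with a few lines of standard-library code. A rough sketch (the two BibTeX entries below are invented, and the regular expressions assume simple, single-brace field values; real exports are messier and may warrant a dedicated parser):

```python
import re

# A tiny excerpt standing in for a BibTeX export from a digital library.
bibtex = """@article{smith2020,
  title={Big Data and User Experience},
  year={2020}
}
@inproceedings{lee2021,
  title={Context-Aware BPM},
  year={2021}
}"""

# Entry type and citation key of each record.
entries = re.findall(r"@(\w+)\{([^,]+),", bibtex)
# Title field of each record (assumes no nested braces).
titles = re.findall(r"title=\{([^}]*)\}", bibtex)

print(entries)  # [('article', 'smith2020'), ('inproceedings', 'lee2021')]
print(titles)   # ['Big Data and User Experience', 'Context-Aware BPM']
```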

Study Selection and Refinement

The first step in this stage is to identify any duplicates that appear across the searches in the selected databases. Automatic procedures, tools such as Excel formulas, or programming languages (e.g., Python) can be convenient here.
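A rough sketch of such an automatic procedure follows. The record fields `doi` and `title` are illustrative; here duplicates are matched by DOI, with a normalized-title fallback, and real exports may need fuzzier matching:

```python
def deduplicate(records):
    """Keep the first record for each DOI; fall back to a normalized
    (lowercased, whitespace-collapsed) title when the DOI is missing."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or "").lower().strip()
        if not key:
            key = " ".join(rec.get("title", "").lower().split())
        if key in seen:
            continue
        seen.add(key)
        unique.append(rec)
    return unique

records = [
    {"title": "Big Data and UX", "doi": "10.1000/xyz"},
    {"title": "Big data and UX",  "doi": "10.1000/XYZ"},  # duplicate DOI, different case
    {"title": "Another Study",    "doi": ""},
]
print(len(deduplicate(records)))  # 2
```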

In the second step, articles are included or excluded according to the selection criteria, mainly by reading titles and abstracts. Finally, the quality is assessed using the predefined scale. Fig. 2 shows an example of an article QA evaluation in Parsif.al, using a simple scale. In this scenario, the scoring procedure is the following: YES = 1, PARTIALLY = 0.5, and NO or UNKNOWN = 0. A cut-off score should be defined to filter out those articles that do not pass the QA. The QA will require a light review of the full text of the article.

Fig 2

Performing quality assessment (QA) in Parsif.al.
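Assuming the YES/PARTIALLY/NO scale described above, the scoring and cut-off filtering can be sketched in a few lines (the paper names, checklist answers, and cut-off value are all illustrative):

```python
# Score per checklist answer, following the scale used above.
SCORES = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0, "UNKNOWN": 0.0}

def qa_score(answers):
    """Sum the scores of one article's QA checklist answers."""
    return sum(SCORES[a] for a in answers)

# Hypothetical checklist answers for two articles.
articles = {
    "Paper A": ["YES", "YES", "PARTIALLY", "NO"],
    "Paper B": ["PARTIALLY", "NO", "UNKNOWN", "YES"],
}
CUTOFF = 2.0  # articles scoring below the cut-off are excluded
accepted = [name for name, ans in articles.items() if qa_score(ans) >= CUTOFF]
print(accepted)  # ['Paper A']  (scores 2.5 vs. 1.5)
```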

Data extraction

Those articles that pass the study selection are then thoroughly and critically read. Next, the researcher completes the information required using the “data extraction” form, as illustrated in Fig. 3, in this scenario using the Parsif.al tool.

Fig 3

Example of data extraction form using Parsif.al.

The information required (study characteristics and findings) from each included study must be acquired and documented through careful reading. Data extraction is valuable, especially if the data requires manipulation or assumptions and inferences. Thus, information can be synthesized from the extracted data for qualitative or quantitative analysis [16] . This documentation supports clarity, precise reporting, and the ability to scrutinize and replicate the examination.

Analysis and Report

The analysis phase examines the synthesized data and extracts meaningful information from the selected articles [10] . There are two main goals in this phase.

The first goal is to analyze the literature in terms of leading authors, journals, countries, and organizations. Furthermore, it helps identify correlations among topics. Even when not mandatory, this activity can be constructive for researchers to position their work, find trends, and find collaboration opportunities. Next, data from the selected articles can be analyzed using bibliometric analysis (BA). BA summarizes large amounts of bibliometric data to present the state of intellectual structure and emerging trends in a topic or field of research [4]. Table 7 sets out some of the most common bibliometric analysis representations.

Techniques for bibliometric analysis and examples.

Publication-related analysis | Description | Example
Years of publications | Determine interest in the research topic by years or the period established by the SLR, by quantifying the number of papers published. Using this information, it is also possible to forecast the growth rate of research interest. | [ ] identified the growth rate of research interest and the yearly publication trend.
Top contribution journals/conferences | Identify the leading journals and conferences in which authors can share their current and future work. |
Top countries' or affiliation contributions | Examine the impacts of countries or affiliations leading the research topic. | [ , ] identified the most influential countries.
Leading authors | Identify the most significant authors in a research field. | -
Keyword correlation analysis | Explore existing relationships between topics in a research field based on the written content of the publication or related keywords established in the articles. | Keyword clustering analysis; frequency analysis.
Total and average citation | Identify the most relevant publications in a research field. | Scatter plot of citation scores and journal impact factor.

Several tools can perform this type of analysis, such as Excel and Google Sheets for statistical graphs, or programming languages such as Python, which has multiple data visualization libraries available (e.g., Matplotlib, Seaborn). Cluster maps based on bibliographic data (e.g., keywords, authors) can be developed in VOSviewer, which makes it easy to identify clusters of related items [18]. In Fig. 4, node size represents the number of papers related to the keyword, and lines represent the links among keyword terms.

Fig 4

Keyword co-relationship analysis using clusterization in VOSviewer [1].

The second and most important goal is to answer the formulated research questions, which should include a quantitative and qualitative analysis. The quantitative analysis can make use of data categorized, labelled, or coded in the extraction form (see Section 1.6). This data can be transformed into numerical values to perform statistical analysis. One of the most widely employed methods is frequency analysis, which shows the recurrence of an event and can also represent the percental distribution of the population (i.e., percentage by technology type, frequency of use of different frameworks, etc.). Qualitative analysis includes the narration of the results, the discussion indicating the way forward in future research work, and inferring a conclusion.
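As a small sketch of frequency analysis on coded extraction data (the "technology type" labels below are invented), the standard library's `collections.Counter` is enough to produce counts and a percentage distribution:

```python
from collections import Counter

# Hypothetical "technology type" labels coded in the data extraction form,
# one entry per selected article.
labels = ["Semantic Web", "Machine Learning", "Semantic Web",
          "Machine Learning", "Machine Learning", "Hybrid"]

counts = Counter(labels)
total = sum(counts.values())
for tech, n in counts.most_common():
    # Frequency and percental distribution per category.
    print(f"{tech}: {n} articles ({100 * n / total:.1f}%)")
```

The same counts can then feed a bar chart in Matplotlib or a spreadsheet for the report.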

Finally, the literature review report should state the protocol to ensure other researchers can replicate the process and understand how the analysis was performed. In the protocol, it is essential to present the inclusion and exclusion criteria, the quality assessment, and the rationale behind these aspects.

The presentation and reporting of results will depend on the structure of the review chosen by the researchers conducting the SLR; there is no single correct answer. This structure should tie the studies together into key themes, characteristics, or subgroups [28].

An SLR can be an extensive and demanding task; however, the results are beneficial in providing a comprehensive overview of the available evidence on a given topic. For this reason, researchers should keep in mind that the entire process of the SLR is tailored to answer the research question(s). This article has detailed a practical guide with the essential steps to conducting an SLR in the context of computer science and software engineering while citing multiple helpful examples and tools. It is envisaged that this method will assist researchers, and particularly early-stage researchers, in following an algorithmic approach to fulfill this task. Finally, a quick checklist is presented in Appendix A as a companion of this article.

CRediT author statement

Angela Carrera-Rivera: Conceptualization, Methodology, Writing-Original. William Ochoa-Agurto: Methodology, Writing-Original. Felix Larrinaga: Reviewing and Supervision. Ganix Lasa: Reviewing and Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant No. 814078.

Carrera-Rivera, A., Larrinaga, F., & Lasa, G. (2022). Context-awareness for the design of Smart-product service systems: Literature review. Computers in Industry, 142, 103730.

1 https://parsif.al/

Data Availability
