How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations.

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.

Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

In the example used throughout this guide, Boyle and colleagues answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
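
To make the idea concrete, here is a minimal sketch of the core calculation behind a fixed-effect meta-analysis: each study's effect size is weighted by the inverse of its variance, and the weighted average becomes the pooled estimate. The effect sizes and standard errors below are invented illustration values, not data from any study mentioned in this guide.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and their
# standard errors from four studies; these numbers are made up for illustration.
effect_sizes = [0.30, 0.45, 0.12, 0.25]
standard_errors = [0.10, 0.15, 0.08, 0.12]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se ** 2 for se in standard_errors]

pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled effect.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

In practice, reviewers usually rely on dedicated software (for example, RevMan, R's metafor package, or Stata) rather than hand-rolled code, and often need random-effects rather than fixed-effect models.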

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the eczema example, the review looked at:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, the type of study design

Boyle and colleagues' research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the sketch after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
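
As a small illustration of the database-search step mentioned in the first bullet above, the sketch below assembles a Boolean query from lists of synonyms. The terms and the wildcard syntax are assumptions for illustration; the exact field tags and truncation symbols depend on the database you are actually searching.

```python
# Hypothetical synonym lists for a PICO-style search; adapt them to your own question.
population = ["eczema", "atopic dermatitis"]
intervention = ["probiotic*", "lactobacillus"]  # '*' truncation works in many databases

def or_block(terms):
    """Join synonyms with OR and wrap the block in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Combine concept blocks with AND, as most bibliographic databases expect.
query = " AND ".join(or_block(block) for block in (population, intervention))
print(query)
# -> (eczema OR atopic dermatitis) AND (probiotic* OR lactobacillus)
```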

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
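
One common way to check that screeners understand the criteria consistently is to pilot them on a small set of records and compute an agreement statistic such as Cohen's kappa. The sketch below is a minimal, self-contained example with hypothetical screening decisions; in practice you would export the decisions from whatever screening tool you use.

```python
from collections import Counter

# Hypothetical include/exclude decisions from two reviewers on a pilot set.
rater_a = ["include", "exclude", "exclude", "include", "exclude", "include"]
rater_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Chance agreement, from each rater's marginal proportions.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```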

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
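
Keeping the counts for a PRISMA flow diagram is easiest if you record them as you go. The sketch below shows one simple way to do that; the field names and numbers are illustrative bookkeeping, not part of the PRISMA standard itself.

```python
# Running tallies for a PRISMA flow diagram; all numbers here are placeholders.
prisma = {
    "records_identified": 0,
    "duplicates_removed": 0,
    "records_screened": 0,       # titles and abstracts
    "records_excluded": 0,
    "full_texts_assessed": 0,
    "full_texts_excluded": {},   # maps exclusion reason -> count
    "studies_included": 0,
}

def exclude_full_text(reason: str) -> None:
    """Log a full-text exclusion together with its reason."""
    reasons = prisma["full_texts_excluded"]
    reasons[reason] = reasons.get(reason, 0) + 1

exclude_full_text("no control group")
exclude_full_text("wrong outcome measure")
print(prisma["full_texts_excluded"])
```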

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
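
A data-extraction form can be as simple as a structured record with one field per item in your protocol. The sketch below uses a Python dataclass with illustrative fields; the actual fields should come from your own protocol and the sample forms mentioned above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of a hypothetical data-extraction form (fields are illustrative)."""
    study_id: str
    year: int
    design: str                    # e.g., "randomized controlled trial"
    sample_size: int
    effect_size: Optional[float]   # None if the study reports no usable estimate
    risk_of_bias: str              # e.g., "low", "some concerns", "high"
    notes: str = ""

# Example entry for a made-up study.
record = ExtractionRecord(
    study_id="Hypothetical2020", year=2020, design="randomized controlled trial",
    sample_size=120, effect_size=0.32, risk_of_bias="low",
)
print(record)
```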

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
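
Whether the quantitative route is sensible depends partly on how comparable the study results are. A standard way to quantify this is a heterogeneity statistic such as Cochran's Q and I²; the sketch below reuses the inverse-variance weights from the earlier meta-analysis example, again with invented numbers.

```python
import math

# The same hypothetical effect sizes and standard errors as in the earlier sketch.
effect_sizes = [0.30, 0.45, 0.12, 0.25]
standard_errors = [0.10, 0.15, 0.08, 0.12]

weights = [1 / se ** 2 for se in standard_errors]
pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled effect.
q = sum(w * (es - pooled) ** 2 for w, es in zip(weights, effect_sizes))
df = len(effect_sizes) - 1

# I^2: the share of variability attributable to heterogeneity rather than chance.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

High heterogeneity is one common reason reviewers fall back on a narrative synthesis or explore subgroups instead of reporting a single pooled estimate.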

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved July 30, 2024, from https://www.scribbr.com/methodology/systematic-review/

  • Open access
  • Published: 15 December 2015

Qualitative and mixed methods in systematic reviews

  • David Gough 1  

Systematic Reviews volume  4 , Article number:  181 ( 2015 ) Cite this article

Expanding the range of methods of systematic review

The logic of systematic reviews is very simple. We use transparent rigorous approaches to undertake primary research, and so we should do the same in bringing together studies to describe what has been studied (a research map) or to integrate the findings of the different studies to answer a research question (a research synthesis). We should not really need to use the term ‘systematic’ as it should be assumed that researchers are using and reporting systematic methods in all of their research, whether primary or secondary. Despite the universality of this logic, systematic reviews (maps and syntheses) are much better known in health research and for answering questions of the effectiveness of interventions (what works). Systematic reviews addressing other sorts of questions have been around for many years, as in, for example, meta ethnography [ 1 ] and other forms of conceptual synthesis [ 2 ], but only recently has there been a major increase in the use of systematic review approaches to answer other sorts of research questions.

There are probably several reasons for this broadening of approach. One may be that the increased awareness of systematic reviews has made people consider the possibilities for all areas of research. A second related factor may be that more training and funding resources have become available and increased the capacity to undertake such varied review work.

A third reason could be that some of the initial anxieties about systematic reviews have subsided. Initially, there were concerns that their use was being promoted by a new managerialism where reviews, particularly effectiveness reviews, were being used to promote particular ideological and theoretical assumptions and to indirectly control research agendas. However, others like me believe that explicit methods should be used to enable transparency of perspectives driving research and to open up access to and participation in research agendas and priority setting [ 3 ] as illustrated, for example, by the James Lind Alliance (see http://www.jla.nihr.ac.uk/ ).

A fourth possible reason for the development of new approaches is that effectiveness reviews have themselves broadened. Some ‘what works’ reviews can be open to criticism for only testing a ‘black box’ hypothesis of what works with little theorizing or any logic model about why any such hypothesis should be true and the mechanisms involved in such processes. There is now more concern to develop theory and to test how variables combine and interact. In primary research, qualitative strategies are advised prior to undertaking experimental trials [ 4 , 5 ] and similar approaches are being advocated to address complexity in reviews [ 6 ], in order to ask questions and use methods that address theories and processes that enable an understanding of both impact and context.

This Special Issue of Systematic Reviews Journal is providing a focus for these new methods of review whether these use qualitative review methods on their own or mixed together with more quantitative approaches. We are linking together with the sister journal Trials for this Special Issue as there is a similar interest in what qualitative approaches can and should contribute to primary research using experimentally controlled trials (see Trials Special Issue editorial by Claire Snowdon).

Dimensions of difference in reviews

Developing the range of methods to address different questions for review creates a challenge in describing and understanding such methods. There are many names and brands for the new methods which may or may not withstand the changes of historical time, but another way to comprehend the changes and new developments is to consider the dimensions on which the approaches to review differ [ 7 , 8 ].

One important distinction is the research question being asked and the associated paradigm underlying the method used to address this question. Research assumes a particular theoretical position and then gathers data within this conceptual lens. In some cases, this is a very specific hypothesis that is then tested empirically, and sometimes, the research is more exploratory and iterative with concepts being emergent and constructed during the research process. This distinction is often labelled as quantitative or positivist versus qualitative or constructionist. However, this can be confusing as much research taking a ‘quantitative’ perspective does not have the necessary numeric data to analyse. Even if it does have such data, this might be explored for emergent properties. Similarly, research taking a ‘qualitative’ perspective may include implicit quantitative themes in terms of the extent of different qualitative findings reported by a study.

Sandelowski and colleagues’ solution is to consider the analytic activity and whether this aggregates (adds up) or configures (arranges) the data [ 9 ]. In a randomized controlled trial and an effectiveness review of such studies, the main analysis is the aggregation of data using a priori non-emergent strategies with little iteration. However, there may also be post hoc analysis that is more exploratory in arranging (configuring) data to identify patterns as in, for example, meta regression or qualitative comparative analysis aiming to identify the active ingredients of effective interventions [ 10 ]. Similarly, qualitative primary research or reviews of such research are predominantly exploring emergent patterns and developing concepts iteratively, yet there may be some aggregation of data to make statements of generalizations of extent.

Even where the analysis is predominantly configuration, there can be a wide variation in the dimensions of difference of iteration of theories and concepts. In thematic synthesis [ 11 ], there may be few presumptions about the concepts that will be configured. In meta ethnography which can be richer in theory, there may be theoretical assumptions underlying the review question framing the analysis. In framework synthesis, there is an explicit conceptual framework that is iteratively developed and changed through the review process [ 12 , 13 ].

In addition to the variation in question, degree of configuration, complexity of theory, and iteration, there are many other dimensions of difference between reviews. Some of these differences follow on from the research questions being asked and the research paradigm being used, such as the approach to searching (exhaustive or based on exploration or saturation) and the appraisal of the quality and relevance of included studies (based more on risk of bias or more on meaning). Others include the extent to which reviews have a broad question, the depth of analysis, and the extent of resultant ‘work done’ in terms of progressing a field of inquiry [ 7 , 8 ].

Mixed methods reviews

As one reason for the growth in qualitative synthesis is what they can add to quantitative reviews, it is not surprising that there is also growing interest in mixed methods reviews. This reflects similar developments in primary research in mixing methods to examine the relationship between theory and empirical data which is of course the cornerstone of much research. But, both primary and secondary mixed methods research also face similar challenges in examining complex questions at different levels of analysis and of combining research findings investigated in different ways and may be based on very different epistemological assumptions [ 14 , 15 ].

Some mixed methods approaches are convergent in that they integrate different data and methods of analysis together at the same time [ 16 , 17 ]. Convergent systematic reviews could be described as having broad inclusion criteria (or two or more different sets of criteria) for methods of primary studies and have special methods for the synthesis of the resultant variation in data. Other reviews (and also primary mixed methods studies) are sequences of sub-reviews in that one sub-study using one research paradigm is followed by another sub-study with a different research paradigm. In other words, a qualitative synthesis might be used to explore the findings of a prior quantitative synthesis or vice versa [ 16 , 17 ].

An example of a predominantly aggregative sub-review followed by a configuring sub-review is the EPPI-Centre’s mixed methods review of barriers to healthy eating [ 18 ]. A sub-review on the effectiveness of public health interventions showed a modest effect size. A configuring review of studies of children and young people’s understanding and views about eating provided evidence that the public health interventions did not take good account of such user views research, and that the interventions most closely aligned to the user views were the most effective. The already mentioned qualitative comparative analysis to identify the active ingredients within interventions leading to impact could also be considered a qualitative configuring investigation of an existing quantitative aggregative review [ 10 ].

An example of a predominantly configurative review followed by an aggregative review is realist synthesis. Realist reviews examine the evidence in support of mid-range theories [ 19 ] with a first stage of a configuring review of what is proposed by the theory or proposal (what would need to be in place and what causal pathways would have to be effective for the outcomes proposed by the theory to be supported?) and a second stage searching for empirical evidence to test for those necessary conditions and effectiveness of the pathways. The empirical testing does not however use a standard ‘what works’ a priori methods approach but rather a more iterative seeking out of evidence that confirms or undermines the theory being evaluated [ 20 ].

Although sequential mixed methods approaches are considered to be sub-parts of one larger study, they could be separate studies as part of a long-term strategic approach to studying an issue. We tend to see both primary studies and reviews as one-off events, yet reviews are a way of examining what we know and what more we want to know as a strategic approach to studying an issue over time. If we are in favour of mixing paradigms of research to enable multiple levels and perspectives and mixing of theory development and empirical evaluation, then we are really seeking mixed methods research strategies rather than simply mixed methods studies and reviews.

References

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park, CA: Sage Publications; 1988.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9:59.

Gough D, Elbourne D. Systematic research synthesis to inform policy, practice and democratic debate. Soc Pol Soc. 2002;2002:1.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance 2015. BMJ. 2015;350:h1258

Candy B, Jone L, King M, Oliver S. Using qualitative evidence to help understand complex palliative care interventions: a novel evidence synthesis approach. BMJ Support Palliat Care. 2014;4:Supp A41–A42.

Noyes J, Gough D, Lewin S, Mayhew A, Michie S, Pantoja T, et al. A research and development agenda for systematic reviews that ask complex questions about complex interventions. J Clin Epidemiol. 2013;66:11.

Gough D, Oliver S, Thomas J. Introduction to systematic reviews. London: Sage; 2012.

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.

Sandelowski M, Voils CJ, Leeman J, Crandlee JL. Mapping the mixed methods-mixed research synthesis terrain. J Mix Methods Res. 2012;6:4.

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3:67.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley AR, Gabbay J, et al. A multidimensional conceptual framework for analysing public involvement in health services research. Health Exp. 2008;11:72–84.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Qual Saf. 2015. 2014-003642.

Brannen J. Mixed methods research: a discussion paper. NCRM Methods Review Papers, 2006. NCRM/005.

Creswell J. Mapping the developing landscape of mixed methods research. In: Teddlie C, Tashakkori A, editors. SAGE handbook of mixed methods in social & behavioral research. New York: Sage; 2011.

Morse JM. Principles of mixed method and multi-method research design. In: Teddlie C, Tashakkori A, editors. Handbook of mixed methods in social and behavioural research. London: Sage; 2003.

Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35:29–45.

Harden A, Thomas J. Mixed methods and systematic reviews: examples and emerging issues. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in the social and behavioral sciences. 2nd ed. London: Sage; 2010. p. 749–74.

Pawson R. Evidenced-based policy: a realist perspective. London: Sage; 2006.

Gough D. Meta-narrative and realist reviews: guidance, rules, publication standards and quality appraisal. BMC Med. 2013;11:22.

Author information

Authors and affiliations

EPPI-Centre, Social Science Research Unit, University College London, London, WC1H 0NR, UK

David Gough

Corresponding author

Correspondence to David Gough.

Additional information

Competing interests.

The author is a writer and researcher in this area. The author declares that he has no other competing interests.

About this article

Cite this article.

Gough, D. Qualitative and mixed methods in systematic reviews. Syst Rev 4 , 181 (2015). https://doi.org/10.1186/s13643-015-0151-y

Received : 13 October 2015

Accepted : 29 October 2015

Published : 15 December 2015

DOI : https://doi.org/10.1186/s13643-015-0151-y

Qualitative Research Resources: Integrating Qualitative Research into Systematic Reviews

Created by health science librarians.

Why is this information important?

  • Researchers in health science fields are increasingly recognizing the value of including qualitative studies in systematic reviews.
  • Because qualitative and quantitative studies can be so different, however, it can be hard to know how to integrate them productively.

On this page you will find the following helpful resources:

  • Articles and chapters that discuss methods for integrating qualitative studies into systematic reviews
  • Examples of systematic reviews that include qualitative studies
  • Selected books on qualitative studies and systematic reviews that are owned by UNC Libraries
  • A short list of helpful websites and tutorials on the topic

See also:  Assessing Qualitative Research

The following articles and chapters offer some advice on how to include qualitative research in systematic reviews, as well as some examples of reviews that have done so.

Methods for Including Qualitative Research in Systematic Reviews

Butler, Ashleigh, Helen Hall, Beverley Copnell. (2016). A Guide to Writing a Qualitative Systematic Review Protocol to Enhance Evidence-Based Practice in Nursing and Health Care. Worldviews on Evidence-Based Nursing 13(3): 241-249.

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

  • Chapter 21: Qualitative Evidence ( Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, Garside R, Hannes K, Pantoja T, Thomas J. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook . )

See Journal of Clinical Epidemiology, Volume 97, May 2018, Cochrane Qualitative and Implementation Methods Group Guidance Series.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qualitative Research 2006; 6: 27-44.

Dixon-Woods, Mary, Shona Argawal, David Jones, Bridget Young, and Alex Sutton. Synthesizing qualitative and quantitative evidence: a review of possible methods.   J Health Serv Res Policy January 1, 2005 vol. 10 no. 1 45-53B

Harden, Angela. Mixed-Methods Systematic Reviews: Integrating Quantitative and Qualitative Findings. FOCUS 25, 2010.

Joanna Briggs Institute (JBI): Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from: https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-03

Ludvigsen, Mette S., Elizabeth O.C. Hall, Gabriele Meyer, Liv Fegran, Hanne Aagaard, Lisbeth Uhrenfeldt. (2015) Using Sandelowski and Barroso’s Meta-Synthesis Method in Advancing Qualitative Evidence . Qualitative Health Research 26(3). PMID:25794523 DOI: 10.1177/1049732315576493

  • UNC Chapel Hill users: link to the article in SAGE Journals, Qualitative Health Research.

Glenton C, Bohren MA, Downe S, Paulsen EJ, Lewin S, on behalf of the Cochrane People, Health Systems and Public Health Thematic Group and Cochrane Norway. (2023 ). Cochrane Qualitative Evidence Syntheses: Protocol and review template v1.4. doi.org/10.5281/zenodo.10050961  (downloadable template available at the link) 

2021 Campbell Webinar Series, webinar 9, presented by Ruth Garside, Co-chair and Editor of the Campbell Methods Coordinating Group, on 10 August 2021.  Beyond barriers and facilitators: what questions can qualitative evidence synthesis address? www.youtube.com/watch?v=CyHbLa69Tmg

Examples of Systematic Reviews Incorporating Qualitative Research

Finfgeld-Connett, Deborah. Intimate partner abuse among older women: qualitative systematic review . Clinical Nursing Research 2014, 23:6 664-683.

Lucas, Patricia, Janis Baird, Lisa Arai, Catherine Law, and Helen M. Roberts. Worked examples of alternative methods for the synthesis of qualitative and quantitative research in systematic reviews . BMC Medical Research Methodology 2007, 7 :4  doi:10.1186/1471-2288-7-4

Added Value of Qualitative Research with Randomized Clinical Trials

O'Cathain, Alicia, Jackie Goode, Sarah J. Drabble, Kate J. Thomas, Anne Rudolph, and Jenny Hewison. Getting added value from using qualitative research with randomized controlled trials: a qualitative interview study. Trials 2014, 15 (June): 215. doi:10.1186/1745-6215-15-215

Snowdon, Claire (Trials) and David Gough (Systematic Reviews), eds. Qualitative Methods, Trials, and Systematic Reviews. Joint publication, Trials and Systematic Reviews.

  • Snowdon, Claire (2015). Trials editorial about the special joint publication: Qualitative and mixed methods research in trials. Trials 16: 558.
  • Gough, David (2015). Systematic Reviews editorial about the special joint publication: Qualitative and mixed methods in systematic reviews. Systematic Reviews 4: 181.
  • The two sister journals, Trials and Systematic Reviews, have, on the face of it, different readerships and deal with different issues. In both journals there is, however, a common and growing interest in the contribution of qualitative methods. We are seeing an expansion of the use and application of a range of techniques with entry into novel research areas and pursuit of new lines of inquiry. Our contributors are working within specific methods, with mixed methods, and across paradigms. This special issue covers these innovative and challenging areas, with the aim of sharing methodological practice, findings and reflections to drive forward and further the respective fields.

Systematic Review of Qualitative Research (Meta-Synthesis)

Korhonen, Anne, Tuovi Hakulinen-Viitanen, Virpi Jylha, Arja Holopainen. Meta-synthesis and evidence-based health care: a method for systematic review. Scandinavian Journal of Caring Sciences 2013, 27:4, 1027-1034. doi: 10.1111/scs.12003

GRADE-CERQUAL Approach

PLOS Medicine Staff. (2016). Correction: Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLoS Medicine , 13 (6), e1002065. https://doi.org/10.1371/journal.pmed.1002065  PMID:27284910 PMCID: PMC4902189

  • Both the corrected and uncorrected versions are available at the link above. 

See additional information in Website/Tutorial box, below. 

Cochrane Methods:  Qualitative & Implementation - Core Library of Qualitative Synthesis Methodology

  • Core Library .  NB: items represent key methodology resources.  No endorsement of individual methods is implied by inclusion in this list.  See  Supplemental Handbook Guidance .   

Meta-ethnography is generally considered an interpretative (vs. aggregative) qualitative synthesis approach.  

eMERGe Project

Funded by the National Institute for Health Research of the National Health Service (NHS) in the UK, the eMERGe project ran from June 2015 to May 2017 and aimed to develop a guideline to improve the way researchers report meta-ethnographies. The website includes many resources and publications. From the website:

  • Meta-ethnography is an interpretive qualitative synthesis approach developed by George W. Noblit and R. Dwight Hare, in the field of education, in the 1980s. They designed the approach to address the inability of an aggregative synthesis of five ethnographic studies to explain the failure of racial desegregation in schools. In a meta-ethnography, the reviewers conducting the meta-ethnography aim to produce new interpretations that transcend the findings of individual studies, rather than simply to aggregate findings. Noblit and Hare described it as ‘making a whole into something more than the parts alone imply’ (Noblit & Hare, 1988, p. 28), i.e. going beyond the findings of any individual study.

Meta-ethnography differs from other qualitative evidence synthesis approaches in its underpinning theory, use of the authors’ interpretations (e.g. concepts, themes) from primary qualitative studies as data, and creation of new interpretations through its unique analytic synthesis process. Researchers select, analyse and interpret qualitative studies to answer focused questions on a specific topic (e.g. people’s experiences of having and being treated for arthritis) to come up with new insights and conclusions. The aim of the eMERGe project was to develop a guideline to improve the way researchers report meta-ethnographies.

  • EQUATOR NETWORK: guidelines for reporting meta-ethnography EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research)
  • The JBI (Joanna Briggs Institute) Approach to Qualitative Synthesis: "The JBI approach uses a meta-aggregative approach to the synthesis of qualitative evidence. Meta aggregation is sensitive to the nature and traditions of qualitative research while being predicated on the process of systematic review (Pearson 2004). The meta-aggregative approach is sensitive to the practicality and usability of the primary author’s findings and does not seek to re-interpret those findings as some other methods of qualitative synthesis do. A strong feature of the meta-aggregative approach is that it seeks to enable generalizable statements in the form of recommendations to guide practitioners and policy makers (Hannes and Lockwood 2011). In this regard, meta aggregation contrasts with meta-ethnography or the critical interpretive approach to qualitative evidence synthesis, which have a focus on re-interpretation and theory generation rather than aggregation." Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-03
  • Cochrane Methods: Qualitative & Implementation--Core Library of Qualitative Synthesis Methodology Includes the following 4 references.

Note from the Core Library: The following items represent key methodology resources.  No endorsement of individual methods is implied by inclusion in this list.  See  Supplemental Handbook Guidance .   

  • Campbell R, Pound P, Morgan M, Daker-White G, Britten N, Pill R, Yardley L, Pope C, Donovan J.  Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research .  Health Technol Assess . 2011 Dec;15(43):1-164.
  • France, E. F., Ring, N., Thomas, R., Noyes, J., Maxwell, M., & Jepson, R. (2014). A methodological systematic review of what's wrong with meta-ethnography reporting. BMC medical research methodology, 14(1), 119.
  • France, E. F., Wells, M., Lang, H., & Williams, B. (2016).  Why, when and how to update a meta-ethnography qualitative synthesis . Systematic Reviews, 5(1). doi:10.1186/s13643-016-0218-4
  • Toye, F., Seers, K., Allcock, N., Briggs, M., Carr, E., Andrews, J., & Barker, K. (2013). 'Trying to pin down jelly'-exploring intuitive processes in quality assessment for meta-ethnography. BMC Medical Research Methodology, 13(1), 46.

Some additional meta-ethnography resources and examples:

  • SAGE Research Methods (Qualitative) contains many resources for meta-ethnography.  "Meta-ethnography" can be searched from the general Sage Methods page. 
  • To see some examples of published meta-ethnographies in Scopus (which indexes both social science and health-related literature), you can run a quick and dirty search such as "public health AND meta-ethnography" or "public health AND (policy OR management) and meta-ethnography)" .  

Historically speaking:

  • The seminal work on meta-ethnography: Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Sage Publications; 1988 (e-book edition available to UNC Chapel Hill users).
  • A chapter that "explains meta-ethnography as created by Noblit and Hare and how the method has been used since" (e-book edition available to UNC Chapel Hill users).
  • Booth, A. (2001), 'Cochrane or cock-eyed? How should we conduct systematic reviews of qualitative research?' In  Qualitative Evidence-based conference: Taking a critical stance,  Coventry University.  Download the article from ResearchGate  

Websites and Tutorials

Temple University Libraries: What is a Meta-Synthesis

Cochrane Qualitative and Implementation Methods Group

JBI (Joanna Briggs Institute) Manual for Evidence Synthesis, Chapter 2 (Systematic Reviews of Qualitative Evidence): Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-03

Canadian Cochrane Center YouTube Tutorial on Qualitative Evidence Synthesis, published Dec 2, 2013

This recording is of the first webinar of the 2013-2014 Different Evidence, Different Syntheses Series. Jane Noyes of the Cochrane Qualitative and Implementation Methods Group was the presenter for the webinar, held on November 28, 2013. This webinar explored: a) when to consider undertaking a synthesis of qualitative evidence; b) some frequently used methods and examples of developing methods for synthesising qualitative evidence; and c) approaches for integrating qualitative and quantitative findings.

Seminar on CerQual: a new approach to qualitative evidence synthesis analysis Oct 13, 2014

Quality Assessment Tools

  • Appendix A: Tools To Assess Risk of Bias of Individual Outcomes In: Viswanathan M, Ansari MT, Berkman ND, Chang S, Hartling L, McPheeters LM, Santaguida PL, Shamliyan T, Singh K, Tsertsvadze A, Treadwell JR. Assessing the Risk of Bias of Individual Studies in Systematic Reviews of Health Care Interventions. Agency for Healthcare Research and Quality Methods Guide for Comparative Effectiveness Reviews. March 2012. AHRQ Publication No. 12-EHC047-EF. Available at: www.effectivehealthcare.ahrq.gov/
  • Observational Epidemiology Quality Rating Tool Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol 2007;36:666-76.
  • Diagnostic Accuracy Tool Whiting P, Rutjes AWS, Dinnes J, et al. Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004;8(25):iii, 1-234.
  • CASP Checklists A set of critical appraisal checklists. Checklists are available for systematic reviews, qualitative studies, RCTs, case-control studies, diagnostic studies, cohort studies, and economic evaluations.
  • LEGEND Evidence Evaluation Tools A series of critical appraisal tools from the Cincinnati Children's Hospital. Contains tools for a wide variety of study designs, including prospective, retrospective, qualitative, and quantitative designs.
  • The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses Validated tool for assessing case-control and cohort studies.

Search the UNC Library Catalog

The Books section of this page contains a small selection of the books available on incorporating qualitative research in systematic reviews. To find more, click the links below to search the UNC library catalog.

  • Catalog Search -- Review Literature as Topic
  • Catalog Search -- Systematic Reviews (Medical Research)
  • Catalog Search -- Meta-Analysis as Topic

Contact HSL About Systematic Reviews

Email us

Ready to start a systematic review? HSL Librarians can help!

Fill out the Systematic Review Request Form and the best-suited librarian will get back to you promptly.  Our systematic review service is only available to faculty, staff, students, and others who are affiliated with UNC Chapel Hill.



About Systematic Reviews

Qualitative Data Analysis in Systematic Reviews



What Is a Qualitative Systematic Review?

A qualitative systematic review aggregates, integrates and interprets data from qualitative studies, which is collected through observation, interviews and verbal interactions. Included studies may also use other qualitative data collection methodologies found in the relevant literature. A qualitative systematic review analyzes this information and focuses on the meanings derived from it.

A qualitative systematic review generally follows the same steps as indicated by most systematic review guidelines, including the application of eligibility criteria in systematic reviews and the steps for searching and screening available literature. All of these then conclude in the final write-up, which involves tabulating the data into a summary of findings table in the systematic review, and reporting on findings and conclusions. Qualitative systematic reviews differ in that they incorporate qualitative studies and use only qualitative methods in analyzing and synthesizing data.

Why Are Qualitative Systematic Reviews Valuable?

Apart from the rigorous, methodical and reproducible process used, qualitative systematic reviews derive their conclusions from qualitative data, so they bring a human perspective into the process of answering the focused research question. This brings valuable findings, which cannot be expressed in quantitative terms, into the view of the reader: results that are better stated than calculated, such as feelings of compliance or satisfaction following treatment with a new antidepressant.

As another example, if a systematic review dealing with the pain associated with a certain drug considers qualitative data, it can reach conclusions that take into account how subjects feel when taking the medicine, e.g., their level of pain and tolerance.

Types Of Qualitative Systematic Reviews

Pioneers of qualitative systematic reviews suggest that they can be divided into two types: aggregated and interpretive.

Aggregated Systematic Review

An aggregated systematic review simply summarizes the collected data. It generates a summary of the studies using aggregate data obtained from individual studies within the scoped literature.

Interpretive Systematic Review

An interpretive systematic review, which is the more common of the two types, analyzes the data. From the analysis, researchers can derive a new understanding that may lead to the development of a theory and can help understand or predict behavior as it relates to the topic of the review.


How to Analyze Data in a Qualitative Systematic Review

Qualitative systematic reviews deal with a large amount of textual data. This is why undertaking one requires a well-planned, systematic and sustainable approach, as defined in your protocol. It also helps to employ literature review software such as DistillerSR, which removes a significant amount of manual labor by automating key stages of the methodology.

Here are four steps to take for qualitative data analysis in systematic reviews.

Collect and Review the Data

Based on your eligibility criteria, search for and screen the studies relevant to your review. This involves scouring libraries and databases, gathering documents, and printing or saving transcripts. You can also check the reference lists of already eligible studies, and the similar-article recommendations that databases offer during searching are worth checking as well.

Once you’ve collected your data, get a sense of what it contains by reading the collected studies (you’ll likely need to do this several times).

This step can be easier with systematic review software such as DistillerSR, which gives you access to more sources and applies AI to identify the literature you need.

Create And Identify Codes

Connect your data by creating and identifying common ideas. Highlight keywords, and categorize information; it may even be helpful to create concept maps for easy reference.

Develop Themes

Combine your codes and revise them into themes, recognizing recurring concepts, language, opinions, beliefs, etc.

Derive Conclusions and Summarize Findings

Present the themes that you’ve collected in a cohesive manner, using them to answer your review’s research question. Finally, derive conclusions from the data, and summarize your findings in a report.
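The coding and theming workflow above can be pictured with a short, purely illustrative sketch in Python. It is not DistillerSR functionality, and the study names, codes and themes below are invented for the example: codes assigned to each included study are grouped under themes, and each theme is then tallied by how many studies support it before conclusions are written up.

```python
from collections import defaultdict

# Hypothetical codes assigned to each included study during coding (step 2).
codes_by_study = {
    "Study A": {"long waiting times", "fear of side effects", "trust in clinician"},
    "Study B": {"fear of side effects", "family involvement"},
    "Study C": {"trust in clinician", "family involvement", "long waiting times"},
}

# Hypothetical grouping of related codes into broader themes (step 3).
themes = {
    "Access barriers": {"long waiting times"},
    "Treatment concerns": {"fear of side effects"},
    "Relationships and support": {"trust in clinician", "family involvement"},
}

# Tally how many studies support each theme (step 4); this kind of summary can
# feed the narrative synthesis and the summary of findings table.
support = defaultdict(set)
for study, codes in codes_by_study.items():
    for theme, theme_codes in themes.items():
        if codes & theme_codes:
            support[theme].add(study)

for theme in themes:
    studies = sorted(support[theme])
    print(f"{theme}: supported by {len(studies)} studies ({', '.join(studies)})")
```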


UCLA Library: Biomedical Library Guides

Systematic Reviews

Types of Literature Reviews

What Makes a Systematic Review Different from Other Types of Reviews?


Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91–108. doi:10.1111/j.1471-1842.2009.00848.x

Each of the 14 review types in that typology is summarised below by its description and its typical approach to the search, appraisal, synthesis and analysis elements.

  • Critical review. Description: Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality; goes beyond mere description to include a degree of analysis and conceptual innovation; typically results in a hypothesis or model. Search: Seeks to identify the most significant items in the field. Appraisal: No formal quality assessment; attempts to evaluate according to contribution. Synthesis: Typically narrative, perhaps conceptual or chronological. Analysis: Significant component; seeks to identify conceptual contribution to embody existing or derive new theory.
  • Literature review. Description: Generic term for published materials that provide an examination of recent or current literature; can cover a wide range of subjects at various levels of completeness and comprehensiveness; may include research findings. Search: May or may not include comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Typically narrative. Analysis: May be chronological, conceptual, thematic, etc.
  • Mapping review/systematic map. Description: Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature. Search: Completeness of searching determined by time/scope constraints. Appraisal: No formal quality assessment. Synthesis: May be graphical and tabular. Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features; may identify the need for primary or secondary research.
  • Meta-analysis. Description: Technique that statistically combines the results of quantitative studies to provide a more precise effect of the results. Search: Aims for exhaustive, comprehensive searching; may use a funnel plot to assess completeness. Appraisal: Quality assessment may determine inclusion/exclusion and/or sensitivity analyses. Synthesis: Graphical and tabular with narrative commentary. Analysis: Numerical analysis of measures of effect, assuming absence of heterogeneity.
  • Mixed studies review/mixed methods review. Description: Refers to any combination of methods where one significant component is a literature review (usually systematic); within a review context it refers to a combination of review approaches, for example combining quantitative with qualitative research or outcome with process studies. Search: Requires either a very sensitive search to retrieve all studies or separately conceived quantitative and qualitative strategies. Appraisal: Requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists. Synthesis: Typically both components are presented as narrative and in tables; may also employ graphical means of integrating quantitative and qualitative studies. Analysis: May characterise both literatures and look for correlations between characteristics, or use gap analysis to identify aspects absent in one literature but missing in the other.
  • Overview. Description: Generic term for a summary of the [medical] literature that attempts to survey the literature and describe its characteristics. Search: May or may not include comprehensive searching (depends whether systematic or not). Appraisal: May or may not include quality assessment (depends whether systematic or not). Synthesis: Depends on whether systematic or not; typically narrative but may include tabular features. Analysis: May be chronological, conceptual, thematic, etc.
  • Qualitative systematic review/qualitative evidence synthesis. Description: Method for integrating or comparing the findings from qualitative studies; looks for 'themes' or 'constructs' that lie in or across individual qualitative studies. Search: May employ selective or purposive sampling. Appraisal: Quality assessment typically used to mediate messages, not for inclusion/exclusion. Synthesis: Qualitative, narrative synthesis. Analysis: Thematic analysis; may include conceptual models.
  • Rapid review. Description: Assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research. Search: Completeness of searching determined by time constraints. Appraisal: Time-limited formal quality assessment. Synthesis: Typically narrative and tabular. Analysis: Quantities of literature and overall quality/direction of effect of literature.
  • Scoping review. Description: Preliminary assessment of the potential size and scope of available research literature; aims to identify the nature and extent of research evidence (usually including ongoing research). Search: Completeness of searching determined by time/scope constraints; may include research in progress. Appraisal: No formal quality assessment. Synthesis: Typically tabular with some narrative commentary. Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features; attempts to specify a viable review.
  • State-of-the-art review. Description: Tends to address more current matters, in contrast to other combined retrospective and current approaches; may offer new perspectives. Search: Aims for comprehensive searching of current literature. Appraisal: No formal quality assessment. Synthesis: Typically narrative, may have tabular accompaniment. Analysis: Current state of knowledge and priorities for future investigation and research.
  • Systematic review. Description: Seeks to systematically search for, appraise and synthesise research evidence, often adhering to guidelines on the conduct of a review. Search: Aims for exhaustive, comprehensive searching. Appraisal: Quality assessment may determine inclusion/exclusion. Synthesis: Typically narrative with tabular accompaniment. Analysis: What is known and recommendations for practice; what remains unknown, uncertainty around findings and recommendations for future research.
  • Systematic search and review. Description: Combines the strengths of a critical review with a comprehensive search process; typically addresses broad questions to produce a 'best evidence synthesis'. Search: Aims for exhaustive, comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Minimal narrative, tabular summary of studies. Analysis: What is known and recommendations for practice; limitations.
  • Systematized review. Description: Attempts to include elements of the systematic review process while stopping short of a systematic review; typically conducted as a postgraduate student assignment. Search: May or may not include comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Typically narrative with tabular accompaniment. Analysis: What is known; uncertainty around findings; limitations of methodology.
  • Umbrella review. Description: Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document; focuses on a broad condition or problem for which there are competing interventions, highlighting reviews that address these interventions and their results. Search: Identification of component reviews, but no search for primary studies. Appraisal: Quality assessment of studies within component reviews and/or of the reviews themselves. Synthesis: Graphical and tabular with narrative commentary. Analysis: What is known and recommendations for practice; what remains unknown and recommendations for future research.

BMJ Quality & Safety, Volume 33, Issue 5

Equitable and accessible informed healthcare consent process for people with intellectual disability: a systematic literature review

  • http://orcid.org/0000-0002-8498-7329 Manjekah Dunn 1 , 2 ,
  • Iva Strnadová 3 , 4 , 5 ,
  • Jackie Leach Scully 4 ,
  • Jennifer Hansen 3 ,
  • Julie Loblinzk 3 , 5 ,
  • Skie Sarfaraz 5 ,
  • Chloe Molnar 1 ,
  • Elizabeth Emma Palmer 1 , 2
  • 1 Faculty of Medicine & Health , University of New South Wales , Sydney , New South Wales , Australia
  • 2 The Sydney Children's Hospitals Network , Sydney , New South Wales , Australia
  • 3 School of Education , University of New South Wales , Sydney , New South Wales , Australia
  • 4 Disability Innovation Institute , University of New South Wales , Sydney , New South Wales , Australia
  • 5 Self Advocacy Sydney , Sydney , New South Wales , Australia
  • Correspondence to Dr Manjekah Dunn, Paediatrics & Child Health, University of New South Wales Medicine & Health, Sydney, New South Wales, Australia; manjekah.dunn{at}unsw.edu.au

Objective To identify factors acting as barriers or enablers to the process of healthcare consent for people with intellectual disability and to understand how to make this process equitable and accessible.

Data sources Databases: Embase, MEDLINE, PsychINFO, PubMed, SCOPUS, Web of Science and CINAHL. Additional articles were obtained from an ancestral search and hand-searching three journals.

Eligibility criteria Peer-reviewed original research about the consent process for healthcare interventions, published after 1990, involving adult participants with intellectual disability.

Synthesis of results Inductive thematic analysis was used to identify factors affecting informed consent. The findings were reviewed by co-researchers with intellectual disability to ensure they reflected lived experiences, and an easy read summary was created.

Results Twenty-three studies were included (1999 to 2020), with a mix of qualitative (n=14), quantitative (n=6) and mixed-methods (n=3) studies. Participant numbers ranged from 9 to 604 people (median 21) and included people with intellectual disability, health professionals, carers and support people, and others working with people with intellectual disability. Six themes were identified: (1) health professionals’ attitudes and lack of education, (2) inadequate accessible health information, (3) involvement of support people, (4) systemic constraints, (5) person-centred informed consent and (6) effective communication between health professionals and patients. Themes were barriers (themes 1, 2 and 4), enablers (themes 5 and 6) or both (theme 3).

Conclusions Multiple reasons contribute to poor consent practices for people with intellectual disability in current health systems. Recommendations include addressing health professionals’ attitudes and lack of education in informed consent with clinician training, the co-production of accessible information resources and further inclusive research into informed consent for people with intellectual disability.

PROSPERO registration CRD42021290548.

  • Decision making
  • Healthcare quality improvement
  • Patient-centred care
  • Quality improvement
  • Standards of care

Data availability statement

Data are available upon reasonable request. Additional data and materials such as data collection forms, data extraction and analysis templates and QualSyst assessment data can be obtained by contacting the corresponding author.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjqs-2023-016113


What is already known on this topic

People with intellectual disability are frequently excluded from decision-making processes and not provided equal opportunity for informed consent, despite protections outlined in the United Nations Convention on the Rights of Persons with Disabilities.

People with intellectual disability have the capacity and desire to make informed medical decisions, which can improve their well-being, health satisfaction and health outcomes.

What this study adds

Health professionals lack adequate training in valid informed consent and making reasonable adjustments for people with intellectual disability, and continue to perpetuate assumptions of incapacity.

Health information provided to people with intellectual disability is often inaccessible and insufficient for them to make informed decisions about healthcare.

The role of support people, systemic constraints, a person-centred approach and ineffective healthcare communication also affect informed consent.

How this review might affect research, practice or policy

Health professionals need additional training on how to provide a valid informed consent process for people with intellectual disability, specifically in using accessible health information, making reasonable adjustments (e.g., longer/multiple appointments, options of a support person attending or not, using plain English), involving the individual in discussions, and communicating effectively with them.

Inclusive research is needed to hear the voices and opinions of people with intellectual disability about healthcare decision-making and about informed consent practices in specific healthcare settings.

Introduction

Approximately 1% of the world’s population have intellectual disability. 1 Intellectual disability is medically defined as a group of neurodevelopmental conditions beginning in childhood, with below average cognitive functioning and adaptive behaviour, including limitations in conceptual, social and practical skills. 2 People with intellectual disability prefer an alternative strength-based definition, reflected in the comment by Robert Strike OAM (Order of Australia Medal): ‘We can learn if the way of teaching matches how the person learns’, 3 reinforcing the importance of providing information tailored to the needs of a person with intellectual disability. A diagnosis of intellectual disability is associated with significant disparities in health outcomes. 4–7 Person-centred decision-making and better communication have been shown to improve patient satisfaction, 8 9 the physician–patient relationship 10 and overall health outcomes 11 for the wider population. Ensuring people with intellectual disability experience informed decision-making and accessible healthcare can help address the ongoing health disparities and facilitate equal access to healthcare.

Bodily autonomy is an individual’s power and agency to make decisions about their own body. 12 Informed consent for healthcare enables a person to practice bodily autonomy and is protected, for example, by the National Safety and Quality Health Service Standards (Australia), 13 Mental Capacity Act (UK) 14 and the Joint Commission Standards (USA). 15 In this article, we define informed consent according to three requirements: (1) the person is provided with information they understand, (2) the decision is free of coercion and (3) the person must have capacity. 16 For informed consent to be valid, this process must be suited to the individual’s needs so that they can understand and communicate effectively. Capacity is the ability to give informed consent for a medical intervention, 17 18 and the Mental Capacity Act outlines that ‘a person must be assumed to have capacity unless it is established that he lacks capacity’ and that incapacity can only be established if ‘all practicable steps’ to support capacity have been attempted without success. 14 These assumptions of capacity are also decision-specific, meaning an individual’s ability to consent can change depending on the situation, the choice itself and other factors. 17

Systemic issues with healthcare delivery systems have resulted in access barriers for people with intellectual disability, 19 despite the disability discrimination legislation in many countries who are signatories to the United Nations (UN) Convention on the Rights of Persons with Disabilities. 20 Patients with intellectual disability are not provided the reasonable adjustments that would enable them to give informed consent for medical procedures or interventions, 21 22 despite evidence that many people with intellectual disability have both the capacity and the desire to make their own healthcare decisions. 21 23

To support people with intellectual disability to make independent health decisions, an equitable and accessible informed consent process is needed. 24 However, current health systems have consistently failed to provide this. 21 25 To address this gap, we must first understand the factors that contribute to inequitable and inaccessible consent. To the best of our knowledge, the only current review of informed consent for people with intellectual disability is an integrative review by Goldsmith et al . 26 Many of the included articles focused on assessment of capacity 27–29 and research consent. 30–32 The review’s conclusion supported the functional approach to assess capacity, with minimal focus on how the informed consent processes can be improved. More recently, there has been a move towards ensuring that the consent process is accessible for all individuals, including elderly patients 33 and people with aphasia. 34 However, there remains a paucity of literature about the informed consent process for people with intellectual disability, with no systematic reviews summarising the factors influencing the healthcare consent process for people with intellectual disability.

The aim of this review was to identify barriers to and enablers of the informed healthcare consent process for people with intellectual disability, and to understand how this process can be made equitable and accessible.

A systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P). 35 The PRISMA 2020 checklist 36 and ENhancing Transparency in REporting the synthesis of Qualitative research (ENTREQ) reporting guidelines were also followed. 37 The full study protocol is included in online supplemental appendix 1 .

Supplemental material

No patients or members of the public were involved in this research for this manuscript.

Search strategy

A search strategy was developed to identify articles about intellectual disability, consent and healthcare interventions, described in online supplemental appendix 2 . Multiple databases were searched for articles published between January 1990 and January 2022 (Embase, MEDLINE, PsychINFO, PubMed, SCOPUS, Web of Science and CINAHL). These databases include healthcare and psychology databases that best capture relevant literature on this topic, including medical, nursing, social sciences and bioethical literature. The search was limited to studies published from 1990 onwards, as understandings of consent have changed since then. 38 39 This yielded 4853 unique papers, which were imported into Covidence, a specialised programme for conducting systematic reviews. 40
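As a purely illustrative aside (this is not the tooling used in the review, which imported records into Covidence), duplicate records returned by several databases are typically collapsed before screening. A minimal sketch, assuming hypothetical records with a DOI or title field, might look like this:

```python
from typing import Iterable

def normalise_title(title: str) -> str:
    """Lower-case and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records: Iterable[dict]) -> list:
    """Keep one record per DOI where available, otherwise per normalised title."""
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical exports from two databases, with one overlapping record.
records = [
    {"doi": "10.1000/example1", "title": "Consent processes in primary care"},
    {"doi": "10.1000/example1", "title": "Consent Processes in Primary Care."},
    {"doi": "", "title": "Supported decision-making: a qualitative study"},
]
print(len(deduplicate(records)))  # -> 2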

Study selection

Citation screening by abstract and titles was completed by two independent researchers (MD and EEP). Included articles had to:

Examine the informed consent process for a healthcare intervention for people with intellectual disability.

Have collected more than 50% of their data from relevant stakeholders, including adults with intellectual disability, families or carers of a person with intellectual disability, and professionals who engage with people with intellectual disability.

Report empirical data from primary research methodology.

Be published in a peer-reviewed journal after January 1990.

Be available in English.

Full text screening was completed by two independent researchers (MD and EEP). Articles were excluded if consent was only briefly discussed or if it focused on consent for research, capacity assessment, or participant knowledge or comprehension. Any conflicts were resolved through discussion with an independent third researcher (IS).

Additional studies were identified through an ancestral search and by hand-searching three major journals relevant to intellectual disability research. Journals were selected if they had published more than one included article for this review or in previous literature reviews conducted by the research team.

Quality assessment

Two independent researchers (MD and IS) assessed study quality with the QualSyst tool, 41 which can assess both qualitative and quantitative research papers. After evaluating the distribution of scores, a threshold value of 55% was used, as suggested by QualSyst 41 to exclude poor-quality studies but capture enough studies overall. Any conflicts between the quality assessment scores were resolved by a third researcher (EEP). For mixed-method studies, both qualitative and quantitative quality scores were calculated, and the higher value used.
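A minimal sketch of how such a threshold can be applied is shown below. The item-level scoring assumed here (2 for 'yes', 1 for 'partial', 0 for 'no', with not-applicable items dropped from the denominator) is an assumption about the QualSyst tool rather than something stated in this article; the 55% cut-off and the use of the higher of the two scores for mixed-methods studies come from the text above.

```python
from typing import Optional, Sequence

def qualsyst_percentage(item_scores: Sequence[Optional[int]]) -> float:
    """Summary score as a percentage: the sum of item scores (2 = yes, 1 = partial,
    0 = no) over the maximum possible, with items marked None (not applicable)
    excluded from the denominator. This item-level scoring rule is an assumption
    made for illustration, not something stated in the article."""
    applicable = [s for s in item_scores if s is not None]
    return 100.0 * sum(applicable) / (2 * len(applicable))

def include_study(qualitative: Optional[Sequence[Optional[int]]] = None,
                  quantitative: Optional[Sequence[Optional[int]]] = None,
                  threshold: float = 55.0) -> bool:
    """Apply the review's stated rule: compute whichever checklist scores are
    available, take the higher value for mixed-methods studies, and include
    the study if that value meets the 55% threshold."""
    scores = [qualsyst_percentage(items)
              for items in (qualitative, quantitative) if items is not None]
    return max(scores) >= threshold

# Hypothetical mixed-methods study: a weak qualitative score but an adequate
# quantitative score, so the study would be retained.
print(include_study(qualitative=[1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
                    quantitative=[2, 1, 2, None, 1, 2, 2, 1, 2, 1, 2, None, 1, 2]))
```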

Data collection

Two independent researchers (MD and JH) reviewed each study and extracted relevant details, including study size, participant demographics, year, country of publication, study design, data analysis and major outcomes reported. Researchers used standardised data collection forms, designed with input from senior researchers with expertise in qualitative research (IS and EEP), to extract data relevant to the review's research aims. The form was piloted on one study, and a second iteration was made based on feedback. These forms captured data on study design, methods, participants, any factors affecting the process of informed consent and study limitations. Data included descriptions and paragraphs outlining key findings, the healthcare context, verbatim participant quotes and any quantitative analyses or statistics. Missing or unclear data were noted.

Data analysis

A pilot literature search showed significant heterogeneity in methodology of studies, limiting the applicability of traditional quantitative analysis (ie, meta-analysis). Instead, inductive thematic analysis was chosen as an alternative methodology 42 43 that has been used in recent systematic reviews examining barriers and enablers of other health processes. 44 45 The six-phase approach described by Braun and Clarke was used. 46 47 A researcher (MD) independently coded the extracted data of each study line-by-line, with subsequent data grouped into pre-existing codes or new concepts when necessary. Codes were reviewed iteratively and grouped into categories, subthemes and themes framed around the research question. Another independent researcher (JH) collated and analysed the data on study demographics, methods and limitations. The themes were reviewed by two senior researchers (EEP and IS).

Qualitative analogues of effect size calculation have been described in the literature, 48 49 and were captured in this review as the number of studies that identified each subtheme, with an assigned frequency rating used to compare relative significance. Subthemes were given a frequency rating of A, B, C or D if they were identified by >10, 7–9, 4–6 or <3 articles, respectively. The overall significance of each theme was estimated from the number of studies that mentioned it and from the GRADE framework, a stepwise approach to quality assessment using a four-tier rating system. Each study was evaluated for risk of bias, inconsistency, indirectness, imprecision and publication bias. 50 51 Study sensitivity was assessed by counting the number of distinct subthemes included. 52 The quality of findings was designated high, moderate or low depending on the frequency ratings, the QualSyst score and the GRADE scores of studies supporting the finding. Finally, the relative contribution of each study was evaluated by the number of subthemes it described, guided by previously reported methods for qualitative reviews. 52
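The frequency-rating rule described above can be expressed as a small function. Note that the published bands (>10, 7–9, 4–6, <3) leave counts of exactly 10 and 3 unassigned; treating them as falling into the adjacent band is an assumption added here purely for illustration.

```python
def frequency_rating(n_articles: int) -> str:
    """Frequency rating for a subtheme based on how many articles identified it:
    A for >10, B for 7-9, C for 4-6 and D for <3, as stated in the review.
    Counts of exactly 10 or 3 are not covered by those bands; treating them as
    part of the adjacent band (B and D respectively) is an assumption made here."""
    if n_articles > 10:
        return "A"
    if 7 <= n_articles <= 10:
        return "B"
    if 4 <= n_articles <= 6:
        return "C"
    return "D"

# Example: the subtheme identified by 16 studies (lack of accessible health
# information) would receive the highest rating.
print(frequency_rating(16))  # -> "A"
print(frequency_rating(5))   # -> "C"
```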

Co-research

The findings were reviewed by two co-researchers with intellectual disability (JL and SS), who have over 30 years' combined experience as members and employees of a self-advocacy organisation. Guidance on the findings was provided and an easy read summary was produced, in line with best-practice inclusive research, 53 54 over multiple discussions. Input from two health professional researchers (MD and EEP) provided data triangulation and sense-checking of findings.

Twenty-three articles were identified ( figure 1 ): 14 qualitative, 6 quantitative and 3 mixed-methods. Two papers included the same population of study participants: McCarthy 55 and McCarthy, 56 but had different research questions. Fovargue et al 57 was excluded due to a quality score of 35%. Common quality limitations were a lack of verification procedures to establish credibility and limited researcher reflexivity. No studies were excluded due to language requirements (as all were in English) or age restrictions (all studies had majority adult participants).


PRISMA 2020 flowchart for the systematic review. 36

Studies were published from 1999 to 2020 and involved participant populations from the UK (n=18), USA (n=3), Sweden (n=1) and Ireland (n=1). Participant numbers ranged from 9 to 604 (median 21), and participants included people with intellectual disability (n=817), health professionals (n=272), carers and support people (n=48), and other professionals that work with people with intellectual disability (n=137, community service agency directors, social workers, administrative staff and care home staff). Ages of participants ranged from 8 to 84 years, though only Aman et al 58 included participants <18 years of age. This study was included as the article states very few children were included. Studies examined consent in different contexts, including contraception and sexual health (6/23 articles), 58–60 medications (5/23 articles), 58–62 emergency healthcare, 63 cervical screening, 64 community referrals, 58–61 65 mental health, 66 hydrotherapy, 64 blood collection 67 and broad decision-making consent without a specific context. 65 68–71 A detailed breakdown of each study is included in online supplemental appendix 3 .

Six major themes were identified from the studies, summarised in figure 2 . An overview of included studies showing study sensitivity, effect size, QualSyst and GRADE scores is given in online supplemental appendix 4 . Studies with higher QualSyst and GRADE scores contributed more to this review’s findings and tended to include more subthemes; specifically, Rogers et al , 66 Sowney and Barr, 63 Höglund and Larsson, 72 and McCarthy 55 and McCarthy. 56 Figure 3 gives the easy read version of theme 1, with the full easy read summary in online supplemental appendix 5 .

Summary of the identified six themes and subthemes.

Theme 1 of the easy read summary.

Theme 1—Health professionals’ attitudes and lack of education about informed consent

Health professionals’ attitudes and practices were frequently (18/21) identified as factors affecting the informed consent process, with substantial evidence supporting this theme. Studies noted the lack of training for health professionals in supporting informed consent for people with intellectual disability, their desire for further education, and stereotypes and discrimination perpetuated by health professionals.

Lack of health professional education on informed consent and disability discrimination legislation

Multiple studies reported inconsistent informed consent practices, for various reasons: some reported that health professionals ‘forgot’ to or ‘did not realise consent was necessary’, 63 73 but inconsistent consent practices were also attributed to healthcare providers’ unfamiliarity with consent guidelines and poor education on this topic. Carlson et al 73 reported that only 44% of general practitioners (GPs) were aware of consent guidelines, and there was the misconception that consent was unnecessary for people with intellectual disability. Similarly, studies of psychologists 66 and nurses 63 found that many were unfamiliar with their obligations to obtain consent, despite the existence of anti-discrimination legislation. People with intellectual disability describe feeling discriminated against by health professionals, reflected in comments such as ‘I can tell, my doctor just thinks I’m stupid – I'm nothing to him’. 74 Poor consent practices by health professionals were observed in Goldsmith et al , 67 while health professionals surveyed by McCarthy 56 were unaware of their responsibility to provide accessible health information to women with intellectual disability. Improving health professional education and training was suggested by multiple studies as a way to remove this barrier. 63 65–67 69 73

Lack of training on best practices for health professionals caring for people with intellectual disability

A lack of training in caring for and communicating with people with intellectual disability was also described by midwives, 72 psychologists, 66 nurses, 63 pharmacists 61 and GPs. 56 72 75 Health professionals lacked knowledge about best practice approaches to providing equitable healthcare consent processes through reasonable adjustments such as accessible health information, 56 60 66 longer appointment times, 60 72 simple English 62 67 and flexible approaches to patient needs. 63 72

Health professionals’ stereotyping and assumptions of incapacity

Underlying stereotypes contributed to some health professionals' (including nurses, 63 GPs 56 and physiotherapists 64 ) belief that people with intellectual disability lack capacity and therefore do not require opportunities for informed consent. 56 64 In a survey of professionals referring people with intellectual disability to a disability service, the second most common reason for not obtaining consent was 'patient unable to understand'. 73

Proxy consent as an inappropriate alternative

People with intellectual disability are rarely the final decision-makers in their medical choices, with many health providers seeking proxy consent from carers, support workers and family members, despite its legal invalidity. In McCarthy's study (2010), 18/23 women with intellectual disability said the decision to start contraception was made by someone else. Many GPs appeared unaware that proxy consent is invalid in the UK. 56 Similar reports came from people with intellectual disability, 55 56 60 64 69 76 health professionals (nurses, doctors, allied health, psychologists), 56 63 64 66 77 support people 64 77 and non-medical professionals, 65 73 and capacity was rarely documented. 56 62 77

Exclusion of people with intellectual disability from decision-making discussions

Studies described instances where health professionals made decisions for their patients with intellectual disability or coerced patients into a choice. 55 72 74 76 77 In Ledger et al , 77 only 62% of women with intellectual disability were involved in the discussion about contraception, and only 38% made the final decision; a participant in Wiseman and Ferrie 74 stated: 'I was not given the opportunity to explore the different options. I was told what one I should take'. Three papers outlined instances where the choices of people with intellectual disability were ignored despite them possessing capacity 65 66 69 and where a procedure continued despite them withdrawing consent. 69

Theme 2—Inadequate accessible health information

Lack of accessible health information

The lack of accessible health information was the most frequently identified subtheme (16/23 studies). Some studies reported that health professionals provided information to carers instead, 60 avoided providing easy read information due to concerns about ‘offending’ patients 75 or only provided verbal information. 56 67 Informed consent was supported when health professionals recognised the importance of providing medical information 64 and when it was provided in an accessible format. 60 Alternative approaches to health information were explored, including virtual reality 68 and in-person education sessions, 59 with varying results. Overall, the need to provide information in different formats tailored to an individual’s communication needs, rather than a ‘one size fits all’ approach, was emphasised by both people with intellectual disability 60 and health professionals. 66

Insufficient information provided

Studies described situations where insufficient information was provided to people with intellectual disability to make informed decisions. For example, some people felt the information from their GP was often too basic to be helpful (Fish et al 60 ) and wanted additional information on consent forms (Rose et al 78 ).

Theme 3—The involvement of support people

Support people (including carers, family members and group home staff) were identified in 11 articles as both enablers of and barriers to informed consent. The conflicting nature of these findings and the lower frequency of subthemes are reflected in the lower quality assessments of this evidence.

Support people facilitated communication with health professionals

Some studies reported carers bridging communication barriers with health professionals to support informed consent. 63 64 McCarthy 56 found that 21/23 women with intellectual disability preferred to see doctors with a support person because of perceived benefits: 'Sometimes I don’t understand it, so they have to explain it to my carer, so they can explain it to me easier'. Most GPs in this study (93%) also agreed that support people aided communication.

Support people helped people with intellectual disability make decisions

By advocating for people with intellectual disability, carers encouraged decision-making, 64 74 provided health information, 74 77 emotional support 76 and assisted with reading or remembering health information. 55 58 76 Some people with intellectual disability explicitly appreciated their support person’s involvement, 60 such as in McCarthy’s 55 study where 18/23 participants felt supported and safer when a support person was involved.

Support people impeded individual autonomy

The study by Wiseman and Ferrie 74 found that while younger participants with intellectual disability felt family members empowered their decision-making, older women felt family members impaired their ability to give informed consent. This was reflected in interviews with carers who questioned the capacity of the person with intellectual disability they supported and stated they would guide them to pick the ‘best choice’ or even over-ride their choices. 64 Studies of psychologists and community service directors described instances where the decision of family or carers was prioritised over the wishes of the person with intellectual disability. 65 66 Some women with intellectual disability in McCarthy’s studies (2010, 2009) 55 56 appeared to have been coerced into using contraception by parental pressures or fear of losing group home support.

Theme 4—Systemic constraints within healthcare systems

Time constraints affect informed consent and accessible healthcare

Resource limitations create time constraints that impair the consent process and have been identified as a barrier by psychologists, 66 GPs, 56 hospital nurses 63 and community disability workers. 73 Rogers et al 66 highlighted that a personalised approach that could improve informed decision-making is restricted by inflexible medical models. Only two studies described flexible patient-centred approaches to consent. 60 72 A survey of primary care practices in 2007 reported that most did not modify their cervical screening information for patients with intellectual disability because it was not practical. 75

Inflexible models of consent

Both people with intellectual disability 76 and health professionals 66 recognised that consent is traditionally obtained through one-off interactions prior to an intervention. Yet, for people with intellectual disability, consent should ideally be an ongoing process that begins before an appointment and continues between subsequent ones. Other studies have tended to describe one-off interactions where decision-making was not revisited at subsequent appointments. 56 60 72 76

Lack of systemic supports

In one survey, self-advocates highlighted a lack of information on medication for people with intellectual disability and suggested a telephone helpline and a centralised source of information to support consent. 60 Health professionals also want greater systemic support, such as a health professional specialised in intellectual disability care to support other staff, 72 or a pharmacist specifically to help patients with intellectual disability. 61 Studies highlighted a lack of guidelines about healthcare needs of people with intellectual disabilities such as contraceptive counselling 72 or primary care. 75

Theme 5—Person-centred informed consent

Ten studies identified factors related to a person-centred approach to informed consent, grouped below into three subthemes. Health professionals should tailor their practice when obtaining informed consent from people with intellectual disability by considering how these subthemes relate to the individual. Each subtheme was described five times in the literature with a relative frequency rating of ‘C’, contributing to overall lower quality scores.

Previous experience with decision-making

Arscott et al 71 found that the ability of people with intellectual disability to consent changed with their verbal and memory skills and in different clinical vignettes, supporting the view of ‘functional’ capacity specific to the context of the medical decision. Although previous experiences with decision-making did not influence informed consent in this paper, other studies suggest that people with intellectual disability accustomed to independent decision-making were more able to make informed medical decisions, 66 70 and those who live independently were more likely to make independent healthcare decisions. 56 Health professionals should be aware that their patients with intellectual disability will have variable experience with decision-making and provide individualised support to meet their needs.

Variable awareness about healthcare rights

Consent processes should be tailored to the health literacy of patients, including emphasising available choices and the option to refuse treatment. In some studies, medical decisions were not presented to people with intellectual disability as a choice, 64 and people with intellectual disability were not informed of their legal right to accessible health information. 56

Power differences and acquiescence

Acquiescence by people with intellectual disability due to common and repeated experiences of trauma—that is, their tendency to agree with suggestions made by carers and health professionals, often to avoid upsetting others—was identified as an ongoing barrier. In McCarthy’s (2009) interviews with women with intellectual disability, some participants implicitly rejected the idea that they might make their own healthcare decisions: ‘They’re the carers, they have responsibility for me’. Others appeared to have made decisions to appease their carers: ‘I have the jab (contraceptive injection) so I can’t be blamed for getting pregnant’. 55 Two studies highlighted that health professionals need to be mindful of power imbalances when discussing consent with people with intellectual disability to ensure the choices are truly autonomous. 61 66

Theme 6—Effective communication between health professionals and patients

Implementation of reasonable adjustments for verbal and written information

Simple language was always preferred by people with intellectual disability. 60 67 Other communication aids used in decision-making included repetition, short sentences, models, pictures and easy read brochures. 72 Another reasonable adjustment is providing the opportunity to ask questions, which women with intellectual disability in McCarthy’s (2009) study reported did not occur. 55

Tailored communication methods including non-verbal communication

Midwives noted that continuity of care allows them to develop rapport and understand the communication preferences of people with intellectual disability. 72 This is not always possible; for emergency nurses, the lack of background information about patients with intellectual disability made it challenging to understand their communication preferences. 63 The use of non-verbal communication, such as body language, was noted as underutilised 62 66 and people with intellectual disability supported the use of hearing loops, braille and sign language. 60

To the best of our knowledge, this is the first systematic review investigating the barriers and enablers of the informed consent process for healthcare procedures for people with intellectual disability. The integrative review by Goldsmith et al 26 examined capacity assessment and shares only three articles with this systematic review. 69 71 73 Since the 2000s, there has been a paradigm shift in which capacity is no longer considered a fixed ability that only some individuals possess 38 39 but instead as ‘functional’: a flexible ability that changes over time and in different contexts, 79 reflected in Goldsmith’s review. An individual’s capacity can be supported through various measures, including how information is communicated and how the decision-making process is approached. 18 80 By recognising the barriers and enablers identified in this review, physicians can help ensure the consent process for their patients with intellectual disability is both valid and truly informed. This review has highlighted the problems of inaccessible health information, insufficient clinical education on how to make reasonable adjustments and lack of person-centred trauma-informed care.

Recommendations

Health professionals require training in the informed consent process for people with intellectual disability, particularly in effective and respectful communication, reasonable adjustments and trauma-informed care. Reasonable adjustments include offering longer or multiple appointments, using accessible resources (such as easy read information or shared decision-making tools) and allowing patient choices (such as to record a consultation or involve a support person). Co-researchers reported that many people with intellectual disability prefer to go without a support person because they find it difficult to challenge their decisions and feel ignored if the health professional only talks to the support person. People with intellectual disability also feel they cannot seek second opinions before making medical decisions or feel pressured to provide consent, raising the possibility of coercion. These experiences contribute to healthcare trauma. Co-researchers raised the importance of building rapport with the person with intellectual disability and of making reasonable adjustments, such as actively advocating for the person’s autonomy, clearly stating all options including the choice to refuse treatment, providing opportunities to contribute to discussions and multiple appointments to ask questions and understand information. They felt that without these efforts to support consent, health professionals can reinforce traumatic healthcare experiences for people with intellectual disability. Co-researchers noted instances where choices were made by doctors without discussion and where they were only given a choice after requesting one and expressed concern that these barriers are greater for those with higher support needs.

Co-researchers showed how these experiences contributed to mistrust of health professionals and poorer health outcomes. In one situation, a co-researcher was not informed of a medication’s withdrawal effects, resulting in significant side-effects when it was ceased. Many people with intellectual disability describe a poor relationship with their health professionals, finding it difficult to trust health information provided due to previous traumatic experiences of disrespect, coercion, lack of choice and inadequate support. Many feel they cannot speak up due to the power imbalance and fear of retaliation. Poor consent practices and lack of reasonable adjustments directly harm therapeutic alliances by reducing trust, contribute to healthcare trauma and lead to poorer health outcomes for people with intellectual disability.

Additional education and training for health professionals is urgently needed in the areas of informed consent, reasonable adjustments and effective communication with people with intellectual disability. The experiences of health professionals within the research team confirmed that there is limited training in providing high-quality healthcare for people with intellectual disability, including reasonable adjustments and accessible health information. Co-researchers also suggested that education should be provided to carers and support people to help them better advocate for people with intellectual disability.

Health information should be provided in a multimodal format, including written easy read information. Many countries have regulation protecting the right to accessible health information and communication support to make an informed choice, such as UK’s Accessible Information Standard, 81 and Australia’s Charter of Health Care Rights, 24 yet these are rarely observed. Steps to facilitate this include routinely asking patients about information requirements, system alerts for an individual’s needs or routinely providing reasonable adjustments. 82 Co-researchers agreed that there is a lack of accessible health information, particularly about medications, and that diagrams and illustrations are underutilised. There is a critical need for more inclusive and accessible resources to help health professionals support informed consent in a safe and high-quality health system. These resources should be created through methods of inclusive research, such as co-production, actively involving people with intellectual disability in the planning, creation, and feedback process. 53

Strengths and limitations

This systematic review involved two co-researchers with intellectual disability in sense-checking findings and co-creating the easy read summary. Two co-authors who are health professionals provided additional sense-checking of findings from a different stakeholder perspective. In future research, this could be extended by involving people with intellectual disability in the design and planning of the study as per recommendations for best-practice inclusive research. 53 83

The current literature is limited by low use of inclusive research practices in research involving people with intellectual disability, increasing vulnerability to external biases (eg, inaccessible questionnaires, involvement of carers in data collection, overcompliance or acquiescence and absence of researcher reflexivity). Advisory groups or co-research with people with intellectual disability were only used in five studies. 58 60 68 74 76 Other limitations include unclear selection criteria, low sample sizes, missing data, using gatekeepers in patient selection and predominance of UK-based studies—increasing the risk of bias and reducing transferability. Nine studies (out of 15 involving people with intellectual disability) explicitly excluded those with severe or profound intellectual disability, reflecting a selection bias; only one study specifically focused on people with intellectual disability with higher support needs. Studies were limited to a few healthcare contexts, with a focus on consent about sexual health, contraception and medications.

The heterogeneity and qualitative nature of studies made it challenging to apply traditional meta-analysis. However, to promote consistency in qualitative research, the PRISMA and ENTREQ guidelines were followed. 36 37 Although no meta-analyses occurred, the duplication of study populations in McCarthy 2009 and 2010 likely contributed to increased significance of findings reported in both studies. Most included studies (13/23) were published over 10 years ago, reducing the current relevance of this review’s findings. Nonetheless, the major findings reflect underlying systemic issues within the health system, which are unlikely to have been resolved since the articles were published, as the just-released final report of the Australian Royal Commission into Violence, Abuse, Neglect and Exploitation of People with Disability highlights. 84 There is an urgent need for more inclusive studies to explore the recommendations and preferences of people with intellectual disability about healthcare choices.

Informed consent processes for people with intellectual disability should include accessible information and reasonable adjustments, be tailored to individuals’ needs and comply with consent and disability legislation. Resources, guidelines and healthcare education are needed and should cover how to involve carers and support people, address systemic healthcare problems, promote a person-centred approach and ensure effective communication. These resources and future research must use principles of inclusive co-production—involving people with intellectual disability at all stages. Additionally, research is needed on people with higher support needs and in specific contexts where informed consent is vital but under-researched, such as cancer screening, palliative care, prenatal and newborn screening, surgical procedures, genetic medicine and advanced therapeutics such as gene-based therapies.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1
  • Data supplement 2
  • Data supplement 3
  • Data supplement 4
  • Data supplement 5

Contributors MD, EEP and IS conceived the idea for the systematic review. MD drafted the search strategy which was refined by EEP and IS. MD and EEP completed article screening. MD and IS completed quality assessments of included articles. MD and JH completed data extraction. MD drafted the original manuscript. JL and SS were co-researchers who sense-checked findings and were consulted to formulate dissemination plans. JL and SS co-produced the easy read summary with MD, CM, JH, EEP and IS. MD, JLS, EEP and IS reviewed manuscript wording. All authors critically reviewed the manuscript and approved it for publication. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. MD is the guarantor responsible for the overall content of this manuscript.

Funding This systematic literature review was funded by the National Health & Medical Research Council (NHMRC), Targeted Call for Research (TCR) into Improving health of people with intellectual disability. Research grant title "GeneEQUAL: equitable and accessible genomic healthcare for people with intellectual disability". NHMRC application ID: 2022/GNT2015753.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Linked Articles

  • Editorial It is up to healthcare professionals to talk to us in a way that we can understand: informed consent processes in people with an intellectual disability Jonathon Ding Richard Keagan-Bull Irene Tuffrey-Wijne BMJ Quality & Safety 2024; 33 277-279 Published Online First: 30 Jan 2024. doi: 10.1136/bmjqs-2023-016830


Galdas P, Darwin Z, Fell J, et al. A systematic review and metaethnography to identify how effective, cost-effective, accessible and acceptable self-management support interventions are for men with long-term conditions (SELF-MAN). Southampton (UK): NIHR Journals Library; 2015 Aug. (Health Services and Delivery Research, No. 3.34.)


A systematic review and metaethnography to identify how effective, cost-effective, accessible and acceptable self-management support interventions are for men with long-term conditions (SELF-MAN).

Chapter 3: Qualitative review methods

The objective of the qualitative metaethnography was to systematically identify experiences of, and perceptions of, interventions or specific activities aimed at supporting or promoting self-management of LTCs among men of differing age, ethnicity and socioeconomic background.

A summary of the methods used in the metaethnography is provided in Appendix 3 , using the enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) reporting standards for qualitative evidence synthesis, developed by Tong et al. 93

The evidence synthesis was conducted using a metaethnography approach originally described by Noblit and Hare. 94 This approach was chosen because of its emphasis on conceptual development and generating new insights (i.e. being interpretive rather than integrative 94 ) and because it is compatible with synthesising all types of qualitative research. 95

Metaethnography involves seven stages: getting started, deciding what is relevant, reading the studies, determining how studies are related to each other, translating studies into each other, synthesising translations and expressing the synthesis; 94 these seven, often overlapping, stages are depicted in Figure 7 .

Seven steps of metaethnography.

  • Step 1: getting started

The first stage involved identifying a ‘worthy’ research question and one that could be addressed through qualitative evidence synthesis. 94 This stage took place in developing the original funding application for the current review and its justification is presented in Chapter 1 .

  • Step 2: deciding what is relevant

The second stage, ‘deciding what is relevant’, was viewed as comprising the search strategy, inclusion criteria and quality appraisal, consistent with the experiences of Atkins et al. 96 These are presented next, before steps 3–7 are described in the section Data extraction strategy and data analysis .

  • Search methods

Search strategy

A comprehensive electronic search strategy ( Appendix 4 ) was developed in liaison with information specialists. It sought to identify all available studies, rather than using purposive sampling to identify all available concepts. Five electronic databases were searched in July 2013 [Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, MEDLINE, PsycINFO and Social Science Citation Index].

Because of challenges with methodological indexing of qualitative research, 97 the electronic search was complemented by checking reference lists, and using an adapted strategy published elsewhere 98 that includes ‘thesaurus terms’ (keywords indexed in electronic databases, e.g. ‘Qualitative Research’), ‘free text terms’ (commonly used research methodology terms searched for in the titles, abstracts and keywords) and ‘broad-based terms’ (i.e. the broad free-text terms ‘qualitative’, ‘findings’ and ‘interview$’ and the thesaurus term ‘Interviews’). Terms relating to gender were combined with other terms to narrow the search and increase the precision of the strategy (e.g. ‘men’, ‘male’, ‘masculine$’, ‘gender’, ‘sex difference$’, ‘sex factors’).
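To make this concrete, the sketch below shows one way such a Boolean strategy could be assembled; the thesaurus, broad-based and gender terms are taken from the description above, whereas the free-text methodology terms are invented placeholders (the full SELF-MAN search strings are given in Appendix 4).

```python
# Illustrative sketch only: assembling a Boolean search string from term groups,
# in the spirit of the strategy described above. The free-text methodology terms
# below are hypothetical examples; the actual SELF-MAN terms are in Appendix 4.

thesaurus_terms = ['"Qualitative Research"', '"Interviews"']            # indexed keywords
free_text_terms = ['"grounded theory"', '"thematic analysis"', '"focus group$"']  # assumed examples
broad_terms = ["qualitative", "findings", "interview$"]                 # broad free-text terms
gender_terms = ["men", "male", "masculine$", "gender", '"sex difference$"', '"sex factors"']

def or_block(terms):
    """Join a group of terms with OR and wrap the result in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Methodology filter: a record may match any of the three term groups.
methodology_filter = or_block(thesaurus_terms + free_text_terms + broad_terms)

# Gender terms are combined (ANDed) with the methodology filter to narrow the search.
search_string = methodology_filter + " AND " + or_block(gender_terms)

print(search_string)
```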

Study selection: study screening methods and inclusion criteria

Records were initially screened by one reviewer (ZD) on the basis of the title and abstract. Decisions were recorded in EndNote X7.0.2 (Thomson Reuters, CA, USA), a reference management database. All articles identified as potentially eligible for inclusion were obtained in full. Attempts were made to identify and obtain published findings for unpublished literature that was otherwise eligible, for example doctoral theses or conference proceedings.

The full-text literature was screened independently by two reviewers (ZD and PG) using the inclusion criteria listed in Table 5 . Studies that explored the experiences of men alone, or included a clear and explicit comparison between men and women, were included. Studies which focused on self-management experiences of people with LTCs more generally (i.e. did not consider experiences of, or perceptions of, a self-management support intervention or activity) were excluded. The approach to screening was inclusive; for example, studies where the qualitative findings were limited (e.g. Iredale et al. , 99 Ramachandra et al. , 100 Smith et al. 101 ) and mixed-sex studies with limited findings on gender comparisons (e.g. Barlow et al. 102 , 103 ) were retained in case they contributed to the synthesis.

TABLE 5

Screening criteria: qualitative

  • Classification of self-management interventions and support activities in the qualitative evidence synthesis

The original study protocol sought to code self-management interventions and support activities using the most up-to-date version of the taxonomy of BCT. 104 – 106 As in the quantitative review (see Chapter 2, Coding interventions for analysis), we found that the information reported on self-management interventions or activities in the qualitative literature was limited in detail, precision and consistency, making coding with the BCT taxonomy unfeasible.

Most of the qualitative literature did not focus on behaviour change per se or seek to address men’s views and experiences of behaviour change techniques; for example, some papers were concerned with the dynamics of social support groups, or the use of other self-management support and information. The BCT taxonomy is applicable only to studies judged as targeting behaviour change; we were therefore limited to ‘lifestyle’ and ‘psychological’ studies. Only a minority of the studies (n = 13) provided sufficient information on interventions to allow even rudimentary coding with the BCT taxonomy, and these are presented in Appendix 5. Issues around application of the BCT taxonomy are returned to in the discussion chapter (see Chapter 6).

The lack of detail reported in the qualitative literature also made it unfeasible to classify interventions using the system developed for the quantitative review. Whereas the quantitative review concerned trials of specific interventions, approximately half of the studies in the qualitative review 99 , 101 , 107 – 130 included more than one intervention or activity (e.g. ‘any cancer support group’).

We therefore developed a broad system for classifying interventions and support activities that offered a pragmatic way to group studies and make the analysis process more manageable. The categories are shown in Table 6 .

TABLE 6

Categories and descriptions of self-management interventions and support activities in the qualitative evidence synthesis

  • Quality assessment strategy

The purpose of quality appraisal in the review was to provide descriptive information on the quality of the included studies rather than as a basis for inclusion. We considered that studies of weaker quality either would not contribute or would contribute only minimally to the final synthesis. 94 , 131 We therefore chose not to use design-specific appraisal tools (which the original protocol stated we would) because we placed emphasis on conceptual contribution, which did not require a detailed design-specific appraisal of methodological quality. With that in mind, we used the Critical Appraisal Skills Programme (CASP) tool. 132

The CASP tool comprises 10 checklist-style questions (see Appendix 6 ) for assessing the quality of various domains (including aims, design, methods, data analysis, interpretation, findings and value of the research). Because of the checklist nature of the CASP tool, we developed some additional questions informed by other metaethnography studies 96 , 131 that enabled us to extract and record more detailed narrative summaries of the main strengths, limitations and concerns of each study (see Appendix 7 ).

The CASP tool was used in the light of the experiences reported by other researchers who recommended that, despite rather low inter-rater agreement, such an approach ‘encourag[es] the reviewers to read the papers carefully and systematically, and serves as a reminder to treat the papers as data for the synthesis’ (p. 44). 131

Its focus is on procedural aspects of the conduct of the research rather than the insights offered. 133 The quality appraisal (which focused on methodological quality) did not form part of the inclusion criteria because, as recognised by Campbell et al. , 131 it is conceptual quality that is most important for evidence synthesis and it is the process of synthesis that judges the ‘worth’ of studies, with conceptually limited studies making a limited contribution. 94 Additionally, it is acknowledged that agreement is often slight, with low reproducibility. 131 , 133 Appraisal was conducted by two reviewers independently (ZD and PG), with discrepancies resolved through discussion.

Search outcome

The electronic search strategy identified 6330 unique references. Screening based on title and abstract identified 149 papers for full-text screening. Dual screening of these full-text articles identified 34 studies (reported in 38 papers) to be included in the review. Reasons for excluding the remaining 111 articles are shown in Table 7 .

TABLE 7

Reasons for exclusion of full-text articles

Inter-rater agreement on the decision to include was 88.6%. The majority of disagreements ( n  = 17) concerned the definition of self-management intervention or activity. Having discussed the 17 disagreements, we agreed that five studies on which there was disagreement would be included. 100 , 103 , 110 , 116 , 134
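As a quick arithmetic check (assuming the 17 disagreements were the only discordant decisions across the 149 dual-screened full texts), the reported agreement follows directly:

```python
# Hypothetical check of the reported inter-rater agreement, assuming the 17
# disagreements were the only discordant decisions among the 149 full texts
# screened by both reviewers.
full_texts_screened = 149
disagreements = 17

agreement = 1 - disagreements / full_texts_screened
print(f"{agreement:.1%}")  # 88.6%, matching the figure reported above
```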

An additional four studies were identified through reference checks and efforts to locate published literature linked to unpublished work identified through the electronic search. 111 , 112 , 135 , 136 An additional two papers (women only), although individually ineligible, were located as ‘linked papers’ for two of the original 34 studies, 114 , 120 giving a total of 38 studies (reported in 44 papers), as shown in Figure 8 .

Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram for the qualitative review.

  • Data extraction strategy and data analysis

The lead reviewer (ZD) extracted all papers using data extraction forms previously tested and refined through a pilot study of four papers. All study details (including aim, participant details, methodology, method of data collection and analysis) were extracted into Microsoft Excel ® version 14 (Microsoft Corporation, Redmond, WA, USA) and checked by a second reviewer (PG). Extraction and analysis of study findings was undertaken by a group of coreviewers within the research team (ZD, PG, LK, CB, KM, KH) and followed steps 3–7 of the metaethnography process described by Noblit and Hare. 94 Despite being numbered sequentially, these phases do not occur in a linear process. 94

Step 3: reading the studies

The metaethnography process involved three levels of constructs, as described by Schutz 137 and operationalised by Atkins et al. : 96

  • first-order: participant quotes and participant observations, while recognising that in secondary analysis these represent the participants’ views as selected by the study authors in evidencing their second-order constructs
  • second-order: study authors’ themes/concepts and interpretations, also described by Noblit and Hare 94 as ‘metaphors’
  • third-order: our ‘interpretations of interpretations of interpretations’ (p. 35), 94 based on our analysis of the first-order and second-order constructs extracted from the studies.

Each paper was read in full and copied verbatim into NVivo version 10 (QSR International, Warrington, UK) for line-by-line coding by the lead reviewer. Coding involved repeated reading and line-by-line categorising of first-order and second-order constructs, using participants’ and authors’ words wherever possible, and reading for possible third-order constructs.

Third-order constructs were developed by building second-order constructs into broader categories and themes in a framework which was revised iteratively using the hierarchical functions of the NVivo software (i.e. using ‘parent’ and ‘child’ nodes).

Rather than simply being a synthesis of the second-order constructs, third-order-constructs were derived inductively from the extracted data; this was an interpretive process that was not limited to interpretations offered by the original authors of included studies.
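As a rough illustration of the parent/child node structure described above (the node names and quote identifiers below are invented examples, not the constructs developed in this review):

```python
# Illustrative sketch of a hierarchical coding frame, mimicking NVivo's
# 'parent' and 'child' nodes. All node names and quote identifiers are
# hypothetical examples, not the constructs developed in this review.

coding_frame = {
    "Third-order: value of shared experience": {          # parent node (reviewer interpretation)
        "Second-order: peer support as normalising": [     # child node (authors' theme)
            "first-order quote id 12", "first-order quote id 34",
        ],
        "Second-order: comparing oneself with others": [
            "first-order quote id 7",
        ],
    },
}

def list_nodes(frame, depth=0):
    """Print the node hierarchy with indentation showing parent/child levels."""
    for node, children in frame.items():
        print("  " * depth + node)
        if isinstance(children, dict):
            list_nodes(children, depth + 1)
        else:
            for item in children:
                print("  " * (depth + 1) + item)

list_nodes(coding_frame)
```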

Coding by coreviewers (i.e. other members of the research team) was idiosyncratic but commonly involved working with printed papers, noting key ‘metaphors’ (themes, concepts and ideas) in the margins and highlighting first-order and second-order evidence that supported the coreviewers’ interpretations. The lead reviewer, ZD, met with each coreviewer to discuss/debrief coding decisions and ensure the credibility (i.e. the congruence of coding decisions with the original author interpretations) of the overall analytical process.

Step 4: determining how the studies are related

To offer a ‘way in’ to the synthesis, we adopted a similar approach to that of Campbell et al. : 131 initially grouping studies by the broad categories of self-management intervention and support activity shown in Table 6 . Each coreviewer was allocated one or more category of studies to analyse. The lead reviewer then read each category of studies in the following order: face-to-face group support, online support, online information, information, psychological, lifestyle and ‘various’; within this, she read the studies in alphabetical order of first author rather than nominating ‘key’ papers. All included papers were analysed, rather than reading until saturation of concepts.

The lead reviewer and coreviewer independently completed matrices to report the second-order constructs and emerging third-order constructs for each paper (which for the lead reviewer were based on a more comprehensive line-by-line coding using NVivo). This facilitated the juxtaposing of metaphors and/or constructs alongside each other, leading to initial assumptions about relationships between studies.

Step 5: translating studies into one another

A defining element of metaethnography is the ‘translation’ of studies into each other, whereby metaphors, together with their inter-relationships, are compared across studies. Facilitated by discussions using the matrices of second- and third-order constructs, we translated studies firstly within types of support activity and then, secondly, across types.

The lead reviewer initially developed the constructs in relation to face-to-face support (the largest category of studies) and read other categories of studies with reference to this, using a constant comparison approach to identify and refine concepts. The ‘models’ function in NVivo was used to depict relationships between third-order constructs; this helped to develop the line-of-argument synthesis, which is discussed next.

Step 6: synthesising translations

Studies can be synthesised in three ways: 94

  • reciprocal translation, where the findings are directly comparable
  • refutational translation, where the findings are in opposition
  • a line-of-argument synthesis, where both similarities and contradictions are found and translations are encompassed in one overarching interpretation that aims to discover a whole among the set of parts, uncovering aspects that may be hidden in individual studies.

Because we found similarities and contradictions, we developed a line-of-argument synthesis (rather than reciprocal or refutational translation) that encompassed four key concepts, each of which was based around a set of third-order constructs.

Step 7: expressing the synthesis

The output of the synthesis, that is communicating our third-order concepts and overarching line-of-argument synthesis, is described by Noblit and Hare 94 as ‘expressing the synthesis’ (p. 29). They state that ‘the worth of any synthesis is in its comprehensibility to some audience’ (p. 82), 94 emphasising the importance of communicating the synthesis effectively, being mindful of the intended audience and using concepts and language that are meaningful (and understandable). We worked to make the synthesis comprehensible by discussion with coreviewers and, critically, through involvement of the patient and public involvement (PPI) group. The synthesis is presented in Chapter 5 and will also be expressed through other dissemination activities, for example the SELF-MAN symposium ( www.self-man.com ), mini-manuals and journal publications.

We undertook several steps to enhance the rigour of our analysis. Authors’ themes and interpretations (second-order constructs) were independently extracted by two reviewers, each of whom additionally suggested their own interpretations of the study findings (third-order constructs).

We were influenced by a recent Health Technology Assessment metaethnography which found multiple reviewers offered ‘broad similarities in interpretation, but differences of detail’ (p. x). 131 We therefore treated the lead reviewer’s analyses as the ‘master copy’ and compared these with the coreviewers’ extractions and interpretations. Peer debriefing meetings were held between the lead reviewer and each coreviewer to discuss matrices of second-order and third-order constructs which facilitated the consideration of alternative interpretations.

The third-order constructs and line-of-argument synthesis were further refined at a full-day meeting (January 2014) attended by the lead qualitative reviewer and wider team of five coreviewers involved in coding, extraction, analysis and interpretation (PG, KH, LK, KM, CB).

We identified the need to be reflexive about our interpretations and recognised potential sources of influence on our interpretations; for example, two reviewers (PG, KH) identified having a ‘constructions of masculinity’ lens, and we agreed to focus the line-of-argument synthesis on interpretations offered by authors of studies being synthesised, rather than framing our interpretations around constructions of masculinity. We considered it a strength that the six reviewers involved reflected a wide range of backgrounds and perspectives. Although PPI colleagues were not involved in the coding process, the line-of-argument synthesis and four key concepts were discussed with the PPI group to ensure credibility.

  • Public and patient involvement

The SELF-MAN research team worked with a specially constituted public and patient advisory group comprising men living with one or more LTCs who were involved in either running or attending a LTC support group in the north of England. Members were recruited via the research team’s existing networks. Stakeholders’ support groups were all condition-specific – arthritis ( n  = 1), diabetes ( n  = 1), heart failure ( n  = 2) and Parkinson’s disease ( n  = 1) – although some men lived with multiple LTCs. All stakeholders attended a welcome meeting prior to the commencement of the study to prepare them for the involvement in the research, and were provided with ongoing support and guidance by the chief investigator throughout the research process. Members were reimbursed for travel, expenses and time throughout the duration of the project (in line with current INVOLVE recommendations 138 ).

The overarching aims of PPI in the project were, first, to help ensure that the review findings spoke to the self-management needs and priorities of men with LTCs, and, second, to ensure the development of appropriate outputs that would have benefit and relevance for service users. A recognised limitation of our group was that stakeholder representation was drawn from face-to-face group-based support interventions.

The stakeholder group met on three half-days over the course of the 12-month project. On each occasion, the group provided positive affirmation that the project was being conducted in accordance with its stated objectives. In the first two meetings, the group offered feedback and advice to the investigative team on preliminary and emerging analysis of the qualitative data throughout the research process: specifically, the development of third-order constructs and the line-of-argument synthesis. Responding to their input, we made revisions to some of our interpretations, particularly in relation to the importance of physical aspects of environments in which interventions took place. The group’s input also highlighted the need for future research to address depression as a common and often overlooked comorbidity in men (see Chapter 7 , Recommendations for future research ), and that they welcomed recommendations for sustainability of support groups and improving communication within groups. When considering the key outcomes to be assessed in the quantitative review, stakeholders also recommended that emphasis should be placed on quality-of-life outcome measures when considering whether or not a self-management support intervention is effective.

In the final meeting, the stakeholder group provided detailed recommendations for the content of the Self-Manual: Man’s Guide to Better Self-Management of Long Term Conditions (not yet available). It advised that the guide should be rephrased from ‘how to’ self-manage to ‘how to better ’ self-manage because men may view themselves as already self-managing and therefore not identify with the former.

Six or seven stakeholders attended each meeting. The female partner of one of the men attended and contributed to discussions at each meeting. Members of the group each received reimbursement of travel expenses and a £150 honorarium for each meeting they attended. In the final meeting, the stakeholders provided feedback on their involvement in the research process overall, focusing on what was done well and what could be improved. Feedback indicated that most stakeholders had a positive experience, particularly valuing the opportunity to have their ‘voices heard’ and make a potential impact on future service delivery. Recommendations for improvements mostly centred on ensuring prompt reimbursement of expenses incurred in attending the meetings.

Included under terms of UK Non-commercial Government License .



Published on 30.7.2024 in Vol 26 (2024)


The Acceptability, Engagement, and Feasibility of Mental Health Apps for Marginalized and Underserved Young People: Systematic Review and Qualitative Study

Authors of this article:


  • Holly Alice Bear, DPhil (1)
  • Lara Ayala Nunes, DPhil (1)
  • Giovanni Ramos, PhD (2)
  • Tanya Manchanda, MEd (1)
  • Blossom Fernandes, PhD (1)
  • Sophia Chabursky, MSc (3)
  • Sabine Walper, DPhil (3)
  • Edward Watkins, DPhil (4)
  • Mina Fazel, DM (1)

1 Department of Psychiatry, University of Oxford, Oxford, United Kingdom

2 Department of Psychological Science, University of California, Irvine, CA, United States

3 German Youth Institute, Munich, Germany

4 School of Psychology, University of Exeter, Exeter, United Kingdom

Corresponding Author:

Holly Alice Bear, DPhil

Department of Psychiatry

University of Oxford

Warneford Hospital

Warneford Lane

Oxford, OX3 7JX

United Kingdom

Phone: 44 01865 6182

Email: [email protected]

Background: Smartphone apps may provide an opportunity to deliver mental health resources and interventions in a scalable and cost-effective manner. However, young people from marginalized and underserved groups face numerous and unique challenges to accessing, engaging with, and benefiting from these apps.

Objective: This study aims to better understand the acceptability (ie, perceived usefulness and satisfaction with an app) and feasibility (ie, the extent to which an app was successfully used) of mental health apps for underserved young people. A secondary aim was to establish whether adaptations can be made to increase the accessibility and inclusivity of apps for these groups.

Methods: We conducted 2 sequential studies, consisting of a systematic literature review of mental health apps for underserved populations followed by a qualitative study with underserved young male participants (n=20; age: mean 19). Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, an electronic search of 5 databases was conducted in 2021. The search yielded 18,687 results, of which 14 articles met the eligibility criteria.

Results: The included studies comprised a range of groups, including those affected by homelessness, having physical health conditions, living in low- and middle-income countries, and those with sexual and gender minority identities. Establishing and maintaining user engagement was a pervasive challenge across mental health apps and populations, and dropout was a reported problem among nearly all the included studies. Positive subjective reports of usability, satisfaction, and acceptability were insufficient to determine users’ objective engagement.

Conclusions: Despite the significant amount of funding directed to the development of mental health apps, juxtaposed with only limited empirical evidence to support their effectiveness, few apps have been deliberately developed or adapted to meet the heterogeneous needs of marginalized and underserved young people. Before mental health apps are scaled up, a greater understanding is needed of the types of services that more at-risk young people and those in limited-resource settings prefer (eg, standard vs digital) followed by more rigorous and consistent demonstrations of acceptability, effectiveness, and cost-effectiveness. Adopting an iterative participatory approach by involving young people in the development and evaluation process is an essential step in enhancing the adoption of any intervention, including apps, in “real-world” settings and will support future implementation and sustainability efforts to ensure that marginalized and underserved groups are reached.

Trial Registration: PROSPERO CRD42021254241; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=254241

Introduction

Addressing health inequities is a key challenge for the mental health field, especially when trying to ensure that interventions and services are accessible and acceptable for all populations. Nearly 50% of lifelong mental health disorders begin by the age of 14 years, and by the age of 24 years, 75% of mental health disorders have begun [ 1 ]. Given the frequent onset of mental health problems during youth, here defined as the period between 15 and 24 years, special attention must be paid to older adolescents, including those from underserved and marginalized minority groups and socioeconomically deprived backgrounds [ 2 ]. In these groups, common barriers to accessing mental health services can be exacerbated (eg, poor mental health literacy, lack of knowledge about where to seek help, negative attitudes toward professional help seeking, embarrassment, preference for self-reliance, fear of stigma, and confidentiality concerns), and additional barriers exist (eg, reliance on informal supports, shame, lack of housing or money, and therapist factors, such as different race and level of experience), creating increased risk of untreated mental health problems and thus poorer mental health outcomes [ 3 - 7 ]. In this study, marginalized and underserved populations are defined as those with higher prevalence of mental health problems and lower rates of help seeking, such as racially and ethnically minoritized individuals, rural and remote communities, financially deprived groups, individuals experiencing homelessness, refugee and migrant populations, and sexual and gender minority groups [ 8 - 10 ] and those with lower inclusion in mental health intervention research than one would expect from population estimates [ 11 ], respectively. These groups are exposed to risk factors for poor mental health and experience disparities in mental health care, including lower access to care, poorer treatment quality, and limited engagement in treatment [ 5 , 10 , 12 ].

Smartphone apps could offer an opportunity to deliver mental health and well-being resources and interventions in a scalable, cost-effective, and potentially personalized manner, particularly for those who experience the greatest barriers to accessing health care [ 13 , 14 ]. Given that smartphone ownership is nearly ubiquitous among young people in high-income nations and increasingly across lower-resource settings, apps have the potential to address some of the accessibility issues in service provision for young people’s mental health [ 15 ]. Young people are more digitally connected (ie, they are more likely to own smartphones and spend more time on the web) and more likely to seek health information on the web than older generations, meaning that app-based interventions may be particularly well-suited for this population [ 16 , 17 ].

Not surprisingly, the number of mental health apps being developed, both commercially and in academic research programs, has expanded rapidly, outpacing scientific evaluations of their effectiveness [ 18 , 19 ]. Emerging evidence suggests that some apps may produce significant symptom improvement across multiple outcomes, compared with waitlist or control conditions [ 20 - 22 ]. Despite promise, empirical research often fails to translate into meaningful and sustained implementation in “real-world” settings [ 23 , 24 ]. Research has focused primarily on efficacy under ideal “laboratory” conditions rather than effectiveness in real-world settings [ 25 ]. Therefore, at present, most apps, especially those available to the public, lack strong empirical support [ 19 ]. The acceptability (ie, perception that a given technology is useful, agreeable, palatable, or satisfactory); accessibility (ie, the technology being easy to obtain or use); engagement (ie, initial adoption and sustained interactions with the technology, including the level of app use, intervention adherence, and premature dropout); and feasibility (ie, the actual fit, utility, or suitability and the extent to which the technology can be successfully used or conducted within a given context) of apps for marginalized and underserved groups remain poorly understood [ 25 - 28 ]. Although mental health apps may provide a possible solution, marginalized and underserved groups of young people face unique challenges to engage with and benefit from these interventions (eg, intervention cost, content that is not culturally attuned and lack of reliable access to the internet) and are typically underrepresented in intervention research [ 25 , 27 ]. Although increased access is often seen as a major benefit of digital mental health interventions, issues related to the “digital divide” describe the phenomena that technology is not equally available to all social groups due to economic, social, or cultural inequalities and is a potential ethical concern [ 29 ]. Furthermore, underrepresentation in the intervention development process potentially reinforces structural inequalities by limiting the availability of products that are culturally accessible, inclusive, and effective or by skewing the product features to attract young people from more advantaged backgrounds [ 9 , 14 ]. Therefore, considering diversity, equity, and inclusion issues at the outset of health care research as well as within app evaluation is essential to prevent the perpetuation of existing inequities [ 27 ].

To date, little has been published on the attempts to create new or adapt existing app interventions to meet the heterogeneous needs of diverse groups of young people [ 9 , 14 ]. Moving forward, careful consideration is needed to ensure optimal leveraging of all mental health intervention research, including that of mental health apps, to increase health equity while also ensuring that innovations do not inadvertently widen the digital divide and exacerbate health inequalities [ 14 ]. Although the efficacy of many mental health apps remains unclear, future attempts to translate findings for underserved populations will need to ensure that all apps are developed with enough flexibility to fit a wider range of user needs and preferences. To achieve this goal, research is needed to assess the acceptability and feasibility of mental health apps for underserved young people to ensure that they are not further excluded from research and to advance toward mental health provision that meets their needs.

This Research

We conducted two sequential studies: (1) a systematic literature review and (2) a qualitative study with a targeted sample of young people who often are underrepresented in research, with limited access to health care and socioeconomic deprivation. The overarching aim of these combined studies was to better understand whether mental health apps are feasible and acceptable to underserved young people. A secondary aim was to determine which adaptations might enable accessibility of and effective engagement with mental health apps for these groups.

The research questions of interest were as follows: (1) On the basis of the existing literature, are mental health apps acceptable, feasible, and engaging for marginalized and underserved young people and how have these constructs been measured? (2) On the basis of the qualitative study, what are young people’s experiences of using a mental well-being app, including its acceptability, feasibility, and level of engagement? (3) On the basis of both studies, are apps an acceptable, feasible, and engaging intervention approach to meet the specific needs of underserved young people? What adaptations can be made to ensure that mental health apps are accessible and inclusive for these groups?

To fully address our research questions, we adopted a 2-pronged approach. First, we conducted a systematic review of the literature to better understand the acceptability and feasibility of mental health apps for underserved young people. To explore the findings of the systematic review in greater depth and to provide further insights from multiple perspectives, we next conducted a qualitative study with young men not in education, employment, or training (NEET) in the United Kingdom and Spain and asylum seekers and refugees in Germany. The interviews were conducted between August 2021 and February 2022.

Systematic Review

Literature Search and Search Strategy

An electronic literature search was performed in English on the following databases from January 2009 to May 2021: Cochrane Library, Embase, MEDLINE, and PsycINFO. We used key search terms relating to (1) underserved young people; (2) mental health mobile apps; and (3) acceptability, feasibility, and engagement. The search strategy was guided by similar reviews exploring digital mental health interventions for young people [ 25 , 30 ], and the terms for apps were derived from Cochrane reviews [ 31 , 32 ]. An updated search was conducted in September 2023. The full search strategies are available in Multimedia Appendix 1 .

Inclusion and Exclusion Criteria

Screened articles were included if (1) the study targeted marginalized and underserved young people with a mean age of 15 to 25 years, including individuals who were NEET, apprentices, teenage parents, members of minoritized racial or ethnic groups, members of sexual and gender minoritized groups, residents of low- and middle-income countries (LMICs), experiencing homelessness, socioeconomic deprivation, refugees or asylum seekers, and migrants; individuals with substance use disorders; those under state or statutory care; people with physical disabilities; and individuals involved in the criminal justice system or incarcerated; (2) the intervention was a “native” mobile app (ie, not on a web browser), whose primary aim was to promote well-being, prevent mental health problems, or treat existing mental health problems, delivered as a stand-alone intervention or as an adjunct to therapist-assisted interventions; (3) the primary outcome was a measure of mental health or well-being; and (4) the study reported a measure of user acceptability or feasibility.

Articles were excluded if (1) the mean age of participants was outside of the 15 to 25 years range; (2) the intervention was not a mobile app, that is, other digital interventions, including teletherapy (eg, therapy delivered by phone, SMS text messages, video platforms, or PCs); and (3) there was no measure of acceptability or feasibility. Gray literature was not included in the search.

Study Selection

In accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [ 33 ], the flowchart presented in Figure 1 provides step-by-step details of the study selection procedure. The PRISMA checklist is provided in Multimedia Appendix 1 . The search strategy identified 11,539 citations after deduplication. After an initial screening of the titles, which resulted in the exclusion of 10,061 (87.19%) irrelevant entries, the abstracts of 1478 (12.81%) studies were screened by 4 members of the review team (LAN, HAB, BF, and TM). The identified 176 (11.91%) full texts were then screened by LAN. In this final stage, 11 (6.2%) studies, corresponding to 9 interventions, were identified for inclusion in the review, with 8 (73%) found through the electronic search and 3 (27%) through manual searches of the reference lists of relevant articles. The updated search identified a further 7148 citations after deduplication, of which 3 met inclusion criteria and were included in the review.

Figure 1. PRISMA flow diagram of study selection for the systematic literature review.
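As a simple check on the screening funnel reported above, the counts taken from the text can be reconciled as follows (percentages recomputed):

```python
# Reconciling the screening counts reported above (original search plus update).
deduplicated = 11539
excluded_on_title = 10061
abstracts_screened = 1478
full_texts = 176
included_original = 11
included_update = 3

assert excluded_on_title + abstracts_screened == deduplicated              # 10061 + 1478 = 11539
print(f"Abstracts screened: {abstracts_screened / deduplicated:.2%}")      # 12.81%
print(f"Full texts obtained: {full_texts / abstracts_screened:.2%}")       # 11.91%
print(f"Included from full texts: {included_original / full_texts:.2%}")   # 6.25% (~6.2%)
print(f"Total included studies: {included_original + included_update}")    # 14
```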

Data Extraction

Data were extracted by 1 reviewer (LAN, HAB, or BF) and reviewed for accuracy and completeness by a second reviewer. After verifying all the extracted data, discrepancies were resolved by discussion or adjudication by another author (MF). Extracted data included information on study characteristics (ie, authors, publication year, country, study design, and study population); intervention characteristics (ie, characteristics of the technology, app name, therapeutic modality, and intervention outcomes); and feasibility and acceptability.

Quality Assessment

We used the Mixed Methods Appraisal Tool (MMAT; version 2018) to assess the methodological quality of the included studies [ 34 ]. MMAT was developed by combining the core relevant methodological criteria found in different well-known and widely used qualitative and quantitative critical appraisal tools. The MMAT consists of 2 screening questions applicable to all types of study design and a further 5 questions applicable to specific study designs. Responses to each methodological quality criterion were rated on a categorical scale as “no,” “unclear,” or “yes.” Quality assessments were made by 1 reviewer (TM). We did not exclude any studies based on quality assessment scores.
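For illustration only, MMAT-style ratings might be recorded along the following lines; the criterion wording and ratings shown are invented, not the appraisals made in this review.

```python
# Hypothetical record of MMAT-style quality ratings. Two screening questions
# apply to all designs and five further questions are design-specific; the
# wording and ratings shown here are illustrative placeholders.

RATINGS = {"no", "unclear", "yes"}

appraisal = {
    "study": "Example et al, 2020 (mixed methods)",
    "screening": {"clear research question": "yes", "data address the question": "yes"},
    "design_specific": {
        "criterion 1": "yes",
        "criterion 2": "unclear",
        "criterion 3": "yes",
        "criterion 4": "no",
        "criterion 5": "yes",
    },
}

# Sanity-check that every response uses one of the allowed categories.
for section in ("screening", "design_specific"):
    assert all(rating in RATINGS for rating in appraisal[section].values())

print(sum(r == "yes" for r in appraisal["design_specific"].values()), "of 5 design-specific criteria met")
```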

Data Synthesis and Analysis

The extracted data were collated and summarized to produce a narrative summary of the study; sample characteristics; and acceptability, feasibility, and engagement outcomes. A codebook approach was used to code and synthesize implementation data from all available sources according to the outcome categories [ 35 ].
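A minimal sketch of such a codebook is shown below; the category names mirror the outcome columns reported later (eg, in Table 2), but the definitions and example entry are illustrative assumptions rather than the authors' actual codebook.

```python
# Hypothetical codebook sketch for synthesising implementation data by outcome
# category. Category names follow the outcomes reported in this review; the
# definitions and the example extracted finding are invented for illustration.

codebook = {
    "acceptability": "perceived usefulness, satisfaction, willingness to recommend",
    "feasibility": "extent to which the app was successfully used in context",
    "engagement": "adoption and sustained use, eg app opens, modules completed, dropout",
    "barriers_to_engagement": "reported reasons for non-use or dropout",
    "perceived_usefulness": "self-reported benefit or relevance of content",
    "accessibility_and_inclusivity": "adaptations suggested or needed for the target group",
}

# One extracted finding coded against the codebook (invented example).
extracted_finding = {
    "study": "Example et al, 2020",
    "category": "barriers_to_engagement",
    "summary": "Participants cited unreliable internet access as a reason for low use.",
}

assert extracted_finding["category"] in codebook
print(codebook[extracted_finding["category"]])
```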

Qualitative Study

Study Context

The Emotional Competence for Well-Being (ECoWeB) cohort multiple randomized controlled trial followed a prospective longitudinal cohort to examine well-being, mental health, and emotional competence in individuals aged 16 to 22 years over 12 months. The experimental arm was an emotional competence self-help app (ie, MyMoodCoach).

Intervention

MyMoodCoach was designed to test if an app could improve different processes affecting mental well-being, including, but not limited to, improving emotion regulation by reducing maladaptive strategies such as worry and rumination and replacing them with constructive alternatives and problem-solving and enhancing emotional knowledge and perception through psychoeducation and learning tasks. The app was designed for young people broadly and was not targeted at a specific population. Full details of the ECoWeB trial are reported in the study protocol [ 36 ].

Participant Recruitment

In parallel to the trial, we additionally recruited young people to understand the views and experiences of those underrepresented in the study sample (and most other app-based studies). As there was an overrecruitment of White, university-educated female participants in the study sample, we decided to only recruit male participants and to focus on 2 specific groups: NEET men and migrant populations (including both voluntary and forced migrants). In the United Kingdom and Spain, recruitment was conducted through a variety of channels, including Twitter, targeted adverts on social media (eg, Instagram and Facebook), newsletters sent to youth and practitioner networks, outreach to third sector organizations and mental health support groups, and advertisements placed on university and charity websites. In Germany, participants were recruited directly from refugee homes, integration courses, migration services, and youth centers.

Inclusion criteria required the participants to be (1) aged between 16 and 22 years; (2) able to speak and read English, Spanish, or German; (3) male; (4) NEET (United Kingdom), NEET or migrant (Spain), or asylum seeker or refugee (Germany); (5) having access to a smartphone with the minimal technological specifications necessary for the app (ie, iOS 9 or later or Android 8.0 or later); and (6) having access to the internet via mobile data or Wi-Fi. The exclusion criteria were having current suicidal ideation, psychosis, or bipolar disorder.

The research team was led by the principal investigator, MF, professor of Adolescent Psychiatry and Consultant in Children’s Psychological Medicine, with particular expertise in the mental health needs of refugee populations. EW, professor of Experimental and Applied Clinical Psychology, Chartered Clinical Psychologist, provided additional supervisory input and guidance as a leading expert in the field of child and adolescent mental health research. SW, professor of Education with a focus on youth research and director at German Youth Institute, provided supervisory oversight of the interviews in Germany. HAB and LAN were postdoctoral researchers at the time the research was conducted, both of whom have several years of experience conducting qualitative research with young people and expertise in analyzing qualitative data. SC, researcher at the German Youth Institute, conducted and analyzed the interviews in Germany and has experience conducting qualitative research with refugee populations. The authors had no relationship with any of the participants.

In the United Kingdom and Spain, participation involved (1) short questionnaires about mood and feelings and demographic questions such as age, gender, race, ethnicity, educational attainment, and country of origin; (2) downloading MyMoodCoach and using it for 4 weeks; and (3) completing a follow-up interview. Interviews were conducted by HAB and LAN via MS Teams (Microsoft, Corp) and lasted approximately 45 minutes. A similar procedure was followed in conducting the interviews with refugees in Germany, but given the likely language barriers in navigating the app, an additional, earlier interview was included 2 to 3 weeks after the initial instructions had been sent to explore if features of the app were understood and to clarify questions. After 2 further weeks, a second interview was conducted. One interview was conducted in person, while all the others were conducted over the telephone.

Interviews followed a semistructured schedule based on a taxonomy of implementation constructs [ 26 ]. Topics included self-reported app use, satisfaction and feedback about content, usability, and acceptability ( Multimedia Appendix 2 ). In Germany, the interview topic guide was translated and adapted to the target group. So as not to stigmatize the young men, the wording “refugee” was avoided; when referring to the target group, the wording “young man such as yourself” was used instead.

Ethical Considerations

In the United Kingdom, ethical approval was granted by the University of Exeter Research Ethics Committee (eCLESPsy000048 v10.0); in Spain, by the Jaime I University Research Ethics Committee (CD/93/2021); and in Germany, by the Ethics Committee of the Faculty of Medicine at Ludwig-Maximilians University Munich (PNr 19-0468/19-0315). Cognizant of the ethical and practical implications of conducting research with underserved populations, we conducted the interviews in private, quiet spaces and in a friendly and reassuring manner to ensure that participants felt comfortable and safe.

With prior consent, all interviews were audio recorded. Participants were reimbursed with a shopping voucher of up to €50 or £50 (US $64) for their time and app use.

Analytic Strategy

Interviews were transcribed verbatim, and the transcripts were assigned a unique pseudonym to anonymize participants. The interviews were analyzed using a combination of theory- and data-driven analysis techniques, consisting primarily of deductive, theory-driven thematic analysis [ 37 ]. Analysis of the transcripts in the United Kingdom and Spain was conducted by HAB, using NVivo (version 11; Lumivero). Following a similar procedure, the interviews in Germany were transcribed and analyzed using the coding software MAXQDA (VERBI GmbH) by SC.

Initial familiarization with the data was achieved through the transcription process and iterative rereading of the interviews. Analysis was carried out through a recursive process of open coding, when concepts were named and their properties and dimensions identified, followed by axial coding, when links and associations were drawn between codes. Codes were based on language used by the young people and were applied to each new unit of meaning. Data extracts were multiply coded when appropriate, as were contradictory and minority features of the data. The data set was iteratively reviewed, and codes were systematically applied to the whole data set until a finalized coding manual was established. Codes were organized into potential themes using thematic maps and tables. The development of the coding manual was iteratively reviewed and refined through discussion with all authors throughout the analysis process to ensure the reliability and rigor of the process and results.

Approach to Inquiry

Analysis was conducted from a critical realist perspective to provide a more nuanced understanding and explanation of participants’ experiences [ 38 ]. This position assumes that although participants’ accounts provide important insights about the real world, these accounts are not objective and represent an interpretation of reality [ 39 ]. These data require interpretation and explanation by the researcher, who also has their own perspectives on the world, to better understand the underlying mechanisms and processes and, in turn, make recommendations for practice [ 40 ].

Study Characteristics

Characteristics of the 14 included studies, examining 12 interventions, are presented in Table 1 . The included studies were published in the United States (n=3), Australia (n=3), Switzerland (n=2), India (n=2), South Korea (n=1) and Germany (n=1). As for the type of intervention, 7 apps were stand-alone, and the rest (n=5) were delivered in combination with other forms of professional support. The interventions (n=12) were targeted at apprentices and the unemployed (n=3), homeless populations (n=2), those with physical health conditions (n=2), sexual and gender minoritized individuals (n=3), those residing in LMICs (n=1) and those with co-occurring autism spectrum disorders (n=1). Most of the interventions had been co-designed with young people (11/12, 92%). The study sample size ranged from 9 to 877, with 4 of the 14 studies having a sample size of >200 participants.

Study, year | App name | Population | Country | Study design | Intervention focus | Sample size, n | Age (y), mean (SD) | Sex (female), %
Bohleber et al [ ], 2016 | Companion App | Employed (apprentices) and unemployed | Switzerland | Mixed methods | Peer mentoring and interactive health content to increase social support and reduce stress | 619 | Employed: 16.9 (1.73); unemployed: 18.4 (1.96) | Employed first year: 50.2; employed second year: 56.7; unemployed group: 40.4
Deady et al [ ], 2020 | HeadGear | Apprentices | Australia | Mixed methods | Behavioral activation and mindfulness therapy | 54 | 21.68 (3.62) | 4
Fleming et al [ ], 2017 | TODAY! | Sexual minority male participants | United States | Qualitative | CBT to manage anxiety and depressive symptoms | 9 | 19 (0.71) | 0
Francis et al [ ], 2020 | CyFi Space | Individuals with CF | Australia | Mixed methods | Social connectedness and well-being of young people living with CF | 22 | 12-17 | 50
Geirhos et al [ ], 2022 | Minddistrict. Program: youthCOACH | Individuals with a chronic illness (CF, JIA, and T1D) | Germany | Pilot RCT | iCBT targeting symptoms of anxiety and depression | 30 | 16.13 (2.34) | 73
Glover et al [ ], 2019 | A suite of 15 apps including Pocket Helper 2.0 | Individuals experiencing homelessness | United States | Pilot study | Daily coping skills; focused tips and brief CBT | 100 | 20.03 (1.83) | 39
Gonsalves et al [ ], 2019 | POD Adventures | Students | India | Intervention design | Problem-solving for adolescents at risk of anxiety, depression, and conduct difficulties | Students: 118; service providers: 16 | 14 | 46
Gonsalves et al [ ], 2021 | POD Adventures | Students | India | Pilot study | Problem-solving for adolescents at risk of anxiety, depression, and conduct difficulties | 230 | 15.57 | 50
Haug et al [ ], 2017 | ready4life | Vocational students | Switzerland | Pilot study | Life skills training: self-management skills, social skills, and substance use resistance | 877 | 17.4 (2.7) | 58.3
Leonard et al [ ], 2018 | Calm Mom | Mothers experiencing homelessness | United States | Pilot study | Emotion regulation strategies | 49 | 18.54 | 100
Schueller et al [ ], 2019 | Pocket Helper, Purple Chill, Slumber Time, and IntelliCare (12 mini apps) | Individuals experiencing homelessness | United States | Pilot study | Emotional support and coping skills | 35 | 19.06 (0.85) | 65
Escobar-Viera et al [ ], 2023 | REALbot | Rural living LGBTQ+ youth | United States | Pilot study | Chatbot deployed on Facebook Messenger and Instagram apps to deliver educational content | 20 | 16.6 (1.5) | 65
Torok et al [ ], 2022 | LifeBuoy | Community sample (over 50% of sample LGBQI sexual minority) | Australia | RCT | DBT to treat persistent emotional dysregulation to prevent self-harm and suicidal behaviours | 455 | 21.5 (2.18) | 84.6
Yang and Chung [ ], 2022 | HARU ASD | Individuals with ASD | South Korea | RCT | CBT for anxiety and co-occurring intellectual disability | 30 | 20.97 (5.06) | 10

a CBT: cognitive behavioral therapy.

b CF: cystic fibrosis.

c JIA: juvenile idiopathic arthritis.

d T1D: type 1 diabetes.

e RCT: randomized controlled trial.

f iCBT: internet-based cognitive behavioral therapy.

g Mobile phones were preloaded with several apps designed to promote mental health wellness and provide real-time resources. Pocket Helper was 1 app specifically designed for this study.

h Studies identified in the updated search.

Study Quality

The included studies varied in their methodological quality ( Multimedia Appendix 3 ) [ 41 - 54 ]. Most (13/14, 93%) were judged to contain possible limitations in at least 1 criterion. All studies but 1 (13/14, 93%) were clear in their description of study participants or the process of recruiting a sample representative of the population of interest. All studies addressed the research question using collected data and reported in some way on feasibility and acceptability. Most studies (12/14, 86%) effectively used appropriate qualitative, quantitative, or mixed methods to answer their research question. However, many studies (6/14, 43%) did not have a sufficiently large sample to warrant definitive conclusions about the feasibility and acceptability of the intervention studied.

Acceptability and Feasibility

User acceptability was found to be high across all included studies, with participants rating the apps positively and reporting high satisfaction with the content of the interventions ( Table 2 ). In studies where participants were asked to indicate whether they would recommend the intervention to someone else, the vast majority reported that they would [ 42 , 44 - 46 , 51 ]. Notably, despite many users across the studies reporting high satisfaction levels and being willing to recommend the apps, they did not themselves intend to continue using the apps (ie, low predicted engagement), as they did not find them useful or relevant for their own circumstances [ 41 , 42 , 51 ]. It is also important to note that many studies incentivized participation with payments, prize draws, and vouchers, which may have influenced acceptability ratings and engagement [ 41 - 43 , 46 , 49 - 51 , 53 , 54 ]. Furthermore, in the only study that asked whether participants would pay to use the apps, most were unwilling to do so [ 42 ].

Table 2. Acceptability, feasibility, engagement, and outcomes of the included studies.

Reference, year | App name | Measurement | Acceptability and feasibility | Engagement | Barriers to engagement | Perceived usefulness | Accessibility and inclusivity | Intervention outcomes
Bohleber et al [ ], 2016 | Companion App | Questionnaires and semistructured interviews | Adolescents regarded the concept of the app as well conceived, especially the peer-mentoring system. However, the app did not compare well to other available apps. | Engagement decreased markedly after the first 2 weeks. Average daily visits: in the first 2 weeks, 61; after 6 months, 8. | Technical problems, unclear benefits, and lack of time. | Content was judged informative and interesting. However, some reported that the purpose of the app was not evident. | Unemployed participants suggested that reminders to use the app would help. | No significant effect on stress or the perception of social support.
Deady et al [ ], 2020 | HeadGear | Questionnaires, semistructured interviews, and focus groups | Apprentices rated the app positively (average 4/5 stars). Participants had no or neutral willingness to pay for the app. Most would widely recommend the app but predicted their own use would be infrequent (3-10 times) over the next 12 months. | Users completed approximately one-third of the app challenges. | Noncompletion of challenges attributed to "forgetting" and choosing not to "catch up." Users wanted to be able to skip challenges and suggested gamification and greater personalization. | 87.2% claimed it had at least moderately improved their mental fitness. Moderate impact on awareness, knowledge, attitudes, intention to change, help seeking, and behavior change around mental health and well-being. | Participants emphasized the importance of gamification and greater personalization, for example, through the inclusion of personalized music. | No significant differences between baseline and 3-month follow-up measurements. Engagement (intervention completion) directly related to effectiveness.
Fleming et al [ ], 2017 | Today! | Semistructured interview | Expressed enthusiasm for a comprehensive mobile phone app designed to treat clinically significant symptoms of anxiety and depression among young sexual minoritized men. | Not assessed in this paper. | Weekly phone calls with the coach were described by participants as a barrier to engagement. | Overall, participants had positive reactions to the app, but each individual found different features to be useful (eg, community resources and mood rater). | Usability testers had a wealth of suggestions for topics they would like to see addressed in this kind of app. | Not assessed in this paper.
Francis et al [ ], 2020 | CyFi Space | Questionnaire, group, and individual interviews | Acceptability of the app was rated moderate. | Overall, 37% recruitment response rate. 77% of participants used the app at least once a week. Some participants indicated that the use of the app declined as the 6-week trial progressed. | 40.9% reported watching the entertainment and motivational videos. | Many participants found the app both useful and fun to use and agreed they would recommend the app to others. | Participants rated the app's usability as high. Age-related accessibility measured. | Not assessed in this paper.
Geirhos et al [ ], 2022 | Minddistrict. Program: youthCOACH | Questionnaires | Content was perceived as appropriate. 58% would recommend the intervention to a friend, 17% would likely recommend it, 17% would partly recommend it, and 8% would not recommend it. | Intervention adherence=40%, dropout=20%. | Not reported. | Individual tasks perceived as particularly helpful. | Not explicitly reported. | No symptom improvement. Small sample.
Glover et al [ ], 2019 | Pocket Helper 2.0 | Questionnaires | 73% would recommend the program. | 48% of the sample completed the 3-month assessment, while 19% completed the 6-month assessment. | Use and satisfaction with various features reported. | 63% of participants at 3 months and 68% of participants at 6 months reported at least moderate benefit from the intervention. | Designed for youth experiencing homelessness based on the initial input from these youths and was refined based on the feedback received during a previous pilot trial. | Not assessed in this paper.
Gonsalves et al [ ], 2019 | POD Adventures | Focus group discussions, co-design workshops, and user testing | Service providers highlighted that self-help was not a culturally congruent concept for most Indian adolescents. | Not reported in this paper. | Following user testing, activities were shortened to <2 minutes to minimize boredom and disengagement. | Problem-solving reported as being a useful and valued skill. | Design was sensitive to cultural context, language, participant media preferences, and digital access, which helped focus on user needs. Adaptations were made to address widespread literacy difficulties. | Not assessed in this paper.
Gonsalves et al [ ], 2021 | POD Adventures | Questionnaires and semistructured interviews | Satisfaction scores ranged from good to excellent. | Intervention completion rate was 92%. | App generally considered easy to use, but a few participants identified confusing game components and issues related to typing and difficult log-in passwords. | Most participants felt that the program had positively impacted their prioritized problem. | As reported in [ ]. | Outcomes at 4 weeks showed significant improvements on all measures.
Haug et al [ ], 2017 | Ready4life | Questionnaires | Large proportion of invited adolescents participated. Program evaluated as "very good" or "good" by 94.6% of participants. | Follow-up assessments were completed by 49.7% of the participants. Of the 39 program activities, the mean number carried out was 15.5. In total, 15% failed to engage in any activity, and 52% engaged in fewer than half of the activities. | Participation in the program was lower in male participants and among those reporting an immigrant background. | Not explicitly reported. | Not explicitly reported. | Statistically significant increases in targeted life skills, decline in at-risk alcohol use, and stable rates for tobacco and cannabis use.
Leonard et al [ ], 2018 | Calm Mom | Technology logs, questionnaires, and in-depth semistructured interviews | Participants felt the general content of the app was highly relevant. 75% were "very" satisfied, and 18% were somewhat satisfied. | Mean of 14.77 minutes of using the app. Participants used at least one of the elements on the app on average on 44% of days when they had the study phone. | Technology challenges. | Supported their ability to effectively regulate their emotions. | Majority of participants noted that the app was very accessible, and several indicated that they felt less alone and felt genuinely cared for. | Not assessed in this paper.
Schueller et al [ ], 2019 | Pocket Helper, Purple Chill, and Slumber Time | Questionnaires | Satisfaction was high; 100% of participants would recommend the program, and 52% reported that they were very or extremely satisfied with the app. | 57% of the participants completed all 3 sessions. Mean 2.09 sessions. | Mobile phone loss (through damage, theft, or other loss). | 43% reported the app as helpful; 48% found the skills they learned to be beneficial; 43% regularly used the skills. | The apps were preinstalled on all mobile phones before distribution to participants. | Participants experienced limited change on clinical outcomes with small effect sizes.
Escobar-Viera et al [ ], 2023 | REALbot | Questionnaires | High user satisfaction. Acceptability rated 5.3/7, but only 25% of participants described the app as exciting or leading edge. | 42% of participants interacted with the app for 2 or more days. | Primary challenges were that the app felt robotic and not smart enough. | Usability ratings were high on both measures. | Lack of voiceover feature. | Nonsignificant changes in scores of perceived isolation, depressive symptoms, and social media self-efficacy.
Torok et al [ ], 2022 | LifeBuoy | Questionnaires | Not reported. | 71.5% completed 5 or more modules (completers). | Participants who completed the first survey had significantly lower baseline anxiety symptoms compared to those who did not complete it. | Not reported. | Not reported. | Depression, anxiety, distress, and well-being symptoms improved in both the app group and the control group.
Yang and Chung [ ], 2022 | HARU ASD | Questionnaires | Acceptable scores in the Satisfaction and Usability Questionnaire. | No participants dropped out. | Not reported. | Not reported. | Not reported. | Significant decrease in anxiety level, an increase in positive affect, and a decline in stereotypic behaviors, hyperactivity, noncompliance, and inappropriate speech.

a Studies identified in the updated search.

Regarding co-design strategies used in these studies, early stakeholder consultation and service provider focus groups were conducted during the early development phase of POD Adventures, a gamified intervention for people with or at risk of anxiety, depression, and conduct difficulties in India; the results highlighted that “self-help” was not a culturally congruent concept for most Indian adolescents [ 47 ]. This early feedback was important as it revealed norms around seeking or receiving direct instruction from parents, teachers, and other elders, and indicated that support from a counselor might be necessary to ensure acceptability, feasibility, and engagement [ 47 ]. The app was therefore designed to incorporate a combination of teaching methods, including direct instruction, modeling, and practice, to accommodate different learning styles and to emphasize self-efficacy [ 47 ]. Furthermore, user testing also highlighted the need for more direct language, particularly around problem-solving concepts [ 47 ]. The iterative methodology used in this study enabled participants to guide the development and provide input at each stage, increasing acceptability and feasibility.

Although, overall, the apps were well received by young people, poor engagement (eg, not engaging at the recommended frequency or not completing the full course of the intervention), measured through self-report, intervention adherence, and data capture, was a commonly reported issue. Many studies failed to achieve continued participation, with high rates of attrition [ 41 , 42 , 44 ]. In addition, app use often decreased markedly after the initial few weeks [ 41 , 42 , 44 ]. The results of some studies suggested that engagement (in the form of intervention completion) was related to the effectiveness of the intervention [ 41 , 42 ]. Although engagement was problematic in many of the stand-alone interventions, engagement and study participation in a school setting seemed more promising [ 49 ]. For example, a proactive invitation for study participation in a school enabled 4 out of 5 eligible adolescents to participate in the “ready4life” life skills program [ 49 ]. This strategy consisted of individuals trained in the program delivering arranged 30-minute sessions in participating vocational schools during regular school lessons reserved for health education. Within these sessions, students were informed about and invited to participate in the study, including being informed about the study’s aims and assessments, reimbursement, and data protection.

Barriers to Engagement

Qualitative interviews and user feedback provided important insights about relevant barriers to engagement. The most frequently mentioned reasons for not using the apps were that participants could not see an obvious benefit of using the app [ 41 ], lacked time or forgot [ 42 ], or experienced technical difficulties [ 41 , 50 , 51 ]. In a life skills training app for vocational students, participation was lower in male adolescents and among those reporting an immigrant background [ 49 ], although the reasons behind this poor engagement remained unclear.

What Do Young People Want From Apps?

There was some heterogeneity between studies in terms of the features and content that participants found acceptable and appropriate. For example, findings suggested that young people experiencing homelessness tended to prefer automated and self-help features over those involving more direct human interaction [ 46 ]. However, participants in other studies valued both self-help features and human interaction with professionals, either via the app interface or through face-to-face contact [ 47 ]. Human support was suggested as being helpful in offering instruction and guidance as well as personalized support when needed. Numerous participants wanted opportunities to interact with peers [ 43 , 44 , 52 ] and even suggested connecting apps to social media [ 41 ]. Others also wanted the design of the apps to be more attractive (eg, improving the layout and creating a more intuitive structure) and made suggestions about how gamifying apps could make them more interesting [ 41 , 43 , 47 ].

We interviewed 13 young men in the United Kingdom (age: mean 18.7, SD 2.5 y), 2 in Spain (age: mean 17, SD 0 y), and 5 in Germany (age: mean 20.2, SD 1.6 y). In the United Kingdom, 62% (8/13) of the participants self-reported as ethnically White, compared with 50% (1/2) in Spain and 20% (1/5) in Germany ( Table 3 ).


Table 3. Participant characteristics.

Characteristic | United Kingdom (n=13) | Spain (n=2) | Germany (n=5)
Age (y), mean (SD) | 18.7 (2.5) | 17 (0) | 20.2 (1.6)
Ethnicity, n (%) | | |
  Arab or Middle Eastern | 0 (0) | 0 (0) | 4 (80)
  Asian | 4 (31) | 0 (0) | 0 (0)
  White | 8 (62) | 1 (50) | 1 (20)
  Other ethnic group | 1 (8) | 1 (50) | 0 (0)
Refugee or an asylum seeker, n (%) | 0 (0) | 0 (0) | 5 (100)
Chronic medical condition, n (%) | 0 (0) | 0 (0) | 0 (0)
Disability, n (%) | 1 (8) | 0 (0) | 0 (0)
Education, n (%) | | |
  Lower secondary school | 6 (46) | 2 (100) | 4 (80)
  Upper secondary school | 4 (31) | 0 (0) | 1 (20)
  Other higher education | 1 (8) | 0 (0) | 0 (0)
  Undergraduate degree | 1 (8) | 0 (0) | 0 (0)
  Postgraduate degree | 1 (8) | 0 (0) | 0 (0)

In terms of participants’ mental health and well-being ( Table 4 ), the mean Patient Health Questionnaire-9 score in the United Kingdom was 9.7 (SD 7.3) compared with 5 (SD 1.4) in Spain.

Table 4. Participants' mental health and well-being measures.

Measures | United Kingdom (n=13), mean (SD) | Spain (n=2), mean (SD) | Germany (n=5), mean (SD)
WEMWBS | 44.2 (7.8) | 51.5 (3.5) | Not available
PHQ-9 | 9.7 (7.3) | 5 (1.4) | Not available
GAD-7 | 6.5 (4.4) | 8.5 (2.1) | Not available

a WEMWBS: Warwick-Edinburgh Mental Well-Being Scale.

b Not available.

c PHQ-9: Patient Health Questionnaire-9.

d GAD-7: Generalized Anxiety Disorder Assessment.

A key finding was that despite best efforts and financial incentives, recruiting underserved young male participants, especially in Spain and Germany, was challenging. This might suggest that these young people do not deem such an emotional competence app relevant or useful to them, making recruitment and engagement problematic. We also assessed whether the app was deemed acceptable (ie, useful, agreeable, palatable, or satisfactory) and appropriate (ie, relevant, suitable, or compatible). Overall, participants in the United Kingdom, Spain, and Germany viewed the app as appropriate and relevant for young people of different ages and walks of life, as they thought that all young people had a smartphone and were adept at using technology:

So, I was able to learn about my feelings, I was able to evaluate how I actually felt today, concerning my feelings, if I was angry or I was sad. I was actually able to write them down in detail. [Participant in Germany]

Several participants commented that the content of the app was best suited to university and school students. Another common view was that the app was better suited to those struggling with their mental health and was less relevant for those for whom things were going well. Many participants perceived the app to be aimed at improving mental health problems, as opposed to being a universal intervention intended to improve well-being, which represented a barrier to engagement. Those who reported that the app was not relevant to them did see it as being of potential use to friends and family members who were stressed, anxious, or going through a difficult time:

There will be folks who maybe aren’t going through a good time in their lives, and they will need the app to feel... to understand themselves, mostly. And I think it’s relevant at any age, because I am lucky that I don’t think I need it as much as someone else who feels like that. [Participant in Spain]
Partly it was important, partly it was not. I’ll give an example again, for example if a refugee came to Germany from a war zone, it’s going to be difficult, very difficult to find a topic that would fit him, for the future I mean, so the version now is already okay if you want all persons to use this app. Partly it’s already relevant and partly it’s not. If someone has mental problems or bad experiences, you cannot find such a topic in the app. [Participant in Germany]

Although some participants reported using the app regularly during the 4-week study period, a consistent finding was that participants tended to use the app most when they first downloaded it, with a marked reduction in use over time:

Uh, I probably used it about three times in the first week. And then not really that much at all I’m afraid. [Participant in the United Kingdom]
I don’t know, I just dropped off using a little bit after a couple of weeks, but I’ve been trying to keep on top of doing that like the daily rating things and everything.... I kind of lost my motivation to use it. [Participant in the United Kingdom]

We identified several barriers that hindered participants’ engagement and use of the app. These included the following: (1) repetitive and time-consuming app contents, (2) a paucity of new content and personalized or interactive tools (eg, matching mood to tools), (3) unclear instructions, (4) a lack of rationale for the app, (5) perceiving the app as not being relevant, (6) a lack of motivation, and (7) privacy concerns:

Yes, for example, I would not like to write in this diary, because I do not know if it would be one hundred percent anonymous and if others might read it. And maybe I have more privacy if I do not write it. [Participant in Germany]
I think by now I would slowly stop using the app. It was nice up to this point, but I think for me I might need a step further now. To really deal with my personal problems and I don’t know how much an app like this can help and that rather an expert and therapy is needed. [Participant in Germany]

For the asylum seekers and refugees in Germany, the language and content of the app was not suited to their needs. The participants would have preferred the app in their native language as some had to use translation programs to help access the content. Furthermore, specific topics of relevance to refugees were missing, such as dealing with asylum uncertainty, whereabouts of family members, and their living situation.

Finally, underserved young people, including asylum seekers and refugees, migrants, and those NEET, are more likely to experience financial deprivation and are therefore less likely to pay directly for apps, especially for apps that do not address their primary difficulties:

If it came to the point that I had to pay for it, I would look for free options. [Participant in Germany]

Summary of Findings

The use of mobile apps in mental health care continues to attract interest and investment; however, research geared toward understanding the needs of marginalized and underserved populations is still nascent. This study, focusing on the implementation of mental health apps in underserved young people, highlighted that little research exists to support the widespread adoption of these apps as a mental health intervention for marginalized and underserved groups. Findings from both our systematic review and qualitative study were largely consistent: markers of acceptability and usability were positive; however, engagement for underserved young people was low, which is notable given the widespread ownership of smartphones [ 55 , 56 ]. To date, research has focused primarily on efficacy studies rather than effectiveness and implementation in “real-world” settings and may have overestimated users’ “natural tendency” to adopt smartphone apps for their mental health and well-being [ 57 ]. Our findings suggest that despite the rapid proliferation of mobile mental health technology, the uptake and engagement of mental health apps among marginalized young people are low and remain a key implementation challenge.

Our data suggest that establishing and maintaining user commitment and engagement with the content of the intervention as intended is a pervasive challenge across mental health apps and marginalized populations, and premature dropout was prominent in nearly all the included studies. This is consistent with the literature suggesting that the majority of those offered app-based interventions do not engage at the recommended frequency or complete the full course of treatment [ 58 , 59 ]. In this study, various app components were associated with engagement level; the most engaging interventions provided young people with some form of associated real human interaction or a more interactive interface. This aligns with other findings that feeding back personalized information to participants is an especially important aspect of creating engaging and impactful digital tools [ 60 ]. Young people tend to quickly disengage if there are technical difficulties or if the app does not specifically target their perceived needs [ 41 , 50 , 51 ]. Furthermore, recruitment of marginalized groups to app-based studies is difficult. For instance, in this study, the use of advertisements, financial incentives, vouchers, and prize draw incentives seemed to be insufficient to recruit a significant number of participants in Spain and Germany.

Measuring engagement is a challenge that has likely contributed to our lack of knowledge on app components that effectively increase user engagement. Reporting of engagement with mental health apps in intervention trials is highly variable, and a number of basic metrics of intervention engagement, such as rate of intervention uptake, weekly use patterns, and number of intervention completers, are available yet not routinely reported [ 58 , 59 ]. The results of this study highlight the importance of objective engagement measures and show that relying on positive subjective self-reports of usability, satisfaction, acceptability, and feasibility is insufficient to determine actual engagement. Furthermore, the findings suggest that apps involving human interaction with a professional (eg, therapist or counselor) or that are completed in a supervised setting tend to be more acceptable and effective and have higher engagement rates [ 47 , 48 ]. Our research suggests that, similar to traditional face-to-face mental health services, app-based programs still face numerous barriers to reaching marginalized youth, especially because the mental health apps available to the public do not seem to consider the unique developmental needs of these groups, participants do not seem to perceive an obvious benefit from using them, and some potential users prefer to interact with a professional face to face. Thus, it is also possible that the digital mental health field might be inadvertently contributing to mental health inequities among this population by not engaging marginalized groups sufficiently at the outset of research to ensure that the designed app meets their needs. However, for the studies included in this review that did engage these groups in the co-design of the apps, there was no notable improvement in engagement. Thus, we hope these findings encourage researchers and clinicians to think more critically about the role that mental health apps can truly have in addressing mental health inequities among underserved groups.

As in other areas of mental health research, young people from LMICs were underrepresented in these studies, which typically originated from high-income settings, such as the United States and Australia. There are relatively few app-based interventions designed or adapted for young people in LMICs that have been rigorously evaluated or are even available in local languages [ 47 , 48 ]. Many people living in LMIC regions, for example, adults in Asian countries, are often faced with apps that are not culturally relevant or not in their language [ 61 ]. These inequities are surprising given the high rates of smartphone use in Asia, even in rural regions [ 62 ]. Yet, youth in this region likely face barriers related to data availability and more limited phone access, which will inhibit the broad implementation of apps beyond research studies [ 16 ]. Considerable work is required to ensure the availability of mental health apps that fit a wide range of user needs and preferences. It is important to ensure that the acceptability and feasibility of mental health apps for young people residing in LMICs are prioritized so that they are not further excluded from relevant mental health research.

Finally, a significant challenge is the lack of diversity in mental health app research participation, which limits our understanding of “real-world” efficacy and implementation for underserved and marginalized groups. While undoubtedly invaluable, and indeed deemed the gold standard for evaluating the efficacy of interventions, randomized controlled trials of mental health apps are not without flaws [ 63 , 64 ]. Trial recruitment is often highly selective due to stringent inclusion and exclusion criteria, resulting in lower inclusion in research than one would expect from population estimates [ 65 ]. In the United Kingdom, National Institute for Health and Care Research data have revealed that geographies with the highest burden of disease also have the lowest number of patients taking part in research [ 66 ]. The postcodes in which research recruitment is low also align closely with areas where earnings are lowest and indexes of deprivation are highest [ 66 ]. There are many reasons why some groups are underrepresented in research: language barriers, culturally inappropriate explanations, poor health literacy and the use of jargon, communication not being suitable for people with special learning needs, the requirement to complete many administrative forms, the negative financial impact of participating, a lack of effective incentives for participation or a lack of clarity around incentives, and specific cultural and religious beliefs [ 66 ]. Failing to include a broad range of participants is problematic in that results may not be generalizable to the broader population.

Limitations

Although this research was carefully executed and used a robust methodological approach with an exhaustive search strategy, it is not without limitations. Foremost, although the systematic review attempted to identify and include as many articles as possible, some papers may have been missed because of inconsistencies in how feasibility and acceptability outcomes are recorded and reported. It was also difficult to ensure that all apps for this age group were identified because those aged between 15 and 25 years are harder to differentiate in adolescent and adult studies, meaning we might have missed some relevant studies where data could not be disaggregated by age. The exclusion of gray literature (eg, institutional reports and websites) may also have led us to overlook potentially relevant apps, albeit ones lacking the quality assurance of peer-reviewed research. It is also likely that commercial organizations, including app companies, collect rich user demographic and engagement data but do not share them publicly, thus limiting our ability to conduct empirical analyses of “real-world” acceptability, engagement, and implementation for specific populations. We did not analyze the extent to which publication bias may have influenced the results of our search; therefore, there may be a much higher number of mental health apps that have been developed with an underserved sample of young people but, due to their lack of efficacy or acceptability, have not been submitted or accepted for publication. The sample sizes of many of the included studies were relatively low, which potentially limits their generalizability. However, we included all study designs to ensure that our learning from existing research was maximized. Furthermore, many of the studies included in the systematic review, as well as our qualitative study, had some form of language competency as an inclusion criterion (eg, English speaking), which likely excludes important perspectives from the results. For the qualitative study, we were only able to gather data from those who had used the app at least once and who were therefore somewhat engaged with the app. Despite our best efforts, we were unable to recruit participants who, following consent, had never downloaded or used the app, and so we could not explore barriers to engagement for the least engaged young people or understand why the app was not appealing to those who chose not to proceed or take part. Those who did participate in this research were financially incentivized to do so and often highlighted the importance of this incentive in keeping them engaged. Therefore, we were unable to draw conclusions about the naturalistic engagement, feasibility, and acceptability of the app if it were to be made available without payment in schools, universities, and health services or to be made commercially available on the app marketplace. It is also possible that social desirability bias (ie, a tendency to present reality in line with what is perceived to be socially acceptable) occurred during the interviews, whereby participants responded to the interview questions in a manner that they believed would be more acceptable to the study team, concealing their true opinions or experiences [ 67 , 68 ]. As previously noted by others, results may be subject to further bias in that findings could be led by more articulate young people, while it is more difficult to hear the voices of those who are less articulate or digitally literate [ 69 ].
Finally, it is also possible that the positionality of the research team, including our own experiences, backgrounds, and biases, impacted what information participants disclosed to the research team as well as the interpretation of the qualitative data in this study.

Recommendations of Adaptations to Increase Acceptability, Feasibility, and Engagement

To overcome this complex engagement and implementation challenge, we have combined our findings with relevant previous literature to generate 3 key suggestions about how to improve the feasibility and potential utility of apps for young people from marginalized and underserved populations.

Increasing Participant Diversity in Mental Health Intervention Research

Studies should aim to prioritize the inclusion of marginalized groups in trials testing the effectiveness of digital interventions by intentionally planning recruitment efforts aimed at reaching these communities [ 70 ]. First, steps can be taken to build trust, connections, and credibility between the research team and these communities. NHS England [ 66 ] suggests involving representatives from those groups during the inception and implementation of recruitment efforts. This approach ensures that the intervention is relevant to the target group by meeting their preferences and needs, incorporating culturally salient factors relevant to recruitment efforts, addressing concerns about community mistrust and participant resource constraints, and establishing partnerships with key community stakeholders who can be gatekeepers in the community [ 14 , 71 ]. These strategies are likely to improve research accessibility, recruitment, and retention. Research teams need to ensure that the findings and any actionable takeaways from the research are shared with participants by asking them how they would like to receive this information (eg, verbally, in writing, or via a trusted advocate). Equally important is to explain that the research process can be slow. These steps help create a positive legacy for the research project and build trust between individuals and public institutions, helping future health researchers to further address the underrepresentation of marginalized groups in digital research.

Identifying and Addressing Needs and Preferences of Underserved and Marginalized Groups by Using Human-Centered Design Principles

A comprehensive understanding of the needs, challenges, and life circumstances of the target population is a key implementation driver for designing relevant, engaging, and effective mental health apps. This knowledge is particularly important when the app is a stand-alone intervention received during daily life outside of traditional psychotherapy or human support [ 50 ]. This goal can be best achieved through a participatory approach, which reflects a growing recognition among intervention researchers and developers that end users need to be involved in the creation of interventions and their future iterations [ 47 , 72 ]. This process may involve a series of stages, including (1) person-centered co-design to ensure that tools are developed to be acceptable to the underserved or marginalized populations as well as meet their specific needs, life circumstances, and cultural norms [ 47 ]; (2) iterative testing that incorporates users’ feedback on a rolling basis to ensure the relevance of the intervention [ 43 , 47 , 72 ]; and (3) changes and adaptations needed to meet users’ needs in “real-world” settings including consideration of economic viability and implementation [ 27 ].

Especially relevant for underserved and marginalized groups is the need (or lack thereof) to culturally adapt app interventions for specific racial, ethnic, or cultural groups through this person-centered design. In traditional face-to-face interventions, some have argued that all treatments need to be culturally adapted to ensure their validity, relevance, and effectiveness, since these interventions are often developed with individuals who can be substantially different from some marginalized populations [ 73 ]. Similar to culturally adapted face-to-face interventions [ 74 - 76 ], culturally adapted digital mental health interventions seem to be effective [ 77 , 78 ]. However, there is no evidence that these culturally adapted interventions outperform the original programs [ 79 , 80 ]. Given that culturally adapting digital interventions is a time-consuming and resource-intensive process, this approach may not be sustainable and may limit the dissemination and implementation impact of app programs [ 28 ]. In lieu of culturally adapting digital interventions without careful consideration, Ramos and Chavira [ 28 ] recommend using information gathered through person-centered approaches to integrate culture into the use of already available digital interventions (including apps), using an idiographic, flexible, and personalized approach. This strategy may have broader implementation and dissemination potential, given that few researchers and clinicians are in a position to develop new apps.

Embedding Apps Within Existing Care Structures

Several systematic reviews and meta-analyses have demonstrated that app-based mental health interventions with a human-support component are more effective and more acceptable than stand-alone, fully automated, or self-administered apps [ 13 , 25 , 81 ]. Young people seem to want practical skills and usable tools to apply to their current daily life stressors to improve their well-being and functioning. Intervention engagement is enhanced if the intervention serves an obvious purpose, is relevant, and has a clear rationale and instructions; embedding these interventions within the systems and structures that are already working with users (eg, clinical services, schools, universities, and community agencies) will likely improve implementation. Considering the broad and highly varied nature of intervention formats and modalities, it may be useful for future research to focus on identifying the core components of app-based interventions (ie, the active ingredients associated with uptake, adherence, and clinical outcomes) that will allow the integration of app interventions into the varied contexts of care for marginalized youth.

Conclusions

Despite the enthusiasm that has surrounded the potential of digital technologies to revolutionize mental health and health care service delivery, little evidence yet supports the use of mental health apps for marginalized and underserved young people. Despite the substantial financial and human investment directed to the development of mental health apps over several years, only a small proportion have empirical evidence to support their effectiveness, and there have been few attempts to develop or adapt interventions to meet some of the more unique and heterogeneous needs of diverse groups of young people. Although acceptability seems to be good, engagement is poor and attrition is high, particularly if interventions are not supported by in-person elements. Given that most interventions are implemented in high-income countries, very little is known about the generalizability of the findings to LMICs and to a range of adolescents and young people with different socioeconomic, cultural, and racial backgrounds. In this paper, we have drawn several insights about the feasibility and acceptability of mental health apps for underserved young people that may be useful to future app-based mental health promotion and treatment projects. However, before the widespread adoption and scaling-up of digital mental health interventions progresses further, especially for more vulnerable and underserved populations and in settings with limited resources, a greater understanding is needed of the unique barriers faced by these groups in accessing treatment and of the types of services young people themselves prefer (eg, standard vs digital), followed by more rigorous and consistent demonstrations of feasibility, effectiveness, and cost-effectiveness.

Acknowledgments

This project received funding from the European Union’s Horizon 2020 research and innovation program (grant agreement number 754657).

The authors are grateful to the young people who took the time to participate in this research and who shared their insights with us. The authors would also like to thank those who supported this research including professional youth advisor Emily Bampton, research assistant Catherine Reeve, and researchers Dr Alexandra Langmeyer and Simon Weiser. Finally, the authors would like to thank the ECoWeB (Emotional Competence for Well-Being) Consortium for their support and feedback throughout the duration of this research, including, but not limited to, Dr Lexy Newbold, Dr Azucena Garcia Palacios, and Dr Guadalupe Molinari.

Data Availability

The data extracted to support the findings of the systematic review are available from the corresponding author upon reasonable request. Due to the confidential and sensitive nature of the interview transcripts, qualitative data will not be made available.

Authors' Contributions

HAB, LAN, and MF designed the systematic review, including the research questions and methods. LAN carried out the database search. HAB, LAN, TM, and BF conducted the study screening and data extraction. TM did the study quality assessments, and HAB did the data synthesis and analysis. MF, SW, EW, and HAB were involved in the conception of the qualitative study. HAB, LAN, and SC conducted the qualitative study, including the qualitative interviews and analysis. HAB wrote the first draft, and HAB, LAN, MF, and GR contributed substantially to manuscript drafting. All authors contributed to the manuscript and approved the submitted version.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Search strategy.

Multimedia Appendix 2: Topic guide.

Multimedia Appendix 3: Study quality assessment.

  • Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE. Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry. Jun 2005;62(6):593-602. [ CrossRef ] [ Medline ]
  • Solmi M, Radua J, Olivola M, Croce E, Soardo L, Salazar de Pablo G, et al. Age at onset of mental disorders worldwide: large-scale meta-analysis of 192 epidemiological studies. Mol Psychiatry. Jan 2022;27(1):281-295. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brown A, Rice SM, Rickwood DJ, Parker AG. Systematic review of barriers and facilitators to accessing and engaging with mental health care among at-risk young people. Asia Pac Psychiatry. Mar 03, 2016;8(1):3-22. [ CrossRef ] [ Medline ]
  • Marrast L, Himmelstein DU, Woolhandler S. Racial and ethnic disparities in mental health care for children and young adults: a national study. Int J Health Serv. Oct 20, 2016;46(4):810-824. [ CrossRef ] [ Medline ]
  • Cook BL, Trinh NH, Li Z, Hou SS, Progovac AM. Trends in racial-ethnic disparities in access to mental health care, 2004-2012. Psychiatr Serv. Jan 01, 2017;68(1):9-16. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rimes KA, Shivakumar S, Ussher G, Baker D, Rahman Q, West E. Psychosocial factors associated with suicide attempts, ideation, and future risk in lesbian, gay, and bisexual youth. Crisis. Mar 2019;40(2):83-92. [ CrossRef ] [ Medline ]
  • Reiss F. Socioeconomic inequalities and mental health problems in children and adolescents: a systematic review. Soc Sci Med. Aug 2013;90:24-31. [ CrossRef ] [ Medline ]
  • Warnecke RB, Oh A, Breen N, Gehlert S, Paskett E, Tucker KL, et al. Approaching health disparities from a population perspective: the National Institutes of Health Centers for Population Health and Health Disparities. Am J Public Health. Sep 2008;98(9):1608-1615. [ CrossRef ]
  • Schueller SM, Hunter JF, Figueroa C, Aguilera A. Use of digital mental health for marginalized and underserved populations. Curr Treat Options Psych. Jul 5, 2019;6(3):243-255. [ CrossRef ]
  • Introduction: NIH minority health and health disparities strategic plan 2021-2025. National Institutes of Health National Institute on Minority Health and Health Disparities. URL: https://www.nimhd.nih.gov/about/strategic-plan/nih-strategic-plan-directors-foreword.html [accessed 2024-06-29]
  • Improving inclusion of under-served groups in clinical research: Guidance from INCLUDE project. National Institute for Health and Care Excellence. Aug 7, 2020. URL: https:/​/www.​nihr.ac.uk/​documents/​improving-inclusion-of-under-served-groups-in-clinical-research-guidance-from-include-project/​25435 [accessed 2024-06-29]
  • Evans-Lacko S, Aguilar-Gaxiola S, Al-Hamzawi A, Alonso J, Benjet C, Bruffaerts R, et al. Socio-economic variations in the mental health treatment gap for people with anxiety, mood, and substance use disorders: results from the WHO World Mental Health (WMH) surveys. Psychol Med. Nov 27, 2017;48(9):1560-1571. [ CrossRef ]
  • Hollis C, Falconer CJ, Martin JL, Whittington C, Stockton S, Glazebrook C, et al. Annual research review: digital health interventions for children and young people with mental health problems - a systematic and meta-review. J Child Psychol Psychiatry. Apr 10, 2017;58(4):474-503. [ CrossRef ] [ Medline ]
  • Friis-Healy EA, Nagy GA, Kollins SH. It is time to REACT: opportunities for digital mental health apps to reduce mental health disparities in racially and ethnically minoritized groups. JMIR Ment Health. Jan 26, 2021;8(1):e25456. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mobile fact sheet. Pew Research Center. Jan 31, 2024. URL: https://www.pewresearch.org/internet/fact-sheet/mobile/ [accessed 2024-07-12]
  • Silver L. 2. In emerging economies, smartphone adoption has grown more quickly among younger generations. Pew Research Center. Feb 5, 2019. URL: https:/​/www.​pewresearch.org/​global/​2019/​02/​05/​in-emerging-economies-smartphone-adoption-has-grown-more-quickly-among-younger-generations/​ [accessed 2024-07-12]
  • Pretorius C, Chambers D, Coyle D. Young people's online help-seeking and mental health difficulties: systematic narrative review. J Med Internet Res. Nov 19, 2019;21(11):e13873. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grist R, Porter J, Stallard P. Mental health mobile apps for preadolescents and adolescents: a systematic review. J Med Internet Res. May 25, 2017;19(5):e176. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lehtimaki S, Martic J, Wahl B, Foster KT, Schwalbe N. Evidence on digital mental health interventions for adolescents and young people: systematic overview. JMIR Ment Health. Apr 29, 2021;8(4):e25847. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Leech T, Dorstyn D, Taylor A, Li W. Mental health apps for adolescents and young adults: a systematic review of randomised controlled trials. Child Youth Serv Rev. Aug 2021;127:106073. [ CrossRef ]
  • Buttazzoni A, Brar K, Minaker L. Smartphone-based interventions and internalizing disorders in youth: systematic review and meta-analysis. J Med Internet Res. Jan 11, 2021;23(1):e16490. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eisenstadt M, Liverpool S, Infanti E, Ciuvat RM, Carlsson C. Mobile apps that promote emotion regulation, positive mental health, and well-being in the general population: systematic review and meta-analysis. JMIR Ment Health. Nov 08, 2021;8(11):e31170. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lipschitz J, Hogan TP, Bauer MS, Mohr DC. Closing the research-to-practice gap in digital psychiatry: the need to integrate implementation science. J Clin Psychiatry. May 14, 2019;80(3):18com12659. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hollis C. Youth mental health: risks and opportunities in the digital world. World Psychiatry. Feb 11, 2022;21(1):81-82. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bear HA, Ayala Nunes L, DeJesus J, Liverpool S, Moltrecht B, Neelakantan L, et al. Determination of markers of successful implementation of mental health apps for young people: systematic review. J Med Internet Res. Nov 09, 2022;24(11):e40347. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. Mar 19, 2011;38(2):65-76. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ramos G, Ponting C, Labao JP, Sobowale K. Considerations of diversity, equity, and inclusion in mental health apps: a scoping review of evaluation frameworks. Behav Res Ther. Dec 2021;147:103990. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ramos G, Chavira DA. Use of technology to provide mental health care for racial and ethnic minorities: evidence, promise, and challenges. Cognit Behav Pract. Feb 2022;29(1):15-40. [ CrossRef ]
  • Wies B, Landers C, Ienca M. Digital mental health for young people: a scoping review of ethical promises and challenges. Front Digit Health. Sep 6, 2021;3:697072. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Liverpool S, Mota CP, Sales CM, Čuš A, Carletto S, Hancheva C, et al. Engaging children and young people in digital mental health interventions: systematic review of modes of delivery, facilitators, and barriers. J Med Internet Res. Jun 23, 2020;22(6):e16317. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Marcano Belisario JS, Huckvale K, Greenfield G, Car J, Gunn LH. Smartphone and tablet self management apps for asthma. Cochrane Database Syst Rev. Nov 27, 2013;2013(11):CD010013. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Marcano Belisario JS, Jamsek J, Huckvale K, O'Donoghue J, Morrison CP, Car J. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev. Jul 27, 2015;2015(7):MR000042. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. Mar 29, 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo MP, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. Dec 18, 2018;34(4):285-291. [ CrossRef ]
  • Crabtree BF, Miller WL. A template approach to text analysis: developing and using codebooks. In: Crabtree BF, Miller WL, editors. Doing Qualitative Research. Thousand Oaks, CA. SAGE Publications, Inc; 1992:93-109.
  • Newbold A, Warren FC, Taylor RS, Hulme C, Burnett S, Aas B, et al. Promotion of mental health in young adults via mobile phone app: study protocol of the ECoWeB (emotional competence for well-being in young adults) cohort multiple randomised trials. BMC Psychiatry. Sep 22, 2020;20(1):458. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jan 2006;3(2):77-101. [ CrossRef ]
  • Ellaway RH, Kehoe A, Illing J. Critical realism and realist inquiry in medical education. Acad Med. Jul 2020;95(7):984-988. [ CrossRef ] [ Medline ]
  • Willig C. Perspectives on the epistemological bases for qualitative research. In: Cooper H, Camic PM, Long DL, Panter AT, Rindskopf D, Sher KJ, editors. APA Handbook of Research Methods in Psychology, Vol. 1. Foundations, Planning, Measures, and Psychometrics. Washington, DC. American Psychological Association; 2012.
  • Fletcher AJ. Applying critical realism in qualitative research: methodology meets method. Int J Soc Res Methodol. Feb 29, 2016;20(2):181-194. [ CrossRef ]
  • Bohleber L, Crameri A, Eich-Stierli B, Telesko R, von Wyl A. Can we foster a culture of peer support and promote mental health in adolescence using a web-based app? A control group study. JMIR Ment Health. Sep 23, 2016;3(3):e45. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Deady M, Glozier N, Collins D, Einboden R, Lavender I, Wray A, et al. The utility of a mental health app in apprentice workers: a pilot study. Front Public Health. Sep 4, 2020;8:389. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fleming JB, Hill YN, Burns MN. Usability of a culturally informed mHealth intervention for symptoms of anxiety and depression: feedback from young sexual minority men. JMIR Hum Factors. Aug 25, 2017;4(3):e22. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Francis J, Cross D, Schultz A, Armstrong D, Nguyen R, Branch-Smith C. Developing a smartphone application to support social connectedness and wellbeing in young people with cystic fibrosis. J Cyst Fibros. Mar 2020;19(2):277-283. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Geirhos A, Domhardt M, Lunkenheimer F, Temming S, Holl RW, Minden K, et al. Feasibility and potential efficacy of a guided internet- and mobile-based CBT for adolescents and young adults with chronic medical conditions and comorbid depression or anxiety symptoms (youthCOACH): a randomized controlled pilot trial. BMC Pediatr. Jan 29, 2022;22(1):69. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Glover AC, Schueller SM, Winiarski DA, Smith DL, Karnik NS, Zalta AK. Automated mobile phone-based mental health resource for homeless youth: pilot study assessing feasibility and acceptability. JMIR Ment Health. Oct 11, 2019;6(10):e15144. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

ECoWeB: Emotional Competence for Well-Being
LMIC: low- and middle-income country
MMAT: Mixed Methods Appraisal Tool
NEET: not in education, employment, or training
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Edited by T de Azevedo Cardoso, S Ma; submitted 13.05.23; peer-reviewed by P Whelan, I Vainieri, H Bao; comments to author 13.09.23; revised version received 26.09.23; accepted 10.06.24; published 30.07.24.

©Holly Alice Bear, Lara Ayala Nunes, Giovanni Ramos, Tanya Manchanda, Blossom Fernandes, Sophia Chabursky, Sabine Walper, Edward Watkins, Mina Fazel. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.07.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
