In terms of participants’ mental health and well-being (Table 4), the mean Patient Health Questionnaire-9 score in the United Kingdom was 9.7 (SD 7.3) compared with 5.0 (SD 1.4) in Spain.
Measures | United Kingdom (n=13), mean (SD) | Spain (n=2), mean (SD) | Germany (n=5), mean (SD) |
WEMWBS^a | 44.2 (7.8) | 51.5 (3.5) | —^b |
PHQ-9^c | 9.7 (7.3) | 5.0 (1.4) | —^b |
GAD-7^d | 6.5 (4.4) | 8.5 (2.1) | —^b |
^a WEMWBS: Warwick-Edinburgh Mental Well-Being Scale.
^b Not available.
^c PHQ-9: Patient Health Questionnaire-9.
^d GAD-7: Generalized Anxiety Disorder Assessment.
A key finding was that despite best efforts and financial incentives, recruiting underserved young male participants, especially in Spain and Germany, was challenging. This suggests that these young people may not deem an emotional competence app relevant or useful to them, making recruitment and engagement problematic. We also assessed whether the app was deemed acceptable (ie, useful, agreeable, palatable, or satisfactory) and appropriate (ie, relevant, suitable, or compatible). Overall, participants in the United Kingdom, Spain, and Germany viewed the app as appropriate and relevant for young people of different ages and walks of life, as they thought that all young people had a smartphone and were adept at using technology:
So, I was able to learn about my feelings, I was able to evaluate how I actually felt today, concerning my feelings, if I was angry or I was sad. I was actually able to write them down in detail. [Participant in Germany]
Several participants commented that the content of the app was best suited to university and school students. Another common view was that the app was better suited to those struggling with their mental health and less relevant for those for whom things were going well. Many participants perceived the app as aimed at improving mental health problems rather than as a universal intervention intended to improve well-being, which represented a barrier to engagement. Those who reported that the app was not relevant to them nonetheless saw it as being of potential use to friends and family members who were stressed, anxious, or going through a difficult time:
There will be folks who maybe aren’t going through a good time in their lives, and they will need the app to feel... to understand themselves, mostly. And I think it’s relevant at any age, because I am lucky that I don’t think I need it as much as someone else who feels like that. [Participant in Spain]
Partly it was important, partly it was not. I’ll give an example again, for example if a refugee came to Germany from a war zone, it’s going to be difficult, very difficult to find a topic that would fit him, for the future I mean, so the version now is already okay if you want all persons to use this app. Partly it’s already relevant and partly it’s not. If someone has mental problems or bad experiences, you cannot find such a topic in the app. [Participant in Germany]
Although some participants reported using the app regularly during the 4-week study period, a consistent finding was that participants tended to use the app most when they first downloaded it, with a marked reduction in use over time:
Uh, I probably used it about three times in the first week. And then not really that much at all I’m afraid. [Participant in the United Kingdom]
I don’t know, I just dropped off using a little bit after a couple of weeks, but I’ve been trying to keep on top of doing that like the daily rating things and everything.... I kind of lost my motivation to use it. [Participant in the United Kingdom]
We identified several barriers that hindered participants’ engagement and use of the app. These included the following: (1) repetitive and time-consuming app contents, (2) a paucity of new content and personalized or interactive tools (eg, matching mood to tools), (3) unclear instructions, (4) a lack of rationale for the app, (5) perceiving the app as not being relevant, (6) a lack of motivation, and (7) privacy concerns:
Yes, for example, I would not like to write in this diary, because I do not know if it would be one hundred percent anonymous and if others might read it. And maybe I have more privacy if I do not write it. [Participant in Germany]
I think by now I would slowly stop using the app. It was nice up to this point, but I think for me I might need a step further now. To really deal with my personal problems and I don’t know how much an app like this can help and that rather an expert and therapy is needed. [Participant in Germany]
For the asylum seekers and refugees in Germany, the language and content of the app were not suited to their needs. Participants would have preferred the app in their native language, as some had to use translation programs to help access the content. Furthermore, specific topics of relevance to refugees were missing, such as dealing with asylum uncertainty, the whereabouts of family members, and their living situation.
Finally, underserved young people, including asylum seekers and refugees, migrants, and those NEET, are more likely to experience financial deprivation and are therefore less likely to pay directly for apps, especially those that do not address their primary difficulties:
If it came to the point that I had to pay for it, I would look for free options. [Participant in Germany]
The use of mobile apps in mental health care continues to attract interest and investment; however, research geared toward understanding the needs of marginalized and underserved populations is still nascent. This study, focusing on the implementation of mental health apps in underserved young people, highlighted that little research exists to support the widespread adoption of these apps as a mental health intervention for marginalized and underserved groups. Findings from both our systematic review and qualitative study were largely consistent: markers of acceptability and usability were positive; however, engagement for underserved young people was low, which is notable given the widespread ownership of smartphones [ 55 , 56 ]. To date, research has focused primarily on efficacy studies rather than effectiveness and implementation in “real-world” settings and may have overestimated users’ “natural tendency” to adopt smartphone apps for their mental health and well-being [ 57 ]. Our findings suggest that despite the rapid proliferation of mobile mental health technology, the uptake and engagement of mental health apps among marginalized young people are low and remain a key implementation challenge.
Our data suggest that establishing and maintaining user commitment and engagement with the content of the intervention as intended is a pervasive challenge across mental health apps and marginalized populations, and premature dropout was prominent in nearly all the included studies. This is consistent with the literature suggesting that the majority of those offered app-based interventions do not engage at the recommended frequency or complete the full course of treatment [ 58 , 59 ]. In this study, various app components were associated with engagement level, with the most engaging interventions providing young people with some form of associated real-human interaction and a more interactive interface. This aligns with other findings that feedback of personalized information to participants is an especially important aspect of creating engaging and impactful digital tools [ 60 ]. Young people tend to disengage quickly if there are technical difficulties or if the app does not specifically target their perceived needs [ 41 , 50 , 51 ]. Furthermore, recruitment of marginalized groups to app-based studies is difficult: in this study, the use of advertisements, financial incentives, vouchers, and prize draw incentives was insufficient to recruit an adequate number of participants in Spain and Germany.
Measuring engagement is a challenge that has likely contributed to our lack of knowledge on app components that effectively increase user engagement. Reporting of engagement with mental health apps in intervention trials is highly variable, and a number of basic metrics of intervention engagement, such as rate of intervention uptake, weekly use patterns, and number of intervention completers, are available yet not routinely reported [ 58 , 59 ]. The results of this study highlight the importance of objective engagement measures and show that relying on positive subjective self-reports of usability, satisfaction, acceptability, and feasibility is insufficient to determine actual engagement. Furthermore, the findings suggest that apps involving human interactions with a professional (eg, therapist or counselor) or that are completed in a supervised setting tend to be more acceptable and effective and have higher engagement rates [ 47 , 48 ]. Our research suggests that, similar to traditional face-to-face mental health services, app-based programs still face numerous barriers to reaching marginalized youth: the mental health apps available to the public do not seem to consider the unique developmental needs of these groups, participants do not seem to perceive an obvious benefit from using them, and some potential users prefer to interact with a professional face to face. Thus, it is also possible that the digital mental health field might be inadvertently contributing to mental health inequities among this population by not engaging marginalized groups sufficiently at the outset of research to ensure that the designed app meets their needs. However, among the studies in our review that did engage these groups in the co-design of the apps, there was no notable improvement in engagement.
Thus, we hope these findings encourage researchers and clinicians to think more critically about the role that mental health apps can truly play in addressing mental health inequities among underserved groups.
As in other areas of mental health research, young people from LMICs were underrepresented in these studies, which typically originated from high-income settings, including the United States, Australia, and Canada. Relatively few app-based interventions designed or adapted for young people in LMICs have been rigorously evaluated or are even available in local languages [ 47 , 48 ]. Many people living in LMICs, for example, adults in Asian countries, are often faced with apps that are not culturally relevant or not available in their language [ 61 ]. These inequities are surprising given the high rates of smartphone use in Asia, even in rural regions [ 62 ]. Yet, it is still likely that youth in this region face barriers related to data availability and more limited phone access, which will likely inhibit the broad implementation of apps beyond research studies [ 16 ]. Considerable work is required to ensure the availability of mental health apps that fit a wide range of user needs and preferences. It is important to ensure that the acceptability and feasibility of mental health apps for young people residing in LMICs are prioritized so that they are not further excluded from relevant mental health research.
Finally, a significant challenge is the lack of diversity in mental health app research participation, which limits our understanding of “real-world” efficacy and implementation for underserved and marginalized groups. While undoubtedly invaluable, and indeed deemed the gold standard for evaluating the efficacy of interventions, randomized controlled trials of mental health apps are not without flaws [ 63 , 64 ]. Trial recruitment is often highly selective due to stringent inclusion and exclusion criteria, resulting in lower inclusion in research than one would expect from population estimates [ 65 ]. In the United Kingdom, National Institute for Health and Care Research data have revealed that geographies with the highest burden of disease also have the lowest number of patients taking part in research [ 66 ]. The postcodes in which research recruitment is low also align closely with areas where earnings are the lowest and indexes of deprivation are the highest [ 66 ]. There are many reasons why some groups are underrepresented in research: language barriers, culturally inappropriate explanations, poor health literacy and the use of jargon, communication not being suitable for people with special learning needs, the requirement to complete many administrative forms, negative financial impact of participating, lack of effective incentives for participation or lack of clarity around incentives, and specific cultural and religious beliefs [ 66 ]. Failing to include a broad range of participants is problematic in that results may not generalize to a broad population.
Although this research was carefully executed and used a robust methodological approach with an exhaustive search strategy, it is not without limitations. Foremost, although the systematic review attempted to identify and include as many articles as possible, some papers may have been missed because of the inconsistencies in how feasibility and acceptability outcomes are recorded and reported. It was also difficult to ensure that all apps for this age group were identified because those aged between 15 and 25 years are harder to differentiate in adolescent and adult studies, meaning we might have missed some relevant studies where data could not be disaggregated by age. The exclusion of gray literature (eg, institutional reports and websites) may have also made us overlook potentially relevant apps, albeit lacking the quality assurance of peer-reviewed research. It is also likely that commercial organizations, including app companies, collect rich user demographic and engagement data but do not share it publicly, thus limiting our ability to conduct empirical analyses about the “real-world” acceptability, engagement, and implementation for specific populations. We did not analyze the extent to which publication bias may have influenced the results of our search, and, therefore, there may be a much higher number of mental health apps that have been developed with an underserved sample of young people, but due to their lack of efficacy or acceptability, these studies have not been submitted or accepted for publication. The sample sizes of many of the included studies were relatively low, which potentially limits their generalizability. However, we included all study designs so as to ensure that our learning from existing research was maximized. 
Furthermore, many of the studies included in the systematic review, as well as our qualitative study, had some form of language competency as an inclusion criterion (eg, English speaking), which likely excludes important perspectives from the results. For the qualitative study, we were only able to gather data from those who had used the app at least once and who were therefore somewhat engaged with it. Despite our best efforts, we were unable to recruit participants who, following consent, never downloaded or used the app, so we could not explore barriers to engagement for the least engaged young people or understand why the app was not appealing to those who chose not to proceed or take part. Those who did participate in this research were financially incentivized to do so and often highlighted the importance of this incentive in keeping them engaged. Therefore, we were unable to draw conclusions about the naturalistic engagement, feasibility, and acceptability of the app, were it to be made available without payment in schools, universities, and health services or made commercially available on the app marketplace. It is also possible that social desirability bias (ie, a tendency to present reality to align with what is perceived to be socially acceptable) occurred during the interviews, whereby participants responded to the interview questions in a manner that they believed would be more acceptable to the study team, concealing their true opinions or experiences [ 67 , 68 ]. As previously noted by others, results may be subject to further bias in that findings could be led by more articulate young people, while it is more difficult to hear the voices of those who are less articulate or digitally literate [ 69 ].
Finally, it is also possible that the positionality of the research team, including our own experiences, backgrounds, and biases, impacted what information participants disclosed to the research team as well as the interpretation of the qualitative data in this study.
To overcome this complex engagement and implementation challenge, we have considered our findings alongside relevant previous literature to generate 3 key suggestions for improving the feasibility and potential utility of apps for young people from marginalized and underserved populations.
Studies should aim to prioritize the inclusion of marginalized groups in trials testing the effectiveness of digital interventions by intentionally planning recruitment efforts aimed at reaching these communities [ 70 ]. First, steps can be taken to build trust, connections, and credibility between the research team and these communities. NHS England [ 66 ] suggests involving representatives from those groups during the inception and implementation of recruitment efforts. This approach ensures that the intervention is relevant to the target group by meeting their preferences and needs, incorporating culturally salient factors relevant for recruitment efforts, addressing concerns about community mistrust and participant resource constraints, and establishing partnerships with key community stakeholders who can act as gatekeepers in the community [ 14 , 71 ]. These strategies are likely to improve research accessibility, recruitment, and retention. Research teams need to ensure that the findings and any actionable takeaways from the research conducted with the participants are shared with them by asking participants how they would like to receive this information (eg, verbally, in writing, or via a trusted advocate). Equally important is to explain that the research process can be slow. These steps help create a positive legacy for the research project and build trust between individuals and public institutions, helping future health researchers to further address the underrepresentation of marginalized groups in digital research.
A comprehensive understanding of the needs, challenges, and life circumstances of the target population is a key implementation driver for designing relevant, engaging, and effective mental health apps. This knowledge is particularly important when the app is a stand-alone intervention received during daily life outside of traditional psychotherapy or human support [ 50 ]. This goal can be best achieved through a participatory approach, which reflects a growing recognition among intervention researchers and developers that end users need to be involved in the creation of interventions and their future iterations [ 47 , 72 ]. This process may involve a series of stages, including (1) person-centered co-design to ensure that tools are developed to be acceptable to the underserved or marginalized populations as well as meet their specific needs, life circumstances, and cultural norms [ 47 ]; (2) iterative testing that incorporates users’ feedback on a rolling basis to ensure the relevance of the intervention [ 43 , 47 , 72 ]; and (3) changes and adaptations needed to meet users’ needs in “real-world” settings including consideration of economic viability and implementation [ 27 ].
Especially relevant for underserved and marginalized groups is the need (or lack thereof) to culturally adapt app interventions for specific racial, ethnic, or cultural groups through this person-centered design. In traditional face-to-face interventions, some have argued that all treatments need to be culturally adapted to ensure their validity, relevance, and effectiveness, since these interventions are often developed with individuals who can be substantially different from some marginalized populations [ 73 ]. Similar to culturally adapted face-to-face interventions [ 74 - 76 ], culturally adapted digital mental health interventions seem to be effective [ 77 , 78 ]. However, there is no evidence that these culturally adapted interventions outperform the original programs [ 79 , 80 ]. Given that culturally adapting digital interventions is a time-consuming and resource-intensive process, this approach may not be sustainable and may limit the dissemination and implementation impact of app programs [ 28 ]. Rather than culturally adapting digital interventions without careful consideration, Ramos and Chavira [ 28 ] recommend using information gathered through person-centered approaches to integrate culture into the use of already available digital interventions (including apps), using an idiographic, flexible, and personalized approach. This strategy may have broader implementation and dissemination potential, given that few researchers and clinicians are in a position to develop new apps.
Several systematic reviews and meta-analyses have demonstrated that app-based mental health interventions with a human-support component are more effective and more acceptable than stand-alone, fully automated, or self-administered apps [ 13 , 25 , 81 ]. Young people seem to want practical skills and usable tools to apply to their current daily life stressors to improve their well-being and functioning. Intervention engagement is enhanced if the intervention serves an obvious purpose, is relevant, and has a clear rationale and instructions. Embedding these interventions within the systems and structures that are already working with users (eg, clinical services, schools, universities, and community agencies) will likely improve implementation. Considering the broad and highly varied nature of intervention formats and modalities, it may be useful for future research to focus on identifying core components of app-based interventions (ie, active ingredients of interventions associated with uptake, adherence, and clinical outcomes) that will allow such integration of app interventions into the varied contexts of care for marginalized youth.
Despite the enthusiasm that has surrounded the potential of digital technologies to revolutionize mental health and health care service delivery, little evidence yet supports the use of mental health apps for marginalized and underserved young people. Although substantial financial and human investment has been directed to the development of mental health apps over several years, only a small proportion have empirical evidence to support their effectiveness, and there have been few attempts to develop or adapt interventions to meet some of the more unique and heterogeneous needs of diverse groups of young people. Although acceptability seems to be good, engagement is poor and attrition is high, particularly if not supported by in-person elements. Given that most interventions are implemented in high-income countries, very little is known about the generalizability of the findings to LMICs and to a range of adolescents and young people with different socioeconomic, cultural, and racial backgrounds. In this paper, we have drawn several insights about the feasibility and acceptability of mental health apps for underserved young people that may be useful to future app-based mental health promotion and treatment projects. However, before the widespread adoption and scaling-up of digital mental health interventions progresses further, especially for more vulnerable and underserved populations and in settings with limited resources, a greater understanding is needed of the unique barriers faced by these groups in accessing treatment and of the types of services young people themselves prefer (eg, standard vs digital), followed by more rigorous and consistent demonstrations of feasibility, effectiveness, and cost-effectiveness.
This project received funding from the European Union’s Horizon 2020 research and innovation program (grant agreement number 754657).
The authors are grateful to the young people who took the time to participate in this research and who shared their insights with us. The authors would also like to thank those who supported this research including professional youth advisor Emily Bampton, research assistant Catherine Reeve, and researchers Dr Alexandra Langmeyer and Simon Weiser. Finally, the authors would like to thank the ECoWeB (Emotional Competence for Well-Being) Consortium for their support and feedback throughout the duration of this research, including, but not limited to, Dr Lexy Newbold, Dr Azucena Garcia Palacios, and Dr Guadalupe Molinari.
The data extracted to support the findings of the systematic review are available from the corresponding author upon reasonable request. Due to the confidential and sensitive nature of the interview transcripts, qualitative data will not be made available.
HAB, LAN, and MF designed the systematic review including the research questions and methods. LAN carried out the database search. HAB, LAN, TM, and BF conducted the study screening and data extraction. TM did the study quality assessments, and HAB did the data synthesis and analysis. MF, SW, EW, and HAB were involved in the conception of the qualitative study. HAB, LAN, and SC conducted the qualitative study, including the qualitative interviews and analysis. HAB wrote the first draft, and HAB, LAN, MF, and GR contributed substantially to manuscript drafting. All authors contributed to the manuscript and approved the submitted version.
None declared.
Search strategy.
Topic guide.
Study quality assessment.
ECoWeB: Emotional Competence for Well-Being
LMIC: low- and middle-income country
MMAT: Mixed Methods Appraisal Tool
NEET: not in education, employment, or training
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Edited by T de Azevedo Cardoso, S Ma; submitted 13.05.23; peer-reviewed by P Whelan, I Vainieri, H Bao; comments to author 13.09.23; revised version received 26.09.23; accepted 10.06.24; published 30.07.24.
©Holly Alice Bear, Lara Ayala Nunes, Giovanni Ramos, Tanya Manchanda, Blossom Fernandes, Sophia Chabursky, Sabine Walper, Edward Watkins, Mina Fazel. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.07.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
This qualitative systematic review aimed to consolidate existing evidence on the self-management experience of older patients with multimorbidity worldwide. Methods. Nine databases were searched, for papers published from database inception to April 2023. The systematic review was conducted according to the systematic review method of ...
Rationale and Standards for the Systematic Review of Qualitative Literature in Health Services Research Jennie Popay , Anne Rogers , and Gareth Williams View all authors and affiliations Volume 8 , Issue 3
Method for systematic literature review and meta-analysis studies: Name and reference of original method: 1). ... It covers both the qualitative and quantitative explanation and narration of the results, making discussion, indicating the way forward about the future research works and inferring a conclusion. The data from the final list of ...
A systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) systematic literature review protocol.35 The PRISMA 2020 checklist36 and ENhancing Transparency in REporting the synthesis of Qualitative research (ENTREQ) reporting guidelines were also followed.37 ...
The lack of detail reported in the qualitative literature also made it unfeasible to classify interventions using the system developed for the quantitative review. Whereas the quantitative review concerned trials of specific interventions, approximately half of the studies in the qualitative review 99 , 101 , 107 - 130 included more than one ...
Methods: We conducted 2 sequential studies, consisting of a systematic literature review of mental health apps for underserved populations, followed by a qualitative study with underserved young male participants (n=20; mean age 19 years).