
Examining the effectiveness of food literacy interventions in improving food literacy behavior and healthy eating among adults belonging to different socioeconomic groups- a systematic scoping review

  • Arijita Manna (ORCID: orcid.org/0000-0002-9697-7718) 1,
  • Helen Vidgen 1 &
  • Danielle Gallegos 1

Systematic Reviews volume 13, Article number: 221 (2024)


Abstract

Background

In high-income countries, significant diet-related health inequalities exist between people of different socioeconomic backgrounds. Individuals who face socioeconomic challenges are less likely to meet dietary guidelines, leading to increased incidence and prevalence of morbidity and mortality associated with dietary risk factors. To promote healthy eating, strategies may focus on individual-level factors (e.g., knowledge, skills, and behavior) along with broader societal factors (e.g., social determinants of health). Food literacy is considered an individual-level factor and has been framed as a skill set that individuals must possess to navigate the complexities of the modern food system effectively. Food literacy interventions can be a complementary yet effective tool for encouraging healthy eating behavior among diverse populations, including those facing socioeconomic disadvantage. However, there is limited evidence to guide the design of food literacy interventions for vulnerable population groups. As a step toward developing an ideal portfolio of solutions and strategies to promote food literacy and healthy eating for people experiencing socioeconomic disadvantage, this systematic scoping review aims to comprehensively examine the effects of food literacy interventions on promoting food literacy behavior and healthy eating in adults (18 years and above) from various socioeconomic groups (SEGs) in high-income countries.

Methods

The review will include both qualitative and quantitative papers obtained from academic databases, including MEDLINE (via EBSCOhost), Embase, Web of Science, and Google Scholar. In addition to the electronic search, manual forward and backward citation searching will be conducted to identify additional relevant papers. Food literacy interventions will be evaluated across four domains: planning and management, selection, preparation, and consumption. Included papers will be analyzed for process, impact, and outcome evaluation. The main outcome of a food literacy intervention is a modification in eating behavior, while the mechanism for this change will be assessed through impact measures of food literacy behaviors. Implementation factors will be extracted for process evaluation. The review will also draw on a range of dietary behavior measures, such as diet quality indices and dietary intake indicators. Two reviewers will independently screen all citations and full-text articles and extract abstract data; any conflicts will be resolved through discussion. The quality of quantitative studies will be reviewed using the JBI critical appraisal checklist for analytical cross-sectional studies, and the Consolidated Criteria for Reporting Qualitative Studies (COREQ) will be used to report on the quality of qualitative papers.

Systematic review registration: https://doi.org/10.17605/OSF.IO/TPNKU


Introduction

In recent years, there has been an epidemiological shift on a global scale, characterized by a growing prevalence of non-communicable diseases (NCDs), attributable in part to unhealthy dietary patterns [ 39 ]. Globally, NCDs caused 42.0 million deaths in 2019; among these, dietary risk factors were responsible for 7.9 million deaths and 187.7 million disability-adjusted life years (DALYs) [ 74 ]. In response, governments and policymakers worldwide are pushing for strong facilitation of healthy eating. Healthy eating centers on a healthy diet, defined by the World Health Organization (WHO) as one that “protects against malnutrition in all its forms, as well as non-communicable diseases (NCDs), including diabetes, heart disease, stroke and cancer” (2020). However, facilitating healthy eating is complex. Eating, as a dynamic and complex health behavior [ 48 , 56 ], is influenced by various factors that operate at individual, community, and societal levels [ 37 , 44 , 56 , 61 , 79 ]. Factors such as social context, economic conditions, and community and family circumstances heavily shape any health behavior, including eating [ 56 , 59 , 78 ]. These factors are collectively known as social determinants of health (SDHs) [ 78 ]. It is essential to acknowledge that social determinants play a crucial role in developing and maintaining healthy eating habits [ 22 , 26 , 58 ].

Of all social determinants, socioeconomic position (SEP) has a significant impact on what people eat, leading to socioeconomic inequalities in healthy eating among different income groups. Education, income, occupation, gender, and ethnicity are examples of interlinked socioeconomic and sociodemographic factors that collectively modulate eating [ 1 , 45 , 81 ]. Many high-income countries, including Australia, show evidence of SEP-linked inequalities in healthy eating [ 3 , 4 ]. Individuals in higher income brackets, with more advanced educational backgrounds, and residing in more affluent communities are better able to consume a well-balanced and nutritious diet, leading to better overall health outcomes [ 38 , 40 ]. Conversely, people facing social and economic disadvantage are less able to access and consume a healthy diet, resulting in higher incidence and prevalence of morbidity and mortality from diet-related NCDs in this group [ 1 , 12 , 20 , 24 , 38 , 43 ].

Poor diet and unhealthy eating habits are risk factors for chronic diet-related diseases worldwide, even in high-income countries. In most high-income countries, the negative effects of poor diets are disproportionately felt by lower socioeconomic populations, Indigenous Peoples, and those living in rural and remote areas [ 2 , 20 , 62 ]. Notably, what people consider healthy eating also varies widely between countries and cultures, along with other social determinants of health [ 14 , 16 , 49 ]. Moreover, there are significant disparities in the food environment between low- and high-income countries [ 69 ]. These differences limit the generalizability of findings across settings and highlight the need to focus specifically on high-income countries when devising policies and strategies aimed at improving dietary patterns and nutrition-related outcomes.

Improving dietary habits is a complex issue that requires a multidisciplinary approach taking the social context into account [ 57 ]. Among the different approaches or interventions aimed at positively influencing eating habits, food literacy has emerged as a crucial means of potentially enhancing diet quality and promoting good health [ 18 ]. Within policy and practice, interventions aimed at promoting healthy eating habits frequently focus on modifying personal behavior by influencing individual-level factors such as skills, knowledge, and beliefs, while also addressing the underlying determinants that impact eating behavior [ 26 , 41 ]. As outlined by Velardo [ 71 ], food literacy focuses on enhancing individual knowledge that leads to the development of personal skills, such as critical decision-making, goal setting, and confidence in cooking. The value of food literacy is that it recognizes that healthy eating is not just an individual responsibility but is also shaped by social structural factors [ 21 , 64 , 73 ].

Food literacy interventions are increasingly being developed and implemented. In line with food literacy conceptual models, it is generally postulated that improvement in food literacy behavior has the potential to elicit favorable outcomes in dietary intake and, as a consequence, overall health. This is the underlying premise upon which all food literacy interventions or programs are built [ 7 ]. As proposed by Vidgen and Gallegos [ 73 ], the widely accepted food literacy model has four interconnected domains: (1) planning and management, (2) selection, (3) preparation, and (4) eating. An ideal food literacy intervention should incorporate all four domains so that participants can achieve a comprehensive understanding of the interconnected knowledge, skills, and behavior needed to strengthen their connection with food and protect diet quality through change, thus empowering people [ 13 , 73 ]. Many of these interventions are targeted especially at communities with less access to healthy diets, such as people living with socioeconomic disadvantage, where they can make a real difference [ 7 , 13 , 76 ]. Evidence suggests that well-planned and well-implemented food literacy interventions can improve the healthy eating behavior of people facing socioeconomic disadvantage. For example, OzHarvest’s NEST (Nutrition Education Skills Training) program is an intensive, 6-week, 15-h public health nutrition intervention in Australia designed to enhance the nutritional knowledge, food literacy, and cooking skills of Australian adults living with socioeconomic disadvantage [ 47 , 76 ]. Attendees of the NEST program improved their cooking skills, used healthier ingredients, applied proper cooking methods, made cost-effective ingredient substitutions, made informed choices when selecting food items, and managed to stay within their meal budgets [ 76 ].

Food literacy in the context of socioeconomic position is not well understood; there has been only limited exploration of the connection between social determinants of health and food literacy [ 21 , 64 , 73 ]. Moreover, government investment in food literacy interventions is based on the assumption that developing higher food literacy will positively impact dietary behavior. Various food literacy programs have been initiated to improve food literacy, especially among vulnerable population groups [ 8 ]. Studies have shown that food literacy interventions show promise in promoting healthy eating habits among adults from low socioeconomic backgrounds [ 7 , 13 , 76 ]. However, many interventions fail to report on their outcomes or conduct follow-up evaluations, contrary to best-practice recommendations [ 33 ]. Currently, there is a lack of comprehensive reviews verifying the effectiveness of these interventions in enhancing food literacy behavior and encouraging healthy eating among vulnerable population groups [ 5 ]. This research gap can be addressed through a scoping review, which can identify available evidence, examine research methodologies, and determine whether food literacy interventions have been beneficial in promoting healthy eating and food literacy behavior among vulnerable population groups.

An initial exploration of several academic databases, including MEDLINE, Embase, and Google Scholar, revealed existing systematic reviews (Kelly and Nash, 2021; Vaitkeviciute et al. 2015) as well as planned protocols (Doustmohammadian et al. 2020) that examine the effectiveness of food literacy. However, none of these specifically target the adult population or incorporate socioeconomic position as a factor of interest in the analysis. Therefore, the aim of this review is to examine, through a systematic approach, food literacy interventions and their effectiveness in improving food literacy behavior and healthy eating among different socioeconomic groups in high-income countries.

Study design

A protocol was registered with the Open Science Framework Registries on July 17, 2023. This proposed systematic scoping review will be conducted using the JBI scoping review methodology outlined in “Chapter 11: Scoping reviews” [ 50 , 51 ]. The findings will be reported in compliance with the PRISMA extension for scoping reviews (PRISMA-ScR) [ 68 ].

Objective and review questions to guide study design

The objective of this review is to systematically determine if food literacy interventions have an impact on improving food literacy and healthy eating behavior among different socioeconomic groups living in high-income countries.

The main review question for this inquiry has been formulated as follows:

Primary review question

Are food literacy interventions effective in improving food literacy behavior and healthy eating across different socioeconomic groups?

Secondary review questions

Are food literacy interventions effective in improving food literacy behavior?

Are food literacy interventions effective in improving healthy eating behavior?

Which components within food literacy interventions are effective in improving food literacy behavior and healthy eating behavior?

Does the effectiveness of food literacy interventions vary across different socioeconomic groups?

What are the characteristics of effective food literacy interventions?

Inclusion criteria

1. Participants

Studies conducted on adults (18 years and older) of any sex or gender residing in high-income countries will be included in the review.

2. Concept

As this scoping review focuses primarily on the application (intervention) aspect of food literacy, evidence from various food literacy interventions will be considered. Food literacy interventions can vary in design, approach, target population, time frame, outcome evaluation, theoretical model, and food literacy domains [ 72 ]. To select appropriate interventions, an established food literacy model will guide this review.

Food literacy (FL) has been defined in various ways by researchers attempting to give meaning to the emerging concept [ 19 ]. In the initial stage of conceptualizing FL, researchers perceived it as a compilation of nutritional knowledge and mechanical techniques for preparing food [ 36 , 46 ]. Newer understandings have included the necessary knowledge, personal abilities, psychological traits (such as confidence, self-efficacy, and resilience), capabilities, and actions involved in the planning, selection, and preparation of food [ 10 , 19 , 21 , 73 ]. According to a recent systematic review [ 66 ], the most cited of all definitions is that of Vidgen and Gallegos ([ 73 ], p. 54), which defines FL as “the scaffolding that empowers individuals, households, communities or nations to protect diet quality through change and strengthen dietary resilience over time. It is composed of a collection of inter-related knowledge, skills and behaviours required to plan, manage, select, prepare and eat food to meet needs and determine intake”. This definition lays the foundation for subsequent definitions that have sought to elaborate on the concept; notably, these later definitions have not challenged its central tenets but rather built upon them. As such, this review will adopt and work within the framework of this original definition, which serves as a key reference point for further exploration of the concept.

Vidgen and Gallegos [ 73 ] also proposed a conceptual model of food literacy that goes beyond the basic definition. The model was developed from primary research and the original definition and is intended to illustrate their perspective on food literacy. It consists of four domains: planning and management, selection, preparation, and eating (Table 1). These domains comprise a total of eleven food-related activities, referred to as “components” [ 73 ]. All interventions that align with the knowledge, skills, and behavior associated with these four domains will be included in this review. Table 1 presents the four domains of the food literacy model.

3. Context

Papers will include only interventions implemented in high-income countries. Most high-income countries also rank highest on the human development index (HDI) [ 80 ]. The HDI, as defined by the United Nations Development Programme (UNDP), is a comprehensive indicator of overall human development across crucial areas such as standard of living, educational attainment, and life expectancy [ 70 ]. Focusing on high-income countries is important because of differences in the way healthy eating behaviors are perceived across nations, as highlighted by [ 49 ]. Furthermore, there are significant disparities in the food environment between low- and high-income countries [ 69 ]. Therefore, it is imperative to take these variations into account when considering policies and strategies aimed at improving dietary patterns and nutrition-related outcomes.

4. Types of sources

This scoping review will include various types of studies published only in peer-reviewed journals, including quantitative, qualitative, and mixed-method designs. This may consist of systematic reviews, observational non-experimental studies, experimental studies, and case studies.

5. Types of interventions

The main focus of this review will be on scholarly papers that explicitly discuss a food literacy intervention using the terms “food literacy intervention” or “food literacy program”. Limiting the scope to articles that use these specific terms is intended to provide a more focused and in-depth analysis of the research in this field.

(1) Eligibility criteria

The emphasis of this review is largely placed on the intervention aspect of food literacy, so maximum data related to the interventions will be extracted. Accordingly, only those studies will be included that are (1) peer-reviewed, (2) conducted on humans, (3) conducted in high-income countries, (4) describing a food literacy intervention implemented with adults aged 18 years or above, (5) published from 2001 to 2022, and (6) published or translated in English.

(2) Search strategy

To visualize the search plan, four main theoretical constructs related to the research question were identified first (presented in Fig. 1 ).

Fig. 1 Visual presentation of search adaptation

As recommended for all types of JBI reviews, a three-step search strategy was developed by all three authors in consultation with an academic librarian.

On July 5, 2023, an initial search was carried out in the MEDLINE (via EBSCOhost), Cochrane Library, and OSF HOME (Open Science Framework) databases using the keywords “food literacy,” “intervention,” “adults,” “healthy eating,” and “socioeconomic position.” No registered systematic review sufficiently addressing the research question of this study was found on any platform. However, the initial search also revealed significant limitations in the search strategy.

Many articles in the databases used these search terms in a context different from the intended one. For example, the phrase “healthy eating” has multiple meanings and has been used in various contexts. In addition, to date no empirical study has explored the connection between food literacy and socioeconomic position. As a result, relying solely on the above keywords either missed relevant materials that did not explicitly use the search terms or returned irrelevant materials.

As such, the search strategy was adjusted to include only the keywords “food literacy,” “intervention,” and “adults,” along with the index terms used to describe these three constructs as identified in the titles/abstracts of articles from the initial search. Socioeconomic factors and any indication of healthy eating behavior (dietary behavior) will instead be extracted manually. A trial search using the preferred keywords and the index terms for each database is shown in Appendix 1.
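To illustrate the shape of the adjusted strategy, a minimal Boolean block combining the three retained constructs is sketched below. The synonym expansions and truncation symbols are illustrative assumptions only; the exact keywords and database-specific index terms are those reported in Appendix 1.

```
("food literacy" OR "food skill*" OR "cooking skill*")          AND
(intervention* OR program* OR course* OR workshop* OR train*)   AND
(adult* OR men OR women)
```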

The second search, using the modified search strategy, will be run in three electronic databases (MEDLINE (via EBSCOhost), Scopus, and CINAHL) by AM in November 2023.

To ensure the completeness of the search process, both forward and backward citation searches will be performed (QUT Library Guide, 2023).

After each search, all identified citations will be gathered and uploaded to the referencing software EndNote 20. Duplicates will then be removed before exporting the citations to Covidence, a screening and data extraction tool used in systematic reviews ( https://www.covidence.org/ ).

(3) Study selection

Integrating different types of studies

The review will encompass diverse types of studies to gain a better understanding of multifaceted phenomena. These will include quantitative studies, which measure the effects of food literacy interventions, qualitative studies that focus on the experiences of those who attended any food literacy program, and mixed methods studies that combine both quantitative and qualitative approaches.

The review of quantitative studies will focus on trials of food literacy programs/interventions that aimed to promote food literacy behavior and healthy eating. The analysis of these trials will involve a thorough examination of the reported pre- and post-intervention data on outcomes, along with those of a comparable control or comparison group.

The aim of reviewing qualitative studies is to explore adults' perspectives and experiences of attending food literacy programs. The focus is on identifying what attendees have reported experiencing as a result of participating in these programs. Initial reviews of the available literature indicate that attendees of such programs have reported positive changes in their food habits, including eating more fruits and vegetables, gaining confidence in cooking, using healthier ingredients, adopting appropriate cooking methods, substituting ingredients with less expensive options, making informed decisions when selecting food items, and stretching their meal budgets [ 13 , 76 ].

Process of selection

All authors (AM/HV/DG) will conduct a pilot test by screening the titles and abstracts of 10% of the articles randomly selected from the pool of saved articles in Covidence against the inclusion criteria. Once consensus is reached, the first author (AM) will screen the titles and abstracts of the remaining articles. The same process will be followed for assessing articles in full text: after agreement is reached, the first author will retrieve the full text of the initially selected citations and assess them in detail against all the inclusion criteria, including language, participants, geography, and intervention. Another reviewer (HV) will repeat this process independently. Any disputes will be resolved by consensus or with the involvement of a third reviewer. Finally, the scoping review will include all the publications that meet the eligibility criteria.

(4) Evaluation of food literacy intervention

Although previous systematic reviews of food literacy interventions have exposed inadequacies in evaluation methods, it remains crucial to assess the effectiveness of interventions through post-program follow-up evaluations [ 9 ]. It is also important to select an evaluation design appropriate to a program’s stage of development. This review will consider three main types of evaluation: process, impact, and outcome evaluation [ 32 , 55 ].

Process evaluation determines whether program activities have been executed according to plan and whether they have resulted in specific outputs [ 32 ]. The relationship between impact and outcome can be put as follows: the outcome is the goal of a project (intervention), while the impact is its objective. That is, the outcome is characterized by desired changes in the targeted health behavior that are sustained over a long period of time, whereas impact evaluation provides information about the observed changes, or “impacts,” produced by the intervention [ 32 ]. For instance, a food literacy program for adults has the objective of improving their food literacy behavior, resulting in a sustained improvement in their dietary behavior, which is the ultimate goal or outcome [ 5 , 6 ]. As such, for this scoping review, the impact is the modification in food literacy behavior, and the outcome is the modification in eating behavior. How the data for process, impact, and outcome evaluation will be extracted is described in the following paragraphs.

Furthermore, the assessment method is consistent with Vidgen’s “second model of food literacy” ([ 72 ], p. 75), illustrated in Fig. 2 below. This model not only illustrates the connection between food literacy and nutrition but also provides guidance for process, outcome, and impact evaluations in an ideal food literacy intervention. The insights gained from these evaluations can be applied to improve the development and execution of future interventions ([ 72 ], p. 81). Hence, this model will guide the evaluation process.

Fig. 2 Second model of food literacy, depicting the relation between food literacy and nutrition. Note: adapted from Food Literacy: What Is It and Does It Influence What We Eat? by [ 72 ], p. 75

Below is an outline of how the three types of evaluation will be applied when reviewing interventions.

Process/implementation evaluation

The 11 components of food literacy may serve as a framework for the process evaluation of a food literacy intervention, as suggested by [ 72 ], p. 81. Constructs investigated under the process evaluation category will include which components of food literacy were addressed in the program, how the program was designed, the percentage of adult participants, the records of their socioeconomic and sociodemographic characteristics, whether the program’s effectiveness was measured according to attendee feedback, and the barriers/facilitators to implementation of program activities (Table 2).

Impact evaluation

To hold food literacy accountable for driving healthy eating practices, it is essential to measure the impact of food literacy interventions on health outcomes [ 27 ]. In this regard, Vidgen [ 72 ] proposed that analyzing the constructs of “certainty”, “choice”, and “pleasure” (Fig. 2) is crucial in determining the impact of a food literacy intervention. For instance, a food literacy program can have a positive “impact” by increasing “pleasure” in cooking or by providing more “choice” in selecting healthy and affordable food from the local food environment. Therefore, to evaluate the impact of food literacy interventions, this review will gather data on changes reported in various components of food literacy. These include planning food intake (component 1.1, under the domain of planning), reducing consumption of fast food and sugary drinks (component 4.2, under the domain of eating), and increasing self-reported cooking skills (component 3.1, under the domain of preparation).

These outcomes are indicative of the successful implementation of a food literacy intervention and can guide future development [ 6 ] (Table 3).

Outcome evaluation

As “outcome evaluation”, this review aims to determine any enduring effects on eating/dietary behavior after the delivery of a food literacy program. A prior study [ 5 ] employed a comparable methodology, assessing the impact of food literacy by tracking modifications in dietary behavior, which was deemed a critical metric for measuring outcomes of food literacy interventions. The following paragraphs discuss what is meant by “eating behavior” and the method that will be used to track changes in it.

“Eating behavior” and its related terms, including “dietary behavior”, “dietary intake”, “eating habits”, “diet”, and “food choice”, are broad and ambiguous ideas, and these terms are used interchangeably across academic fields [ 42 ]. In general, “eating behavior” or “dietary behavior” is an overarching concept that encompasses all the factors related to food consumption, including diet quality, food preferences and motives, eating patterns, and diet-related chronic diseases [ 37 , 60 ]. In this paper, the term “eating behavior” will be used consistently to refer to all of the above concepts.

There are different ways to measure different aspects of healthy eating behavior. In dietary behavior research, self-reported measures, such as 24-h dietary recalls, food records/diaries, and food frequency questionnaires (FFQs), are commonly employed to collect data [ 53 ], because it is generally not possible to objectively assess usual dietary intake in community-dwelling individuals [ 35 ]. Hence, as measures of dietary behavior, this review will include studies that have reported dietary outcomes through self-reported measures along with other measures.
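For illustration, a common step in handling FFQ data is converting categorical frequency responses into approximate daily frequencies so that items can be aggregated into intake estimates. The sketch below shows this conversion; the response categories and their daily equivalents are illustrative assumptions, not those of any specific validated FFQ or of the studies this review will include.

```python
# Minimal sketch: map FFQ frequency categories to approximate times/day.
# Categories and daily equivalents below are illustrative assumptions.
DAILY_EQUIVALENTS = {
    "never": 0.0,
    "1-3 times/month": 2.0 / 30,
    "1-2 times/week": 1.5 / 7,
    "3-4 times/week": 3.5 / 7,
    "5-6 times/week": 5.5 / 7,
    "daily": 1.0,
    "2+ times/day": 2.0,
}

def daily_frequency(responses: dict[str, str]) -> dict[str, float]:
    """Convert each food item's frequency category to times consumed per day."""
    return {item: DAILY_EQUIVALENTS[cat] for item, cat in responses.items()}

# Example: approximate daily consumption frequencies for two items.
print(daily_frequency({"vegetables": "5-6 times/week", "sugary drinks": "1-2 times/week"}))
```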

This review will rely on a range of measures, including the following:

Measures of diet quality: Diet Quality Indices (DQIs) serve as tools for assessing an individual’s overall diet quality. They score food and/or nutrient intakes, and sometimes lifestyle factors, based on how closely these align with dietary guidelines [ 77 ]. Examples of DQIs are the Healthy Eating Index (HEI), the Diet Quality Index (DQI), the Healthy Diet Indicator (HDI), the Mediterranean Diet Score (MDS) [ 25 , 75 ], and the single-item self-rated diet measure (SRD) [ 23 ].

Dietary intake indicators: e.g., the Household Dietary Diversity Score (HDDS), which measures food accessibility and socioeconomic status based on the types and quantity of food consumed over 24 h [ 34 ] (Table 4).
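As an illustration of how one such indicator is scored, the sketch below computes the HDDS following the FAO guideline cited above [ 34 ]: the score is the count of distinct food groups, out of twelve, that the household reports consuming in the previous 24 h. The group labels and input format are illustrative, not taken from the review protocol.

```python
# Minimal sketch of HDDS scoring: count of FAO food groups (out of 12)
# consumed by any household member in the past 24 hours.
# Group names below are illustrative paraphrases of the FAO groups.
FAO_FOOD_GROUPS = [
    "cereals", "roots_tubers", "vegetables", "fruits", "meat",
    "eggs", "fish_seafood", "legumes_nuts", "milk_dairy",
    "oils_fats", "sweets", "spices_condiments_beverages",
]

def hdds(consumed_groups: set[str]) -> int:
    """Return the HDDS (0-12) from the set of food groups reported as
    consumed by the household in the past 24 hours."""
    return sum(group in consumed_groups for group in FAO_FOOD_GROUPS)

# Example: a household reporting five groups scores 5.
print(hdds({"cereals", "vegetables", "legumes_nuts", "oils_fats", "sweets"}))
```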

(5) Data extraction

The lead researcher will extract the content of each study independently. The extracted findings will then be shared with the supervisory team for approval, and any conflicts will be resolved through discussion. The data extraction matrix will be revised and may be modified if required during the process.

Following the protocol, the data extraction matrix (an Excel sheet) will summarize the data under four main headings: (1) description of studies, (2) process evaluation, (3) impact evaluation, and (4) outcome evaluation. Under these four headings, the single constructs listed below will be assessed. An example of a data extraction matrix is attached in Appendix 3, and a schematic sketch of the matrix follows the list below.

Study details: (1) author, (2) study location, (3) sample size, (4) study design, (5) theoretical framework applied, (6) year of publication of the study results, (7) journal of publication

Population details: (8) socioeconomic characteristics of the target group (high and low socioeconomic groups; description of socioeconomic factors such as income, education, and occupation), (9) sociodemographic characteristics of the target group

Intervention details: (10) name of the intervention (food literacy program), (11) components of the intervention (e.g., the components of food literacy addressed), (12) duration of the intervention, (13) measurement tools (e.g., food literacy scale, food literacy questionnaire, and FFQ)

Impact details: (14) report of changes in food literacy behavior, (15) measurement tool, (16) findings

Outcome details: (17) report of changes in dietary behavior, (18) measurement tool, (19) findings
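For illustration, the sketch below lays out the extraction matrix as a spreadsheet schema, with hypothetical column names paraphrasing the constructs listed above. The grouping of intervention details under the process evaluation heading follows the evaluation framework described earlier and is an assumption; the registered matrix in Appendix 3 remains the authoritative version.

```python
# Illustrative skeleton of the data extraction matrix (column names are
# hypothetical paraphrases of the constructs listed above).
import pandas as pd

EXTRACTION_COLUMNS = {
    "Description of studies": [
        "author", "study_location", "sample_size", "study_design",
        "theoretical_framework", "publication_year", "journal",
        "ses_characteristics", "sociodemographic_characteristics",
    ],
    "Process evaluation": [
        "intervention_name", "fl_components_addressed",
        "intervention_duration", "measurement_tools",
    ],
    "Impact evaluation": [
        "fl_behavior_changes", "impact_measurement_tool", "impact_findings",
    ],
    "Outcome evaluation": [
        "dietary_behavior_changes", "outcome_measurement_tool", "outcome_findings",
    ],
}

# Flatten into an empty extraction sheet, one row per included study.
matrix = pd.DataFrame(columns=[c for cols in EXTRACTION_COLUMNS.values() for c in cols])
matrix.to_excel("extraction_matrix.xlsx", index=False)  # requires openpyxl
```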

(6) Assessment of risk of bias: study appraisal

To appraise the quantitative papers, the JBI critical appraisal checklist for analytical cross-sectional studies, an eight-item questionnaire, will be employed (The Joanna Briggs Institute [ 63 ]). For the qualitative studies, the Consolidated Criteria for Reporting Qualitative Studies (COREQ), which consists of 32 items, will be used [ 67 ].

(7) Analyze and synthesize the evidence

Synthesis of qualitative papers

The information extracted from each paper, including study details, population details, and intervention, impact, and outcome details, will be used to create evidence tables providing an overall description of the included studies.

Subsequently, two team members (AM and HV) will independently analyze the extracted data based on those predetermined categories.

Qualitative papers will be subjected to thematic analysis, as described by Braun and Clarke [ 11 ]. The thematic analysis aims to identify significant data patterns (“themes”) and establish a visual network and conceptual connections among these themes to address the primary and secondary research questions of this systematic review. During this process, both reviewers (AM and HV) will independently conduct line-by-line coding of the findings of the selected studies to identify recurring, unique, and contradictory content. These codes will then be used to form themes and a series of sub-themes [ 65 ]. The reviewers will use computer-assisted qualitative data analysis software (CAQDAS) such as NVivo to assist in this step. While the researcher creates the codes, NVivo can help with sorting, labeling, and organizing the codes (referred to as “nodes” in NVivo) and the data (Dhakal, 2022; NVivo, 2023). As thematic analysis is a comprehensive process, the reviewing team will convene for multiple meetings to arrive at consensus decisions. Investigator triangulation, with two or more researchers involved, will be employed to mitigate personal bias and ensure the inclusion of diverse perspectives [ 15 ].

Synthesis of quantitative papers

Due to the inherent nature of systematic reviews, it is anticipated that this review will encompass a wide range of quantitative studies characterized by diversity in the interventions (including duration and delivery model), study designs (e.g., cross-sectional and longitudinal cohort), study participants (e.g., physical condition, age, gender, and location), and outcomes/effects (varied measurement methods and durations). This variability is commonly referred to as “heterogeneity” in research [ 17 , 31 ]. As heterogeneity is expected, this review will use a meta-analytical method to combine study estimates and obtain a summary estimate (e.g., mean difference or standardized mean difference, with its 95% confidence interval) [ 54 ]. The most appropriate approach for the meta-analysis in this case is a random-effects meta-analysis, which can accommodate variation in the effects of different interventions [ 54 ]. In addition, forest plots will be used for visual examination of heterogeneity [ 31 ]. To assess the degree of heterogeneity statistically, three measures will be employed: (1) Cochran’s Q, to evaluate whether observed effects are consistent across studies; (2) Higgins and Thompson’s I², to assess the percentage of variability in effect sizes not caused by sampling error [ 30 ]; and (3) tau-squared, to estimate the variance of the underlying distribution of true effect sizes [ 29 ].
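As a concrete sketch of these statistics, the following code computes Cochran's Q, I², and an estimate of tau², then pools hypothetical study effects under a random-effects model. The DerSimonian-Laird tau² estimator is one common choice assumed here; the protocol does not prescribe a specific estimator, and the input values are made up.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird tau^2).
import numpy as np
from scipy import stats

effects = np.array([0.31, 0.12, 0.45, 0.20])    # hypothetical study effect sizes
variances = np.array([0.02, 0.05, 0.03, 0.04])  # hypothetical sampling variances

w = 1.0 / variances                      # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate

# Cochran's Q: weighted squared deviations from the fixed-effect estimate.
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
p_Q = stats.chi2.sf(Q, df)

# Higgins & Thompson's I^2: % of variability not due to sampling error.
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird tau^2: between-study variance of true effects.
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooling: weights incorporate tau^2.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"Q={Q:.2f} (p={p_Q:.3f}), I2={I2:.1f}%, tau2={tau2:.4f}")
print(f"Random-effects estimate: {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```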

The results from the quantitative and qualitative syntheses will be integrated to produce a final synthesis, providing a comprehensive understanding of how different aspects of the research relate to one another. The qualitative papers will be analyzed to develop a set of recommendations for interventions that are in line with the perspectives of adult attendees. These recommendations will then be used to evaluate the interventions analyzed through quantitative synthesis and determine the level of alignment between the interventions and our recommendations [ 28 , 52 ].

(8) Report the findings

The findings of this review are intended for publication in a scholarly journal focused on public health or nutrition science. Results may also be shared through other channels, such as conferences. To ensure data transparency and accessibility, all data resulting from this review will be uploaded to the Queensland University of Technology’s repository. The reviewers wish the significant findings to be widely and readily available to those who can benefit from this research.

(9) Strengths and limitations

This will be the first review to synthesize evidence on the links among food literacy, socioeconomic position, and healthy eating.

The results will aid in comprehending whether previous food literacy interventions have effectively assisted individuals belonging to low-socioeconomic groups in adopting healthy eating habits.

Few studies have conducted post-program evaluation of food literacy interventions, specifically in relation to the food literacy domains or the three levels of food literacy outcome.

It is possible that some interventions aimed at improving food literacy behavior may be missed due to the fact that not all studies use the term “food literacy” directly and instead focus on enhancing specific components related to it.

It is important to note that the review will have some limitations regarding bias. Specifically, certain countries, papers written in languages other than English, and specific population groups will be intentionally excluded, introducing selection bias. These decisions were made to narrow the review’s scope and ensure relevant information is gathered. Future reviews would benefit from examining literature from low- and middle-income nations and from involving children and elderly individuals who have firsthand experience of attending a food literacy program.

Availability of data and materials

This article does not involve data sharing since no datasets were produced or examined in the present study.

Alkerwi A, Vernier C, Sauvageot N, Crichton GE, Elias MF. Demographic and socioeconomic disparity in nutrition: application of a novel Correlated Component Regression approach. BMJ Open. 2015;5(5):e006814–e006814. https://doi.org/10.1136/bmjopen-2014-006814 .


Australian Institute of Health. (2000). Australia's Health. Australian Government Pub. Service.

Australian Institute of Health & Welfare. (2016). 4.1 Social determinants of health. Retrieved from https://www.aihw.gov.au/getmedia/11ada76c-0572-4d01-93f4-d96ac6008a95/ah16-4-1-social-determinants-health.pdf.aspx

Australian Institute of Health & Welfare. (2022). Australia's children. Material deprivation. https://www.aihw.gov.au/reports/children-youth/australias-children/contents/income-finance-and-employment/material-deprivation

Begley A, Gallegos D, Vidgen H. Effectiveness of Australian cooking skill interventions. British Food Journal. 2017;119(5):973–91. https://doi.org/10.1108/bfj-10-2016-0451 .


Begley, A., Paynter, E., Butcher, L., Bobongie, V., & Dhaliwal, S. S. (2020). Identifying who improves or maintains their food literacy behaviours after completing an adult program. Int J Environ Res Public Health, 17(12). https://doi.org/10.3390/ijerph17124462

Begley, A., Paynter, E., Butcher, L. M., & Dhaliwal, S. S. (2019a). Effectiveness of an adult food literacy program. Nutrients, 11(4). https://doi.org/10.3390/nu11040797

Begley, A., Paynter, E., Butcher, L. M., & Dhaliwal, S. S. (2019b). Examining the association between food literacy and food insecurity. Nutrients, 11(2). https://doi.org/10.3390/nu11020445

Begley A, Paynter E, Dhaliwal S. Evaluation tool development for food literacy programs. Nutrients. 2018;10(11):1617. https://doi.org/10.3390/nu10111617 .

Benn, J. (2014). Food, nutrition or cooking literacy - a review of concepts and competencies regarding food education. International Journal of Home Economics, 7(1), 13-35. https://doi.org/10.3316/informit.511373079815906

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Braveman, P., & Gottlieb, L. (2014). The Social Determinants of Health: It's Time to Consider the Causes of the Causes. Public Health Reports, 129(1_suppl2), 19-31. https://doi.org/10.1177/00333549141291s206

Butcher LM, Platts JR, Le N, McIntosh MM, Celenza CA, Foulkes-Taylor F. Can addressing food literacy across the life cycle improve the health of vulnerable populations? A case study approach. Health Promotion Journal of Australia. 2021;32(S1):5–16. https://doi.org/10.1002/hpja.414 .


Cardoso, A. P., Ferreira, V., Leal, M., Ferreira, M., Campos, S., & Guiné, R. P. F. (2020). Perceptions about healthy eating and emotional factors conditioning eating behaviour: a study involving Portugal, Brazil and Argentina. Foods, 9(9), 1236. https://www.mdpi.com/2304-8158/9/9/1236

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545-547. https://www.proquest.com/scholarly-journals/use-triangulation-qualitative-research/docview/1559261620/se-2

Chapman GE, Beagan B. Women’s perspectives on nutrition, health, and breast cancer. Journal of nutrition education and behavior. 2003;35(3):135–41. https://doi.org/10.1016/S1499-4046(06)60197-8 .

CochraneTraining. (2024). Handling heterogeneity in Cochrane reviews. https://training.cochrane.org/msu-web-clinic-april-2023

Colatruglio, S., & Slater, J. (2014). Food literacy: bridging the gap between food, nutrition and well-being. Sustainable well-being: Concepts, issues, and educational practices, 37-55. https://www.researchgate.net/profile/Joyce-Slater/publication/269394694_Food_Literacy_Bridging_the_Gap_between_Food_Nutrition_and_Well-Being/links/54889f910cf289302e30b685/Food-Literacy-Bridging-the-Gap-between-Food-Nutrition-and-Well-Being.pdf

Cullen, T., Hatch, J., Martin, W., Higgins, J. W., & Sheppard, R. (2015). Food literacy: definition and framework for action. Canadian Journal of Dietetic Practice and Research, 76(3), 140-145. https://doi.org/10.3148/cjdpr-2015-010

Darmon N, Drewnowski A. Does social class predict diet quality? The American journal of clinical nutrition. 2008;87(5):1107–17.


Desjardins, E., & Azevedo, E. (2013). "Making something out of nothing": food literacy among youth, young pregnant women and young parents who are at risk for poor health. Ontario Public Health Association. https://foodsecurecanada.org/sites/foodsecurecanada.org/files/food_literacy_study_technical_report_web_final.pdf

Friel, S., Hattersley, L., Ford, L., & O'Rourke, K. (2015). Addressing inequities in healthy eating. Health Promotion International, 30(suppl_2), ii77-ii88. https://doi.org/10.1093/heapro/dav073

Gago CM, Lopez-Cepero A, O’Neill J, Tamez M, Tucker K, Orengo JFR, Mattei J. Association of a single-item self-rated diet construct with diet quality measured with the alternate healthy eating index. Front Nutr. 2021;8: 646694. https://doi.org/10.3389/fnut.2021.646694 .

Galobardes B, Morabia A, Bernstein MS. Diet and socioeconomic position: does the use of different indicators matter? International Journal of Epidemiology. 2001;30(2):334–40. https://doi.org/10.1093/ije/30.2.334 .

Gil Á, de Victoria EM, Olza J. Indicators for the evaluation of diet quality. Nutricion hospitalaria. 2015;31(3):128–44.


Gillies C, Super S, Te Molder H, De Graaf K, Wagemakers A. Healthy eating strategies for socioeconomically disadvantaged populations: a meta-ethnography. International Journal of Qualitative Studies on Health and Well-being. 2021;16(1):1942416. https://doi.org/10.1080/17482631.2021.1942416 .

Gillis, D. E. (2016). Using a healthy literacy frame to conceptulize food literacy. In H. Vidgen (Ed.), Food literacy. Key concepts for health and education (pp. 85-101). Routledge. Taylor & Francis Group.

Gough D. Qualitative and mixed methods in systematic reviews. Systematic Reviews. 2015;4(1):181. https://doi.org/10.1186/s13643-015-0151-y .

Harrer, M., Cuijpers, P., Furukawa, T. A., & Ebert, D. D. (2021). Doing meta-analysis with R: a hands-on guide (1st ed.). Chapman & Hall/CRC Press. https://www.routledge.com/Doing-Meta-Analysis-with-R-A-Hands-On-Guide/Harrer-Cuijpers-Furukawa-Ebert/p/book/9780367610074

Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Statistics in medicine. 2002;21(11):1539–58.

Higgins, J. P. T., & Li, T. (2022). Exploring heterogeneity. In Systematic Reviews in Health Research (pp. 185-203). https://doi.org/10.1002/9781119099369.ch10

Hughes, R. (2011). Practical Public Health Nutrition. John Wiley & Sons, Incorporated. http://ebookcentral.proquest.com/lib/qut/detail.action?docID=624761

Hutchinson J, Watt JF, Strachan EK, Cade JE. Evaluation of the effectiveness of the Ministry of Food cooking programme on self-reported food consumption and confidence with cooking. Public Health Nutrition. 2016;19(18):3417–27. https://doi.org/10.1017/s1368980016001476 .

Kennedy, G., Ballard, T., & Dop, M. C. (2011). Guidelines for measuring household and individual dietary diversity. Food and Agriculture Organization of the United Nations. https://www.fao.org/fileadmin/user_upload/wa_workshop/docs/FAO-guidelines-dietary-diversity2011.pdf

Kirkpatrick, S., & Raffoul, A. (2017). Measures registry user guide: individual diet. Washington DC: National Collaborative on Childhood Obesity Research. https://www.nccor.org/tools-mruserguides/individual-diet/key-considerations-in-measuring-dietary-behavior-among-children/#box1

Kolasa KM, Peery A, Harris NG, Shovelin K. Food literacy partners program: a strategy to increase community food literacy. Topics in clinical nutrition. 2001;16(4):1–10. https://doi.org/10.1097/00008486-200116040-00002 .

Lacaille, L., Patino-Fernandez, A. M., Monaco, J., Ding, D., Upchurch Sweeney, C. R., Butler, C. D., Soskolne, C. L., Gidron, Y., Gidron, Y., Turner, J. R., Turner, J. R., Butler, J., Burns, M. N., Mohr, D. C., Molton, I., Carroll, D., Critchley, H., Nagai, Y., Baumann, L. C., . . . Söderback, I. (2013). Eating Behavior. In (pp. 641-642). Springer New York. https://doi.org/10.1007/978-1-4419-1005-9_1613

Lewis M, McNaughton SA, Rychetnik L, Chatfield MD, Lee AJ. Dietary intake, cost, and affordability by socioeconomic group in Australia. International Journal of Environmental Research and Public Health. 2021;18(24):13315. https://doi.org/10.3390/ijerph182413315 .

Lim, S. S., Vos, T., Flaxman, A. D., Danaei, G., Shibuya, K., Adair-Rohani, H., Amann, M., Anderson, H. R., Andrews, K. G., Aryee, M., Atkinson, C., Bacchus, L. J., Bahalim, A. N., Balakrishnan, K., Balmes, J., Barker-Collo, S., Baxter, A., Bell, M. L., Blore, J. D., . . . Memish, Z. A. (2012). A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet, 380(9859), 2224-2260. https://doi.org/10.1016/s0140-6736(12)61766-8

Livingstone KM, Olstad DL, Leech RM, Ball K, Meertens B, Potter J, Cleanthous X, Reynolds R, McNaughton SA. Socioeconomic inequities in diet quality and nutrient intakes among Australian adults: findings from a nationally representative cross-sectional study. Nutrients. 2017;9(10):1092. https://doi.org/10.3390/nu9101092 .


Macías, Y. F., & Glasauer, P. (2014). Guidelines for assessing nutrition-related knowledge, attitudes and practices. Food and Agriculture Organization of the United Nations (FAO).

Marijn Stok F, Renner B, Allan J, Boeing H, Ensenauer R, Issanchou S, Kiesswetter E, Lien N, Mazzocchi M, Monsivais P, Stelmach-Mardas M, Volkert D, Hoffmann S. Dietary behavior: an interdisciplinary conceptual analysis and taxonomy. Front Psychol. 2018;9:1689. https://doi.org/10.3389/fpsyg.2018.01689 .

Martinez-Lacoba R, Pardo-Garcia I, Amo-Saus E, Escribano-Sotos F. Social determinants of food group consumption based on Mediterranean diet pyramid: a cross-sectional study of university students. PLOS ONE. 2020;15(1): e0227620. https://doi.org/10.1371/journal.pone.0227620 .

Martínez-Vargas L, Vermandere H, Bautista-Arredondo S, Colchero MA. The role of social determinants on unhealthy eating habits in an urban area in Mexico: a qualitative study in low-income mothers with a young child at home. Appetite. 2022;169: 105852. https://doi.org/10.1016/j.appet.2021.105852 .

Nicholls R, Perry L, Duffield C, Gallagher R, Pierce H. Barriers and facilitators to healthy eating for nurses in the workplace: an integrative review. J Adv Nurs. 2017;73(5):1051–65. https://doi.org/10.1111/jan.13185 .

Ontario Ministry of Health Promotion. (2010). Healthy eating, physical activity &healthy weights guidance document. Ontario: Ministry of Health Promotion Retrieved from https://www.health.gov.on.ca/en/pro/programs/publichealth/oph_standards/docs/mhp/HealthyEating-PhysicalActivity-HealthyWeights.pdf

OzHarvest. (2023). NEST. https://www.ozharvest.org/education/nest/

Pampel FC, Krueger PM, Denney JT. Socioeconomic disparities in health behaviors. Annu Rev Sociol. 2010;36:349–70. https://doi.org/10.1146/annurev.soc.012809.102529 .

Paquette M-C. Perceptions de la saine alimentation: État actuel des connaissances et lacunes au niveau de la recherche. Canadian Journal of Public Health. 2005;96(S3):S16–21. https://doi.org/10.1007/bf03405196 .


Peters MD, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Scoping reviews. Joanna Briggs Institute reviewer’s manual. 2017;2015:1–24.


Peters, M. D., Godfrey, C., McInerney, P., Munn, Z., Tricco, A. C., & Khalil, H. (2020). Chapter 11: Scoping reviews (2020 version). JBI Manual for Evidence Synthesis, 406-451. https://jbi-global-wiki.refined.site/space/MANUAL/4687342/Chapter+11%3A+Scoping+reviews

Petticrew M, Rehfuess E, Noyes J, Higgins JP, Mayhew A, Pantoja T, Shemilt I, Sowden A. Synthesizing evidence on complex interventions: how meta-analytical, qualitative, and mixed-method approaches can contribute. Journal of clinical epidemiology. 2013;66(11):1230–43.

Ravelli MN, Schoeller DA. Traditional self-reported dietary instruments are prone to inaccuracies and new approaches are needed. Front Nutr. 2020;7:90. https://doi.org/10.3389/fnut.2020.00090 .

Riley RD, Higgins JPT, Deeks JJ. Interpretation of random effects meta-analyses. BMJ. 2011;342: d549. https://doi.org/10.1136/bmj.d549 .

Salabarría-Peña, Y., Apt, B., & Walsh, C. (2007). Practical use of program evaluation among sexually transmitted disease (STD) programs. Atlanta, GA: Centers for Disease Control and Prevention. https://www.cdc.gov/std/program/pupestd/Step3_0215.pdf

Short S, Mollborn S. Social determinants and health behaviors: conceptual frames and empirical advances. Curr Opin Psychol. 2015;5:78–84. https://doi.org/10.1016/j.copsyc.2015.05.002 .

Sobal J, Bisogni CA. Constructing food choice decisions. Ann Behav Med. 2009;38(Suppl 1):S37-46. https://doi.org/10.1007/s12160-009-9124-5 .

Sobal, J., Bisogni, C. A., & Jastran, M. (2014). Food choice is multifaceted, contextual, dynamic, multilevel, integrated, and diverse. Mind, Brain, and Education, 8(1), 6-12. https://doi.org/10.1111/mbe.12044

Solar O, Irwin A. A conceptual framework for action on the social determinants of health. In. Geneva: World Health Organization; 2010.

Stok, F. M., Hoffmann, S., Volkert, D., Boeing, H., Ensenauer, R., Stelmach-Mardas, M., Kiesswetter, E., Weber, A., Rohm, H., Lien, N., Brug, J., Holdsworth, M., & Renner, B. (2017). The DONE framework: Creation, evaluation, and updating of an interdisciplinary, dynamic framework 2.0 of determinants of nutrition and eating. PLOS ONE, 12(2), e0171077. https://doi.org/10.1371/journal.pone.0171077

Story M, Kaphingst KM, Robinson-O’Brien R, Glanz K. Creating Healthy Food and Eating Environments: Policy and Environmental Approaches. Annual Review of Public Health. 2008;29(1):253–72. https://doi.org/10.1146/annurev.publhealth.29.020907.090926 .

Swinburn BA, Sacks G, Hall KD, McPherson K, Finegood DT, Moodie ML, Gortmaker SL. The global obesity pandemic: shaped by global drivers and local environments. The lancet. 2011;378(9793):804–14.

The Joanna Briggs Institute. (2017). The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews.Checklist for Analytical Cross Sectional Studies. https://jbi.global/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Analytical_Cross_Sectional_Studies2017_0.pdf

Thomas, H., Azevedo Perry, E., Slack, J., Samra, H. R., Manowiec, E., Petermann, L., Manafò, E., & Kirkpatrick, S. I. (2019). Complexities in conceptualizing and measuring food literacy. Journal of the Academy of Nutrition and Dietetics, 119(4), 563-573. https://doi.org/10.1016/j.jand.2018.10.015

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Thompson C, Adams J, Vidgen HA. Are we closer to international consensus on the term ‘food literacy’? A systematic scoping review of its use in the academic literature (1998–2019). Nutrients. 2021;13(6):2006.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042 .

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MD, Horsley T, Weeks L. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Annals of internal medicine. 2018;169(7):467–73.

Turner, C., Kalamatianou, S., Drewnowski, A., Kulkarni, B., Kinra, S., & Kadiyala, S. (2020). Food environment research in low- and middle-income countries: a systematic scoping review. Advances in Nutrition, 11(2), 387-397. https://doi.org/10.1093/advances/nmz031

United Nations Development Program. (2023). Human Development Index (HDI). https://hdr.undp.org/data-center/human-development-index#/indicies/HDI

Velardo SP. The nuances of health literacy, nutrition literacy, and food literacy. Journal of nutrition education and behavior. 2015;47(4):385-389.e381. https://doi.org/10.1016/j.jneb.2015.04.328 .

Vidgen, H. (2016). Food Literacy: Key concepts for health and education. https://www.routledge.com/Food-Literacy-Key-concepts-for-health-and-education/Vidgen/p/book/9781138898523

Vidgen HA, Gallegos D. Defining food literacy and its components. Appetite. 2014;76:50–9. https://doi.org/10.1016/j.appet.2014.01.010 .

Vos, T., Lim, S. S., Abbafati, C., Abbas, K. M., Abbasi, M., Abbasifard, M., Abbasi-Kangevari, M., Abbastabar, H., Abd-Allah, F., Abdelalim, A., Abdollahi, M., Abdollahpour, I., Abolhassani, H., Aboyans, V., Abrams, E. M., Abreu, L. G., Abrigo, M. R. M., Abu-Raddad, L. J., Abushouk, A. I., . . . Murray, C. J. L. (2020). Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. The Lancet, 396(10258), 1204-1222. https://doi.org/10.1016/s0140-6736(20)30925-9

Waijers PMCM, Feskens EJM, Ocké MC. A critical review of predefined diet quality scores. British Journal of Nutrition. 2007;97(2):219–31. https://doi.org/10.1017/s0007114507250421 .

West EG, Lindberg R, Ball K, McNaughton SA. The role of a food literacy intervention in promoting food security and food literacy—OzHarvest’s NEST Program. Nutrients. 2020;12(8):2197. https://doi.org/10.3390/nu12082197 .

Wirt A, Collins CE. Diet quality–what is it and does it matter? Public Health Nutr. 2009;12(12):2473–92. https://doi.org/10.1017/s136898000900531x .

World Health Organization. (2023). Social determinants of health. https://www.who.int/health-topics/social-determinants-of-health#tab=tab_1

World Health Organization. (2020). Healthy diet. https://www.who.int/news-room/fact-sheets/detail/healthy-diet

World Population Review. (2023). Human Development Index (HDI) by Country 2023. https://worldpopulationreview.com/country-rankings/hdi-by-country

Zorbas C, Palermo C, Chung A, Iguacel I, Peeters A, Bennett R, Backholer K. Factors perceived to influence healthy eating: a systematic review and meta-ethnographic synthesis of the literature. Nutr Rev. 2018;76(12):861–74. https://doi.org/10.1093/nutrit/nuy043 .


Acknowledgements

The authors express their gratitude to Mr Peter Sondergeld, liaison librarian (Faculty of Health) at the Queensland University of Technology, for his assistance with the search process.

Funding

This article received no external funding.

Author information

Authors and affiliations

Queensland University of Technology, Brisbane, Queensland, Australia

Arijita Manna, Helen Vidgen & Danielle Gallegos


Contributions

The study was conceptualized and designed by AM and HV, who also developed the search strategy. AM initially drafted the protocol, which was later revised by HV and DG. All authors carefully reviewed the final protocol and provided their approval.

Corresponding author

Correspondence to Arijita Manna.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that there are no conflicts of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Manna, A., Vidgen, H. & Gallegos, D. Examining the effectiveness of food literacy interventions in improving food literacy behavior and healthy eating among adults belonging to different socioeconomic groups- a systematic scoping review. Syst Rev 13, 221 (2024). https://doi.org/10.1186/s13643-024-02632-y


Received: 30 July 2023

Accepted: 30 July 2024

Published: 28 August 2024

DOI: https://doi.org/10.1186/s13643-024-02632-y


Keywords

  • Food literacy
  • Intervention
  • Healthy eating
  • Socioeconomic position



The revised JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials

Barker, Timothy Hugh 1 ; Stone, Jennifer C. 1 ; Sears, Kim 2 ; Klugar, Miloslav 3 ; Tufanaru, Catalin 4 ; Leonardi-Bee, Jo 5 ; Aromataris, Edoardo 1 ; Munn, Zachary 1

1 JBI, Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, SA, Australia

2 Queen’s Collaboration for Health Care Quality, Queen’s University, Kingston, ON, Canada

3 Czech National Centre for Evidence-Based Healthcare and Knowledge Translation (Cochrane Czech Republic, The Czech Republic (Middle European) Centre for Evidence-Based Healthcare: A JBI Centre of Excellence, Masaryk University GRADE Centre), Faculty of Medicine, Institute of Biostatistics and Analyses, Masaryk University, Brno, Czech Republic

4 Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW, Australia

5 The Nottingham Centre for Evidence Based Healthcare: A JBI Centre of Excellence, School of Medicine, University of Nottingham, Nottingham, UK

THB, JCS, EA, and ZM are paid employees of JBI, The University of Adelaide, and are members of the JBI Scientific Committee. THB, JCS, KS, MK, JLB, EA, and ZM are members of the JBI Effectiveness Methodology Group. MK is an associate editor, JLB is a senior associate editor, and EA is editor in chief of JBI Evidence Synthesis. No authors were involved in the editorial processing of this manuscript.

Correspondence: Timothy Hugh Barker, [email protected]

Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website, www.jbievidencesynthesis.com.

JBI recently began the process of updating and revising its suite of critical appraisal tools to ensure that these tools remain compatible with recent developments within risk of bias science. Following a rigorous development process led by the JBI Effectiveness Methodology Group, this paper presents the revised critical appraisal tool for the assessment of risk of bias for randomized controlled trials. This paper also presents practical guidance on how the questions of this tool are to be interpreted and applied by systematic reviewers, while providing topical examples. We also discuss the major changes made to this tool compared to the previous version and justification for why these changes facilitate best-practice methodologies in this field.

Introduction

Systematic reviews are a foundational and fundamental component in the practice of evidence-based health care. They involve the collation and synthesis of the results of multiple independent studies that address the same research question. Prior to the creation of these synthesized results, all studies that have been selected for inclusion in the review (ie, those that meet the a priori eligibility criteria) 1 must undergo a process of critical appraisal. 2,3 The purpose of this appraisal (for quantitative evidence) is to determine the extent to which a study has addressed the possibility of bias in its design, conduct, or analysis. Subjecting every study included in a systematic review to rigorous critical appraisal allows reviewers to appropriately consider how the conduct of individual studies may impact the synthesized result, thus enabling the synthesized result to be correctly interpreted. 4

Recent advancements in the science of risk of bias assessment 5–7 have argued that only questions related to a study's internal validity should be considered in the assessment of that study's inherent biases; this assessment often occurs during a structured and transparent critical appraisal process. For example, a question about how generalizable a participant sample is to the broader population does not affect that study's internal validity, and thus its inherent biases, 5–8 but is still useful for describing the study's external validity. There is also now an expectation that assessments of bias occur at different levels, including outcome-level and result-level assessments, which may differ within the same study depending on the outcome or result being assessed. 5,8 These (and other) advancements have been discussed previously in an introduction to this body of work. 8

It is acknowledged that the existing suite of JBI critical appraisal instruments is not aligned to these recent advancements and conflates the process of critical appraisal with that of risk of bias assessment. Therefore, the JBI Effectiveness Methodology Group, under the auspices of the JBI Scientific Committee, updated the entire suite of JBI critical appraisal tools to be better aligned to best-practice methodologies. 8 This paper introduces the revised critical appraisal tool for randomized controlled trials (RCTs) and provides step-by-step guidance on how to use and implement this tool in future systematic reviews. We also clearly document and justify each major change made in this revised tool.

In 2021, a working group of researchers and methodologists known as the JBI Effectiveness Methodology Group was tasked by the JBI Scientific Committee 9 to revise the current suite of JBI critical appraisal tools for quantitative analytical study designs. The aim of this work was to improve the longevity and usefulness of these tools and to reflect current advancements made in this space, 5–7 while adhering to the reporting and methodological requirements established by PRISMA 2020 10 and GRADE. 11 To summarize this process, the JBI Effectiveness Methodology Group began by cataloguing the questions asked in each JBI critical appraisal tool for study designs that employ quantitative data. These questions were ordered into constructs of validity (internal, statistical conclusion, comprehensiveness of reporting, external) through a series of roundtable discussions between members of the JBI Effectiveness Methodology Group. Questions related to the internal validity construct were further catalogued to a domain of bias through a series of mapping exercises and roundtable discussions. Finally, questions were separated based on whether they were answered at the study, outcome, or result level. The full methodological processes undertaken for this revision, including the rationale for all decisions made, have been documented in a separate paper. 8
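For orientation, the construct, domain, and level to which each question belongs (as catalogued by this process and detailed later in this paper) can be written down as a simple lookup structure. The sketch below is our own illustrative Python rendering, not part of the tool itself; only the labels and groupings come from the text.

```python
# Illustrative only: question number -> (validity construct, bias domain,
# level at which the question is answered), per the groupings in the text.
TOOL_STRUCTURE = {
    1:  ("internal validity", "selection and allocation", "study"),
    2:  ("internal validity", "selection and allocation", "study"),
    3:  ("internal validity", "selection and allocation", "study"),
    4:  ("internal validity", "administration of intervention/exposure", "study"),
    5:  ("internal validity", "administration of intervention/exposure", "study"),
    6:  ("internal validity", "administration of intervention/exposure", "study"),
    7:  ("internal validity", "assessment, detection, and measurement of the outcome", "outcome"),
    8:  ("internal validity", "assessment, detection, and measurement of the outcome", "outcome"),
    9:  ("internal validity", "assessment, detection, and measurement of the outcome", "outcome"),
    10: ("internal validity", "participant retention", "result"),
    11: ("statistical conclusion validity", None, "result"),
    12: ("statistical conclusion validity", None, "result"),
    13: ("statistical conclusion validity", None, "study"),
}

# e.g. list the questions that feed a risk-of-bias judgment, which the
# paper ties to the internal validity construct only.
rob_questions = [q for q, (construct, _, _) in TOOL_STRUCTURE.items()
                 if construct == "internal validity"]
print(rob_questions)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```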

How to use the revised tool

The key changes

Similar to previous versions of these tools, the revised JBI critical appraisal tool for RCTs presents a series of questions. These questions aim to identify whether certain safeguards have been implemented by the study to minimize risk of bias or to address other aspects relating to the validity or quality of the study. Each question can be scored as being met (yes), unmet (no), unclear, or not applicable. As described previously, 8 the wording of the questions presented in the revised JBI critical appraisal tool for RCTs has not been altered from the wording of the questions presented in the previous version of the JBI critical appraisal tool for RCTs. 4 However, the organization of these questions, the order in which they should be addressed and answered, and the means to answer them have been changed.

The questions of this revised tool have been presented according to the construct of validity to which they pertain. The specific validity constructs that are pertinent to the revised JBI critical appraisal tool for RCTs include internal validity and statistical conclusion validity. Questions that have been organized under the internal validity construct have been further organized according to the domain of bias that they are specifically addressing. The domains of bias relevant to the revised JBI critical appraisal tool for RCTs include bias related to selection and allocation; administration of intervention/exposure; assessment, detection and measurement of the outcome; and participant retention. A detailed description of these validity constructs and domains of biases is reported in a separate paper. 8

The principal differences between the revised JBI critical appraisal tool for RCTs and its predecessor are its structure and organization, which are now deliberately designed to facilitate judgments related to risk of bias at different levels (eg, bias at the study level, outcome level, or result level), where appropriate. 8 For the questions that are to be answered at the outcome level (questions 7–12), the tool provides the ability to respond to the questions for up to 7 outcomes. The limit of 7 outcomes ensures that the tool aligns with the maximum number of outcomes recommended to be included in a GRADE Summary of Findings or Evidence Profile. 12 For the questions to be answered at the result level (questions 10–12), the tool presents the option to record a different decision for 3 results for each outcome presented (by default). Reviewers may face cases where there are fewer than 7 outcomes being appraised for a particular RCT, and there are more than 3 results being appraised per outcome. The tool can be edited as required by the review team to facilitate their use in these cases.

For example, consider a hypothetical RCT that has included 2 outcomes relevant to a question of a systematic review team. These outcomes are mortality and quality of life, both of which have been measured at 2 time points within the study. When using this tool, questions 1–6 and 13 are universal to both outcomes, as they are addressed at the study level and are only answered once. The reviewer should then address questions 7–9 twice, once for each outcome that is being appraised. Likewise, questions 10–12 should be addressed separately for both outcomes but should also be assessed for each of the results that has contributed data toward that outcome (eg, mortality at time points 1 and 2). In this example, the reviewer would assess questions 10–12 four different times. It is also important to note that, as with other critical appraisal tools, 3,13 this tool should be applied in duplicate and independently during the systematic review process. Reviewers should also take care to appraise only outcomes that are relevant to their systematic review question. If the only relevant outcome from this RCT for the systematic review question was mortality, then appraising the outcome quality of life would not be expected.
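To make the bookkeeping concrete, here is a minimal sketch of how the worked example above could be recorded. The record types and names are hypothetical, not part of the tool; only the question numbering and the study/outcome/result levels come from the paper, and the invented answers match the Table 1 example shown later.

```python
from dataclasses import dataclass, field

# Hypothetical record types for the worked example above. Study-level
# answers (questions 1-6 and 13) are stored once, outcome-level answers
# (questions 7-9) once per outcome, and result-level answers (questions
# 10-12) once per result within each outcome.
Response = str  # "yes" | "no" | "unclear" | "n/a"

@dataclass
class ResultAppraisal:
    label: str                    # e.g. "time point 1"
    answers: dict[int, Response]  # questions 10-12

@dataclass
class OutcomeAppraisal:
    name: str                     # e.g. "mortality"
    answers: dict[int, Response]  # questions 7-9
    results: list[ResultAppraisal] = field(default_factory=list)

@dataclass
class StudyAppraisal:
    study_id: str
    answers: dict[int, Response]  # questions 1-6 and 13
    outcomes: list[OutcomeAppraisal] = field(default_factory=list)

# The hypothetical RCT: 2 outcomes x 2 time points, so questions 10-12
# are answered 4 times in total, exactly as the text describes.
rct = StudyAppraisal(
    study_id="Study 1",
    answers={1: "yes", 2: "yes", 3: "yes", 4: "no", 5: "yes", 6: "yes", 13: "yes"},
    outcomes=[
        OutcomeAppraisal("mortality", {7: "yes", 8: "yes", 9: "yes"}, [
            ResultAppraisal("time point 1", {10: "yes", 11: "yes", 12: "yes"}),
            ResultAppraisal("time point 2", {10: "yes", 11: "yes", 12: "yes"}),
        ]),
        OutcomeAppraisal("quality of life", {7: "no", 8: "yes", 9: "yes"}, [
            ResultAppraisal("time point 1", {10: "yes", 11: "yes", 12: "yes"}),
            ResultAppraisal("time point 2", {10: "no", 11: "yes", 12: "yes"}),
        ]),
    ],
)

# Count how many times questions 10-12 were answered: 4.
print(sum(len(o.results) for o in rct.outcomes))
```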

Interpretation of critical appraisal

Some reviewers may take the approach of removing studies from progressing to data extraction or synthesis in their review following the critical appraisal process. Removal of a study following critical appraisal may hinge on a certain criterion not being met (eg, randomization not being demonstrated may warrant removal, assuming the review does not also include other study designs with lesser internal validity due to not attempting randomization). Another approach is for the review team to weight each question of the tool (eg, randomization may be twice as important as blinding of the outcome assessors); if a study fails to meet a predetermined weight (decided by the review team), then it may be removed. Other approaches are to use simple cutoff scores (eg, if a study is scored with 10 “yes” responses, then it is included) or to exclude studies that have been judged as having a high risk of bias. 8 However, we do not recommend that studies be removed from a systematic review following critical appraisal.
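For concreteness, the weighting and cutoff approaches just described might be implemented as in the sketch below; note that, per the recommendation above and the next paragraph, JBI does not endorse using such scores to remove studies. All weights and the threshold here are invented for illustration.

```python
# Hypothetical weighting scheme: randomization (Q1) counts twice as much
# as blinding of the outcome assessors (Q7), echoing the example above.
# Every weight and the threshold are invented, not prescribed by JBI.
WEIGHTS = {1: 2.0, 2: 1.5, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0}

def weighted_score(answers: dict[int, str]) -> float:
    """Sum the weights of the questions answered 'yes'."""
    return sum(w for q, w in WEIGHTS.items() if answers.get(q) == "yes")

answers = {1: "yes", 2: "yes", 3: "yes", 4: "no", 5: "no", 6: "yes", 7: "yes"}
threshold = 6.0  # a predetermined weight decided by the review team
print(weighted_score(answers), weighted_score(answers) >= threshold)  # 6.5 True
```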

Removing studies presupposes that the purpose of a systematic review is to permit the synthesis of only high-quality studies. While this may readily promote alignment to the best available evidence, it limits the full potential of evidence synthesis to investigate all eligible studies and their data, and to provide a complete view of the evidence available to inform the review question. 14,15 There are several other approaches to incorporating the results of critical appraisal into the systematic review or meta-analysis, including meta-regression, elicitation of expert opinion, the use of prior distributions, and quality-effect modeling. 16 However, these techniques demand appropriate statistical expertise and are beyond the scope of this paper. Regardless of the approach ultimately chosen by the reviewers, the results of the critical appraisal process should always be considered in the analysis and interpretation of the findings of the synthesis.

Overall assessment and presentation of results

Previous iterations of the JBI critical appraisal tool for RCTs intuitively supported reviewers assessing the overall quality of a study using a checklist-based or scale-based tool structure (each item can be quantified, and the items summed to provide an overall quality score). 8 The revised tool has been designed to also facilitate judgments specific to the domains of bias to which the questions belong.

A reviewer may determine that for study 1, there was a low risk of bias for the domain “selection and allocation,” as all questions received a response of “yes.” However, for study 2, a reviewer may determine a moderate risk of bias for the same domain, as the response to one of the questions was “no.” Importantly, we provide no thresholds for grading of bias severity (ie, low, moderate, high, critical, or other approaches) and leave this to the discretion of the user and specific context in which they are working. Considering the questions and assessments in this regard, looking across all included studies (or a single study) can permit the reviewer to readily comment on how the risk of bias may impact on the certainty of their results at this domain level in the GRADE approach. A judgment-based approach as described here is one way for users to adopt the revised JBI critical appraisal tool for RCTs; however, the tool is still compatible with either a checklist-based or scale-based structure, 8 and the decision of which approach to follow is left to the discretion of the review team.
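A minimal sketch of the domain-level judgment just described, assuming one possible convention (all "yes" answers give low risk, any "no" gives moderate); the paper deliberately prescribes no thresholds, so the labels and cutoffs below are a review team's choice, and the example answers match the study 1 and study 2 scenario above.

```python
# A possible convention for deriving a domain-level judgment from the
# question responses: all "yes" -> low risk; any "no" -> moderate risk;
# otherwise unclear. Labels and cutoffs are discretionary, per the text.
DOMAIN_QUESTIONS = {"selection and allocation": [1, 2, 3]}

def domain_judgment(domain: str, answers: dict[int, str]) -> str:
    responses = [answers.get(q, "unclear") for q in DOMAIN_QUESTIONS[domain]]
    if all(r == "yes" for r in responses):
        return "low"
    if any(r == "no" for r in responses):
        return "moderate"
    return "unclear"

study_1 = {1: "yes", 2: "yes", 3: "yes"}
study_2 = {1: "yes", 2: "yes", 3: "no"}
print(domain_judgment("selection and allocation", study_1))  # low
print(domain_judgment("selection and allocation", study_2))  # moderate
```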

Current tools to appraise RCTs 5 ask the reviewer to establish an overall assessment of the risk of bias for each appraised study and for the overall body of evidence (ie, all appraised studies). The revised JBI critical appraisal tool for RCTs does not strictly prescribe this, regardless of the approach followed. However, if reviewers opt to establish an overall assessment, then these assessments should not take into consideration the questions regarding statistical conclusion validity (questions 11–13), because risk of bias is only impacted by the internal validity construct. 8

Irrespective of the approach taken, the results of critical appraisal using the revised JBI critical appraisal tool for RCTs should be reported narratively in the review. This narrative summary should describe both the overall methodological quality and the domain-level risk of bias of the included studies. There should also be a statement regarding any important or interesting deviations from the observed trends. This narrative summary can be supported with the use of a table or graphic that shows how each included study responded. We recommend presenting the results of critical appraisal for all questions via a table rather than summarizing with a score; see the example in Table 1. (Note that this design is not prescriptive and only serves as an example.)

Table 1. Example presentation of critical appraisal results (Y = yes, N = no; blank cells indicate study- or outcome-level questions already answered on an earlier row)

Internal validity, bias related to: Q1–3 selection and allocation; Q4–6 administration of intervention/exposure; Q7–9 assessment, detection, and measurement of the outcome; Q10 participant retention. Statistical conclusion validity: Q11–13.

Study ID | Outcome   | Result | Q1–Q6       | Q7–Q9 | Q10–Q12 | Q13
Study 1  | Mortality | Time 1 | Y Y Y N Y Y | Y Y Y | Y Y Y   | Y
Study 1  | Mortality | Time 2 |             |       | Y Y Y   |
Study 1  | QOL       | Time 1 |             | N Y Y | Y Y Y   |
Study 1  | QOL       | Time 2 |             |       | N Y Y   |
Study 2  | Mortality | Time 1 | Y Y N Y Y Y | Y Y Y | Y Y Y   | Y
Study 2  | Mortality | Time 2 |             |       | Y Y Y   |
Study 2  | QOL       | Time 1 |             | Y Y Y | Y Y Y   |
Study 2  | QOL       | Time 2 |             |       | Y Y Y   |
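Where a review team keeps responses in a script rather than a spreadsheet, a few lines can print a grid in the spirit of Table 1. This is a hypothetical convenience sketch, not part of the JBI tool; the rows re-enter the Study 1 example data, with blank cells marking questions already answered on an earlier row.

```python
# Print an appraisal grid in the spirit of Table 1 from per-row responses.
headers = ["Study", "Outcome", "Result"] + [f"Q{i}" for i in range(1, 14)]
rows = [
    ["Study 1", "Mortality", "Time 1", "Y", "Y", "Y", "N", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"],
    ["Study 1", "Mortality", "Time 2", "", "", "", "", "", "", "", "", "", "Y", "Y", "Y", ""],
    ["Study 1", "QOL", "Time 1", "", "", "", "", "", "", "N", "Y", "Y", "Y", "Y", "Y", ""],
    ["Study 1", "QOL", "Time 2", "", "", "", "", "", "", "", "", "", "N", "Y", "Y", ""],
]

# Pad each column to the widest cell so the grid lines up in plain text.
widths = [max(len(row[i]) for row in [headers, *rows]) for i in range(len(headers))]
for row in [headers, *rows]:
    print("  ".join(cell.ljust(w) for cell, w in zip(row, widths)))
```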

The revised JBI critical appraisal tool for randomized controlled trials

The criteria and considerations that reviewers should apply when answering the questions in the revised JBI critical appraisal tool for RCTs are shown in Table 2. The tool is also available to download as Supplemental Digital Content 1 at https://links.lww.com/SRX/A7.

Table 2. The revised JBI critical appraisal tool for randomized controlled trials

RoB assessor: ______  Date of appraisal: ______  Record number: ______
Study author: ______  Study title: ______  Study year: ______

Each question is answered Yes, No, Unclear, or N/A, with space for comments/justification. Questions 1–6 and 13 are answered once per study; questions 7–9 are answered separately for each appraised outcome (up to 7 outcomes); questions 10–12 are answered separately for each result (up to 3 results per outcome by default).

Internal validity

1. Was true randomization used for assignment of participants to treatment groups?
2. Was allocation to treatment groups concealed?
3. Were treatment groups similar at the baseline?
4. Were participants blind to treatment assignment?
5. Were those delivering the treatment blind to treatment assignment?
6. Were treatment groups treated identically other than the intervention of interest?
7. Were outcome assessors blind to treatment assignment?
8. Were outcomes measured in the same way for treatment groups?
9. Were outcomes measured in a reliable way?
10. Was follow-up complete and, if not, were differences between groups in terms of their follow-up adequately described and analyzed?

Statistical conclusion validity

11. Were participants analyzed in the groups to which they were randomized?
12. Was appropriate statistical analysis used?
13. Was the trial design appropriate and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial?

Question 1: Was true randomization used for assignment of participants to treatment groups?

Category: Internal validity

Domain: Bias related to selection and allocation

Appraisal: Study level

If participants are not allocated to treatment and control groups by random assignment, there is a risk that this assignment to groups can be influenced by the known characteristics of the participants themselves. These known characteristics of the participants may distort the comparability of the groups (ie, does the intervention group contain more people over the age of 65 compared to the control?). A true random assignment of participants to the groups means that a procedure is used that allocates the participants to groups purely based on chance, not influenced by any known characteristics of the participants. Reviewers should check the details about the randomization procedure used for allocation of the participants to study groups. Was a true chance (random) procedure used? For example, was a list of random numbers used? Was a computer-generated list of random numbers used? Was a statistician, external to the research team, consulted for the randomization sequence generation? Additionally, reviewers should check that the authors are not stating they have used random approaches when they have instead used systematic approaches (such as allocating by days of the week).

Question 2: Was allocation to treatment groups concealed?

If those allocating participants to the compared groups are aware of which group is next in the allocation process (ie, the treatment or control group), there is a risk that they may deliberately and purposefully intervene in the allocation of patients. This may result in the preferential allocation of patients to the treatment group or to the control group. This may directly distort the results of the study, as participants no longer have an equal and random chance to belong to each group. Concealment of allocation refers to procedures that prevent those allocating patients from knowing before allocation which treatment or control is next in the allocation process. Reviewers should check the details about the procedure used for allocation concealment. Was an appropriate allocation concealment procedure used? For example, was central randomization used? Were sequentially numbered, opaque, and sealed envelopes used? Were coded drug packs used?

Question 3: Were treatment groups similar at the baseline?

As with question 1, any difference between the known characteristics of participants included in the compared groups constitutes a threat to internal validity. If differences in these characteristics do exist, then there is potential that the effect cannot be attributed to the potential cause (the examined intervention or treatment). This is because the effect may be explained by the differences between participant characteristics and not the intervention/treatment of interest. Reviewers should check the characteristics reported for participants. Are the participants from the compared groups similar with regard to the characteristics that may explain the effect, even in the absence of the cause (eg, age, severity of the disease, stage of the disease, coexisting conditions)? Reviewers should check the proportion of participants with specific relevant characteristics in the compared groups. (Note: Do not only consider the P value for the statistical testing of the differences between groups with regard to the baseline characteristics.)

Question 4: Were participants blind to treatment assignment?

Domain: Bias related to administration of intervention/exposure

Participants who are aware of their allocation to either the treatment or the control may behave, respond, or react differently to their assigned treatment (or control) compared with participants who remain unaware of their allocation. Blinding of participants is a technique used to minimize this risk. Blinding refers to procedures that prevent participants from knowing which group they are allocated. If blinding has been followed, participants are not aware if they are in the group receiving the treatment of interest or if they are in another group receiving the control intervention. Reviewers should check the details reported in the article about the blinding of participants with regard to treatment assignment. Was an appropriate blinding procedure used? For example, were identical capsules or syringes used? Were identical devices used? Be aware of different terms used; blinding is sometimes also called masking.

Question 5: Were those delivering the treatment blind to treatment assignment?

Like question 4, those delivering the treatment who are aware of participant allocation to either treatment or control may treat participants differently compared to those who remain unaware of participant allocation. There is a risk that any potential change in behavior may influence the implementation of the compared treatments, and the results of the study may be distorted. Blinding of those delivering treatment is used to minimize this risk. When this level of blinding has been achieved, those delivering the treatment are not aware if they are treating the group receiving the treatment of interest or if they are treating any other group receiving the control intervention. Reviewers should check the details reported in the article about the blinding of those delivering treatment with regard to treatment assignment. Is there any information in the article about those delivering the treatment? Were those delivering the treatment unaware of the assignments of participants to the compared groups?

Question 6: Were treatment groups treated identically other than the intervention of interest?

To attribute the effect to the cause (assuming there is no bias related to selection and allocation), there should be no difference between the groups in terms of treatment or care received, other than the treatment or intervention controlled by the researchers. If there are other exposures or treatments occurring at the same time as the cause (the treatment or intervention of interest), then the effect can potentially be attributed to something other than the examined cause (the investigated treatment). This is because it is plausible that the effect may be explained by other exposures or treatments that occurred at the same time as the cause. Reviewers should check the reported exposures or interventions received by the compared groups. Are there other exposures or treatments occurring at the same time as the cause? Is it plausible that the effect may be explained by other exposures or treatments occurring at the same time as the cause? Is it clear that there is no other difference between the groups in terms of treatment or care received, other than the treatment or intervention of interest?

Question 7: Were outcome assessors blind to treatment assignment?

Domain: Bias related to assessment, detection, and measurement of the outcome

Appraisal: Outcome level

Like questions 4 and 5, if those assessing the outcomes are aware of participant allocation to either treatment or control, they may treat participants differently compared with those who remain unaware of participant allocation. Therefore, there is a risk that the measurement of the outcomes between groups may be distorted, and the results of the study may themselves be distorted. Blinding of outcomes assessors is used to minimize this risk. Reviewers should check the details reported in the article about the blinding of outcomes assessors with regard to treatment assignment. Is there any information in the article about outcomes assessors? Were those assessing the treatment’s effects on outcomes unaware of the assignments of participants to the compared groups?

Question 8: Were outcomes measured in the same way for treatment groups?

If the outcome is not measured in the same way in the compared groups, there is a threat to the internal validity of a study. Any differences in outcome measurements may be due to the method of measurement employed between the 2 groups and not the intervention/treatment of interest. Reviewers should check whether the outcomes were measured in the same way. Was the same instrument or scale used? Was the measurement timing the same? Were the measurement procedures and instructions the same?

Question 9: Were outcomes measured in a reliable way?

Unreliability of outcome measurements is one threat that weakens the validity of inferences about the statistical relationship between the cause and the effect estimated in a study exploring causal effects. Unreliability of outcome measurements is one of the plausible explanations for errors of statistical inference with regard to the existence and the magnitude of the effect determined by the treatment (cause). Reviewers should check the details about the reliability of the measurement used, such as the number of raters, the training of raters, and the reliability of the intra-rater and the inter-raters within the study (not as reported in external sources). This question is about the reliability of the measurement performed in the study, and not about the validity of the measurement instruments/scales used in the study. Finally, some outcomes may not rely on instruments or scales (eg, death), and reliability of the measurements may need to be assessed in the context of the study being reviewed. (Note: Two other important threats that weaken the validity of inferences about the statistical relationship between the cause and the effect are low statistical power and the violation of the assumptions of statistical tests. These threats are explored within question 12.)

Question 10: Was follow-up complete and, if not, were differences between groups in terms of their follow-up adequately described and analyzed?

Domain: Bias related to participant retention

Appraisal: Result level

For this question, follow-up refers to the period from the moment of randomization to any point in which the groups are compared during the trial. This question asks whether there is complete knowledge (eg, measurements, observations) for the entire duration of the trial for all randomly allocated participants. If there is incomplete follow-up from all randomly allocated participants, this is known as post-assignment attrition. Because RCTs are not perfect, there is almost always post-assignment attrition, and the focus of this question is on the appropriate exploration of post-assignment attrition. If differences exist with regard to the post-assignment attrition between the compared groups of an RCT, then there is a threat to the internal validity of that study. This is because these differences may provide a plausible alternative explanation for the observed effect even in the absence of the cause (the treatment or intervention of interest). It is important to note that with regard to post-assignment attrition, it is not enough to know the number of participants and the proportions of participants with incomplete data; the reasons for loss to follow-up are essential in the analysis of risk of bias.

Reviewers should check whether there were differences with regard to the loss to follow-up between the compared groups. If follow-up was incomplete (incomplete information on all participants), examine the reported details about the strategies used to address incomplete follow-up. This can include descriptions of loss to follow-up (eg, absolute numbers, proportions, reasons for loss to follow-up) and impact analyses (the analyses of the impact of loss to follow-up on results). Was there a description of the incomplete follow-up including the number of participants and the specific reasons for loss to follow-up? Even if follow-up was incomplete but balanced between groups, if the reasons for loss to follow-up are different (eg, side effects caused by the intervention of interest), these may impose a risk of bias if not appropriately explored in the analysis. If there are differences between groups with regard to the loss to follow-up (numbers/proportions and reasons), was there an analysis of patterns of loss to follow-up? If there are differences between the groups with regard to the loss to follow-up, was there an analysis of the impact of the loss to follow-up on the results? (Note: Question 10 is not about intention-to-treat [ITT] analysis; question 11 is about ITT analysis.)
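As a minimal sketch of the checks just described, assuming invented follow-up data, a reviewer's working notes might compare both the proportions lost per arm and the recorded reasons, since balanced numbers can hide imbalanced reasons:

```python
from collections import Counter

# Invented follow-up data: 100 randomized per arm, with each lost
# participant recorded as (arm, reason for loss to follow-up).
randomized = {"intervention": 100, "control": 100}
lost = [
    ("intervention", "side effects"), ("intervention", "side effects"),
    ("intervention", "moved away"), ("control", "moved away"),
    ("control", "withdrew consent"), ("control", "moved away"),
]

for arm, n in randomized.items():
    n_lost = sum(1 for a, _ in lost if a == arm)
    reasons = Counter(r for a, r in lost if a == arm)
    print(f"{arm}: {n_lost}/{n} lost ({100 * n_lost / n:.0f}%), reasons: {dict(reasons)}")

# Attrition is a balanced 3% in both arms, but "side effects" appears
# only in the intervention arm, the kind of imbalance in reasons that
# the guidance above flags as a possible source of bias.
```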

Question 11: Were participants analyzed in the groups to which they were randomized?

Category: Statistical conclusion validity

This question is about the ITT analysis. There are different statistical analysis strategies available for the analysis of data from RCTs, such as ITT, per-protocol analysis, and as-treated analysis. In the ITT analysis, the participants are analyzed in the groups to which they were randomized. This means that regardless of whether participants received the intervention or control as assigned, were compliant with their planned assignment, or participated for the entire study duration, they are still included in the analysis. The ITT analysis compares the outcomes for participants from the initial groups created by the initial random allocation of participants to those groups. Reviewers should check whether an ITT analysis was reported and the details of the ITT. Were participants analyzed in the groups to which they were initially randomized, regardless of whether they participated in those groups and regardless of whether they received the planned interventions?

Note: The ITT analysis is a type of statistical analysis recommended in the Consolidated Standards of Reporting Trials (CONSORT) statement on best practices in trials reporting, and it is considered a marker of good methodological quality of the analysis of results of a randomized trial. The ITT is estimating the effect of offering the intervention (ie, the effect of instructing the participants to use or take the intervention); the ITT is not estimating the effect of receiving the intervention of interest.
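To see why the distinction matters, here is a toy contrast between an ITT and an as-treated analysis. The data and names are invented for illustration, not taken from the paper; the point is only that analyzing participants as randomized versus as received can yield different estimates.

```python
# Invented records: (arm randomized to, arm actually received, outcome 0/1).
participants = [
    ("treatment", "treatment", 1),
    ("treatment", "control", 0),   # non-compliant; ITT keeps them in "treatment"
    ("treatment", "treatment", 1),
    ("control", "control", 0),
    ("control", "treatment", 1),   # crossed over; as-treated moves them
    ("control", "control", 0),
]

def event_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

itt = {arm: event_rate([o for r, _, o in participants if r == arm])
       for arm in ("treatment", "control")}
as_treated = {arm: event_rate([o for _, a, o in participants if a == arm])
              for arm in ("treatment", "control")}

print("ITT:       ", itt)         # analyzed as randomized
print("As-treated:", as_treated)  # analyzed as received; estimates differ
```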

Question 12: Was appropriate statistical analysis used?

Inappropriate statistical analysis may cause errors of statistical inference with regard to the existence and the magnitude of the effect determined by the treatment (cause). Low statistical power and the violation of the assumptions of statistical tests are 2 important threats that weaken the validity of inferences about the statistical relationship between the cause and the effect. Reviewers should check the following aspects: if the assumptions of the statistical tests were respected; if appropriate statistical power analysis was performed; if appropriate effect sizes were used; if appropriate statistical methods were used given the nature of the data and the objectives of statistical analysis (eg, association between variables, prediction, survival analysis).

Question 13: Was the trial design appropriate and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial?

The typical, parallel group RCT may not always be appropriate depending on the nature of the question. Therefore, some additional RCT designs may have been employed that come with their own additional considerations.

Crossover trials should only be conducted with people with a chronic, stable condition, where the intervention produces a short-term effect (eg, relief in symptoms). Crossover trials should ensure there is an appropriate period of washout between treatments. This may also be considered under question 6.

Cluster RCTs randomize groups of individuals (eg, communities, hospital wards), forming clusters. When we assess outcomes on an individual level in cluster trials, there are unit-of-analysis issues, because individuals within a cluster are correlated. This should be considered by the study authors when conducting analysis, and ideally authors will report the intra-cluster correlation coefficient. This may also be considered under question 12.

Stepped-wedge RCTs may be appropriate to establish when and how a beneficial intervention may be best implemented within a defined setting, or due to logistical, practical, or financial considerations in the rollout of a new treatment/intervention. Data analysis in these trials should be conducted appropriately, considering the effects of time. This may also be considered under question 12.

Conclusion

Randomized controlled trials are the ideal, and often the only, study design included in systematic reviews assessing the effectiveness of interventions. All included studies must undergo rigorous critical appraisal, which, in the case of quantitative study designs, is predominantly focused on assessment of risk of bias in the conduct of the study. The revised JBI critical appraisal tool for RCTs presents an adaptable and robust new method for assessing this risk of bias. The tool has been designed to complement recent advancements in the field while maintaining its easy-to-follow questions. It offers systematic reviewers an improved and up-to-date method to assess the risk of bias for RCTs included in their systematic review.

Acknowledgments

The authors thank the JBI Scientific Committee members for their feedback and contributions regarding the concept of this work and both the draft and final manuscript.

Coauthor Catalin Tufanaru passed away July 29, 2021.

MK is supported by the INTER-EXCELLENCE grant number LTC20031—Towards an International Network for Evidence-based Research in Clinical Health Research in the Czech Republic.

ZM is supported by an NHMRC Investigator Grant, APP1195676.


Keywords: critical appraisal tool; methodological quality; methodology; randomized controlled trial; risk of bias

Supplemental Digital Content

  • SRX_2023_01_20_BARKER_JBIES-22-00430R1_SDC1.docx (Word, 55 KB)


Critical Appraisal Resources for Evidence-Based Nursing Practice


What is Qualitative Research?


Qualitative research, in contrast to quantitative research, analyzes words and text instead of numbers and figures. Qualitative research is exploratory and non-experimental. It seeks to explore meaning, experiences and phenomena among study participants. Qualitative data is generated from participants' stories, open-ended responses, and viewpoints collected from focus groups, interviews, observations or detailed records (Schmidt & Brown, 2019, pp. 221-224).

Schmidt N. A. & Brown J. M. (2019). Evidence-based practice for nurses: Appraisal and application of research  (4th ed.). Jones & Bartlett Learning. 

Pro Tips: Qualitative Research Checklist

Each JBI Checklist provides tips and guidance on what to look for to answer each question. These tips begin on page 4.

Below are some additional frequently asked questions about the Qualitative Research Checklist that students have asked in previous semesters.

Responses to frequently asked questions:

  • In a qualitative study, it is important that all elements of the study (the objectives, methods, theoretical/conceptual framework, and qualitative data gathering) fit together in agreement and make sense. See page 4 of the JBI Qualitative Checklist for explanatory notes on each question, which elaborate on this concept further.
  • Sometimes, authors of a qualitative study will provide details about their own cultural or theoretical background. Look for this information in the beginning of the study or in the methods section.

For more help: Each JBI Checklist provides detailed guidance on what to look for to answer each question on the checklist. These explanatory notes begin on page four of each Checklist. Please review these carefully as you conduct critical appraisal using JBI tools.

Articles on Qualitative Research Design & Methodology

Danford, C. A. (2023). Understanding the evidence: Qualitative research designs. Urologic Nursing, 43(1), 41–45. https://doi.org/10.7257/2168-4626.2023.43.1.41

Doyle, L., McCabe, C., Keogh, B., Brady, A., & McCann, M. (2020). An overview of the qualitative descriptive design within nursing research .  Journal of Research in Nursing ,  25 (5), 443–455. https://doi.org/10.1177/1744987119880234

Luciani, M., Jack, S. M., Campbell, K., Orr, E., Durepos, P., Li, L., Strachan, P., & Di Mauro, S. (2019).  An introduction to qualitative health research .  Professioni infermieristiche ,  72 (1), 60–68.


Qualitative Research Resources: Assessing Qualitative Research


About this Page


Why is this information important?

  • Qualitative research typically focuses on collecting very detailed information on a few cases and often addresses meaning, rather than objectively identifiable factors.
  • This means that typical markers of research quality for quantitative studies, such as validity and reliability, cannot be used to assess qualitative research.

On this page you'll find:

The resources on this page will guide you to some of the alternative measures/tools or means you can use to assess qualitative research.

Evidence Evaluation Tools and Resources

LEGEND (Let Evidence Guide Every New Decision) Assessment Tools: Cincinnati Children's Hospital

This website has a number of resources for evaluating health sciences research across a variety of designs/study types, including an Evidence Appraisal form for qualitative research (in table), as well as forms for mixed methods studies from a variety of clinical question domains. The site includes information on the following:

  • Evaluating the Evidence Algorithm (pdf download)
  • Evidence Appraisal Forms (see Domain of Clinical Questions Table)
  • Table of Evidence Levels (pdf download)
  • Grading a Body of Evidence (pdf download)
  • Judging the Strength of a Recommendation (pdf download)
  • LEGEND Glossary (pdf download)
EQUATOR Network: Enhancing the Quality and Transparency of Health Research

  • EQUATOR: Qualitative Research Reporting Guidelines
  • EQUATOR Network Home

The EQUATOR Network is an ‘umbrella’ organisation that brings together researchers, medical journal editors, peer reviewers, developers of reporting guidelines, research funding bodies and other collaborators with mutual interest in improving the quality of research publications and of research itself.

The EQUATOR Library contains a comprehensive searchable database of reporting guidelines for many study types--including qualitative--and also links to other resources relevant to research reporting:

  • Library for health research reporting:  provides an up-to-date collection of guidelines and policy documents related to health research reporting. These are aimed mainly at authors of research articles, journal editors, peer reviewers and reporting guideline developers.
  • Toolkits to support writing research, using guidelines, teaching research skills, selecting the appropriate reporting guideline
  • Courses and events
  • Librarian Network

Also see the Articles list below; some of the articles contain checklists or tools.

Other Tools for Assessing Qualitative Research

Most checklists or tools are meant to help you think critically and systematically when appraising research. Users should generally consult accompanying materials such as manuals, handbooks, and cited literature to use these tools appropriately. Broad understanding of the variety and complexity of qualitative research is generally necessary, along with an understanding of the philosophical perspectives, plus knowledge about specific qualitative research methods and their implementation.

  • CASP/Critical Assessment Skills Programme Tool for Evaluating Qualitative Research 2018
  • CASP Knowledge Hub Includes critical appraisal checklists for key study designs; glossary of key research terms; key links related to evidence based healthcare, statistics, and research; a bibliography of articles and research papers about CASP and other critical appraisal tools and approaches 1993-2012.
  • JBI (Joanna Briggs Institute) Manual for Evidence Synthesis (2024). See the following chapters: Chapter 2: Systematic reviews of qualitative evidence, including Appendix 2.1: Critical Appraisal Checklist for Qualitative Research, Appendix 2.2: Discussion of qualitative critical appraisal criteria, and Appendix 2.3: Qualitative data extraction tool; and Chapter 8: Mixed methods systematic reviews. Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-01
  • McGill Mixed Methods Appraisal Tool (MMAT) Front Page Public wiki site for the MMAT: The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative and mixed methods studies). The MMAT was first published in 2009. Since then, it has been validated in several studies testing its interrater reliability, usability and content validity. The latest version of the MMAT was updated in 2018.
  • McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide See full site (public wiki link above) for additional information, including FAQ's, references and resources, earlier versions, and more.
  • McMaster University Critical Review Form & Guidelines for Qualitative Studies v2.0 Includes links to Qualitative Review Form (v2.0) and accompanying Guidelines from the Evidence Based Practice Research Group of McMaster University's School of Rehabilitation Science). Links are also provided for Spanish, German, and French versions.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed, 2012, from UK National Institute for Health and Care Excellence. Includes checklist and notes on its use. From Methods for the Development of NICE Public Health Guidance, 3rd edition. © Copyright National Institute for Health and Clinical Excellence, 2006 (updated 2012). All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes; no reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of the Institute.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed. (.pdf download) Appendix H Checklist and Notes download. © Copyright National Institute for Health and Clinical Excellence, 2006 (updated 2012). All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes. No reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of the Institute.
  • Qualitative Research Review Guidelines, RATS
  • SBU Swedish Agency for Health Technology Assessment and Assessment of Social Services. Evaluation and synthesis of studies using qualitative methods of analysis, 2016. Appendix 2 of this document (at the end) contains a checklist for evaluating qualitative research. SBU. Evaluation and synthesis of studies using qualitative methods of analysis. Stockholm: Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU); 2016.
  • Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed (JAMA Evidence) Chapter 13.5 Qualitative Research
  • Slides: Appraising Qualitative Research from Users' Guide to the Medical Literature, 3rd edition Click on the 'Related Content' tab to find the link to download the Appraising Qualitative Research slides.

These articles address a range of issues related to understanding and evaluating qualitative research; some  include checklists or tools.

Clissett, P. (2008) "Evaluating Qualitative Research." Journal of Orthopaedic Nursing 12: 99-105.

Cohen, Deborah J. and Benjamin F. Crabtree. (2008) "Evidence for Qualitative Research in Health Care: Controversies and Recommendations." Annals of Family Medicine 6(4): 331-339.

  • Supplemental Appendix 1. Search Strategy for Criteria for Qualitative Research in Health Care
  • Supplemental Appendix 2. Publications Analyzed: Health Care Journals and Frequently Referenced Books and Book Chapters (1980-2005) That Posited Criteria for "Good" Qualitative Research.

Dixon-Woods, M., R.L. Shaw, S. Agarwal, and J.A. Smith. (2004) "The Problem of Appraising Qualitative Research." Qual Saf Health Care 13: 223-225.

Fossey, E., C. Harvey, F. McDermott, and L. Davidson. (2002) "Understanding and Evaluating Qualitative Research." Australian and New Zealand Journal of Psychiatry 36(6): 717-732.

Hammarberg, K., M. Kirkman, S. de Lacey. (2016) "Qualitative Research Methods: When to Use and How to Judge them." Human Reproduction 31 (3): 498-501.

Lee, J. (2014) "Genre-Appropriate Judgments of Qualitative Research." Philosophy of the Social Sciences 44(3): 316-348. (This provides 3 strategies for evaluating qualitative research, 2 that the author is not crazy about and one that he considers more appropriate/accurate).

Majid, Umair and Vanstone, Meredith (2018). "Appraising Qualitative Research for Evidence Syntheses: A Compendium of Quality Appraisal Tools." Qualitative Health Research 28(13): 2115-2131. PMID: 30047306 DOI: 10.1177/1049732318785358

Meyrick, Jane. (2006) "What is Good Qualitative Research? A First Step towards a Comprehensive Approach to Judging Rigour/Quality." Journal of Health Psychology 11(5): 799-808.

Miles, MB, AM Huberman, J Saldana. (2014) Qualitative Data Analysis. Thousand Oaks, California, SAGE Publications, Inc. Chapter 11: Drawing and Verifying Conclusions. Check Availability of Print Book.

Morse, JM. (1997) "'Perfectly Healthy but Dead': The Myth of Inter-Rater Reliability." Qualitative Health Research 7(4): 445-447.

O’Brien BC, Harris IB, Beckman TJ, et al. (2014) Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 89(9):1245–1251. DOI: 10.1097/ACM.0000000000000388 PMID: 24979285

The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.

Ryan, Frances, Michael Coughlin, and Patricia Cronin. (2007) "Step by Step Guide to Critiquing Research: Part 2, Qualitative Research." British Journal of Nursing 16(12): 738-744.

Stige, B, K. Malterud, and T. Midtgarden. (2009) "Toward an Agenda for Evaluation of Qualitative Research." Qualitative Health Research 19(10): 1504-1516.

Tong, Allison and Mary Amanda Dew. (2016, EPub ahead of print). "Qualitative Research in Transplantation: Ensuring Relevance and Rigor." Transplantation.

Allison Tong, Peter Sainsbury, Jonathan Craig; Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups, International Journal for Quality in Health Care, Volume 19, Issue 6, 1 December 2007, Pages 349–357, https://doi.org/10.1093/intqhc/mzm042

The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.

Tracy, Sarah. (2010) "Qualitative Quality: Eight 'Big-Tent' Criteria for Excellent Qualitative Research." Qualitative Inquiry 16(10): 837-851.

  • Critical Appraisal Skills Programme
  • IMPSCI (Implementation Science) Tutorials
  • Johns Hopkins: Why Mixed Methods?
  • Measuring, Learning, and Evaluation Project for the Urban Reproductive Health Initiative This project ran 2010-2015. Some project resources are still available.
  • NIH OBSSR (Office of Behavioral & Social Sciences Research) Best Practices for Mixed Methods Research in Health Sciences, 2011. The OBSSR commissioned a team in 2010 to develop a resource providing guidance to NIH investigators on how to rigorously develop and evaluate mixed methods research applications. Authors: John W. Creswell, Ph.D., University of Nebraska-Lincoln; Ann Carroll Klassen, Ph.D., Drexel University; Vicki L. Plano Clark, Ph.D., University of Nebraska-Lincoln; Katherine Clegg Smith, Ph.D., Johns Hopkins University; with the assistance of a specially appointed working group.
  • NIH OBSSR eSource: Introductory Social and Behavioral Science Training Materials. eSource is a collection of online chapters that provide an introduction to selected behavioral and social science research approaches, including theory development and testing, survey methods, measurement, and study design. The link on the OBSSR website is no longer working; see https://obssr.od.nih.gov/about-us/publications/ (formerly https://obssr-archive.od.nih.gov/pdf/Qualitative.PDF).
  • NSF Workshop on Interdisciplinary Standards for Systematic Qualitative Research. On May 19-20, 2005, a workshop on Interdisciplinary Standards for Systematic Qualitative Research was held at the National Science Foundation (NSF) in Arlington, Virginia, cofunded by a grant from four NSF programs: Cultural Anthropology, Law and Social Science, Political Science, and Sociology. Each of the four disciplines has a different research design and evaluation culture, as well as considerable variability in the emphasis on interpretation and explanation, commitment to constructivist and positivist epistemologies, and the degree of perceived consensus about the value and prominence of qualitative research methods. Within this multidisciplinary and multimethods context, twenty-four scholars from the four disciplines were charged to (1) articulate the standards used in their particular field to ensure rigor across the range of qualitative methodological approaches; (2) identify common criteria shared across the four disciplines for designing and evaluating research proposals and fostering multidisciplinary collaborations; and (3) develop an agenda for strengthening the tools, training, data, research design, and infrastructure for research using qualitative approaches.
  • Technical Note: Mixed-Methods Evaluations (USAID) This open source resource from USAID (2013) discusses the mixing of qualitative and quantitative methods in mixed methods research.
  • Qualitative Research Methods: A Data Collector's Field Guide (2005). From FHI 360/Family Health International with support from USAID. Natasha Mack, Cynthia Woodsong, Kathleen M. MacQueen, Greg Guest, and Emily Namey. The guide is divided into five modules: Module 1, Qualitative Research Methods Overview; Module 2, Participant Observation; Module 3, In-Depth Interviews; Module 4, Focus Groups; Module 5, Data Documentation and Management.
  • Robert Wood Johnson Foundation Guidelines for Designing, Analyzing, and Reporting Qualitative Research
  • Robert Wood Johnson Foundation: Qualitative Research Guidelines Project

Qualitative Literacy


Not a checklist, this book offers a thorough discussion of how to assess the scientific merit of a study based on in-depth interviews or participant observation: first by assessing exposure (e.g., time spent in the field) and then, assuming sufficient exposure, by looking for signs of:

  • cognitive empathy: how those interviewed or observed perceive themselves and their social world, the meaning they attach to those perceptions, and the motives they express for their actions
  • palpability: the evidence is palpable or concrete rather than abstract or general
  • heterogeneity: showing diversity across people, over time, among situations, or between contexts
  • follow-up: responding to the unexpected and following up on unanticipated statements or observations
  • self-awareness: showing that the author is explicitly aware of the impact of their presence on who was accessed and what they disclosed

Methodological quality of case series studies: an introduction to the JBI critical appraisal tool

Affiliations

  • 1 JBI, Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, SA, Australia.
  • 2 The George Institute for Global Health, Telangana, India.
  • 3 Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Sydney, NSW, Australia.
  • PMID: 33038125
  • DOI: 10.11124/JBISRIR-D-19-00099

Introduction: Systematic reviews provide a rigorous synthesis of the best available evidence regarding a certain question. Where high-quality evidence is lacking, systematic reviewers may choose to rely on case series studies to provide information in relation to their question. However, to date there has been limited guidance on how to incorporate case series studies within systematic reviews assessing the effectiveness of an intervention, particularly with reference to assessing the methodological quality or risk of bias of these studies.

Methods: An international working group was formed to review the methodological literature regarding case series as a form of evidence for inclusion in systematic reviews. The group then developed a critical appraisal tool based on the epidemiological literature relating to bias within these studies. This was then piloted, reviewed, and approved by JBI's international Scientific Committee.

Results: The JBI critical appraisal tool for case series studies includes 10 questions addressing the internal validity and risk of bias of case series designs, particularly confounding, selection, and information bias, in addition to the importance of clear reporting.

Conclusion: In certain situations, case series designs may represent the best available evidence to inform clinical practice. The JBI critical appraisal tool for case series offers systematic reviewers an approved method to assess the methodological quality of these studies.



Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better?

Lin-Lu Ma, Yun-Yun Wang, Zhi-Hua Yang, Di Huang, Hong Weng, and Xian-Tao Zeng (see Contributor Information below)

Affiliations:

1. Center for Evidence-Based and Translational Medicine, Zhongnan Hospital, Wuhan University, 169 Donghu Road, Wuchang District, Wuhan, 430071, Hubei, China

2. Department of Evidence-Based Medicine and Clinical Epidemiology, The Second Clinical College, Wuhan University, Wuhan, 430071, China

3. Center for Evidence-Based and Translational Medicine, Wuhan University, Wuhan, 430071, China

4. Global Health Institute, Wuhan University, Wuhan, 430072, China
Associated Data

The data and materials used during the current review are all available in this review.

Methodological quality (risk of bias) assessment is an important step before a study's findings are used. Accurately judging the study type is therefore the first priority, and choosing the proper tool is equally important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, and interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding studies, prediction model impact studies, and prognostic prediction model studies), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypotheses testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review should be able to distinguish the types of medical studies and choose the appropriate tools. In short, comprehensively mastering the relevant knowledge and practicing assessment are basic requirements for correctly assessing methodological quality.

In the twentieth century, pioneering work by distinguished professors Cochrane A [1], Guyatt GH [2], and Chalmers IG [3] led us into the evidence-based medicine (EBM) era, in which knowing how to search for, critically appraise, and use the best evidence is essential. The systematic review and meta-analysis is the most widely used method for summarizing primary data scientifically [4-6] and is also the basis for developing clinical practice guidelines, according to the Institute of Medicine (IOM) [7]. Hence, when performing a systematic review and/or meta-analysis, assessing the methodological quality of the included primary studies is important; equally, the methodological quality of the review itself should be assessed before its findings are used. Quality includes internal and external validity, while methodological quality usually refers to internal validity [8, 9]. Internal validity is also referred to as "risk of bias (RoB)" by the Cochrane Collaboration [9].

There are three types of tools: scales, checklists, and items [10, 11]. In 2015, Zeng et al. [11] investigated methodological quality tools for randomized controlled trials (RCTs), non-randomized clinical intervention studies, cohort studies, case-control studies, cross-sectional studies, case series, diagnostic accuracy studies (also called "diagnostic test accuracy (DTA)" studies), animal studies, systematic reviews and meta-analyses, and clinical practice guidelines (CPGs). Since then, pre-existing tools may have changed and new tools may have emerged; moreover, research methods have continued to develop. Hence, it is necessary to systematically investigate the commonly used tools for assessing methodological quality, especially those for economic evaluations, clinical prediction rules/models, and qualitative studies. This narrative review therefore presents methodological quality (including "RoB") assessment tools for primary and secondary medical studies up to December 2019; Table 1 presents their basic characteristics. We hope this review can help the producers, users, and researchers of evidence.

Table 1. The basic characteristics of the included methodological quality (risk of bias) assessment tools. Each entry gives the development organization or author, the tool's name, and the study types it covers.

1. The Cochrane Collaboration: Cochrane RoB tool and RoB 2.0 tool (randomized controlled trial; diagnostic accuracy study)
2. The Physiotherapy Evidence Database (PEDro): PEDro scale (randomized controlled trial)
3. The Effective Practice and Organisation of Care (EPOC) Group: EPOC RoB tool (randomized controlled trial; clinical controlled trial; controlled before-and-after study; interrupted time series study)
4. The Critical Appraisal Skills Programme (CASP): CASP checklists (randomized controlled trial; cohort study; case-control study; cross-sectional study; diagnostic test study; clinical prediction rule; economic evaluation; qualitative study; systematic review)
5. The National Institutes of Health (NIH): NIH quality assessment tools (controlled intervention study; cohort study; cross-sectional study; case-control study; before-after (pre-post) study with no control group; case series (interventional); systematic review and meta-analysis)
6. The Joanna Briggs Institute (JBI): JBI critical appraisal checklists (randomized controlled trial; non-randomized experimental study; cohort study; case-control study; cross-sectional study; prevalence data; case reports; economic evaluation; qualitative study; text and expert opinion papers; systematic reviews and research syntheses)
7. The Scottish Intercollegiate Guidelines Network (SIGN): SIGN methodology checklists (randomized controlled trial; cohort study; case-control study; diagnostic study; economic evaluation; systematic reviews and meta-analyses)
8. The Stroke Therapy Academic Industry Roundtable (STAIR) Group: CAMARADES tool (animal study)
9. The SYstematic Review Center for Laboratory animal Experimentation (SYRCLE): SYRCLE's RoB tool (animal study)
10. Sterne JAC et al.: ROBINS-I tool (non-randomised interventional study)
11. Slim K et al.: MINORS tool (non-randomised interventional study)
12. The Canada Institute of Health Economics (IHE): IHE quality appraisal tool (case series, interventional)
13. Wells GA et al.: Newcastle-Ottawa Scale (NOS) (cohort study; case-control study)
14. Downes MJ et al.: AXIS tool (cross-sectional study)
15. The Agency for Healthcare Research and Quality (AHRQ): AHRQ methodology checklist (cross-sectional/prevalence study)
16. Crombie I: Crombie's items (cross-sectional study)
17. The Good Research for Comparative Effectiveness (GRACE) Initiative: GRACE checklist (comparative effectiveness research)
18. Whiting PF et al.: QUADAS tool and QUADAS-2 tool (diagnostic accuracy study)
19. The National Institute for Clinical Excellence (NICE): NICE methodology checklist (economic evaluation)
20. The Cabinet Office: The Quality Framework, Cabinet Office checklist (qualitative study, social research)
21. Hayden JA et al.: QIPS tool (prediction study, predictor finding)
22. Wolff RF et al.: PROBAST (prediction study, prediction model)
23. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative: COSMIN RoB checklist (patient-reported outcome measure development; content validity; structural validity; internal consistency; cross-cultural validity/measurement invariance; reliability; measurement error; criterion validity; hypotheses testing for construct validity; responsiveness)
24. Shea BJ et al.: AMSTAR and AMSTAR 2 (systematic review)
25. The Decision Support Unit (DSU): DSU network meta-analysis (NMA) methodology checklist (network meta-analysis)
26. Whiting P et al.: ROBIS tool (systematic review)
27. Brouwers MC et al.: AGREE instrument and AGREE II instrument (clinical practice guideline)

Abbreviations: AMSTAR: A measurement tool to assess systematic reviews; AHRQ: Agency for Healthcare Research and Quality; AXIS: Appraisal tool for cross-sectional studies; CASP: Critical Appraisal Skills Programme; CAMARADES: The collaborative approach to meta-analysis and review of animal data from experimental studies; COSMIN: Consensus-based standards for the selection of health measurement instruments; DSU: Decision Support Unit; EPOC: The Effective Practice and Organisation of Care Group; GRACE: The Good Research for Comparative Effectiveness Initiative; IHE: Canada Institute of Health Economics; JBI: Joanna Briggs Institute; MINORS: Methodological index for non-randomized studies; NOS: Newcastle-Ottawa Scale; NMA: network meta-analysis; NIH: National Institutes of Health; NICE: National Institute for Clinical Excellence; PEDro: Physiotherapy Evidence Database; PROBAST: Prediction model risk of bias assessment tool; QUADAS: Quality assessment of diagnostic accuracy studies; QIPS: Quality in prognosis studies; RoB: risk of bias; ROBINS-I: Risk of bias in non-randomised studies - of interventions; ROBIS: Risk of bias in systematic review; SYRCLE: Systematic review center for laboratory animal experimentation; STAIR: Stroke therapy academic industry roundtable; SIGN: The Scottish Intercollegiate Guidelines Network
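
Where the review names a single first-choice tool, the mapping from study design to tool can be made concrete in code. The sketch below (ours, in Python) is a minimal illustration of that lookup, based only on the recommendations summarized in this review; the dictionary, function name, and key wording are our own shorthand, not an official registry or API.

```python
# Illustrative sketch only: first-choice appraisal tools as recommended
# in this review. Dictionary and function names are our own shorthand.
RECOMMENDED_TOOL = {
    "randomized controlled trial": "Cochrane RoB 2.0 tool",
    "animal intervention study": "SYRCLE's RoB tool",
    "non-randomized comparative intervention study": "ROBINS-I tool",
    "cohort study": "Newcastle-Ottawa Scale (NOS)",
    "case-control study": "Newcastle-Ottawa Scale (NOS)",
    "analytical cross-sectional study": "JBI critical appraisal checklist",
    "diagnostic test accuracy study": "QUADAS-2 tool",
    "systematic review / meta-analysis": "AMSTAR 2 (ROBIS is the most frequently recommended)",
    "network meta-analysis": "DSU NMA methodology checklist",
    "clinical practice guideline": "AGREE II instrument",
}

def recommend_tool(study_type: str) -> str:
    """Look up the first-choice tool for a study design, if one is named."""
    return RECOMMENDED_TOOL.get(
        study_type.strip().lower(),
        "no single first-choice tool; see Table 1 for candidates",
    )

print(recommend_tool("Cohort study"))  # Newcastle-Ottawa Scale (NOS)
```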

Tools for intervention studies

Randomized controlled trial (individual or cluster)

The first RCT was designed by Hill BA (1897–1991), and the RCT has remained the "gold standard" of experimental study design ever since [12, 13]. The Cochrane risk of bias tool for randomized trials (introduced in 2008 and revised on March 20, 2011) is the most commonly recommended tool for RCTs [9, 14] and is known as "RoB". On August 22, 2019, a revised version of this tool (RoB 2.0, first introduced in 2016) was published [15]. The RoB 2.0 tool is suitable for individually randomized, parallel-group, and cluster-randomized trials, and can be found on the dedicated website https://www.riskofbias.info/welcome/rob-2-0-tool . The RoB 2.0 tool consists of five bias domains and shows major changes compared with the original Cochrane RoB tool (Table S 1 A-B presents the major items of both versions).

The Physiotherapy Evidence Database (PEDro) scale is a specialized methodological assessment tool for RCTs in physiotherapy [16, 17]; it can be found at http://www.pedro.org.au/english/downloads/pedro-scale/ and covers 11 items (Table S 1 C). The Effective Practice and Organisation of Care (EPOC) Group is a Cochrane Review Group that has also developed a tool (the "EPOC RoB tool") for randomized trials of complex interventions. This tool has 9 items (Table S 1 D) and can be found at https://epoc.cochrane.org/resources/epoc-resources-review-authors . The Critical Appraisal Skills Programme (CASP) is part of the Oxford Centre for Triple Value Healthcare Ltd. (3V) portfolio, which provides resources and learning and development opportunities to support the development of critical appraisal skills in the UK ( http://www.casp-uk.net/ ) [18-20]. The CASP checklist for RCTs consists of three sections involving 11 items (Table S 1 E). The National Institutes of Health (NIH) also develops quality assessment tools for controlled intervention studies (Table S 1 F) to assess the methodological quality of RCTs ( https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools ).

The Joanna Briggs Institute (JBI) is an independent, international, not-for-profit research and development organization based in the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia ( https://joannabriggs.org/ ). It develops many critical appraisal checklists covering the feasibility, appropriateness, meaningfulness, and effectiveness of healthcare interventions. Table S 1 G presents the JBI critical appraisal checklist for RCTs, which includes 13 items.

The Scottish Intercollegiate Guidelines Network (SIGN) was established in 1993 ( https://www.sign.ac.uk/ ). Its objective is to improve the quality of health care for patients in Scotland by reducing variation in practice and outcomes, through developing and disseminating national clinical guidelines containing recommendations for effective practice based on current evidence. It too has developed critical appraisal checklists for assessing the methodological quality of different study types, including RCTs (Table S 1 H).

In addition, the Jadad Scale [ 21 ], Modified Jadad Scale [ 22 , 23 ], Delphi List [ 24 ], Chalmers Scale [ 25 ], National Institute for Clinical Excellence (NICE) methodology checklist [ 11 ], Downs & Black checklist [ 26 ], and other tools summarized by West et al. in 2002 [ 27 ] are not commonly used or recommended nowadays.

Animal study

Before clinical trials begin, the safety and effectiveness of new drugs are usually tested in animal models [28]; animal studies are therefore considered preclinical research and are of great significance [29, 30]. Likewise, the methodological quality of animal studies needs to be assessed [30]. In 1999, the initial "Stroke Therapy Academic Industry Roundtable (STAIR)" recommended criteria for assessing the quality of stroke animal studies [31]; this tool is also called "STAIR". In 2009, the STAIR Group updated the criteria and developed the "Recommendations for Ensuring Good Scientific Inquiry" [32]. In 2004, Macleod et al. [33] proposed a 10-point tool based on STAIR to assess the methodological quality of animal studies, known as "CAMARADES (The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies)"; the "S" originally stood for "Stroke" and now stands for "Studies" ( http://www.camarades.info/ ). In the CAMARADES tool, each item is worth a maximum of one point, giving a maximum total score of 10 points (Table S 1 J).

In 2008, the Systematic Review Center for Laboratory animal Experimentation (SYRCLE) was established in the Netherlands; in 2014 the team developed and released an RoB tool for animal intervention studies, SYRCLE's RoB tool, based on the original Cochrane RoB tool [34]. This tool contains 10 items and has become the most recommended tool for assessing the methodological quality of animal intervention studies (Table S 1 I).

Non-randomised studies

In clinical research, an RCT is not always feasible [35]; therefore, non-randomized designs remain important. In a non-randomised study (also called a quasi-experimental study), investigators control the allocation of participants to groups but do not use randomization [36]; follow-up studies belong to this category. Depending on whether a comparison group is present, non-randomized clinical intervention studies can be divided into comparative and non-comparative sub-types. The Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tool [37] is the preferentially recommended tool. It was developed to evaluate the risk of bias in estimating the comparative effectiveness (harm or benefit) of interventions in studies that do not use randomization to allocate units (individuals or clusters of individuals) to comparison groups. The JBI critical appraisal checklist for quasi-experimental studies (non-randomized experimental studies), which includes 9 items, is also suitable. Moreover, the methodological index for non-randomized studies (MINORS) [38] can be used; it contains 12 items, of which the first 8 apply to both non-comparative and comparative studies, while the last 4 apply only to studies with two or more groups. Each item is scored from 0 to 2, and the total (out of 16 or 24, respectively) gives an overall quality score (a scoring sketch follows below). Table S 1 K-L-M presents the major items of these three tools.
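
The MINORS arithmetic just described is simple enough to state exactly. The following is a minimal sketch in Python, assuming the scoring rules above; the function and variable names are our own, not part of the published instrument.

```python
# A minimal sketch of MINORS scoring: 12 items, each scored 0 (not reported),
# 1 (reported but inadequate), or 2 (reported and adequate). Non-comparative
# studies use only the first 8 items (max 16); comparative studies use all
# 12 (max 24). Names are our own illustration, not the published instrument.
from typing import Sequence

def minors_score(item_scores: Sequence[int], comparative: bool) -> tuple[int, int]:
    """Return (total, maximum) for a set of MINORS item scores."""
    n_items = 12 if comparative else 8
    if len(item_scores) != n_items:
        raise ValueError(f"expected {n_items} item scores, got {len(item_scores)}")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each MINORS item is scored 0, 1, or 2")
    return sum(item_scores), 2 * n_items

# Example: a comparative study with mostly adequate reporting.
total, maximum = minors_score([2, 2, 1, 2, 0, 2, 2, 1, 2, 2, 1, 2], comparative=True)
print(f"MINORS: {total}/{maximum}")  # MINORS: 19/24
```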

A non-randomized study with a separate control group may also be called a clinical controlled trial or a controlled before-and-after study. For this design, the EPOC RoB tool is suitable (see Table S 1 D). When using this tool, "random sequence generation" and "allocation concealment" should be scored as "high risk", while the other items can be graded in the same way as for a randomized trial.

Non-randomized studies without a separate control group include before-after (pre-post) studies, case series (uncontrolled longitudinal studies), and interrupted time series studies. A case series describes a series of individuals, who usually receive the same intervention, with no control group [9]. Several tools exist for assessing the methodological quality of case series studies. The most recent was developed in 2012 by Moga C et al. [39] at the Canada Institute of Health Economics (IHE) using a modified Delphi technique; hence, it is also called the "IHE Quality Appraisal Tool" (Table S 1 N). The NIH also provides a quality assessment tool for case series studies, comprising 9 items (Table S 1 O). For interrupted time series studies, the "EPOC RoB tool for interrupted time series studies" is recommended (Table S 1 P). For before-after studies, we recommend the NIH quality assessment tool for before-after (pre-post) studies with no control group (Table S 1 Q).

In addition, for non-randomized intervention studies, the Reisch tool (Check List for Assessing Therapeutic Studies) [11, 40], the Downs & Black checklist [26], and other tools summarized by Deeks et al. [36] are not commonly used or recommended nowadays.

Tools for observational studies and diagnostic study

Observational studies include cohort study, case-control study, cross-sectional study, case series, case reports, and comparative effectiveness research [ 41 ], and can be divided into analytical and descriptive studies [ 42 ].

Cohort study

Cohort study includes prospective cohort study, retrospective cohort study, and ambidirectional cohort study [ 43 ]. There are some tools for assessing the quality of cohort study, such as the CASP cohort study checklist (Table S 2 A), SIGN critical appraisal checklists for cohort study (Table S 2 B), NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), Newcastle-Ottawa Scale (NOS; Table S 2 D) for cohort study, and JBI critical appraisal checklist for cohort study (Table S 2 E). However, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for cohort study [ 11 ] are not commonly used or recommended nowadays.

The NOS [44, 45] arose from an ongoing collaboration between the University of Newcastle, Australia, and the University of Ottawa, Canada. Among the tools mentioned above, the NOS is the most commonly used today; it may also be modified to suit a specific topic.

Case-control study

A case-control study selects participants based on the presence of a specific disease or condition and looks back for earlier exposures that may have led to the disease or outcome [42]. It has an advantage over the cohort design: the problem of participant drop-out or loss to follow-up seen in cohort studies does not arise. Several acceptable tools exist for assessing the methodological quality of case-control studies, including the CASP case-control study checklist (Table S 2 F), the SIGN critical appraisal checklist for case-control studies (Table S 2 G), the NIH quality assessment tool for case-control studies (Table S 2 H), the JBI critical appraisal checklist for case-control studies (Table S 2 I), and the NOS for case-control studies (Table S 2 J). Among them, the NOS for case-control studies is the most frequently used and may be modified by users.

In addition, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for case-control study [ 11 ] are also not commonly used or recommended nowadays.

Cross-sectional study (analytical or descriptive)

Cross-sectional study is used to provide a snapshot of a disease and other variables in a defined population at a time point. It can be divided into analytical and purely descriptive types. Descriptive cross-sectional study merely describes the number of cases or events in a particular population at a time point or during a period of time; whereas analytic cross-sectional study can be used to infer relationships between a disease and other variables [ 46 ].

For assessing the quality of analytical cross-sectional studies, the NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), the JBI critical appraisal checklist for analytical cross-sectional studies (Table S 2 K), and the Appraisal tool for Cross-Sectional Studies (AXIS tool; Table S 2 L) [47] are recommended. The AXIS tool, developed in 2016 and containing 20 items, addresses study design and reporting quality as well as the risk of bias in cross-sectional studies. Of these three tools, the JBI checklist is the most preferred.

A purely descriptive cross-sectional study is usually used to measure disease prevalence and incidence, so critical appraisal tools for analytic cross-sectional studies are not appropriate here. Only a few quality assessment tools suit descriptive cross-sectional studies: the JBI critical appraisal checklist for studies reporting prevalence data [48] (Table S 2 M), the Agency for Healthcare Research and Quality (AHRQ) methodology checklist for cross-sectional/prevalence studies (Table S 2 N), and Crombie's items for assessing the quality of cross-sectional studies [49] (Table S 2 O). Among them, the JBI tool is the newest.

Case series and case reports

Unlike the interventional case series mentioned above, case reports and case series are used to report novel occurrences of a disease or a unique finding [50]; hence, they belong to descriptive studies. Here there is only one tool: the JBI critical appraisal checklist for case reports (Table S 2 P).

Comparative effectiveness research

Comparative effectiveness research (CER) compares real-world outcomes [51] of alternative treatment options available for a given medical condition. Its key elements are the study of effectiveness (effect in the real world) rather than efficacy (ideal effect), and the comparison of alternative strategies [52]. In 2010, the Good Research for Comparative Effectiveness (GRACE) Initiative was established and developed principles to help healthcare providers, researchers, journal readers, and editors evaluate the inherent quality of observational CER studies [41]. In 2016, a validated assessment tool, the GRACE Checklist v5.0 (Table S 2 Q), was released for assessing the quality of CER.

Diagnostic study

Diagnostic test accuracy (DTA) studies evaluate the tests clinicians use to identify whether a condition exists in a patient, so that an appropriate treatment plan can be developed [53]. DTA studies have several unique design features that differ from standard interventional and observational evaluations. In 2003, Whiting et al. [53, 54] developed a tool for assessing the quality of DTA studies, the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. In 2011, the revised "QUADAS-2" tool (Table S 2 R) was launched [55, 56]. The CASP diagnostic checklist (Table S 2 S), SIGN critical appraisal checklist for diagnostic studies (Table S 2 T), JBI critical appraisal checklist for diagnostic test accuracy studies (Table S 2 U), and the Cochrane risk of bias tool for diagnostic test accuracy (Table S 2 V) are also commonly useful in this field.

Of these, the Cochrane risk of bias tool ( https://methods.cochrane.org/sdt/ ) is based on the QUADAS tool, while the SIGN and JBI tools are based on the QUADAS-2 tool. The QUADAS-2 tool is the first-choice recommendation. Other relevant tools, reviewed by Whiting et al. [53] in 2004, are no longer used.

Tools for other primary medical studies

Health economic evaluation

Health economic evaluation research comparatively analyses alternative interventions with regard to their resource use, costs, and health effects [57]. It focuses on identifying, measuring, valuing, and comparing resource use, costs, and benefit/effect consequences for two or more alternative intervention options [58]. Health economic studies are increasingly popular, and their methodological quality also needs to be assessed before their findings are used. The first tool for this purpose was developed by Drummond and Jefferson in 1996 [59]; many tools have since been developed based on Drummond's items or their revision [60], such as the SIGN critical appraisal checklist for economic evaluations (Table S 3 A), the CASP economic evaluation checklist (Table S 3 B), and the JBI critical appraisal checklist for economic evaluations (Table S 3 C). NICE retains only one methodology checklist, for economic evaluation (Table S 3 D).

However, we regard the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement [61] as a reporting tool rather than a methodological quality assessment tool, so we do not recommend using it to assess the methodological quality of health economic evaluations.

Qualitative study

In healthcare, qualitative research aims to understand and interpret individual experiences, behaviours, interactions, and social contexts, so as to explain phenomena of interest, such as the attitudes, beliefs, and perspectives of patients and clinicians; the interpersonal nature of caregiver and patient relationships; the illness experience; and the impact of human suffering [62]. Compared with those for quantitative studies, assessment tools for qualitative studies are fewer. The CASP qualitative research checklist (Table S 3 E) is currently the most frequently recommended tool. The JBI critical appraisal checklist for qualitative research [63, 64] (Table S 3 F) and the Quality Framework: Cabinet Office checklist for social research [65] (Table S 3 G) are also suitable.

Prediction studies

Clinical prediction studies include predictor finding (prognostic factor) studies, prediction model studies (development, validation, and extension or updating), and prediction model impact studies [66]. For predictor finding studies, the Quality In Prognosis Studies (QIPS) tool [67] can be used to assess methodological quality (Table S 3 H). For prediction model impact studies, if the study uses a randomized comparative design, tools for RCTs can be used, especially the RoB 2.0 tool; if it uses a non-randomized comparative design, tools for non-randomized studies can be used, especially the ROBINS-I tool. For diagnostic and prognostic prediction model studies, the Prediction model Risk Of Bias Assessment Tool (PROBAST; Table S 3 I) [68] and the CASP clinical prediction rule checklist (Table S 3 J) are suitable.
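
As a worked restatement of the selection logic just described, the following is a minimal sketch in Python under our own naming; the function and labels are illustrative, not part of any published tool.

```python
# A minimal sketch of the tool-selection logic for clinical prediction
# studies described above; function name and labels are our own illustration.
from typing import Optional

def prediction_study_tool(kind: str, randomized: Optional[bool] = None) -> str:
    """Pick the preferred appraisal tool for a clinical prediction study."""
    if kind == "predictor finding":
        return "QIPS tool"
    if kind == "prediction model":  # development or validation
        return "PROBAST (or the CASP clinical prediction rule checklist)"
    if kind == "model impact":  # comparative impact studies
        return "Cochrane RoB 2.0 tool" if randomized else "ROBINS-I tool"
    raise ValueError(f"unknown prediction study kind: {kind!r}")

print(prediction_study_tool("model impact", randomized=False))  # ROBINS-I tool
```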

Text and expert opinion papers

Text and expert opinion-based evidence (also called “non-research evidence”) comes from expert opinions, consensus, current discourse, comments, and assumptions or assertions that appear in various journals, magazines, monographs and reports [ 69 – 71 ]. Nowadays, only the JBI has a critical appraisal checklist for the assessment of text and expert opinion papers (Table S 3 K).

Outcome measurement instruments

An outcome measurement instrument is a "device" used to collect a measurement. The term "instrument" is broad and can refer to a questionnaire (e.g., a patient-reported outcome measure such as quality of life), an observation (e.g., the result of a clinical examination), a scale (e.g., a visual analogue scale), a laboratory test (e.g., a blood test), or images (e.g., ultrasound or other medical imaging) [72, 73]. Measurements can be subjective or objective, and either unidimensional (e.g., attitude) or multidimensional. Currently, only one tool, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) Risk of Bias checklist [74-76] ( www.cosmin.nl/ ), is appropriate for assessing the methodological quality of outcome measurement instruments. Table S 3 L presents its major items, covering patient-reported outcome measure (PROM) development (Table S 3 LA), content validity (Table S 3 LB), structural validity (Table S 3 LC), internal consistency (Table S 3 LD), cross-cultural validity/measurement invariance (Table S 3 LE), reliability (Table S 3 LF), measurement error (Table S 3 LG), criterion validity (Table S 3 LH), hypotheses testing for construct validity (Table S 3 LI), and responsiveness (Table S 3 LJ).

Tools for secondary medical studies

Systematic review and meta-analysis

Systematic reviews and meta-analyses are popular methods for keeping up with the current medical literature [4-6]. Their ultimate purpose and value lie in promoting healthcare [6, 77, 78]. A meta-analysis is a statistical process for combining results from several studies, commonly as part of a systematic review [11]. Critical appraisal is, of course, necessary before using a systematic review or meta-analysis.

In 1988, Sacks et al. developed the first tool for assessing the quality of meta-analyses of RCTs, the Sack's Quality Assessment Checklist (SQAC) [79]; in 1991, Oxman and Guyatt developed another, the Overview Quality Assessment Questionnaire (OQAQ) [80, 81]. To overcome the shortcomings of these two tools, A Measurement Tool to Assess Systematic Reviews (AMSTAR) was developed from them in 2007 [82] ( http://www.amstar.ca/ ). However, the original AMSTAR instrument did not include an assessment of the risk of bias in non-randomised studies, and the expert group felt that revisions should address all aspects of the conduct of a systematic review. Hence, a new instrument for systematic reviews of randomised or non-randomised healthcare intervention studies, AMSTAR 2, was released in 2017 [83]; Table S 4 A presents its major items.

The CASP systematic review checklist (Table S 4 B), SIGN critical appraisal checklist for systematic reviews and meta-analyses (Table S 4 C), JBI critical appraisal checklist for systematic reviews and research syntheses (Table S 4 D), NIH quality assessment tool for systematic reviews and meta-analyses (Table S 4 E), the Decision Support Unit (DSU) network meta-analysis (NMA) methodology checklist (Table S 4 F), and the Risk of Bias in Systematic Review (ROBIS) tool [84] (Table S 4 G) are all suitable. Among them, AMSTAR 2 is the most commonly used and ROBIS is the most frequently recommended.

Among these tools, AMSTAR 2 is suitable for assessing systematic reviews and meta-analyses based on randomised or non-randomised interventional studies, the DSU NMA methodology checklist for network meta-analyses, and ROBIS for meta-analyses based on interventional, diagnostic test accuracy, clinical prediction, and prognostic studies.

Clinical practice guidelines

Clinical practice guidelines (CPGs) are well integrated into the thinking of practicing clinicians and professional clinical organizations [85-87] and help incorporate scientific evidence into clinical practice [88]. However, not all CPGs are evidence-based [89, 90], and their quality is uneven [91-93]. To date, more than 20 appraisal tools have been developed [94]. Among them, the Appraisal of Guidelines for Research and Evaluation (AGREE) instrument has the greatest potential to serve as a basis for developing an appraisal tool for clinical pathways [94]. The AGREE instrument was first released in 2003 [95] and updated to the AGREE II instrument in 2009 [96] ( www.agreetrust.org/ ). The AGREE II instrument is now the most recommended tool for CPGs (Table S 4 H).

In addition, based on AGREE II, the AGREE Global Rating Scale (AGREE GRS) Instrument [97] was developed as a short-item tool to evaluate the quality and reporting of CPGs.

Discussion and conclusions

Currently, EBM is widely accepted, and the major focus of healthcare workers lies in "going from evidence to recommendations" [98, 99]. Critical appraisal of evidence before use is therefore a key step in this process [100, 101]. In 1987, Mulrow CD [102] pointed out that medical reviews need to routinely use scientific methods to identify, assess, and synthesize information; performing a methodological quality assessment before using a study is thus necessary. However, although more than 20 years have passed since the first tools emerged, many users still confuse methodological quality with reporting quality. Some have used reporting checklists to assess methodological quality, for example the Consolidated Standards of Reporting Trials (CONSORT) statement [103] for RCTs or the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [104] for cohort studies. This phenomenon indicates that more universal education in clinical epidemiology is needed for medical students and professionals.

Methodological quality tools should be developed according to the characteristics of different study types. For this review, we searched the NICE, SIGN, Cochrane Library, and JBI websites using the terms "methodological quality", "risk of bias", "critical appraisal", "checklist", "scale", "items", and "assessment tool", and then added "systematic review", "meta-analysis", "overview", and "clinical practice guideline" to search PubMed. Compared with our previous systematic review [11], we found that some tools are recommended and remain in use, some are used without recommendation, and some have been abandoned [10, 29, 30, 36, 53, 94, 105-107]. These tools provide a significant impetus for clinical practice [108, 109].

In addition, compared with our previous systematic review [11], this review covers more tools, especially those developed after 2014, and the latest revisions; we also adjusted the classification of study types. First, in 2014 NICE provided seven methodology checklists, but it now retains (and has updated) only the checklist for economic evaluation. The Cochrane RoB 2.0 tool, AMSTAR 2 tool, CASP checklists, and most of the JBI critical appraisal checklists are all the newest revisions; the NIH quality assessment tool, ROBINS-I tool, EPOC RoB tool, AXIS tool, GRACE Checklist, PROBAST, COSMIN Risk of Bias checklist, and ROBIS tool are all newly released tools. Second, we introduced tools for network meta-analyses, outcome measurement instruments, text and expert opinion papers, prediction studies, qualitative studies, health economic evaluations, and CER. Third, we classified interventional studies into randomized and non-randomized sub-types, and further classified non-randomized studies by the presence or absence of a control group; we also classified cross-sectional studies into analytic and purely descriptive sub-types, and case series into interventional and observational sub-types. This classification is more objective and comprehensive.

The number of appropriate tools is largest for RCTs, followed by cohort studies; the JBI checklists have the widest applicable range [63, 64], followed closely by CASP. However, further efforts to develop appraisal tools remain necessary. For some study types, only one assessment tool is suitable, such as CER, outcome measurement instruments, text and expert opinion papers, case reports, and CPGs. For many other study types, such as overviews, genetic association studies, and cell studies, no proper assessment tool exists. Moreover, existing tools have not been universally accepted. Developing widely accepted tools therefore remains significant and important work [11].

Our review can help producers and users of systematic reviews, meta-analyses, and guidelines choose the best tool when producing or using evidence, and methodologists can draw research topics for developing new tools from it. Most importantly, we must remember that all assessment tools are subjective, and the results of applying them are influenced by the user's skill and knowledge. Users must therefore receive formal training (relevant epidemiological knowledge is necessary) and maintain a rigorous academic attitude, and at least two independent reviewers should be involved in evaluation and cross-checking to avoid performance bias [110].
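
The review recommends two independent reviewers but does not prescribe how to quantify their agreement. As one common illustration (our addition, not something the review specifies), Cohen's kappa corrects raw per-item agreement for chance agreement; a minimal sketch in Python follows, where the per-item judgments ("yes"/"no"/"unclear") are hypothetical.

```python
# A minimal sketch (our illustration): Cohen's kappa quantifies
# chance-corrected agreement between two independent reviewers rating the
# same items, e.g. per-item "yes"/"no"/"unclear" checklist judgments.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Expected chance agreement from each rater's marginal frequencies.
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b)
    )
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

reviewer1 = ["yes", "yes", "no", "unclear", "yes", "no", "yes", "yes"]
reviewer2 = ["yes", "no", "no", "unclear", "yes", "no", "yes", "unclear"]
print(round(cohens_kappa(reviewer1, reviewer2), 2))  # 0.61
```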

Supplementary information

Acknowledgements

The authors thank all the developers and technicians for their hard work in developing methodological quality assessment tools.

Abbreviations

AGREE GRS: AGREE Global Rating Scale
AGREE: Appraisal of Guidelines for Research and Evaluation
AHRQ: Agency for Healthcare Research and Quality
AMSTAR: A Measurement Tool to Assess Systematic Reviews
AXIS: Appraisal tool for cross-sectional studies
CAMARADES: The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies
CASP: Critical Appraisal Skills Programme
CER: Comparative effectiveness research
CHEERS: Consolidated Health Economic Evaluation Reporting Standards
CONSORT: Consolidated Standards of Reporting Trials
COSMIN: COnsensus-based Standards for the selection of health Measurement INstruments
CPG: Clinical practice guideline
DSU: Decision Support Unit
DTA: Diagnostic test accuracy
EBM: Evidence-based medicine
EPOC: The Effective Practice and Organisation of Care Group
GRACE: The Good Research for Comparative Effectiveness Initiative
IHE: Canada Institute of Health Economics
IOM: Institute of Medicine
JBI: Joanna Briggs Institute
MINORS: Methodological index for non-randomized studies
NICE: National Institute for Clinical Excellence
NIH: National Institutes of Health
NMA: Network meta-analysis
NOS: Newcastle-Ottawa Scale
OQAQ: Overview Quality Assessment Questionnaire
PEDro: Physiotherapy Evidence Database
PROBAST: Prediction model Risk Of Bias Assessment Tool
PROM: Patient-reported outcome measure
QIPS: Quality In Prognosis Studies
QUADAS: Quality Assessment of Diagnostic Accuracy Studies
RCT: Randomized controlled trial
RoB: Risk of bias
ROBINS-I: Risk Of Bias In Non-randomised Studies - of Interventions
ROBIS: Risk of Bias in Systematic review
SIGN: The Scottish Intercollegiate Guidelines Network
SQAC: Sack's Quality Assessment Checklist
STAIR: Stroke Therapy Academic Industry Roundtable
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology
SYRCLE: SYstematic Review Center for Laboratory animal Experimentation

Authors’ contributions

XTZ is responsible for the design of the study and review of the manuscript; LLM, ZHY, YYW, and DH contributed to the data collection; LLM, YYW, and HW contributed to the preparation of the article. All authors read and approved the final manuscript.

Funding

This work was supported (in part) by the Entrusted Project of the National Health Commission of China (No. [2019]099), the National Key Research and Development Plan of China (2016YFC0106300), and the Natural Science Foundation of Hubei Province (2019FFB03902). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors declare that there are no conflicts of interest in this study.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Contributor Information

Lin-Lu Ma, Email: 13598615285@163.com

Yun-Yun Wang, Email: 13545027094@163.com

Zhi-Hua Yang, Email: yangzhihuaxx@126.com

Di Huang, Email: 13163248347@163.com

Hong Weng, Email: wengh92@163.com

Xian-Tao Zeng, Email: zengxiantao1128@163.com, zengxiantao@whucebtm.com

Supplementary information accompanies this paper at 10.1186/s40779-020-00238-8.

12 Critical appraisal tools for qualitative research – towards 'fit for purpose'

Volume 27, Issue Suppl 2

  • Veronika Williams 1 ,
  • Anne-Marie Boylan 2 ,
  • Nikki Newhouse 2 ,
  • David Nunan 2
  • 1 Nipissing University, North Bay, Canada
  • 2 University of Oxford, Oxford, UK

Qualitative research has an important place within evidence-based health care (EBHC), contributing to policy on patient safety and quality of care, supporting understanding of the impact of chronic illness, and explaining contextual factors surrounding the implementation of interventions. However, the question of whether, when and how to critically appraise qualitative research persists. Whilst there is consensus that we cannot, and should not, simplistically adopt existing approaches for appraising quantitative methods, it is nonetheless crucial that we develop a better understanding of how to subject qualitative evidence to robust and systematic scrutiny in order to assess its trustworthiness and credibility.

Currently, most appraisal methods and tools for qualitative health research use one of two approaches: checklists or frameworks. We have previously outlined the specific issues with these approaches (Williams et al 2019). A fundamental challenge still to be addressed, however, is the lack of differentiation between methodological approaches when appraising qualitative health research. We do this routinely when appraising quantitative research: we have specific checklists and tools to appraise randomised controlled trials, diagnostic studies, observational studies and so on. Current checklists for qualitative research typically treat the entire paradigm as a single design (illustrated by titles of tools such as 'CASP Qualitative Checklist' and 'JBI checklist for qualitative research'), and frameworks tend to require substantial understanding of a given methodological approach without providing guidance on how they should be applied.

Given the fundamental differences in the aims and outcomes of different methodologies, such as ethnography, grounded theory, and phenomenological approaches, as well as in specific aspects of the research process, such as sampling, data collection and analysis, we cannot treat qualitative research as a single approach. Rather, we must strive to recognise core commonalities relating to rigour while considering key methodological differences. We have argued for a reconsideration of current approaches to the systematic appraisal of qualitative health research (Williams et al 2021), and propose the development of a tool or tools that allow differentiated evaluations of multiple methodological approaches rather than continuing to treat qualitative health research as a single, unified method. Here we propose a workshop for researchers interested in the appraisal of qualitative health research and invite them to develop an initial consensus regarding core aspects of a new appraisal tool that differentiates between qualitative research methodologies and thus provides a 'fit for purpose' tool for both educators and clinicians.

https://doi.org/10.1136/ebm-2022-EBMLive.36



  12. Qualitative Research

    Below are some additional Frequently Asked Questions about the Qualitative Research Checklist that have been ... Please review these carefully as you conduct critical appraisal using JBI tools. Articles on Qualitative Research Design & Methodology ... L., McCabe, C., Keogh, B., Brady, A., & McCann, M. (2020). An overview of the qualitative ...

  13. JBI critical appraisal checklist for qualitative research

    JBI-QARI is commonly used to assess the strengths and limitations of qualitative studies and consists of 10 items, all of which were rated as 'yes', 'no', 'unclear' and 'not applicable'. The ...

  14. PDF Checklist for Case Reports

    The systematic review is essentially an analysis of the available literature (that is, evidence) and a. judgment of the effectiveness or otherwise of a practice, involving a series of complex steps. JBI takes a. particular view on what counts as evidence and the methods utilised to synthesise those different types of. evidence.

  15. Qualitative Research Resources: Assessing Qualitative Research

    Includes critical appraisal checklists for key study designs; glossary of key research terms; key links related to evidence based healthcare, statistics, and research; a bibliography of articles and research papers about CASP and other critical appraisal tools and approaches 1993-2012. ... Critical Appraisal Checklist for Qualitative Research ...

  16. Methodological quality of case series studies: an introduction to the

    Results: The JBI critical appraisal tool for case series studies includes 10 questions addressing the internal validity and risk of bias of case series designs, particularly confounding, selection, and information bias, in addition to the importance of clear reporting. Conclusion: In certain situations, case series designs may represent the ...

  17. PDF Additional file 3. JBI Critical Appraisal Checklist for Qualitative

    5.Is there congruity between the research methodology and the interpretation of results? 6.Is there a statement locating the researcher culturally or theoretically? 7.Is the influence of the researcher on the research, and vice-versa, addressed? 8.Are participants, and their voices, adequately represented?

  18. Chapter 2: Systematic Reviews of Qualitative Evidence

    Te methodological quality of the selected 19 studies relevant to the inclusion criteria was assessed using Te Joanna Briggs Institute (JBI) Critical Appraisal tools; a checklist for analytical ...

  19. Appendix 7.3 Critical appraisal checklists for case series

    JBI Critical Appraisal Checklist for Case Series ... JBI, 2020. Available from ... 'A case series (also known as a clinical series) is a type of medical research study that tracks subjects with a known exposure, such as patients who have received a similar treatment, or examines their medical records for exposure and outcome.' Wikipedia ...

  20. PDF Critical appraisal of qualitative research: necessity, partialities and

    Consolidate Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10 Broadly speaking there are two types of crit-ical appraisal approaches for qualitative research: checklists and frameworks.

  21. Methodological quality (risk of bias) assessment tools for primary and

    Nowadays, the CASP qualitative research checklist (Table S3E) is the most frequently recommended tool for this issue. Besides, the JBI critical appraisal checklist for qualitative research [63, 64] (Table S3F) and the Quality Framework: Cabinet Office checklist for social research (Table S3G) are also suitable.

  22. 12 Critical appraisal tools for qualitative research

    Qualitative research has an important place within evidence-based health care (EBHC), contributing to policy on patient safety and quality of care, supporting understanding of the impact of chronic illness, and explaining contextual factors surrounding the implementation of interventions. However, the question of whether, when and how to critically appraise qualitative research persists ...