
How to Critique a Research Methodology

A research method is the specific procedure used to answer a set of research questions. Popular methods vary by field but include both qualitative and quantitative approaches. Qualitative approaches rely on observation and interpretation, while quantitative methods focus on numerical data and statistical analysis. Research methods should not be confused with research methodology, which is the study of research methods.

Identifying and Critiquing a Research Method

Find the research method in a research paper by looking for a section with this title (often simply "Methods"), which typically appears near the beginning of the paper, after the abstract and introduction. The description of the research method should include a rationale for why it was chosen.

Ask yourself whether the method used makes sense for answering the research questions. As a rule of thumb, research questions that seek to understand a phenomenon may be best answered with qualitative methods such as case studies or narrative approaches, while research questions that seek to describe or measure a phenomenon may be better suited to quantitative methods, such as experiments or surveys.

Match the research questions with the author’s conclusions. Make sure the research questions were answered specifically. Incomplete answers often indicate improper choice of research method.

Be aware of the most common methodological errors. Even when a specific method can answer the research questions, data disparities and questions that arise during research often lead scientists to redesign their studies. A completed study should therefore proceed logically from question to method to discussion and conclusions; if obvious questions are left unanswered, a methodological error may be the cause.

Examine the researcher’s conclusions from a broad perspective. Ask yourself if they make significant contributions to existing knowledge about the topic. For example, if a study of apples reveals that they have seeds, this would not be a significant finding. Studies that merely support existing knowledge can be helpful, but an overly basic study can be the result of an improper method.

  • Before critiquing any study, become familiar with the most common research methods in your specific field.
  • Critique a researcher's work based on what the work claims to be. It's unfair to critique any research based on what it isn't.
  • "Mass Communication Research and Theory"; Guido H. Stempel, III et al.; 2003

Robin Donovan has been a freelance health writer specializing in chronic illness and women's health since 2008. Her work has appeared in "Cincinnati Magazine," "Southeast Ohio" magazine, "Perspectives" magazine, the "Athens News" and other publications. She has a master's degree in journalism from Ohio University.

Methodological criticism and critical methodology

An analysis of Popper's critique of Marxian social science

  • Published: September 1979
  • Volume 10, pages 363–374 (1979)


  • Maurice A. Finocchiaro, Department of Philosophy, University of Nevada, Las Vegas, Nevada, USA


Methodological criticism may be defined as the critique of scientific practice in the light of methodological principles, and critical methodology as the study of proper methods of criticism; the problem is that of the interaction between the scientific methods which give methodological criticism its methodological character and the critical methods which give it its character of criticism. These ideas and this problem are illustrated by an examination of Karl Popper's critique of Marxian social science. It is argued that though Popper's favorable articulations of Marx are valuable, his unfavorable criticism is invalid, the grounds of my argument being certain ideas in critical methodology relating to the distinctions between theory and practice, between inaccurate and invalid criticism, and between the justification of favorable criticism and the justification of unfavorable criticism.





Finocchiaro, M.A. Methodological criticism and critical methodology. Zeitschrift für Allgemeine Wissenschaftstheorie 10, 363–374 (1979). https://doi.org/10.1007/BF01802358


The University of Melbourne

Which review is that? A guide to review types.

Methodological Review


A methodological review is "a type of systematic secondary research (i.e., research synthesis) which focuses on summarising the state-of-the-art methodological practices of research in a substantive field or topic" (Chong & Reinders, 2021).

Methodological reviews "can be performed to examine any methodological issues relating to the design, conduct and review of research studies and also evidence syntheses" (Munn et al., 2018).

Further Reading/Resources

Clarke, M., Oxman, A. D., Paulsen, E., Higgins, J. P. T., & Green, S. (2011). Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In Cochrane handbook for systematic reviews of interventions.

Aguinis, H., Ramani, R. S., & Alabduljader, N. (2023). Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organizational Research Methods, 26(1), 46-76. https://doi.org/10.1177/1094428120943281

Jha, C. K., & Kolekar, M. H. (2021). Electrocardiogram data compression techniques for cardiac healthcare systems: A methodological review. IRBM.

References

Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 1-9.

Chong, S. W., & Reinders, H. (2021). A methodological review of qualitative research syntheses in CALL: The state-of-the-art. System, 103, 102646.

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461),
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, article number 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1.

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed
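To see how such a trend count could be reproduced, the sketch below tallies yearly PubMed hits with Biopython's Entrez wrapper. This is a minimal illustration under stated assumptions, not the authors' actual search script: the email address is a placeholder, and the [Title/Abstract] and [dp] field tags mirror the keyword search described above.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder

def yearly_count(term: str, year: int) -> int:
    """Count PubMed records mentioning `term` in the title/abstract in `year`."""
    query = f'"{term}"[Title/Abstract] AND {year}[dp]'
    handle = Entrez.esearch(db="pubmed", term=query)
    count = int(Entrez.read(handle)["Count"])
    handle.close()
    return count

for year in range(2010, 2020):
    print(year,
          yearly_count("methodological review", year),
          yearly_count("meta-epidemiological study", year))
```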

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
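As a concrete sketch of the sampling options just described, the snippet below draws a simple random sample and a stratified sample from a hypothetical sampling frame of research reports; the frame, group labels and sample sizes are invented for illustration.

```python
import random

random.seed(2020)  # fix the seed so the sample is reproducible

# Hypothetical sampling frame: records retrieved by a systematic search.
frame = [{"id": i, "group": "cochrane" if i % 5 == 0 else "non_cochrane"}
         for i in range(1000)]

# Simple random sample of 100 research reports.
simple_sample = random.sample(frame, k=100)

# Stratified sample: 50 per group, so the smaller stratum is not
# underrepresented when the groups are compared.
stratified_sample = []
for group in ("cochrane", "non_cochrane"):
    stratum = [r for r in frame if r["group"] == group]
    stratified_sample.extend(random.sample(stratum, k=50))

print(len(simple_sample), len(stratified_sample))
```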

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters that narrow the search to a certain period or to study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
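A purposeful, journal-targeted search of this kind can often be expressed as a single PubMed query string, assembled as in the sketch below. The journal names, publication-type filter and date range are illustrative assumptions; [ta] and [dp] are PubMed's journal and date-of-publication field tags.

```python
# Build a journal-restricted PubMed query; all specifics are placeholders.
journals = ["Plast Reconstr Surg", "J Plast Reconstr Aesthet Surg"]
journal_filter = " OR ".join(f'"{j}"[ta]' for j in journals)
query = (f"randomized controlled trial[pt] AND ({journal_filter}) "
         f'AND ("2015/01/01"[dp] : "2019/12/31"[dp])')
print(query)
```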

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help to avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal can deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
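As a rough illustration of such a confidence interval approach (not El Dib et al.'s actual calculation), the snippet below computes the number of articles needed to estimate a proportion within a chosen margin of error, using the standard normal-approximation formula n = z²p(1 − p)/d².

```python
from math import ceil

def n_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Articles needed to estimate a proportion near `p` to within
    +/- `margin`, via the normal approximation n = z^2 * p * (1-p) / margin^2."""
    return ceil(z**2 * p * (1 - p) / margin**2)

# Expecting ~30% of trials to report an item, estimated to within +/- 5
# percentage points at 95% confidence (z = 1.96):
print(n_for_proportion(0.30, 0.05))  # -> 323
```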

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
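A minimal sketch of such an analysis, fitting a GEE with statsmodels on invented article-level data (the variable names and values are placeholders, not data from any cited study):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical extraction results: one row per article, clustered by journal.
df = pd.DataFrame({
    "adequate": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "year":     [2012, 2013, 2014, 2015] * 3,
    "journal":  ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# An exchangeable working correlation treats articles from the same journal
# as correlated rather than independent observations.
model = smf.gee("adequate ~ year", groups="journal", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```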

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. Even so, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
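One common way to monitor duplicate extraction is to quantify agreement between the two extractors, for instance with Cohen's kappa; this is a companion practice rather than something prescribed above. A small sketch with invented judgements, using scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical yes/no judgements by two independent extractors on the same
# ten articles (e.g. "was allocation concealment reported?").
extractor_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
extractor_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(extractor_a, extractor_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```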

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the meantime, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
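To make the statistical-adjustment option concrete, the sketch below fits a logistic regression of reporting completeness on funding while adjusting for journal endorsement of a guideline. The data frame is entirely invented and the model is illustrative, not a reproduction of any cited analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical article-level data: is industry funding associated with
# complete reporting once journal endorsement is adjusted for?
df = pd.DataFrame({
    "complete": [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "funded":   [1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    "endorses": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1],
})

model = smf.logit("complete ~ funded + endorses", data=df).fit()
print(model.summary())  # the `funded` coefficient is adjusted for `endorses`
```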

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. In all cases, investigators must ensure that their sample truly represents the target sample, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
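Such a comparison can be run as a two-proportion z-test. The sketch below uses statsmodels with made-up counts (not Tricco et al.'s data) to test the null hypothesis that the two proportions are equal.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts of reviews reporting positive findings per group.
positive = [58, 91]   # Cochrane, non-Cochrane
totals = [100, 120]

stat, p_value = proportions_ztest(count=positive, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.3f}")  # small p rejects equal proportions
```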

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig. 2.

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

1. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

2. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

3. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

4. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

5. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

6. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

7. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

8. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

9. Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

10. Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

11. Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

12. Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

13. Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

14. Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

15. Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemp Clin Trials. 2018;65:144–50.

16. Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

17. Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

18. Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

19. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

20. Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

21. The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information. Available at: https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/. Accessed 31 Aug 2020.

22. Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 Winter Simulation Conference; 2003. p. 66–70.

23. Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

24. Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mt Sinai J Med. 1996;63(3–4):216–24.

25. Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

26. Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

27. Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

28. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

29. The Cochrane Methodology Register, Issue 2, 2009. Available at: https://cmr.cochrane.org/help.htm. Accessed 31 Aug 2020.

30. Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

31. Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

32. Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

33. Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

34. Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

35. De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. Ann Intern Med. 2005;143(2):146–8.

36. Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

37. Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

38. Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

39. Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

40. El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

CAS   PubMed   PubMed Central   Google Scholar  

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7

Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research


Methodology or method? A critical review of qualitative case study reports

Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n = 12), social sciences and anthropology (n = 7), or methods (n = 15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and whether study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.

Case study research is an increasingly popular approach among qualitative researchers (Thomas, 2011). Several prominent authors have contributed to methodological developments, which has increased the popularity of case study approaches across disciplines (Creswell, 2013b; Denzin & Lincoln, 2011b; Merriam, 2009; Ragin & Becker, 1992; Stake, 1995; Yin, 2009). Current qualitative case study approaches are shaped by paradigm, study design, and selection of methods, and, as a result, case studies in the published literature vary. Differences between published case studies can make it difficult for researchers to define and understand case study as a methodology.

Experienced qualitative researchers have identified case study research as a stand-alone qualitative approach (Denzin & Lincoln, 2011b). Case study research has a level of flexibility that is not readily offered by other qualitative approaches such as grounded theory or phenomenology. Case studies are designed to suit the case and research question, and published case studies demonstrate wide diversity in study design. There are two popular case study approaches in qualitative research. The first, proposed by Stake (1995) and Merriam (2009), is situated in a social constructivist paradigm, whereas the second, by Yin (2012), Flyvbjerg (2011), and Eisenhardt (1989), approaches case study from a post-positivist viewpoint. Scholarship from both schools of inquiry has contributed to the popularity of case study and the development of theoretical frameworks and principles that characterize the methodology.

The diversity of case studies reported in the published literature, and on-going debates about credibility and the use of case study in qualitative research practice, suggest that differences in perspectives on case study methodology may prevent researchers from developing a mutual understanding of practice and rigour. In addition, discussion about case study limitations has led some authors to query whether case study is indeed a methodology (Luck, Jackson, & Usher, 2006; Meyer, 2001; Thomas, 2010; Tight, 2010). Methodological discussion of qualitative case study research is timely, and a review is required to analyse and understand how this methodology is applied in the qualitative research literature. The aims of this study were to review methodological descriptions of published qualitative case studies, to review how the case study methodological approach was applied, and to identify issues that need to be addressed by researchers, editors, and reviewers. An outline of the current definitions of case study and an overview of the issues proposed in the qualitative methodological literature are provided to set the scene for the review.

Definitions of qualitative case study research

Case study research is an investigation and analysis of a single or collective case, intended to capture the complexity of the object of study (Stake, 1995). Qualitative case study research, as described by Stake (1995), draws together “naturalistic, holistic, ethnographic, phenomenological, and biographic research methods” in a bricoleur design, or in his words, “a palette of methods” (Stake, 1995, pp. xi–xii). Case study methodology maintains deep connections to core values and intentions and is “particularistic, descriptive and heuristic” (Merriam, 2009, p. 46).

As a study design, case study is defined by interest in individual cases rather than the methods of inquiry used. The selection of methods is informed by researcher and case intuition and makes use of naturally occurring sources of knowledge, such as people or observations of interactions that occur in the physical space (Stake, 1998). Thomas (2011) suggested that “analytical eclecticism” is a defining factor (p. 512). Multiple data collection and analysis methods are adopted to further develop and understand the case, shaped by context and emergent data (Stake, 1995). This qualitative approach “explores a real-life, contemporary bounded system (a case) or multiple bounded systems (cases) over time, through detailed, in-depth data collection involving multiple sources of information … and reports a case description and case themes” (Creswell, 2013b, p. 97). Case study research has been defined by the unit of analysis, the process of study, and the outcome or end product, all essentially the case (Merriam, 2009).

The case is an object to be studied for an identified reason that is peculiar or particular. Classification of the case and case selection procedures inform development of the study design and clarify the research question. Stake (1995) proposed three types of cases and study design frameworks: the intrinsic case, the instrumental case, and the collective instrumental case. The intrinsic case is used to understand the particulars of a single case, rather than what it represents. An instrumental case study provides insight on an issue or is used to refine theory; the case is selected to advance understanding of the object of interest. A collective instrumental case is studied as multiple, nested cases, observed in unison, parallel, or sequential order. More than one case can be studied simultaneously; however, each case study is a concentrated, single inquiry, studied holistically in its own entirety (Stake, 1995, 1998).

Researchers who use case study are urged to seek out what is common and what is particular about the case. This involves careful and in-depth consideration of the nature of the case, historical background, physical setting, and other institutional and political contextual factors (Stake, 1998). An interpretive or social constructivist approach to qualitative case study research supports a transactional method of inquiry, where the researcher has a personal interaction with the case. The case is developed in a relationship between the researcher and informants, and presented to engage the reader, inviting them to join in this interaction and in case discovery (Stake, 1995). A postpositivist approach to case study involves developing a clear case study protocol with careful consideration of validity and potential bias, which might involve an exploratory or pilot phase, and ensures that all elements of the case are measured and adequately described (Yin, 2009, 2012).

Current methodological issues in qualitative case study research

The future of qualitative research will be influenced and constructed by the way research is conducted, and by what is reviewed and published in academic journals (Morse, 2011). If case study research is to further develop as a principal qualitative methodological approach, and make a valued contribution to the field of qualitative inquiry, issues related to methodological credibility must be considered. Researchers are required to demonstrate rigour through adequate descriptions of methodological foundations. Case studies published without sufficient detail for the reader to understand the study design, and without rationale for key methodological decisions, may lead to research being interpreted as lacking in quality or credibility (Hallberg, 2013; Morse, 2011).

There is a level of artistic license that is embraced by qualitative researchers and distinguishes practice, which nurtures creativity, innovation, and reflexivity (Denzin & Lincoln, 2011b; Morse, 2009). Qualitative research is “inherently multimethod” (Denzin & Lincoln, 2011a, p. 5); however, with this creative freedom, it is important for researchers to provide adequate description for methodological justification (Meyer, 2001). This includes the paradigm and theoretical perspectives that have influenced study design. Without adequate description, the study design might not be understood by the reader, and can appear to be dishonest or inaccurate. Reviewers and readers might be confused by inconsistent or inappropriate terms used to describe the case study research approach and methods, and be distracted from important study findings (Sandelowski, 2000). This issue extends beyond case study research, and others have noted inconsistencies in reporting of methodology and method by qualitative researchers. Sandelowski (2000, 2010) argued for accurate identification of qualitative description as a research approach. She recommended that the selected methodology should be harmonious with the study design, and be reflected in methods and analysis techniques. Similarly, Webb and Kevern (2000) uncovered inconsistencies in qualitative nursing research with focus group methods, recommending that methodological procedures must cite seminal authors and be applied with respect to the selected theoretical framework. Incorrect labelling using case study might stem from the flexibility of case study design and its non-directional character relative to other approaches (Rosenberg & Yates, 2007). Methodological integrity is required in the design of qualitative studies, including case study, to ensure study rigour and to enhance the credibility of the field (Morse, 2011).

Case study has been unnecessarily devalued by comparisons with statistical methods (Eisenhardt, 1989; Flyvbjerg, 2006, 2011; Jensen & Rodgers, 2001; Piekkari, Welch, & Paavilainen, 2009; Tight, 2010; Yin, 1999). It is reputed to be “the weak sibling” in comparison to other, more rigorous, approaches (Yin, 2009, p. xiii). Case study is not an inherently comparative approach to research. The objective is not statistical research, and the aim is not to produce outcomes that are generalizable to all populations (Thomas, 2011). Comparisons between case study and statistical research do little to advance this qualitative approach, and fail to recognize its inherent value, which can be better understood from the interpretive or social constructionist viewpoint of other authors (Merriam, 2009; Stake, 1995). Building on discussions relating to “fuzzy” generalizations (Bassey, 2001), naturalistic generalizations (Stake, 1978), or the transference of concepts and theories (Ayres, Kavanaugh, & Knafl, 2003; Morse et al., 2011) would have more relevance.

Case study research has been used as a catch-all design to justify or add weight to fundamental qualitative descriptive studies that do not fit with other traditional frameworks (Merriam, 2009). A case study has been a “convenient label for our research—when we ‘can't think of anything better’—in an attempt to give it [qualitative methodology] some added respectability” (Tight, 2010, p. 337). Qualitative case study research is a pliable approach (Merriam, 2009; Meyer, 2001; Stake, 1995), and has been likened to a “curious methodological limbo” (Gerring, 2004, p. 341) or “paradigmatic bridge” (Luck et al., 2006, p. 104) that sits on the borderline between postpositivist and constructionist interpretations. This has resulted in inconsistency in application, which indicates that flexibility comes with limitations (Meyer, 2001), and the open nature of case study research might be off-putting to novice researchers (Thomas, 2011). The development of a well-(in)formed theoretical framework to guide a case study should improve consistency, rigour, and trust in studies published in qualitative research journals (Meyer, 2001).

Assessment of rigour

The purpose of this study was to analyse the methodological descriptions of case studies published in qualitative methods journals. To do this we needed a suitable framework, which used existing, established criteria for appraising the rigour of qualitative case study research (Creswell, 2013b; Merriam, 2009; Stake, 1995). A number of qualitative authors have developed concepts and criteria that are used to determine whether a study is rigorous (Denzin & Lincoln, 2011b; Lincoln, 1995; Sandelowski & Barroso, 2002). The criteria proposed by Stake (1995) provide a framework for readers and reviewers to make judgements regarding case study quality, and identify key characteristics essential for good methodological rigour. Although each of the factors listed in Stake's criteria could enhance the quality of a qualitative research report, in Table I we present the adapted criteria used in this study, which integrate more recent work by Merriam (2009) and Creswell (2013b). Stake's (1995) original criteria were separated into two categories. The first, “relevant for all qualitative research,” lists general criteria; the second, “high relevance to qualitative case study research,” lists the criteria we decided had higher relevance to case study research. This second list was the main criteria used to assess the methodological descriptions of the case studies reviewed. The complete table has been preserved so that the reader can determine how the original criteria were adapted.

Table I. Framework for assessing quality in qualitative case study research. Adapted from Stake (1995, p. 131).

Study design

The critical review method described by Grant and Booth (2009) was used; it is appropriate for the assessment of research quality and is used for literature analysis to inform research and practice. This type of review goes beyond the mapping and description of scoping or rapid reviews to include “analysis and conceptual innovation” (Grant & Booth, 2009, p. 93). A critical review is used to develop existing, or produce new, hypotheses or models. This is different to systematic reviews that answer clinical questions: a critical review is used to evaluate existing research and competing ideas, to provide a “launch pad” for conceptual development and “subsequent testing” (Grant & Booth, 2009, p. 93).

Qualitative methods journals were located by a search of the 2011 ISI Journal Citation Reports in Social Science, via the database Web of Knowledge (see m.webofknowledge.com). No “qualitative research methods” category existed in the citation reports; therefore, a search of all categories was performed using the term “qualitative.” In Table II, we present the qualitative methods journals located, ranked by impact factor. The highest ranked journals were selected for searching. We acknowledge that the impact factor ranking system might not be the best measure of journal quality (Cheek, Garnham, & Quan, 2006); however, this was the most appropriate and accessible method available.


Search strategy

In March 2013, searches of the journals Qualitative Health Research, Qualitative Research, and Qualitative Inquiry were completed to retrieve studies with “case study” in the abstract field. The search was limited to the past 5 years (1 January 2008 to 1 March 2013). The objective was to locate published qualitative case studies suitable for assessment using the adapted criteria. Viewpoints, commentaries, and other article types were excluded from review. Titles and abstracts of the 45 retrieved articles were read by the first author, who identified 34 empirical case studies for review. All authors reviewed the 34 studies to confirm selection and categorization. In Table III, we present the 34 case studies grouped by journal and categorized by research topic, including health sciences, social sciences and anthropology, and methods research. There was a discrepancy in the categorization of one article on pedagogy and a new teaching method published in Qualitative Inquiry (Jorrín-Abellán, Rubia-Avi, Anguita-Martínez, Gómez-Sánchez, & Martínez-Mones, 2008). Consensus was to allocate it to the methods category.
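For readers who want to see the screening logic laid out explicitly, the inclusion rules described above (publication window, “case study” in the abstract, exclusion of viewpoints and commentaries) can be expressed as a short script. The sketch below is purely illustrative: the record fields and function names are our assumptions for demonstration, not part of the authors' actual workflow, and the real inclusion decisions were made by human reviewers reading titles and abstracts.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record structure; field names are assumptions,
# not the review's actual data extraction form.
@dataclass
class Record:
    title: str
    abstract: str
    journal: str
    published: date
    article_type: str  # e.g., "empirical", "viewpoint", "commentary"

# Inclusion criteria as stated in the search strategy.
SEARCH_WINDOW = (date(2008, 1, 1), date(2013, 3, 1))
EXCLUDED_TYPES = {"viewpoint", "commentary"}

def eligible(record: Record) -> bool:
    """Apply the stated inclusion criteria to a single retrieved record."""
    in_window = SEARCH_WINDOW[0] <= record.published <= SEARCH_WINDOW[1]
    mentions_case_study = "case study" in record.abstract.lower()
    included_type = record.article_type not in EXCLUDED_TYPES
    return in_window and mentions_case_study and included_type

def screen(records: list[Record]) -> list[Record]:
    """Return the subset of retrieved records retained for full review."""
    return [r for r in records if eligible(r)]
```

Applied to the 45 retrieved articles, a filter of this kind narrows the pool mechanically; the step from 45 records to the 34 included studies still depended on the reviewers' judgement of whether each article was an empirical case study.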

Table III. Outcomes of the search of qualitative methods journals.

Table III reports the number of studies located and the final numbers selected for review. Qualitative Health Research published the most empirical case studies (n = 16). In the health category, there were 12 case studies of health conditions, health services, and health policy issues, all published in Qualitative Health Research. Seven case studies were categorized as social sciences and anthropology research, which combined case study with biography and ethnography methodologies. All three journals published case studies on methods research to illustrate a data collection or analysis technique, methodological procedure, or related issue.

The methodological descriptions of the 34 case studies were critically reviewed using the adapted criteria. All articles reviewed contained a description of study methods; however, the length, amount of detail, and position of the description in the article varied. Few studies provided an accurate description of, and rationale for, using a qualitative case study approach. Of the 34 case studies reviewed, three described a theoretical framework informed by Stake (1995), two by Yin (2009), and three provided a mixed framework informed by various authors, which might have included both Yin and Stake. Few studies described their case study design, or included a rationale that explained why they excluded or added further procedures, and whether this was to enhance the study design or to better suit the research question. In 26 of the studies, no reference was provided to principal case study authors. From reviewing the descriptions of methods, few authors provided a description or justification of case study methodology that demonstrated how their study was informed by the methodological literature on this approach.

The methodological descriptions of each study were reviewed using the adapted criteria, and the following issues were identified: case study methodology or method; case of something particular and case selection; contextually bound case study; researcher and case interactions and triangulation; and study design inconsistent with methodology. An outline of how the issues were developed from the critical review is provided, followed by a discussion of how these relate to the current methodological literature.

Case study methodology or method

A third of the case studies reviewed appeared to use a case report method, not case study methodology as described by principal authors (Creswell, 2013b; Merriam, 2009; Stake, 1995; Yin, 2009). These studies were identified as case reports because of missing methodological detail and from review of the study aims and purpose. The reports presented data for small samples of no more than three people, places, or phenomena. Four of these “case reports” were single cases selected retrospectively from larger studies (Bronken, Kirkevold, Martinsen, & Kvigne, 2012; Coltart & Henwood, 2012; Hooghe, Neimeyer, & Rober, 2012; Roscigno et al., 2012). Case reports were not a case of something; instead, they were a case demonstration or an example presented in a report. These reports presented outcomes and reported on how the case could be generalized. Descriptions focussed on the phenomena rather than the case itself, and did not appear to study the case in its entirety.

Case reports had minimal in-text references to case study methodology, and were informed by other qualitative traditions or secondary sources (Adamson & Holloway, 2012; Buzzanell & D'Enbeau, 2009; Nagar-Ron & Motzafi-Haller, 2011). This does not suggest that case study methodology cannot be multimethod; however, the methodology should be consistent in design, be clearly described (Meyer, 2001; Stake, 1995), and maintain focus on the case (Creswell, 2013b).

To demonstrate how case reports were identified, three examples are provided. In the first, Yeh (2013) described the study as follows: “the examination of the emergence of vegetarianism in Victorian England serves as a case study to reveal the relationships between boundaries and entities” (p. 306). The findings were a historical case report, which resulted from an ethnographic study of vegetarianism. Cunsolo Willox, Harper, Edge, ‘My Word’: Storytelling and Digital Media Lab, and Rigolet Inuit Community Government (2013) used “a case study that illustrates the usage of digital storytelling within an Inuit community” (p. 130). This case study reported how digital storytelling can be used with indigenous communities as a participatory method, to illuminate the benefits of this method for other studies. The “case study was conducted in the Inuit community” but did not include the Inuit community in the case analysis (Cunsolo Willox et al., 2013, p. 130). Bronken et al. (2012) provided a single case report to demonstrate issues observed in a larger clinical study of aphasia and stroke, without adequate case description or analysis.

Case study of something particular and case selection

Case selection is a precursor to case analysis, which needs to be presented as a convincing argument (Merriam, 2009). Descriptions of the case were often not adequate to ascertain why the case was selected, or whether it was a particular exemplar or outlier (Thomas, 2011). In a number of case studies in the health and social science categories, it was not explicit whether the case was of something particular, or peculiar to their discipline or field (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson, Botelho, Welch, Joseph, & Tennstedt, 2012; Mawn et al., 2010; Snyder-Young, 2011). There were exceptions in the methods category (Table III), where cases were selected by researchers to report on a new or innovative method. The cases emerged through heuristic study, and were reported to be particular relative to the existing methods literature (Ajodhia-Andrews & Berman, 2009; Buckley & Waring, 2013; Cunsolo Willox et al., 2013; De Haene, Grietens, & Verschueren, 2010; Gratton & O'Donnell, 2011; Sumsion, 2013; Wimpenny & Savin-Baden, 2012).

Case selection processes were sometimes insufficient to understand why the case was selected from the global population of cases, or what study of this case would contribute to knowledge as compared with other possible cases (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson et al., 2012; Mawn et al., 2010). In two studies, local cases were selected (Barone, 2010; Fourie & Theron, 2012) because the researcher was familiar with and had access to the case; possible limitations of a convenience sample were not acknowledged. In one study, purposeful sampling was used to recruit participants within the case, but not to select the case itself (Gallagher et al., 2013). Random sampling was used for case selection in two studies (Colón-Emeric et al., 2010; Jackson et al., 2012), which has limited meaning in interpretive qualitative research.

To demonstrate how researchers provided good justification for case selection, four examples are provided. In the first, cases of residential care homes were selected because of reported occurrences of mistreatment, which included residents being locked in rooms at night (Rytterström, Unosson, & Arman, 2013). Roscigno et al. (2012) selected cases of parents who were admitted for early hospitalization in neonatal intensive care with a threatened preterm delivery before 26 weeks. Hooghe et al. (2012) used random sampling to select 20 couples that had experienced the death of a child; however, the case study was of one couple and a particular metaphor described only by them. In the final example, Coltart and Henwood (2012) provided a detailed account of how they selected two cases from a sample of 46 fathers, based on personal characteristics and beliefs, and described how the analysis of the two cases would contribute to their larger study on first-time fathers and parenting.

Contextually bound case study

The limits or boundaries of the case are a defining factor of case study methodology (Merriam, 2009; Ragin & Becker, 1992; Stake, 1995; Yin, 2009). Adequate contextual description is required to understand the setting or context in which the case is revealed. In the health category, case studies were used to illustrate a clinical phenomenon or issue such as compliance and health behaviour (Colón-Emeric et al., 2010; D'Enbeau, Buzzanell, & Duckworth, 2010; Gallagher et al., 2013; Hooghe et al., 2012; Jackson et al., 2012; Roscigno et al., 2012). In these case studies, contextual boundaries, such as physical and institutional descriptions, were not sufficient to understand the case as a holistic system, for example, the general practitioner (GP) clinic in Gallagher et al. (2013), or the nursing home in Colón-Emeric et al. (2010). Similarly, in the social science and methods categories, attention was paid to some components of the case context but not others, missing important information required to understand the case as a holistic system (Alexander, Moreira, & Kumar, 2012; Buzzanell & D'Enbeau, 2009; Nairn & Panelli, 2009; Wimpenny & Savin-Baden, 2012).

In two studies, vicarious experience or vignettes (Nairn & Panelli, 2009) and images (Jorrín-Abellán et al., 2008) were effective in supporting the description of context, and might have been a useful addition for other case studies. Missing contextual boundaries suggest that the case might not be adequately defined. Additional information, such as the physical, institutional, political, and community context, would improve understanding of the case (Stake, 1998). In Boxes 1 and 2, we present brief synopses of two reviewed studies that demonstrated a well-bounded case. In Box 1, Ledderer (2011) used a qualitative case study design informed by Stake's tradition; in Box 2, Gillard, Witt, and Watts (2011) were informed by Yin's tradition. By providing brief outlines of these case studies, we demonstrate how effective case boundaries can be constructed and reported, which may be of particular interest to prospective case study researchers.

Box 1. Article synopsis of case study research using Stake's tradition

Ledderer (2011) used a qualitative case study research design, informed by modern ethnography. The study is bounded to 10 general practice clinics in Denmark, which had received federal funding to implement preventative care services based on a Motivational Interviewing intervention. The research question focussed on “why is it so difficult to create change in medical practice?” (Ledderer, 2011, p. 27). The study context was adequately described, providing detail on the general practitioner (GP) clinics and relevant political and economic influences. Methodological decisions are described in a first-person narrative, providing insight into researcher perspectives and interaction with the case. Forty-four interviews were conducted, which focussed on how GPs conducted consultations (their form, nature, and content) rather than asking for their opinions or experiences (Ledderer, 2011, p. 30). The duration and intensity of researcher immersion in the case enhanced the depth of description and the trustworthiness of study findings. Analysis was consistent with Stake's tradition, and the researcher provided examples of inquiry techniques used to challenge assumptions about emerging themes. Several other seminal qualitative works were cited. The themes and typology constructed are rich in narrative data and storytelling by clinic staff, demonstrating individual clinic experiences as well as shared meanings and understandings about changing from a biomedical to a psychological approach to preventative health intervention. The conclusions note social and cultural meanings and lessons learned, which might not have been uncovered using a different methodology.

Box 2. Article synopsis of case study research using Yin's tradition

Gillard et al.'s (2011) study of camps for adolescents living with HIV/AIDS provided a good example of Yin's interpretive case study approach. The context of the case is bounded by the three summer camps with which the researchers had prior professional involvement. A case study protocol was developed that used multiple methods to gather information at three data collection points, coinciding with three youth camps (Teen Forum, Discover Camp, and Camp Strong). Gillard and colleagues followed Yin's (2009) principles, using a consistent data protocol that enhanced cross-case analysis. Data described the young people, the camp physical environment, the camp schedule, objectives and outcomes, and the staff of the three youth camps. The findings provided a detailed description of the context, with less detail on individual participants, including insight into the researchers' interpretations and methodological decisions throughout the data collection and analysis process. The findings give the reader a sense of “being there,” and are discovered through constant comparison of the case with the research issues; the case is the unit of analysis. There is evidence of researcher immersion in the case, and Gillard reports spending significant time in the field in a naturalistic and integrated youth mentor role.

This case study is not intended to have a significant impact on broader health policy, although it does have implications for health professionals working with adolescents. The study conclusions will inform future camps for young people with chronic disease, and practitioners are able to compare similarities between this case and their own practice (for knowledge translation). No limitations of this article were reported. Limitations related to publication of this case study were that it was 20 pages long and used three tables to provide sufficient description of the camp and program components, and their relationships with the research issue.

Researcher and case interactions and triangulation

Researcher and case interactions and transactions are a defining feature of case study methodology (Stake, 1995). Narrative stories, vignettes, and thick description are used to provoke vicarious experience and a sense of being there with the researcher in their interaction with the case. Few of the case studies reviewed provided details of the researcher's relationship with the case, researcher–case interactions, and how these influenced the development of the case study (Buzzanell & D'Enbeau, 2009; D'Enbeau et al., 2010; Gallagher et al., 2013; Gillard et al., 2011; Ledderer, 2011; Nagar-Ron & Motzafi-Haller, 2011). The role and position of the researcher need to be self-examined and understood by readers, in order to understand how these influenced interactions with participants and to determine what triangulation is needed (Merriam, 2009; Stake, 1995).

Gillard et al. (2011) provided a good example of triangulation, comparing data sources in a table (p. 1513). Triangulation of sources was used to reveal as much depth as possible in the study by Nagar-Ron and Motzafi-Haller (2011), while also enhancing confirmation validity. Several case studies would have benefited from an improved range and use of data sources, and from descriptions of researcher–case interactions (Ajodhia-Andrews & Berman, 2009; Bronken et al., 2012; Fincham, Scourfield, & Langer, 2008; Fourie & Theron, 2012; Hooghe et al., 2012; Snyder-Young, 2011; Yeh, 2013).

Study design inconsistent with methodology

Good, rigorous case studies require a strong methodological justification (Meyer, 2001) and a logical and coherent argument that defines paradigm, methodological position, and selection of study methods (Denzin & Lincoln, 2011b). Methodological justification was insufficient in several of the studies reviewed (Barone, 2010; Bronken et al., 2012; Hooghe et al., 2012; Mawn et al., 2010; Roscigno et al., 2012; Yeh, 2013). This was judged by the absence of, or inadequate or inconsistent in-text reference to, case study methodology.

In six studies, the methodological justification provided did not relate to case study, and common issues were identified. Secondary sources were used as primary methodological references, indicating that the study design might not have been theoretically sound (Colón-Emeric et al., 2010; Coltart & Henwood, 2012; Roscigno et al., 2012; Snyder-Young, 2011). Authors and sources cited in methodological descriptions were inconsistent with the actual study design and practices used (Fourie & Theron, 2012; Hooghe et al., 2012; Jorrín-Abellán et al., 2008; Mawn et al., 2010; Rytterström et al., 2013; Wimpenny & Savin-Baden, 2012). This occurred when researchers cited Stake or Yin, or both (Mawn et al., 2010; Rytterström et al., 2013), but did not follow their paradigmatic or methodological approach. In 26 studies, there were no citations for a case study methodological approach.

The findings of this study have highlighted a number of issues for researchers. A considerable number of the case studies reviewed were missing key elements that define qualitative case study methodology and the tradition cited. A significant number of studies did not provide a clear methodological description or justification relevant to case study. Case studies in health and social sciences did not provide sufficient information for the reader to understand case selection, and why this case was chosen above others. The contexts of the cases were not described in adequate detail to understand all relevant elements of the case context, which indicated that the cases may not have been contextually bounded. There were inconsistencies between the reported methodology, study design, and paradigmatic approach in the case studies reviewed, which made it difficult to understand the study methodology and theoretical foundations. These issues have implications for methodological integrity and honesty when reporting study design, which are values of the qualitative research tradition and are ethical requirements (Wager & Kleinert, 2010a). Poor methodological descriptions may lead the reader to misinterpret or discredit study findings, which limits the impact of the study and, collectively, hinders advancement of the broader qualitative research field.

The issues highlighted in our review build on current debates in the case study literature, and on queries about the value of this methodology. Case study research can be situated within different paradigms or designed with an array of methods. In order to maintain the creativity and flexibility that are valued in this methodology, clearer descriptions of paradigm, theoretical position, and methods should be provided so that study findings are not undervalued or discredited. Case study research is an interdisciplinary practice, which means that clear methodological descriptions might be more important for this approach than for methodologies that are predominantly driven by fewer disciplines (Creswell, 2013b).

Authors frequently omit elements of methodologies and include others to strengthen study design, and we do not propose a rigid or purist ideology in this paper. On the contrary, we encourage new ideas about using case study, together with adequate reporting, which will advance the value and practice of case study. The implication of unclear methodological descriptions in the studies reviewed was that study design appeared to be inconsistent with the reported methodology, and that key elements required for making judgements of rigour were missing. It was not clear whether deviations from methodological tradition were made by researchers to strengthen the study design, or because of misinterpretations. Morse (2011) recommended that innovations and deviations from practice are best made by experienced researchers, and that a novice might be unaware of the issues involved in making these changes. To perpetuate the tradition of case study research, applications in the published literature should be consistent with traditional methodological constructions, and deviations should be described with a rationale that is inherent in study conduct and findings. Providing methodological descriptions that demonstrate a strong theoretical foundation and coherent study design will add credibility to the study, while ensuring the intrinsic meaning of case study is maintained.

The value of this review is that it contributes to the discussion of whether case study is a methodology or a method. We propose possible reasons why researchers might make this misinterpretation. Researchers may interchange the terms methods and methodology, and conduct research without adequate attention to epistemology and historical tradition (Carter & Little, 2007; Sandelowski, 2010). If the rich meaning that naming a qualitative methodology brings to the study is not recognized, a case study might appear to be inconsistent with the traditional approaches described by principal authors (Creswell, 2013a; Merriam, 2009; Stake, 1995; Yin, 2009). If case studies are not methodologically and theoretically situated, then they might appear to be case reports.

Case reports are promoted by universities and medical journals as a method of reporting on medical or scientific cases; guidelines for case reports are publicly available on websites ( http://www.hopkinsmedicine.org/institutional_review_board/guidelines_policies/guidelines/case_report.html ). The various case report guidelines provide general criteria for case reports, which state that this form of report does not meet the criteria for research, is used for retrospective analysis of up to three clinical cases, and is primarily illustrative and educational in purpose. Case reports can be published in academic journals but do not require approval from a human research ethics committee. Traditionally, a case report describes a single case to explain how and what occurred in a selected setting, for example, to illustrate a new phenomenon that has emerged from a larger study. A case report is not necessarily particularistic, nor the study of a case in its entirety, and the larger study would usually be guided by a different research methodology.

This description of a case report is similar to what was provided in some of the studies reviewed. This form of report lacks methodological grounding and the qualities of research rigour. The case report has publication value in demonstrating an example and in disseminating knowledge (Flanagan, 1999). However, case reports have a different meaning and purpose from case studies, and the two need to be distinguished. The findings of our review suggest that the medical understanding of a case report has been confused with qualitative case study approaches.

In this review, a number of case studies did not have methodological descriptions that included the key characteristics of case study listed in the adapted criteria, and several issues have been discussed. There have been calls for improvements in the publication quality of qualitative research (Morse, 2011) and in the peer review of submitted manuscripts (Carter & Little, 2007; Jasper, Vaismoradi, Bondas, & Turunen, 2013). The challenging nature of editors' and reviewers' responsibilities is acknowledged in the literature (Hames, 2013; Wager & Kleinert, 2010b); however, the review of case study methodology should be prioritized because of ongoing disputes about its methodological value.

Authors using case study approaches are recommended to describe their theoretical framework and methods clearly, and to seek and follow specialist methodological advice when needed (Wager & Kleinert, 2010a). Adequate page space for describing the case study would contribute to better publications (Gillard et al., 2011). Capitalizing on the ability to publish complementary online resources should also be considered.

Limitations of the review

There is a level of subjectivity involved in this type of review, and this should be considered when interpreting the findings. Qualitative methods journals were selected because their aims and scope are to publish studies that contribute to methodological discussion and the development of qualitative research. Generalist health and social science journals, which might have contained good-quality case studies, were excluded. Journals in business and education were also excluded, although a review of case studies in international business journals has been published elsewhere (Piekkari et al., 2009).

The criteria used to assess the quality of the case studies were a set of qualitative indicators; a numerical or ranking system might have produced different results. Stake's (1995) criteria have been referenced elsewhere and were deemed the best available (Creswell, 2013b; Crowe et al., 2011). Not all qualitative studies are reported in a consistent way, and some authors choose to report findings in a narrative form rather than a typical biomedical report style (Sandelowski & Barroso, 2002); if misinterpretations were made as a result, this may have affected the review.
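To make the distinction concrete, a checklist of qualitative indicators can be applied without converting it into a score. The following minimal Python sketch, with hypothetical criterion names rather than the review's actual instrument, flags which indicators an article fails to report instead of ranking it numerically:

```python
# Hypothetical qualitative-indicator checklist (criterion names invented for
# illustration; this is not the adapted instrument used in the review).
CRITERIA = [
    "methodology_named_and_justified",
    "case_clearly_bounded",
    "case_selection_explained",
    "context_described_in_detail",
    "paradigm_or_theory_stated",
]

def missing_criteria(article):
    """Return the indicators an article fails to report (no numeric rank)."""
    return [c for c in CRITERIA if not article.get(c, False)]

example = {"methodology_named_and_justified": True, "case_clearly_bounded": False}
gaps = missing_criteria(example)
print(f"{len(CRITERIA) - len(gaps)}/{len(CRITERIA)} indicators met; missing: {gaps}")
```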

Case study research is an increasingly popular approach among qualitative researchers because it provides methodological flexibility through the incorporation of different paradigmatic positions, study designs, and methods. However, while flexibility can be an advantage, the myriad of different interpretations has led critics to question the use of case study as a methodology. Using an adaptation of established criteria, we aimed to identify and assess the methodological descriptions of case studies in high-impact qualitative methods journals. Few articles were identified that applied qualitative case study approaches as described by experts in case study design. There were inconsistencies in methodology and study design, which indicated that researchers were confused about whether case study is a methodology or a method. Commonly, there appeared to be confusion between case studies and case reports. Without a clear understanding and application of the principles and key elements of case study methodology, there is a risk that the flexibility of the approach will result in haphazard reporting and will limit its global application as a valuable, theoretically supported methodology that can be rigorously applied across disciplines and fields.

Conflict of interest and funding

The authors have not received any funding or benefits from industry or elsewhere to conduct this study.

  • Adamson S, Holloway M. Negotiating sensitivities and grappling with intangibles: Experiences from a study of spirituality and funerals. Qualitative Research. 2012;12(6):735–752. doi:10.1177/1468794112439008.
  • Ajodhia-Andrews A, Berman R. Exploring school life from the lens of a child who does not use speech to communicate. Qualitative Inquiry. 2009;15(5):931–951. doi:10.1177/1077800408322789.
  • Alexander B. K, Moreira C, Kumar H. S. Resisting (resistance) stories: A tri-autoethnographic exploration of father narratives across shades of difference. Qualitative Inquiry. 2012;18(2):121–133. doi:10.1177/1077800411429087.
  • Austin W, Park C, Goble E. From interdisciplinary to transdisciplinary research: A case study. Qualitative Health Research. 2008;18(4):557–564. doi:10.1177/1049732307308514.
  • Ayres L, Kavanaugh K, Knafl K. A. Within-case and across-case approaches to qualitative data analysis. Qualitative Health Research. 2003;13(6):871–883. doi:10.1177/1049732303013006008.
  • Barone T. L. Culturally sensitive care 1969–2000: The Indian Chicano Health Center. Qualitative Health Research. 2010;20(4):453–464. doi:10.1177/1049732310361893.
  • Bassey M. A solution to the problem of generalisation in educational research: Fuzzy prediction. Oxford Review of Education. 2001;27(1):5–22. doi:10.1080/03054980123773.
  • Bronken B. A, Kirkevold M, Martinsen R, Kvigne K. The aphasic storyteller: Coconstructing stories to promote psychosocial well-being after stroke. Qualitative Health Research. 2012;22(10):1303–1316. doi:10.1177/1049732312450366.
  • Broyles L. M, Rodriguez K. L, Price P. A, Bayliss N. K, Sevick M. A. Overcoming barriers to the recruitment of nurses as participants in health care research. Qualitative Health Research. 2011;21(12):1705–1718. doi:10.1177/1049732311417727.
  • Buckley C. A, Waring M. J. Using diagrams to support the research process: Examples from grounded theory. Qualitative Research. 2013;13(2):148–172. doi:10.1177/1468794112472280.
  • Buzzanell P. M, D'Enbeau S. Stories of caregiving: Intersections of academic research and women's everyday experiences. Qualitative Inquiry. 2009;15(7):1199–1224. doi:10.1177/1077800409338025.
  • Carter S. M, Little M. Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research. 2007;17(10):1316–1328. doi:10.1177/1049732307306927.
  • Cheek J, Garnham B, Quan J. What's in a number? Issues in providing evidence of impact and quality of research(ers). Qualitative Health Research. 2006;16(3):423–435. doi:10.1177/1049732305285701.
  • Colón-Emeric C. S, Plowman D, Bailey D, Corazzini K, Utley-Smith Q, Ammarell N, et al. Regulation and mindful resident care in nursing homes. Qualitative Health Research. 2010;20(9):1283–1294. doi:10.1177/1049732310369337.
  • Coltart C, Henwood K. On paternal subjectivity: A qualitative longitudinal and psychosocial case analysis of men's classed positions and transitions to first-time fatherhood. Qualitative Research. 2012;12(1):35–52. doi:10.1177/1468794111426224.
  • Creswell J. W. Five qualitative approaches to inquiry. In: Creswell J. W, editor. Qualitative inquiry and research design: Choosing among five approaches. 3rd ed. Thousand Oaks, CA: Sage; 2013a. pp. 53–84.
  • Creswell J. W. Qualitative inquiry and research design: Choosing among five approaches. 3rd ed. Thousand Oaks, CA: Sage; 2013b.
  • Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Medical Research Methodology. 2011;11(1):1–9. doi:10.1186/1471-2288-11-100.
  • Cunsolo Willox A, Harper S. L, Edge V. L, ‘My Word’: Storytelling and Digital Media Lab, & Rigolet Inuit Community Government. Storytelling in a digital age: Digital storytelling as an emerging narrative method for preserving and promoting indigenous oral wisdom. Qualitative Research. 2013;13(2):127–147. doi:10.1177/1468794112446105.
  • De Haene L, Grietens H, Verschueren K. Holding harm: Narrative methods in mental health research on refugee trauma. Qualitative Health Research. 2010;20(12):1664–1676. doi:10.1177/1049732310376521.
  • D'Enbeau S, Buzzanell P. M, Duckworth J. Problematizing classed identities in fatherhood: Development of integrative case studies for analysis and praxis. Qualitative Inquiry. 2010;16(9):709–720. doi:10.1177/1077800410374183.
  • Denzin N. K, Lincoln Y. S. Introduction: Disciplining the practice of qualitative research. In: Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011a. pp. 1–6.
  • Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011b.
  • Edwards R, Weller S. Shifting analytic ontology: Using I-poems in qualitative longitudinal research. Qualitative Research. 2012;12(2):202–217. doi:10.1177/1468794111422040.
  • Eisenhardt K. M. Building theories from case study research. The Academy of Management Review. 1989;14(4):532–550. doi:10.2307/258557.
  • Fincham B, Scourfield J, Langer S. The impact of working with disturbing secondary data: Reading suicide files in a coroner's office. Qualitative Health Research. 2008;18(6):853–862. doi:10.1177/1049732307308945.
  • Flanagan J. Public participation in the design of educational programmes for cancer nurses: A case report. European Journal of Cancer Care. 1999;8(2):107–112. doi:10.1046/j.1365-2354.1999.00141.x.
  • Flyvbjerg B. Five misunderstandings about case-study research. Qualitative Inquiry. 2006;12(2):219–245. doi:10.1177/1077800405284363.
  • Flyvbjerg B. Case study. In: Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011. pp. 301–316.
  • Fourie C. L, Theron L. C. Resilience in the face of fragile X syndrome. Qualitative Health Research. 2012;22(10):1355–1368. doi:10.1177/1049732312451871.
  • Gallagher N, MacFarlane A, Murphy A. W, Freeman G. K, Glynn L. G, Bradley C. P. Service users’ and caregivers’ perspectives on continuity of care in out-of-hours primary care. Qualitative Health Research. 2013;23(3):407–421. doi:10.1177/1049732312470521.
  • Gerring J. What is a case study and what is it good for? American Political Science Review. 2004;98(2):341–354. doi:10.1017/S0003055404001182.
  • Gillard A, Witt P. A, Watts C. E. Outcomes and processes at a camp for youth with HIV/AIDS. Qualitative Health Research. 2011;21(11):1508–1526. doi:10.1177/1049732311413907.
  • Grant M, Booth A. A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal. 2009;26:91–108. doi:10.1111/j.1471-1842.2009.00848.x.
  • Gratton M.-F, O'Donnell S. Communication technologies for focus groups with remote communities: A case study of research with First Nations in Canada. Qualitative Research. 2011;11(2):159–175. doi:10.1177/1468794110394068.
  • Hallberg L. Quality criteria and generalization of results from qualitative studies. International Journal of Qualitative Studies on Health and Wellbeing. 2013;8:1. doi:10.3402/qhw.v8i0.20647.
  • Hames I. COPE ethical guidelines for peer reviewers. Committee on Publication Ethics; 2013, March. Retrieved April 7, 2013, from http://publicationethics.org/resources/guidelines .
  • Hooghe A, Neimeyer R. A, Rober P. “Cycling around an emotional core of sadness”: Emotion regulation in a couple after the loss of a child. Qualitative Health Research. 2012;22(9):1220–1231. doi:10.1177/1049732312449209.
  • Jackson C. B, Botelho E. M, Welch L. C, Joseph J, Tennstedt S. L. Talking with others about stigmatized health conditions: Implications for managing symptoms. Qualitative Health Research. 2012;22(11):1468–1475. doi:10.1177/1049732312450323.
  • Jasper M, Vaismoradi M, Bondas T, Turunen H. Validity and reliability of the scientific review process in nursing journals—time for a rethink? Nursing Inquiry. 2013. doi:10.1111/nin.12030.
  • Jensen J. L, Rodgers R. Cumulating the intellectual gold of case study research. Public Administration Review. 2001;61(2):235–246. doi:10.1111/0033-3352.00025.
  • Jorrín-Abellán I. M, Rubia-Avi B, Anguita-Martínez R, Gómez-Sánchez E, Martínez-Mones A. Bouncing between the dark and bright sides: Can technology help qualitative research? Qualitative Inquiry. 2008;14(7):1187–1204. doi:10.1177/1077800408318435.
  • Ledderer L. Understanding change in medical practice: The role of shared meaning in preventive treatment. Qualitative Health Research. 2011;21(1):27–40. doi:10.1177/1049732310377451.
  • Lincoln Y. S. Emerging criteria for quality in qualitative and interpretive research. Qualitative Inquiry. 1995;1(3):275–289. doi:10.1177/107780049500100301.
  • Luck L, Jackson D, Usher K. Case study: A bridge across the paradigms. Nursing Inquiry. 2006;13(2):103–109. doi:10.1111/j.1440-1800.2006.00309.x.
  • Mawn B, Siqueira E, Koren A, Slatin C, Devereaux Melillo K, Pearce C, et al. Health disparities among health care workers. Qualitative Health Research. 2010;20(1):68–80. doi:10.1177/1049732309355590.
  • Merriam S. B. Qualitative research: A guide to design and implementation. 3rd ed. San Francisco, CA: Jossey-Bass; 2009.
  • Meyer C. B. A case in case study methodology. Field Methods. 2001;13(4):329–352. doi:10.1177/1525822x0101300402.
  • Morse J. M. Mixing qualitative methods. Qualitative Health Research. 2009;19(11):1523–1524. doi:10.1177/1049732309349360.
  • Morse J. M. Molding qualitative health research. Qualitative Health Research. 2011;21(8):1019–1021. doi:10.1177/1049732311404706.
  • Morse J. M, Dimitroff L. J, Harper R, Koontz A, Kumra S, Matthew-Maich N, et al. Considering the qualitative–quantitative language divide. Qualitative Health Research. 2011;21(9):1302–1303. doi:10.1177/1049732310392386.
  • Nagar-Ron S, Motzafi-Haller P. “My life? There is not much to tell”: On voice, silence and agency in interviews with first-generation Mizrahi Jewish women immigrants to Israel. Qualitative Inquiry. 2011;17(7):653–663. doi:10.1177/1077800411414007.
  • Nairn K, Panelli R. Using fiction to make meaning in research with young people in rural New Zealand. Qualitative Inquiry. 2009;15(1):96–112. doi:10.1177/1077800408318314.
  • Nespor J. The afterlife of “teachers’ beliefs”: Qualitative methodology and the textline. Qualitative Inquiry. 2012;18(5):449–460. doi:10.1177/1077800412439530.
  • Piekkari R, Welch C, Paavilainen E. The case study as disciplinary convention: Evidence from international business journals. Organizational Research Methods. 2009;12(3):567–589. doi:10.1177/1094428108319905.
  • Ragin C. C, Becker H. S. What is a case? Exploring the foundations of social inquiry. Cambridge: Cambridge University Press; 1992.
  • Roscigno C. I, Savage T. A, Kavanaugh K, Moro T. T, Kilpatrick S. J, Strassner H. T, et al. Divergent views of hope influencing communications between parents and hospital providers. Qualitative Health Research. 2012;22(9):1232–1246. doi:10.1177/1049732312449210.
  • Rosenberg J. P, Yates P. M. Schematic representation of case study research designs. Journal of Advanced Nursing. 2007;60(4):447–452. doi:10.1111/j.1365-2648.2007.04385.x.
  • Rytterström P, Unosson M, Arman M. Care culture as a meaning-making process: A study of a mistreatment investigation. Qualitative Health Research. 2013;23:1179–1187. doi:10.1177/1049732312470760.
  • Sandelowski M. Whatever happened to qualitative description? Research in Nursing & Health. 2000;23(4):334–340. doi:10.1002/1098-240X.
  • Sandelowski M. What's in a name? Qualitative description revisited. Research in Nursing & Health. 2010;33(1):77–84. doi:10.1002/nur.20362.
  • Sandelowski M, Barroso J. Reading qualitative studies. International Journal of Qualitative Methods. 2002;1(1):74–108.
  • Snyder-Young D. “Here to tell her story”: Analyzing the autoethnographic performances of others. Qualitative Inquiry. 2011;17(10):943–951. doi:10.1177/1077800411425149.
  • Stake R. E. The case study method in social inquiry. Educational Researcher. 1978;7(2):5–8.
  • Stake R. E. The art of case study research. Thousand Oaks, CA: Sage; 1995.
  • Stake R. E. Case studies. In: Denzin N. K, Lincoln Y. S, editors. Strategies of qualitative inquiry. Thousand Oaks, CA: Sage; 1998. pp. 86–109.
  • Sumsion J. Opening up possibilities through team research: Investigating infants’ experiences of early childhood education and care. Qualitative Research. 2013;14(2):149–165. doi:10.1177/1468794112468471.
  • Thomas G. Doing case study: Abduction not induction, phronesis not theory. Qualitative Inquiry. 2010;16(7):575–582. doi:10.1177/1077800410372601.
  • Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qualitative Inquiry. 2011;17(6):511–521. doi:10.1177/1077800411409884.
  • Tight M. The curious case of case study: A viewpoint. International Journal of Social Research Methodology. 2010;13(4):329–339. doi:10.1080/13645570903187181.
  • Wager E, Kleinert S. Responsible research publication: International standards for authors. A position statement developed at the 2nd World Conference on Research Integrity, Singapore, July 22–24, 2010. In: Mayer T, Steneck N, editors. Promoting research integrity in a global environment. Singapore: Imperial College Press/World Scientific; 2010a. pp. 309–316.
  • Wager E, Kleinert S. Responsible research publication: International standards for editors. A position statement developed at the 2nd World Conference on Research Integrity, Singapore, July 22–24, 2010. In: Mayer T, Steneck N, editors. Promoting research integrity in a global environment. Singapore: Imperial College Press/World Scientific; 2010b. pp. 317–328.
  • Webb C, Kevern J. Focus groups as a research method: A critique of some aspects of their use in nursing research. Journal of Advanced Nursing. 2000;33(6):798–805. doi:10.1046/j.1365-2648.2001.01720.x.
  • Wimpenny K, Savin-Baden M. Exploring and implementing participatory action synthesis. Qualitative Inquiry. 2012;18(8):689–698. doi:10.1177/1077800412452854.
  • Yeh H.-Y. Boundaries, entities, and modern vegetarianism: Examining the emergence of the first vegetarian organization. Qualitative Inquiry. 2013;19(4):298–309. doi:10.1177/1077800412471516.
  • Yin R. K. Enhancing the quality of case studies in health services research. Health Services Research. 1999;34(5 Pt 2):1209–1224.
  • Yin R. K. Case study research: Design and methods. 4th ed. Thousand Oaks, CA: Sage; 2009.
  • Yin R. K. Applications of case study research. 3rd ed. Thousand Oaks, CA: Sage; 2012.


8.1: What’s a Critique and Why Does it Matter?


  • Steven D. Krause
  • Eastern Michigan University

Critiques evaluate and analyze a wide variety of things (texts, images, performances, etc.) based on reasons or criteria. Sometimes, people equate the notion of “critique” with “criticism,” which usually suggests a negative interpretation. These terms are easy to confuse, but I want to be clear that critique and criticize don’t mean the same thing. A negative critique might be said to be “criticism” in the way we often understand the term “to criticize,” but critiques can be positive too.

We’re all familiar with one of the most basic forms of critique: reviews (film reviews, music reviews, art reviews, book reviews, etc.). Critiques in the form of reviews tend to have a fairly simple and particular point: whether or not something is “good” or “bad.”

Academic critiques are similar to the reviews we see in popular sources in that critique writers are trying to make a particular point about whatever it is that they are critiquing. But there are some differences between the sorts of critiques we read in academic sources versus the ones we read in popular sources.

  • The subjects of academic critiques tend to be other academic writings and they frequently appear in scholarly journals.
  • Academic critiques frequently go further in making an argument beyond a simple assessment of the quality of a particular book, film, performance, or work of art. Academic critique writers will often compare and discuss several works that are similar to each other to make some larger point. In other words, instead of simply commenting on whether something was good or bad, academic critiques tend to explore issues and ideas in ways that are more complicated than merely “good” or “bad.”

The main focus of this chapter is the value of writing critiques as a part of the research writing process. Critiquing writing is important because in order to write a good critique you need to critically read : that is, you need to closely read and understand whatever it is you are critiquing, you need to apply appropriate criteria in order to evaluate it, you need to summarize it, and you need to ultimately make some sort of point about the text you are critiquing.

These skills-- critically and closely reading, summarizing, creating and applying criteria, and then making an evaluation-- are key to The Process of Research Writing, and they should help you as you work through the process of research writing.

In this chapter, I’ve provided a “step-by-step” process for making a critique. I would encourage you to quickly read or skim through this chapter first, and then go back and work through the steps and exercises described.

Selecting the right text to critique

The first step in writing a critique is selecting a text to critique. For the purposes of this writing exercise, you should check with your teacher for guidelines on what text to pick. If you are doing an annotated bibliography as part of your research project (see chapter 6, “The Annotated Bibliography Exercise”), then you might find more materials that will work well for this project as you continue your research.

Short and simple newspaper articles, while useful as part of the research process, can be difficult to critique since they don’t have the sort of detail that easily allows for a critical reading. On the other hand, critiquing an entire book is probably a more ambitious task than you are likely to have time or energy for with this exercise. Instead, consider critiquing one of the more fully developed texts you’ve come across in your research: an in-depth examination from a news magazine, a chapter from a scholarly book, a report on a research study or experiment, or an analysis published in an academic journal. These more complex texts usually present more opportunities for critique.

Depending on your teacher’s assignment, the “text” you critique might include something that isn’t in writing: a movie, a music CD, a multimedia presentation, a computer game, a painting, etc. As is the case with more traditional writings, you want to select a text that has enough substance to it so that it stands up to a critical reading.

Exercise 7.1

Pick out at least three different possibilities for texts that you could critique for this exercise. If you’ve already started work on your research and an annotated bibliography for your research topic, you should consider those pieces of research as possibilities. Working alone or in small groups, consider the potential of each text. Here are some questions to think about:

  • Does the text provide in-depth information? How long is it? Does it include a “works cited” or bibliography section?
  • What is the source of the text? Does it come from an academic, professional, or scholarly publication?
  • Does the text advocate a particular position? What is it, and do you agree or disagree with the text?

Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. Moreover, it encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.
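As a small illustration of the kind of data this methodology produces, the Python sketch below summarizes hypothetical five-point Likert responses; the item wording and numbers are invented for the example:

```python
from collections import Counter

# Hypothetical 5-point Likert responses to "I am satisfied with my care."
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)
mean = sum(responses) / len(responses)
agree = sum(r >= 4 for r in responses) / len(responses)
print(f"Distribution: {dict(sorted(counts.items()))}")
print(f"Mean rating: {mean:.1f}; % agree (4-5): {agree:.0%}")
```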

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
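As an illustration of the allocation step, the sketch below performs simple 1:1 randomization of 100 participant IDs. This is a minimal sketch only; real trials would typically use concealed, and often blocked or stratified, allocation procedures:

```python
import random

random.seed(42)  # fixed seed so the illustrative allocation is reproducible

participants = [f"P{i:03d}" for i in range(1, 101)]  # 100 eligible adults
random.shuffle(participants)

experimental_group = participants[:50]  # will receive 12 weekly CBT sessions
control_group = participants[50:]       # no intervention during the study
print(len(experimental_group), len(control_group))  # -> 50 50
```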

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
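To make one piece of the quantitative analysis concrete, here is a hedged Python sketch of an independent-samples t-test on simulated BDI-II change scores; the data and effect sizes are invented, and a full analysis would also include the mixed-model ANOVA (e.g., via statsmodels):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated BDI-II change scores (baseline minus post-test); larger = more improvement.
cbt_change = rng.normal(loc=9.0, scale=6.0, size=50)      # hypothetical CBT group
control_change = rng.normal(loc=2.0, scale=6.0, size=50)  # hypothetical control group

t_stat, p_value = stats.ttest_ind(cbt_change, control_change)
print(f"Mean change: CBT = {cbt_change.mean():.1f}, control = {control_change.mean():.1f}")
print(f"Independent-samples t-test: t = {t_stat:.2f}, p = {p_value:.4g}")
```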

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability (a small worked example of the latter follows this list).
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.
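As promised above, here is a minimal worked example of one reliability measure, Cohen's kappa for two coders. The labels are invented, and the function is a plain-Python sketch of the standard formula (observed agreement minus chance agreement, scaled by one minus chance agreement):

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders labeling the same six interview excerpts.
coder_1 = ["barrier", "facilitator", "barrier", "barrier", "facilitator", "barrier"]
coder_2 = ["barrier", "facilitator", "facilitator", "barrier", "facilitator", "barrier"]
print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # -> kappa = 0.67
```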

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology vs. Research Methods

In brief, research methodology is the overall systematic framework and rationale that guides a study, whereas research methods are the specific techniques and tools, such as surveys, interviews, or statistical tests, used within that framework to collect and analyze data.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

  • Open access
  • Published: 19 April 2024

A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact

  • Aklilu Endalamaw 1 , 2 ,
  • Resham B Khatri 1 , 3 ,
  • Tesfaye Setegn Mengistu 1 , 2 ,
  • Daniel Erku 1 , 4 , 5 ,
  • Eskinder Wolka 6 ,
  • Anteneh Zewdie 6 &
  • Yibeltal Assefa 1  

BMC Health Services Research volume 24, Article number: 487 (2024) Cite this article

732 Accesses

Metrics details

Background

The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools, analyze barriers and facilitators, and investigate its overall impacts.

Methods

This qualitative scoping review was conducted using Arksey and O’Malley’s methodological framework. We searched for articles in the PubMed, Web of Science, Scopus, and EMBASE databases; in addition, we accessed articles from Google Scholar. We used a mixed-method analysis, combining qualitative content analysis with descriptive quantitative summaries of the quantitative findings, and used the PRISMA extension for scoping reviews (PRISMA-ScR) framework to report the overall work.

Results

A total of 87 articles, covering 14 CQI models, were included in the review. While 19 tools were used across CQI models and initiatives, the Plan-Do-Study/Check-Act cycle was the most commonly employed model for understanding the CQI implementation process. The main reported purposes of using CQI, reflecting its positive impacts, are to improve the structure of the health system (e.g., leadership, health workforce, health technology use, supplies, and costs), to enhance healthcare delivery processes and outputs (e.g., care coordination and linkages, satisfaction, accessibility, continuity of care, safety, and efficiency), and to improve treatment outcomes (reduced morbidity and mortality). The implementation of CQI is not without challenges: cultural barriers (resistance or reluctance toward a quality-focused culture and fear of blame or punishment), as well as technical, structural (related to organizational structure, processes, and systems), and strategic (inadequate planning and inappropriate goals) barriers, were commonly reported during the implementation of CQI.

Conclusions

Implementing CQI initiatives necessitates a thorough comprehension of key principles such as teamwork and timelines. To address challenges effectively, it is crucial to identify obstacles proactively and implement optimal interventions. Healthcare professionals and leaders need to be mentally equipped and cognizant of the significant role that CQI initiatives play in achieving quality of care.

Peer Review reports

Continuous quality improvement (CQI) is a crucial initiative aimed at enhancing quality in the health system and has gradually been adopted by the healthcare industry. In the early 20th century, Shewhart laid the foundation for quality improvement by describing three essential steps for process improvement: specification, production, and inspection [1, 2]. Deming then expanded Shewhart’s three-step model into the ‘plan, do, study/check, act’ (PDSA or PDCA) cycle, which was applied to management practices in Japan in the 1950s [3] and was gradually translated into the health system. In 1991, Kuperman applied a CQI approach to healthcare, comprising selecting a process to be improved, assembling a team of expert clinicians that understands the process and the outcomes, determining key steps in the process and expected outcomes, collecting data that measure the key process steps and outcomes, and providing data feedback to the practitioners [4]. These philosophies have served as the baseline for the principles of continuous improvement [5].

Continuous quality improvement fosters a culture of continuous learning, innovation, and improvement. It encourages proactive identification and resolution of problems, promotes employee engagement and empowerment, builds trust and respect, and aims for better quality of care [6, 7]. These characteristics shape the interaction of CQI with other quality improvement approaches, such as quality assurance and total quality management [8]. Quality assurance primarily focuses on identifying deviations or errors through inspections, audits, and formal reviews, often settling for what is considered ‘good enough’ rather than pursuing the highest possible standards [9, 10], while total quality management is implemented as a management philosophy and system for continuously improving all aspects of an organization [11].

Continuous quality improvement has been implemented to provide quality care. However, providing effective healthcare is a complicated and complex task in achieving the desired health outcomes and the overall well-being of individuals and populations. It necessitates tackling long-standing issues, including access, patient safety, medical advances, care coordination, patient-centered care, and quality monitoring [12, 13]. It is assumed that the history of quality improvement in healthcare started in 1854, when Florence Nightingale introduced quality improvement documentation [14]. Over the following decades, Donabedian introduced structure, processes, and outcomes as quality-of-care components in 1966 [15]. More comprehensively, the Institute of Medicine in the United States of America (USA) has identified effectiveness, efficiency, equity, patient-centredness, safety, and timeliness as the components of quality of care [16]. Moreover, quality of care has recently been considered an integral part of universal health coverage (UHC) [17], which requires initiatives to mobilise essential inputs [18].

While the overall objective of CQI in the health system is to enhance the quality of care, it is important to note that the purposes and principles of CQI can vary across contexts [19, 20]. This variation has sparked growing research interest. For instance, a review of CQI approaches for capacity building addressed its role in health workforce development [21]. Another systematic review, based on randomized controlled studies, assessed the effectiveness of CQI using training as an intervention and the PDSA model [22]. As a research gap, the former review was not directly related to the comprehensive elements of quality of care, while the latter focused solely on the impact of training using the PDSA model, among other potential models. Additionally, a review conducted in 2015 aimed to identify barriers to and facilitators of CQI in Canadian contexts [23]. However, all these reviews presented different perspectives and investigated distinct outcomes, which suggests that there is still much to explore in comprehensively understanding the various aspects of CQI initiatives in healthcare.

As a result, we conducted a scoping review to address several aspects of CQI. Scoping reviews serve as a valuable tool for systematically mapping the existing literature on a specific topic. They are instrumental when dealing with heterogeneous or complex bodies of research. Scoping reviews provide a comprehensive overview by summarizing and disseminating findings across multiple studies, even when evidence varies significantly [24]. In our specific scoping review, we included various types of literature, including systematic reviews, to enhance our understanding of CQI.

This scoping review examined how CQI is conceptualized and measured and investigated models and tools for its application while identifying implementation challenges and facilitators. It also analyzed the purposes and impact of CQI on the health systems, providing valuable insights for enhancing healthcare quality.

Protocol registration and results reporting

Protocol registration for this scoping review was not conducted. Arksey and O’Malley’s methodological framework was utilized to conduct this scoping review [25]. The scoping review procedure starts by defining the research questions, then identifying relevant literature, selecting articles, extracting data, and summarizing the results. The review findings are reported using the PRISMA extension for scoping reviews (PRISMA-ScR) [26]. McGowan and colleagues also advised researchers to report findings from scoping reviews using PRISMA-ScR [27].

Defining the research problems

This review aims to comprehensively explore the conceptualization, models, tools, barriers, facilitators, and impacts of CQI within healthcare systems worldwide. Specifically, we address the following research questions: (1) How has CQI been defined across various contexts? (2) What are the diverse approaches to implementing CQI in healthcare settings? (3) Which tools are commonly employed for CQI implementation? (4) What barriers hinder, and what facilitators support, successful CQI initiatives? (5) What effects do CQI initiatives have on overall care quality?

Information source and search strategy

We conducted the search in the PubMed, Web of Science, Scopus, and EMBASE databases, and in the Google Scholar search engine. The search terms were selected based on three distinct concept groups: CQI-related terms; terms related to the purposes for which CQI has been implemented; and terms covering processes and impact. These terms were selected based on the Donabedian framework of structure, process, and outcome [28]. Additionally, detailed keywords were recruited from the primary health framework, which describes lists of dimensions under the process, output, outcome, and health system goals of any intervention for health [29]. The detailed search strategy is presented in Supplementary file 1 (Search strategy). The search for articles was initiated on August 12, 2023, and the last search was conducted on September 01, 2023.
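To illustrate how the three concept groups combine into a single Boolean query, here is a small Python sketch; the terms are illustrative placeholders, not the authors' full strategy (which appears in their Supplementary file 1):

```python
# Illustrative placeholders only; the real strategy is in Supplementary file 1.
cqi_terms = ['"continuous quality improvement"', 'CQI', '"quality improvement cycle"']
purpose_terms = ['"quality of care"', '"patient safety"', '"health workforce"']
process_impact_terms = ['implementation', 'barriers', 'facilitators', 'outcome']

def or_group(terms):
    """Join synonyms within one concept group with OR."""
    return "(" + " OR ".join(terms) + ")"

# Concept groups are joined with AND so each hit touches all three concepts.
query = " AND ".join(or_group(g) for g in (cqi_terms, purpose_terms, process_impact_terms))
print(query)
```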

Eligibility criteria and article selection

Based on the scoping review’s population, concept, and context framework [30], the population included any patients or clients. The concepts explored in the review encompassed definitions, implementation, models, tools, barriers, facilitators, and impacts of CQI, and the review considered contexts at any level of the health system. We included articles if they reported the results of a qualitative or quantitative empirical study, case study, analytic or descriptive synthesis, any type of review, or other written document; were published in peer-reviewed journals; and were designed to address at least one of the identified research questions or implementation outcomes (or their synonymous taxonomy, as described in the search strategy). We included articles published in English without geographic or time limitations. We excluded abstract-only articles, conference abstracts, letters to the editor, commentaries, and corrections.

We exported all citations to EndNote x20 to remove duplicates and screen relevant articles. The article selection process included automatic duplicate removal using EndNote x20, removal of records with unmatched titles and abstracts, removal of citation- and abstract-only materials, and full-text assessment. The article selection was mainly conducted by the first author (AE) and reported to the team during weekly meetings. When the first author encountered papers that caused confusion about whether to include or exclude them, he discussed them with the last author (YA) before decisions were ultimately made. Whenever disagreements arose, they were resolved by discussion and reconsideration of the review questions in relation to the written content of the article. Further statistical analysis, such as calculating kappa, was not performed to determine article inclusion or exclusion.
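Duplicate removal was done in EndNote x20; purely as an illustrative analogue, the Python sketch below deduplicates records by DOI or by a normalized title, a simplification of what reference managers do:

```python
import re

def norm(title):
    """Normalize a title for matching: lowercase, collapse non-alphanumerics."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record per DOI (if present) or per normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Continuous quality improvement in primary care"},
    {"title": "Continuous Quality Improvement in Primary Care."},  # same article
    {"title": "Barriers to CQI in hospitals"},
]
print(len(deduplicate(records)))  # -> 2
```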

Data extraction and data items

We extracted the first author, publication year, country, setting, health problem, purpose of the study, study design, type of intervention (if applicable), CQI approaches/steps (if applicable), CQI tools and procedures (if applicable), and main findings using a customized Microsoft Excel form.
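The extraction form translates naturally into a typed record. The sketch below mirrors the listed data items, with Optional marking the "if applicable" fields; the field names are ours, not the authors' Excel column headers.

```python
from dataclasses import dataclass
from typing import Optional

# One row of the extraction form as a typed record. Field names paraphrase
# the data items listed above; Optional marks "if applicable" items.
@dataclass
class ExtractionRecord:
    first_author: str
    publication_year: int
    country: str
    setting: str
    health_problem: str
    purpose: str
    study_design: str
    main_findings: str
    intervention_type: Optional[str] = None
    cqi_approach: Optional[str] = None
    cqi_tools: Optional[str] = None
```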

Summarizing and reporting the results

The main findings were summarized and described under the main themes: conceptualization, principles, teams, timelines, models, tools, barriers, facilitators, and impacts of CQI. A results-based convergent synthesis, achieved through mixed-methods analysis, used content analysis to identify the thematic presentation of findings; quantitative findings were described narratively and aligned with the appropriate theme. The authors reviewed the primary findings from each included material and contextualized them in relation to the main themes. This approach provides a comprehensive understanding of complex interventions and health systems, acknowledging both quantitative and qualitative evidence.

Search results

A total of 11,251 documents were identified from various databases: SCOPUS ( n  = 4,339), PubMed ( n  = 2,893), Web of Science ( n  = 225), EMBASE ( n  = 3,651), and Google Scholar ( n  = 143). After removing duplicates ( n  = 5,061), 6,190 articles were evaluated by title and abstract. Subsequently, 208 articles were assessed for full-text eligibility. Following the eligibility criteria, 121 articles were excluded, leaving 87 included in the current review (Fig.  1 ).
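The flow counts reported above are internally consistent, which a few assertions confirm:

```python
# Consistency check of the reported article flow.
identified = 4_339 + 2_893 + 225 + 3_651 + 143  # SCOPUS, PubMed, WoS, EMBASE, Google Scholar
assert identified == 11_251                     # total records identified
assert identified - 5_061 == 6_190              # screened after duplicate removal
assert 208 - 121 == 87                          # full-text assessed minus excluded = included
print("PRISMA flow counts are consistent")
```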

Figure 1. Article selection process

Operationalizing continuous quality improvement

Continuous Quality Improvement (CQI) is operationalized as a cyclic process that requires commitment to implementation, teamwork, time allocation, and celebrating successes and failures.

CQI is an ongoing cyclic process that follows reflexive, analytical, and iterative steps, including identifying gaps, generating data, developing and implementing action plans, evaluating performance, providing feedback to implementers and leaders, and proposing necessary adjustments [ 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 ].

CQI requires committing to a philosophy of continuous improvement [ 19 , 38 ], establishing a mission statement [ 37 ], and agreeing on a definition of quality [ 19 ].

CQI involves a wide range of patient-oriented measures and performance indicators, specifically satisfying internal and external customers, developing quality assurance, adopting common quality measures, and selecting process measures [ 8 , 19 , 35 , 36 , 37 , 39 , 40 ].

CQI requires celebrating successes and failures without personalizing them, which encourages each team member to develop an error-free attitude [ 19 ]. Failures are attributed to underlying organizational processes and systems rather than to individuals [ 8 ], because CQI is process-focused and grounded in collaborative, data-driven, responsive, and rigorous statistical problem-solving [ 8 , 19 , 38 ]. Furthermore, a gap or failure opens an opportunity to build a data-driven learning organization [ 41 ].

CQI cannot be implemented without a CQI team [ 8 , 19 , 37 , 39 , 42 , 43 , 44 , 45 , 46 ]. A CQI team comprises individuals from various disciplines, often including a team leader, a subject matter expert (physician or other healthcare provider), a data analyst, a facilitator, frontline staff, and stakeholders [ 39 , 43 , 47 , 48 , 49 ]. Inviting stakeholders or partners to support the CQI intervention is also crucial [ 19 , 38 , 48 ].

The timeline is another distinct feature of CQI because results vary with the implementation duration of each cycle [ 35 ]. There is no fixed time limit for CQI implementation, although there is general consensus that a cycle should be relatively short [ 35 ]. For instance, CQI implementations took 2 months [ 42 ], 4 months [ 50 ], 9 months [ 51 , 52 ], 12 months [ 53 , 54 , 55 ], and 17 months [ 49 ] to achieve the desired positive outcome, while bi-weekly [ 47 ] and monthly data reviews and analyses [ 44 , 48 , 56 ], and activities over 3 months [ 57 ], have also produced positive outcomes.

Continuous quality improvement models and tools

Several models have been utilized. The Plan-Do-Study/Check-Act cycle is a stepwise process involving project initiation, situation analysis, root cause identification, solution generation and selection, implementation, result evaluation, standardization, and future planning [ 7 , 36 , 37 , 45 , 47 , 48 , 49 , 50 , 51 , 53 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 ]. The FOCUS-PDCA cycle extends the PDCA process with steps to find a process to improve (F), organize a knowledgeable team (O), clarify the process (C), understand variations (U), and select improvements (S) [ 55 , 71 , 72 , 73 ]. The FADE cycle involves identifying a problem (Focus), understanding it through data analysis (Analyze), devising solutions (Develop), and implementing the plan (Execute) [ 74 ]. The Logic Framework involves brainstorming to identify improvement areas, conducting root cause analysis to develop a problem tree, reasoning logically to create an objective tree, formulating the framework, and executing improvement projects [ 75 ]. The Breakthrough Series approach requires CQI teams to meet in quarterly collaborative learning sessions, share learning experiences, and continue the discussion through telephone calls and cross-site visits to strengthen learning and idea exchange [ 47 ]. Another CQI model is the Lean approach, which has been conducted with Kaizen principles [ 52 ], 5 S principles, and the Six Sigma model. The 5 S approach (Sort, Set/Straighten, Shine, Standardize, Sustain) systematically organizes and improves the workplace [ 54 , 76 ]. Kaizen principles guide CQI by advocating continuous improvement, valuing all ideas, solving problems, focusing on practical, low-cost improvements, using data to drive change, acknowledging process defects, reducing variability and waste, recognizing every interaction as a customer-supplier relationship, empowering workers, responding to all ideas, and maintaining a disciplined workplace [ 77 ]. Lean Six Sigma applies the DMAIC methodology: defining (D) and measuring the problem (M), analyzing root causes (A), improving by finding solutions (I), and controlling by assessing process stability (C) [ 78 , 79 ]. The 5 C-cyclic model (consultation, collection, consideration, collaboration, and celebration), the first CQI framework for volunteer dental services in Aboriginal communities, ensures quality care based on community needs [ 80 ]. One study used structured meetings involving activities such as reviewing objectives, assigning roles, discussing the agenda, completing tasks, retaining key outputs, planning future steps, and evaluating the meeting’s effectiveness [ 81 ].
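The shared skeleton of these cyclic models can be sketched as a loop. The stage callables below are placeholders standing in for a real team's activities; the sketch follows the generic PDSA/PDCA structure rather than any one study's protocol.

```python
# Generic Plan-Do-Study/Check-Act loop. The four stage functions are
# placeholders for a real CQI team's work; the loop is what makes the
# improvement "continuous".
def pdsa(plan, do, study, act, goal_met, max_cycles=10):
    for cycle in range(1, max_cycles + 1):
        change = plan()             # situation analysis, root causes, chosen solution
        observed = do(change)       # implement the change on a small scale
        learning = study(observed)  # compare results against the aim
        act(learning)               # standardize what worked, adjust what did not
        if goal_met(learning):
            return cycle            # number of cycles needed to reach the aim
    return max_cycles
```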

Various tools support the implementation or evaluation of CQI initiatives: checklists [ 53 , 82 ], flowcharts [ 81 , 82 , 83 ], cause-and-effect (fishbone or Ishikawa) diagrams [ 60 , 62 , 79 , 81 , 82 ], fuzzy Pareto diagrams [ 82 ], process maps [ 60 ], time series charts [ 48 ], why-why analysis [ 79 ], affinity diagrams and multivoting [ 81 ], run charts [ 47 , 48 , 51 , 60 , 84 ], and others listed in Table 1.
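Among these, the run chart is the simplest to illustrate: a measure plotted in time order against its median. A minimal sketch with invented data, assuming matplotlib is available:

```python
import matplotlib.pyplot as plt

# Minimal run chart: a measure over time with the median as reference line.
# The monthly waiting times are invented for illustration only.
waiting_times = [76, 70, 64, 58, 40, 35, 30, 25, 22]
median = sorted(waiting_times)[len(waiting_times) // 2]

plt.plot(range(1, len(waiting_times) + 1), waiting_times, marker="o")
plt.axhline(median, linestyle="--", label=f"median = {median} min")
plt.xlabel("Month")
plt.ylabel("Discharge waiting time (min)")
plt.title("Run chart (illustrative data)")
plt.legend()
plt.show()
```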

Barriers and facilitators of continuous quality improvement implementation

Implementing CQI initiatives is shaped by various barriers and facilitators, which can be grouped into four dimensions: cultural, technical, structural, and strategic.

Continuous quality improvement initiatives face cultural, technical, structural, and strategic barriers. Cultural barriers involve resistance to change (e.g., not accepting online technology), lack of a quality-focused culture, staff apprehensiveness about reporting, and fear of blame or punishment [ 36 , 41 , 85 , 86 ]. Technical barriers include various factors that hinder the effective implementation and execution of CQI processes [ 36 , 86 , 87 , 88 , 89 ]. Structural barriers arise from organizational structures, processes, and systems that impede the effective implementation and sustainability of CQI [ 36 , 85 , 86 , 87 , 88 ]. Strategic barriers include, for example, the inability to select proper CQI goals and the failure to integrate CQI into organizational planning and goals [ 36 , 85 , 86 , 87 , 88 , 90 ].

Facilitators are likewise grouped into cultural, structural, technical, and strategic dimensions, providing solutions to the corresponding barriers. Cultural challenges were addressed by developing a group culture for CQI and offering rewards [ 39 , 41 , 80 , 85 , 86 , 87 , 90 , 91 , 92 ]. Technical facilitators are pivotal to overcoming technical barriers [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ]. Structural facilitators relate to improving communication, infrastructure, and systems [ 86 , 92 , 93 ]. Strategic facilitators include strengthening leadership and improving decision-making skills [ 43 , 53 , 67 , 86 , 87 , 92 , 94 , 95 ] (Table 2).

Impact of continuous quality improvement

Continuous quality improvement initiatives can significantly improve the quality of healthcare across a wide range of health areas by improving structures and the health service delivery process, enhancing client wellbeing, and reducing mortality.

Structure components

These are health leadership, financing, workforce, technology, and equipment and supplies. CQI has improved planning, monitoring and evaluation [ 48 , 53 ], and leadership and planning [ 48 ], indicating improvement in leadership perspectives. Implementing CQI in primary health care (PHC) settings has shown potential for maintaining or reducing operating costs [ 67 ]. Findings from another study indicate that the costs of implementing CQI interventions ranged from approximately $2,000 to $10,500 per facility per year, with an average cost of approximately $10 to $60 per admitted client [ 57 ]. However, based on model predictions, the average cost savings after implementing CQI were estimated at $5,430 [ 31 ]. CQI can also be applied to health workforce development [ 32 ]. Within institutional systems, CQI improved medical education [ 66 , 96 , 97 ], human resources management [ 53 ], staff motivation [ 76 ], and staff health awareness [ 69 ], although concerns were raised about CQI’s impartiality, independence, and public accountability [ 96 ]. Regarding health technology, CQI also improved registration and documentation [ 48 , 53 , 98 ]. Furthermore, CQI initiatives increased cleanliness [ 54 ] and improved logistics, supplies, and equipment [ 48 , 53 , 68 ].

Process and output components

The process component focuses on the activities and actions involved in delivering healthcare services.

Service delivery

CQI interventions improved service delivery [ 53 , 56 , 99 ], notably producing a significant 18% increase in overall service performance [ 48 ]; they also improved patient counselling, adherence to appropriate procedures, and infection prevention [ 48 , 68 ], and optimized workflow [ 52 ].

Coordination and collaboration

CQI initiatives improved coordination and collaboration through data collection and analysis, onsite technical support, training, and supportive supervision [ 53 ], and by facilitating linkages between work processes and a quality control group [ 65 ].

Patient satisfaction

The CQI initiatives increased patient satisfaction and improved quality of life by optimizing care quality management, improving the quality of clinical nursing, reducing nursing defects and enhancing the wellbeing of clients [ 54 , 76 , 100 ], although CQI was not associated with changes in adolescent and young adults’ satisfaction [ 51 ].

Safety

CQI initiatives reduced medication error reports from 16 to 6 [ 101 ], significantly reduced the administration of inappropriate prophylactic antibiotics [ 44 ], decreased errors in inpatient care [ 52 ], decreased the overall episiotomy rate from 44.5 to 33.3% [ 83 ], reduced the overall incidence of unplanned endotracheal extubation [ 102 ], improved the appropriate use of computed tomography angiography [ 103 ], and improved appropriate diagnosis and treatment selection [ 47 ].

Continuity of care

CQI initiatives effectively improve continuity of care by improving client-physician interaction. For instance, provider continuity levels showed a 64% increase [ 55 ]. Modifying electronic medical record templates, scheduling, staff and parental education, standardizing work processes, and offering birth-to-1-year age-specific incentives in postnatal follow-up care increased continuity of care from a baseline of 13% in 2012 to 74% in 2018 [ 84 ].

Efficiency

The CQI initiative enhanced efficiency in the cardiac catheterization laboratory, as evidenced by improved punctuality of procedure starts and more efficient in-laboratory manual sheath-pulls [ 78 ].

Accessibility

CQI initiatives were effective in improving accessibility, in terms of increasing service coverage and utilization rates. For instance, they improved screening for cigarette smoking, nutrition counselling, folate prescription, maternal care, and immunization coverage [ 53 , 81 , 104 , 105 ]; reduced the percentage of patients not attending surgery from a baseline of 3.9% to 0.9% [ 43 ]; increased Chlamydia screening rates from 29 to 60% [ 45 ]; increased HIV care continuum coverage [ 51 , 59 , 60 ]; increased uptake of postpartum long-acting reversible contraception from 6.9% at baseline to 25.4% [ 42 ]; increased post-caesarean section prophylaxis from 36 to 89% [ 62 ]; achieved a 31% increase in kangaroo care practice [ 50 ]; and increased follow-up [ 65 ]. Similarly, a QI intervention increased the quality of antenatal care by 29.3%, correct partograph use by 51.7%, and correct active third-stage labour management by 19.6% from baseline, but was not significantly associated with improvement in contraceptive service uptake [ 61 ].

Timely access

CQI interventions improved the timeliness of care provision [ 52 ] and reduced waiting times [ 62 , 74 , 76 , 106 ]. For instance, the discharge process waiting time in the emergency department decreased from 76 min to 22 min [ 79 ], and the mean postprocedural length of stay fell from 2.8 days to 2.0 days [ 31 ].

Acceptability

Acceptability of CQI by healthcare providers was satisfactory. For instance, 88% of the faculty, 64% of the residents, and 82% of the staff believed CQI to be useful in the healthcare clinic [ 107 ].

Outcome components

Morbidity and mortality

CQI efforts have demonstrated better management outcomes among diabetic patients [ 40 ], patients with oral mucositis [ 71 ], and anaemic patients [ 72 ]. They have also reduced the infection rate after caesarean section [ 62 ], reduced peritonitis among patients on peritoneal dialysis [ 49 , 108 ], and prevented pressure ulcers [ 70 ]. For example, peritonitis incidence fell from once every 40.1 patient-months at baseline to once every 70.8 patient-months after CQI [ 49 ], and pressure ulcer prevalence fell by 63% within the 2 years from 2008 to 2010 [ 70 ]. Furthermore, CQI initiatives significantly reduced in-hospital deaths [ 31 ] and increased patient survival rates [ 108 ]. Figure 2 displays the overall process of the CQI implementations.

Figure 2. The overall mechanisms of continuous quality improvement implementation

Discussion

In this review, we examined the fundamental concepts and principles underlying CQI, the factors that either hinder or assist in its successful application and implementation, and the purpose of CQI in enhancing quality of care across various health issues.

Our findings draw attention to the application and implementation of CQI, emphasizing its underlying concepts and principles, as evident in the existing literature [ 31 , 32 , 33 , 34 , 35 , 36 , 39 , 40 , 43 , 45 , 46 ]. Continuous quality improvement shares the principles of continuous improvement, such as a customer-driven focus, effective leadership, active participation of individuals, a process-oriented approach, systematic implementation, emphasis on design improvement and prevention, evidence-based decision-making, and fostering partnership [ 5 ]. Moreover, Deming’s 14 principles laid the foundation for CQI principles [ 109 ]. These principles have been adapted and put into practice in various ways: ten [ 19 ] and five [ 38 ] principles in hospitals, five principles for capacity building [ 38 ], and two principles for medication error prevention [ 41 ]. As a principle, the application of CQI can be process-focused [ 8 , 19 ] or impact-focused [ 38 ]: impact-focused CQI aims at specific outcomes or impacts, whereas process-focused CQI prioritizes and improves the underlying processes and systems. These principles complement each other and can be utilized according to the objectives of a quality improvement initiative. Overall, CQI is an ongoing educational process that requires top management’s involvement, demands coordination across departments, encourages the incorporation of views beyond the clinical area, and provides non-judgemental evidence based on objective data [ 110 ].

The current review recognized that CQI is not easy to implement: it requires judicious use of various models and tools, whose application varies with the health problem studied and the purpose of the CQI initiative [ 111 ] as well as with context, content, structure, and usability [ 112 ]. It also requires overcoming cultural, technical, structural, and strategic barriers, which emerge from the perspectives of clinical staff, managers, and health systems. Of the cultural obstacles, staff non-involvement, resistance to change, and reluctance to report errors were staff-related, whereas others, such as the absence of celebration of success and hierarchical and rational cultures, may require both staff and manager involvement. Staff may be reluctant to report errors owing to cultural factors including lack of trust, hierarchical structures, fear of retribution, and a blame-oriented culture; these challenges impede standardized CQI practices, as observed, for instance, in community pharmacy settings [ 85 ]. A hierarchical culture, characterized by clearly defined levels of power, authority, and decision-making, posed challenges to implementing CQI initiatives in public health [ 41 , 86 ]. Although a rational culture emphasizes logical thinking and rational decision-making, it too can create challenges, because hierarchical and rational cultures, which emphasize bureaucratic norms and narrow definitions of achievement, act as barriers to CQI implementation [ 86 ]. These could be addressed by developing a shared mindset and collective commitment, establishing a shared purpose, developing group norms, and cultivating psychological preparedness among staff, managers, and clients to implement and sustain CQI initiatives. Reversing cultural barriers thus necessitates cultural solutions: developing a group culture for CQI [ 41 , 86 ], a positive comprehensive perception [ 91 ], commitment [ 85 ], involving patients, families, leaders, and staff [ 39 , 92 ], collaborating for a common goal [ 80 , 86 ], effective teamwork [ 86 , 87 ], and rewarding and celebrating successes [ 80 , 90 ].

Technical barriers to CQI include inadequate capitalization of a project and insufficient support for CQI facilitators and data entry managers [ 36 ], immature electronic medical records or poor information systems [ 36 , 86 ], and lack of training and skills [ 86 , 87 , 88 ]. These challenges may leave the CQI team relying on outdated information and technologies, and may undermine the foundation of CQI expertise among staff: the ability to recognize opportunities for improvement, a comprehensive understanding of how services are produced and delivered, and the routine use of that expertise in daily work. Addressing these barriers requires knowledge creation activities (training, seminars, and education) [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ], availability of quality data [ 86 ], reliable information [ 92 ], and a manual-online hybrid reporting system [ 85 ].

Structural barriers to CQI include inadequate communication channels and a lack of standardized processes, specifically weak physician-to-physician synergies [ 36 ], a lack of mechanisms for disseminating knowledge, and limited use of communication mechanisms [ 86 ]. A lack of communication mechanisms hinders the sharing of ideas and feedback among CQI teams, leading to misunderstandings, limited participation, misinterpretations, and a lack of learning [ 113 ]. Knowledge translation facilitates the co-production of research, the subsequent diffusion of knowledge, and the development of stakeholders’ capacity and skills [ 114 ]. Thus, the absence of a knowledge translation mechanism may cause missed opportunities for learning, inefficient problem-solving, and limited creativity. To overcome these challenges, organizations should establish effective communication and information systems [ 86 , 93 ] and learning systems [ 92 ]. Although CQI and knowledge translation interact, it is essential to recognize that they are distinct: CQI focuses on process improvement within healthcare systems, aiming to optimize existing processes, reduce errors, and enhance efficiency.

In contrast, knowledge translation bridges the gap between research evidence and clinical practice, translating research findings into actionable knowledge for practitioners. While both aim to enhance healthcare quality and patient outcomes, they employ different strategies: CQI utilizes tools like Plan-Do-Study-Act cycles and statistical process control, whereas knowledge translation involves knowledge synthesis and dissemination. Knowledge translation can also serve as a strategy to enhance CQI, and both rest on the same principle of continuous improvement. Therefore, effective strategies on the structural dimension may build efficient and effective steering councils, information systems, and structures to diffuse learning throughout the organization.

Strategic factors, such as goals, planning, funds, and resources, determine the overall purpose of CQI initiatives. Specific barriers were improper goals and poor planning [ 36 , 86 , 88 ], fragmentation of quality assurance policies [ 87 ], inadequate reinforcement to staff [ 36 , 90 ], time constraints [ 85 , 86 ], resource inadequacy [ 86 ], and work overload [ 86 ]. These barriers can be addressed through strengthening leadership [ 86 , 87 ], CQI-based mentoring [ 94 ], periodic monitoring, supportive supervision and coaching [ 43 , 53 , 87 , 92 , 95 ], participation, empowerment, and accountability [ 67 ], involving all stakeholders in decision-making [ 86 , 87 ], a provider-payer partnership [ 64 ], and compensating staff for after-hours meetings on CQI [ 85 ]. The strategic dimension, characterized by a strategic plan and integrated CQI efforts, is devoted to processes that are central to achieving strategic priorities. Roles and responsibilities are defined in terms of integrated strategic and quality-related goals [ 115 ].

The utmost goal of CQI is to improve the quality of care, which is usually revealed through structure, process, and outcome. After challenges are resolved and tools and models are used effectively, the goal of CQI reflects the ultimate reason and purpose of its implementation. First, effectively implemented CQI initiatives can improve leadership, health financing, health workforce development, health information technology, and the availability of supplies as the building blocks of a health system [ 31 , 48 , 53 , 68 , 98 ]. Second, they improved the care delivery process (counselling, adherence to standards, coordination, collaboration, and linkages) [ 48 , 53 , 65 , 68 ]. Third, CQI can improve the outputs of healthcare delivery, such as satisfaction, accessibility (timely access, utilization), continuity of care, safety, efficiency, and acceptability [ 52 , 54 , 55 , 76 , 78 ]. Finally, the effectiveness of CQI initiatives has been tested in key aspects of the HIV response, maternal and child health, non-communicable disease control, and other areas (e.g., surgery and peritonitis). However, CQI initiatives have not always been effective. For instance, CQI using a two- to nine-cycle audit model with systems assessment tools did not significantly improve syphilis testing performance [ 116 ]. That study was conducted in Aboriginal and Torres Strait Islander primary health care settings; notably, ‘the clinics may not have consistently prioritized syphilis testing performance in their improvement strategies, as facilitated by the CQI program’ [ 116 ]. Additionally, CQI-based mentoring did not significantly improve the uptake of facility-based interventions, though it was effective in increasing community health worker visits during pregnancy and the postnatal period, knowledge about maternal and child health, exclusive breastfeeding practice, and HIV disclosure status [ 117 ]. That study, conducted in South Africa, revealed no significant association between the coverage of facility-based interventions and CQI implementation, which was attributed to already high antenatal and postnatal attendance rates in both control and intervention groups at baseline, leaving little room for improvement; the coverage of HIV interventions also remained consistently high throughout the study period [ 117 ].

Regarding health care and policy implications, CQI has played a vital role in advancing PHC and fostering the realization of UHC goals worldwide. The indicators found in Donabedian’s framework that are positively influenced by CQI efforts are comparable to those included in the PHC performance initiative’s conceptual framework [ 29 , 118 , 119 ]. It is clearly explained that PHC serves as the roadmap to realizing the vision of UHC [ 120 , 121 ]. Given these circumstances, implementing CQI can contribute to the achievement of PHC principles and the objectives of UHC. For instance, by implementing CQI methods, countries have enhanced the accessibility, affordability, and quality of PHC services, leading to better health outcomes for their populations. CQI has facilitated identifying and resolving healthcare gaps and inefficiencies, enabling countries to optimize resource allocation and deliver more effective and patient-centered care. However, it is crucial to recognize that the successful implementation of Continuous Quality Improvement (CQI) necessitates optimizing the duration of each cycle, understanding challenges and barriers that extend beyond the health system and settings, and acknowledging that its effectiveness may be compromised if these challenges are not adequately addressed.

Despite abundant literature, there are still gaps regarding the relationship between CQI and other dimensions within the healthcare system. No studies have examined the impact of CQI initiatives on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, and responsiveness.

Limitations

This review has some limitations. First, only articles published in English were included, which may have excluded relevant non-English articles. Additionally, as this review follows a scoping methodology, the focus is on synthesizing the available evidence rather than critically appraising or scoring the quality of the included articles.

Conclusions

Continuous quality improvement is conceived as a continuous, ongoing intervention whose implementation time can vary across cycles. The CQI team and the implementation timeline were critical elements of CQI across models. Among the commonly used approaches, PDSA or PDCA is the most frequently employed, and across the models a wide range of tools (nineteen in total) is utilized to support the improvement process. Cultural, technical, structural, and strategic barriers and facilitators are significant in implementing CQI initiatives. CQI initiatives aim to improve health system building blocks, enhance the health service delivery process and outputs, and ultimately prevent morbidity and reduce mortality. For future researchers, considering that CQI is a context-dependent approach, conducting scale-up implementation research on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, and responsiveness across various settings and health issues would be valuable.

Availability of data and materials

The data used and/or analyzed during the current study are available in this manuscript and/or the supplementary file.

References

Shewhart WA, Deming WE. In memoriam: Walter A. Shewhart, 1891–1967. Am Stat. 1967;21(2):39–40.


Shewhart WA. Statistical method from the viewpoint of quality control. New York: Dover; 1986. ISBN 978-0486652320. OCLC 13822053. Reprint. Originally published: Washington, DC: Graduate School of the Department of Agriculture, 1939.

Moen R, editor. Foundation and history of the PDSA cycle. Asian Network for Quality Conference, Tokyo; 2009. Available from: https://www.deming.org/sites/default/files/pdf/2015/PDSA_History_Ron_Moen.pdf .

Kuperman G, James B, Jacobsen J, Gardner RM. Continuous quality improvement applied to medical care: experiences at LDS hospital. Med Decis Making. 1991;11(4suppl):S60–65.


Singh J, Singh H. Continuous improvement philosophy–literature review and directions. Benchmarking: An International Journal. 2015;22(1):75–119.

Goldstone J. Presidential address: Sony, Porsche, and vascular surgery in the 21st century. J Vasc Surg. 1997;25(2):201–10.

Radawski D. Continuous quality improvement: origins, concepts, problems, and applications. J Physician Assistant Educ. 1999;10(1):12–6.

Shortell SM, O’Brien JL, Carman JM, Foster RW, Hughes E, Boerstler H, et al. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation. Health Serv Res. 1995;30(2):377.


Lohr K. Quality of health care: an introduction to critical definitions, concepts, principles, and practicalities. Striving for quality in health care. 1991.

Berwick DM. The clinical process and the quality process. Qual Manage Healthc. 1992;1(1):1–8.


Gift B. On the road to TQM. Food Manage. 1992;27(4):88–9.


Greiner A, Knebel E. The core competencies needed for health care professionals. In: Health professions education: a bridge to quality. 2003. p. 45–73.

McCalman J, Bailie R, Bainbridge R, McPhail-Bell K, Percival N, Askew D, et al. Continuous quality improvement and comprehensive primary health care: a systems framework to improve service quality and health outcomes. Front Public Health. 2018;6(76):1–6.

Sheingold BH, Hahn JA. The history of healthcare quality: the first 100 years 1860–1960. Int J Afr Nurs Sci. 2014;1:18–22.


Donabedian A. Evaluating the quality of medical care. Milbank Q. 1966;44(3):166–206.

Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US). 2001. 2, Improving the 21st-century Health Care System. Available from: https://www.ncbi.nlm.nih.gov/books/NBK222265/ .

Rubinstein A, Barani M, Lopez AS. Quality first for effective universal health coverage in low-income and middle-income countries. Lancet Global Health. 2018;6(11):e1142–1143.


Agency for Healthcare Research and Quality. Quality improvement and monitoring at your fingertips. USA: Agency for Healthcare Research and Quality; 2022. Available from: https://qualityindicators.ahrq.gov/ .

Anderson CA, Cassidy B, Rivenburgh P. Implementing continuous quality improvement (CQI) in hospitals: lessons learned from the International Quality Study. Qual Assur Health Care. 1991;3(3):141–6.

Gardner K, Mazza D. Quality in general practice - definitions and frameworks. Aust Fam Physician. 2012;41(3):151–4.


Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Pract. 2022;28(2):E354.

Hill JE, Stephani A-M, Sapple P, Clegg AJ. The effectiveness of continuous quality improvement for developing professional practice and improving health care outcomes: a systematic review. Implement Sci. 2020;15(1):1–14.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil AB, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(02):E118–133.

Peters MD, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Reviews. 2021;10(1):1–6.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

McGowan J, Straus S, Moher D, Langlois EV, O’Brien KK, Horsley T, et al. Reporting scoping reviews—PRISMA ScR extension. J Clin Epidemiol. 2020;123:177–9.

Donabedian A. Explorations in quality assessment and monitoring: the definition of quality and approaches to its assessment. Health Administration Press, Ann Arbor. 1980;1.

World Health Organization. Operational framework for primary health care: transforming vision into action. Geneva: World Health Organization and the United Nations Children’s Fund (UNICEF); 2020 [updated 14 December 2020; cited 2023 Oct 17]. Available from: https://www.who.int/publications/i/item/9789240017832 .

The Joanna Briggs Institute. The Joanna Briggs Institute Reviewers’ Manual: 2014 edition. Australia: The Joanna Briggs Institute; 2014. p. 88–91.

Rihal CS, Kamath CC, Holmes DR Jr, Reller MK, Anderson SS, McMurtry EK, et al. Economic and clinical outcomes of a physician-led continuous quality improvement intervention in the delivery of percutaneous coronary intervention. Am J Manag Care. 2006;12(8):445–52.

Ade-Oshifogun JB, Dufelmeier T. Prevention and Management of Do not return notices: a quality improvement process for Supplemental staffing nursing agencies. Nurs Forum. 2012;47(2):106–12.

Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, Foy R, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26(1):6–15.

O’Neill SM, Hempel S, Lim YW, Danz MS, Foy R, Suttorp MJ, et al. Identifying continuous quality improvement publications: what makes an improvement intervention ‘CQI’? BMJ Qual Saf. 2011;20(12):1011–9.


Sibthorpe B, Gardner K, McAullay D. Furthering the quality agenda in Aboriginal community controlled health services: understanding the relationship between accreditation, continuous quality improvement and national key performance indicator reporting. Aust J Prim Health. 2016;22(4):270–5.

Bennett CL, Crane JM. Quality improvement efforts in oncology: are we ready to begin? Cancer Invest. 2001;19(1):86–95.

VanValkenburgh DA. Implementing continuous quality improvement at the facility level. Adv Ren Replace Ther. 2001;8(2):104–13.

Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Practice. 2022;28(2):E354–361.

Ryan M. Achieving and sustaining quality in healthcare. Front Health Serv Manag. 2004;20(3):3–11.

Nicolucci A, Allotta G, Allegra G, Cordaro G, D’Agati F, Di Benedetto A, et al. Five-year impact of a continuous quality improvement effort implemented by a network of diabetes outpatient clinics. Diabetes Care. 2008;31(1):57–62.

Wakefield BJ, Blegen MA, Uden-Holman T, Vaughn T, Chrischilles E, Wakefield DS. Organizational culture, continuous quality improvement, and medication administration error reporting. Am J Med Qual. 2001;16(4):128–34.

Sori DA, Debelew GT, Degefa LS, Asefa Z. Continuous quality improvement strategy for increasing immediate postpartum long-acting reversible contraceptive use at Jimma University Medical Center, Jimma, Ethiopia. BMJ Open Qual. 2023;12(1):e002051.

Roche B, Robin C, Deleaval PJ, Marti MC. Continuous quality improvement in ambulatory surgery: the non-attending patient. Ambul Surg. 1998;6(2):97–100.

O’Connor JB, Sondhi SS, Mullen KD, McCullough AJ. A continuous quality improvement initiative reduces inappropriate prescribing of prophylactic antibiotics for endoscopic procedures. Am J Gastroenterol. 1999;94(8):2115–21.

Ursu A, Greenberg G, McKee M. Continuous quality improvement methodology: a case study on multidisciplinary collaboration to improve chlamydia screening. Fam Med Community Health. 2019;7(2):e000085.

Quick B, Nordstrom S, Johnson K. Using continuous quality improvement to implement evidence-based medicine. Lippincotts Case Manag. 2006;11(6):305–15 ( quiz 16 – 7 ).

Oyeledun B, Phillips A, Oronsaye F, Alo OD, Shaffer N, Osibo B, et al. The effect of a continuous quality improvement intervention on retention-in-care at 6 months postpartum in a PMTCT Program in Northern Nigeria: results of a cluster randomized controlled study. J Acquir Immune Defic Syndr. 2017;75(Suppl 2):S156–164.

Nyengerai T, Phohole M, Iqaba N, Kinge CW, Gori E, Moyo K, et al. Quality of service and continuous quality improvement in voluntary medical male circumcision programme across four provinces in South Africa: longitudinal and cross-sectional programme data. PLoS ONE. 2021;16(8):e0254850.


Wang J, Zhang H, Liu J, Zhang K, Yi B, Liu Y, et al. Implementation of a continuous quality improvement program reduces the occurrence of peritonitis in PD. Ren Fail. 2014;36(7):1029–32.

Stikes R, Barbier D. Applying the plan-do-study-act model to increase the use of kangaroo care. J Nurs Manag. 2013;21(1):70–8.

Wagner AD, Mugo C, Bluemer-Miroite S, Mutiti PM, Wamalwa DC, Bukusi D, et al. Continuous quality improvement intervention for adolescent and young adult HIV testing services in Kenya improves HIV knowledge. AIDS. 2017;31(Suppl 3):S243–252.

Le RD, Melanson SE, Santos KS, Paredes JD, Baum JM, Goonan EM, et al. Using lean principles to optimise inpatient phlebotomy services. J Clin Pathol. 2014;67(8):724–30.

Manyazewal T, Mekonnen A, Demelew T, Mengestu S, Abdu Y, Mammo D, et al. Improving immunization capacity in Ethiopia through continuous quality improvement interventions: a prospective quasi-experimental study. Infect Dis Poverty. 2018;7:7.

Kamiya Y, Ishijma H, Hagiwara A, Takahashi S, Ngonyani HAM, Samky E. Evaluating the impact of continuous quality improvement methods at hospitals in Tanzania: a cluster-randomized trial. Int J Qual Health Care. 2017;29(1):32–9.

Kibbe DC, Bentz E, McLaughlin CP. Continuous quality improvement for continuity of care. J Fam Pract. 1993;36(3):304–8.

Adrawa N, Ongiro S, Lotee K, Seret J, Adeke M, Izudi J. Use of a context-specific package to increase sputum smear monitoring among people with pulmonary tuberculosis in Uganda: a quality improvement study. BMJ Open Qual. 2023;12(3):1–6.

Hunt P, Hunter SB, Levan D. Continuous quality improvement in substance abuse treatment facilities: how much does it cost? J Subst Abuse Treat. 2017;77:133–40.

Azadeh A, Ameli M, Alisoltani N, Motevali Haghighi S. A unique fuzzy multi-control approach for continuous quality improvement in a radio therapy department. Qual Quantity. 2016;50(6):2469–93.

Memiah P, Tlale J, Shimabale M, Nzyoka S, Komba P, Sebeza J, et al. Continuous quality improvement (CQI) institutionalization to reach 95:95:95 HIV targets: a multicountry experience from the Global South. BMC Health Serv Res. 2021;21(1):711.

Yapa HM, De Neve JW, Chetty T, Herbst C, Post FA, Jiamsakul A, et al. The impact of continuous quality improvement on coverage of antenatal HIV care tests in rural South Africa: results of a stepped-wedge cluster-randomised controlled implementation trial. PLoS Med. 2020;17(10):e1003150.

Dadi TL, Abebo TA, Yeshitla A, Abera Y, Tadesse D, Tsegaye S, et al. Impact of quality improvement interventions on facility readiness, quality and uptake of maternal and child health services in developing regions of Ethiopia: a secondary analysis of programme data. BMJ Open Qual. 2023;12(4):e002140.

Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, et al. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med. 2001;161(19):2357–65.

Andreoni V, Bilak Y, Bukumira M, Halfer D, Lynch-Stapleton P, Perez C. Project management: putting continuous quality improvement theory into practice. J Nurs Care Qual. 1995;9(3):29–37.

Balfour ME, Zinn TE, Cason K, Fox J, Morales M, Berdeja C, et al. Provider-payer partnerships as an engine for continuous quality improvement. Psychiatric Serv. 2018;69(6):623–5.

Agurto I, Sandoval J, De La Rosa M, Guardado ME. Improving cervical cancer prevention in a developing country. Int J Qual Health Care. 2006;18(2):81–6.

Anderson CI, Basson MD, Ali M, Davis AT, Osmer RL, McLeod MK, et al. Comprehensive multicenter graduate surgical education initiative incorporating entrustable professional activities, continuous quality improvement cycles, and a web-based platform to enhance teaching and learning. J Am Coll Surg. 2018;227(1):64–76.

Benjamin S, Seaman M. Applying continuous quality improvement and human performance technology to primary health care in Bahrain. Health Care Superv. 1998;17(1):62–71.

Byabagambi J, Marks P, Megere H, Karamagi E, Byakika S, Opio A, et al. Improving the quality of voluntary medical male circumcision through use of the continuous quality improvement approach: a pilot in 30 PEPFAR-Supported sites in Uganda. PLoS ONE. 2015;10(7):e0133369.

Hogg S, Roe Y, Mills R. Implementing evidence-based continuous quality improvement strategies in an urban Aboriginal Community Controlled Health Service in South East Queensland: a best practice implementation pilot. JBI Database Syst Rev Implement Rep. 2017;15(1):178–87.

Hopper MB, Morgan S. Continuous quality improvement initiative for pressure ulcer prevention. J Wound Ostomy Cont Nurs. 2014;41(2):178–80.

Ji J, Jiang DD, Xu Z, Yang YQ, Qian KY, Zhang MX. Continuous quality improvement of nutrition management during radiotherapy in patients with nasopharyngeal carcinoma. Nurs Open. 2021;8(6):3261–70.

Chen M, Deng JH, Zhou FD, Wang M, Wang HY. Improving the management of anemia in hemodialysis patients by implementing the continuous quality improvement program. Blood Purif. 2006;24(3):282–6.

Reeves S, Matney K, Crane V. Continuous quality improvement as an ideal in hospital practice. Health Care Superv. 1995;13(4):1–12.

Barton AJ, Danek G, Johns P, Coons M. Improving patient outcomes through CQI: vascular access planning. J Nurs Care Qual. 1998;13(2):77–85.

Buttigieg SC, Gauci D, Dey P. Continuous quality improvement in a Maltese hospital using logical framework analysis. J Health Organ Manag. 2016;30(7):1026–46.

Take N, Byakika S, Tasei H, Yoshikawa T. The effect of 5S-continuous quality improvement-total quality management approach on staff motivation, patients’ waiting time and patient satisfaction with services at hospitals in Uganda. J Public Health Afr. 2015;6(1):486.


Jacobson GH, McCoin NS, Lescallette R, Russ S, Slovis CM. Kaizen: a method of process improvement in the emergency department. Acad Emerg Med. 2009;16(12):1341–9.

Agarwal S, Gallo J, Parashar A, Agarwal K, Ellis S, Khot U, et al. Impact of lean six sigma process improvement methodology on cardiac catheterization laboratory efficiency. Catheter Cardiovasc Interv. 2015;85:S119.

Rahul G, Samanta AK, Varaprasad G. A Lean Six Sigma approach to reduce overcrowding of patients and improving the discharge process in a super-specialty hospital. In: 2020 International Conference on System, Computation, Automation and Networking (ICSCAN); 2020 Jul 3. p. 1–6. IEEE.

Patel J, Nattabi B, Long R, Durey A, Naoum S, Kruger E, et al. The 5 C model: A proposed continuous quality improvement framework for volunteer dental services in remote Australian Aboriginal communities. Community Dent Oral Epidemiol. 2023;51(6):1150–8.

Van Acker B, McIntosh G, Gudes M. Continuous quality improvement techniques enhance HMO members’ immunization rates. J Healthc Qual. 1998;20(2):36–41.

Horine PD, Pohjala ED, Luecke RW. Healthcare financial managers and CQI. Healthc Financ Manage. 1993;47(9):34.

Reynolds JL. Reducing the frequency of episiotomies through a continuous quality improvement program. CMAJ. 1995;153(3):275–82.

Bunik M, Galloway K, Maughlin M, Hyman D. First five quality improvement program increases adherence and continuity with well-child care. Pediatr Qual Saf. 2021;6(6):e484.

Boyle TA, MacKinnon NJ, Mahaffey T, Duggan K, Dow N. Challenges of standardized continuous quality improvement programs in community pharmacies: the case of SafetyNET-Rx. Res Social Adm Pharm. 2012;8(6):499–508.

Price A, Schwartz R, Cohen J, Manson H, Scott F. Assessing continuous quality improvement in public health: adapting lessons from healthcare. Healthc Policy. 2017;12(3):34–49.

Gage AD, Gotsadze T, Seid E, Mutasa R, Friedman J. The influence of continuous quality improvement on healthcare quality: a mixed-methods study from Zimbabwe. Soc Sci Med. 2022;298:114831.

Chan YC, Ho SJ. Continuous quality improvement: a survey of American and Canadian healthcare executives. Hosp Health Serv Adm. 1997;42(4):525–44.

Balas EA, Puryear J, Mitchell JA, Barter B. How to structure clinical practice guidelines for continuous quality improvement? J Med Syst. 1994;18(5):289–97.

ElChamaa R, Seely AJE, Jeong D, Kitto S. Barriers and facilitators to the implementation and adoption of a continuous quality improvement program in surgery: a case study. J Contin Educ Health Prof. 2022;42(4):227–35.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil A, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(2):E118–133.

Brandrud AS, Schreiner A, Hjortdahl P, Helljesen GS, Nyen B, Nelson EC. Three success factors for continual improvement in healthcare: an analysis of the reports of improvement team members. BMJ Qual Saf. 2011;20(3):251–9.

Lee S, Choi KS, Kang HY, Cho W, Chae YM. Assessing the factors influencing continuous quality improvement implementation: experience in Korean hospitals. Int J Qual Health Care. 2002;14(5):383–91.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15(1):39.

Hyrkäs K, Lehti K. Continuous quality improvement through team supervision supported by continuous self-monitoring of work and systematic patient feedback. J Nurs Manag. 2003;11(3):177–88.

Akdemir N, Peterson LN, Campbell CM, Scheele F. Evaluation of continuous quality improvement in accreditation for medical education. BMC Med Educ. 2020;20(Suppl 1):308.

Barzansky B, Hunt D, Moineau G, Ahn D, Lai CW, Humphrey H, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: benefits and challenges. Med Teach. 2015;37(11):1032–8.

Gaylis F, Nasseri R, Salmasi A, Anderson C, Mohedin S, Prime R, et al. Implementing continuous quality improvement in an integrated community urology practice: lessons learned. Urology. 2021;153:139–46.

Gaga S, Mqoqi N, Chimatira R, Moko S, Igumbor JO. Continuous quality improvement in HIV and TB services at selected healthcare facilities in South Africa. South Afr J HIV Med. 2021;22(1):1202.

Wang F, Yao D. Application effect of continuous quality improvement measures on patient satisfaction and quality of life in gynecological nursing. Am J Transl Res. 2021;13(6):6391–8.

Lee SB, Lee LL, Yeung RS, Chan J. A continuous quality improvement project to reduce medication error in the emergency department. World J Emerg Med. 2013;4(3):179–82.

Chiang AA, Lee KC, Lee JC, Wei CH. Effectiveness of a continuous quality improvement program aiming to reduce unplanned extubation: a prospective study. Intensive Care Med. 1996;22(11):1269–71.

Chinnaiyan K, Al-Mallah M, Goraya T, Patel S, Kazerooni E, Poopat C, et al. Impact of a continuous quality improvement initiative on appropriate use of coronary CT angiography: results from a multicenter, statewide registry, the advanced cardiovascular imaging consortium (ACIC). J Cardiovasc Comput Tomogr. 2011;5(4):S29–30.

Gibson-Helm M, Rumbold A, Teede H, Ranasinha S, Bailie R, Boyle J. A continuous quality improvement initiative: improving the provision of pregnancy care for Aboriginal and Torres Strait Islander women. BJOG: Int J Obstet Gynecol. 2015;122:400–1.

Bennett IM, Coco A, Anderson J, Horst M, Gambler AS, Barr WB, et al. Improving maternal care with a continuous quality improvement strategy: a report from the interventions to minimize preterm and low birth weight infants through continuous improvement techniques (IMPLICIT) network. J Am Board Fam Med. 2009;22(4):380–6.

Krall SP, Iv CLR, Donahue L. Effect of continuous quality improvement methods on reducing triage to thrombolytic interval for Acute myocardial infarction. Acad Emerg Med. 1995;2(7):603–9.

Swanson TK, Eilers GM. Physician and staff acceptance of continuous quality improvement. Fam Med. 1994;26(9):583–6.

Yu Y, Zhou Y, Wang H, Zhou T, Li Q, Li T, et al. Impact of continuous quality improvement initiatives on clinical outcomes in peritoneal dialysis. Perit Dial Int. 2014;34(Suppl 2):S43–48.

Schiff GD, Goldfield NI. Deming meets Braverman: toward a progressive analysis of the continuous quality improvement paradigm. Int J Health Serv. 1994;24(4):655–73.

American Hospital Association Division of Quality Resources, Chicago, IL. The role of hospital leadership in the continuous improvement of patient care quality. J Healthc Qual. 1992;14(5):8–14, 22.

Scriven M. The Logic and Methodology of checklists [dissertation]. Western Michigan University; 2000.

Hales B, Terblanche M, Fowler R, Sibbald W. Development of medical checklists for improved quality of patient care. Int J Qual Health Care. 2008;20(1):22–30.

Vermeir P, Vandijck D, Degroote S, Peleman R, Verhaeghe R, Mortier E, et al. Communication in healthcare: a narrative review of the literature and practical recommendations. Int J Clin Pract. 2015;69(11):1257–67.

Eljiz K, Greenfield D, Hogden A, Taylor R, Siddiqui N, Agaliotis M, et al. Improving knowledge translation for increased engagement and impact in healthcare. BMJ open Qual. 2020;9(3):e000983.

O’Brien JL, Shortell SM, Hughes EF, Foster RW, Carman JM, Boerstler H, et al. An integrative model for organization-wide quality improvement: lessons from the field. Qual Manage Healthc. 1995;3(4):19–30.

Adily A, Girgis S, D’Este C, Matthews V, Ward JE. Syphilis testing performance in Aboriginal primary health care: exploring impact of continuous quality improvement over time. Aust J Prim Health. 2020;26(2):178–83.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15:1–11.

Veillard J, Cowling K, Bitton A, Ratcliffe H, Kimball M, Barkley S, et al. Better measurement for performance improvement in low- and middle-income countries: the primary Health Care Performance Initiative (PHCPI) experience of conceptual framework development and indicator selection. Milbank Q. 2017;95(4):836–83.

Barbazza E, Kringos D, Kruse I, Klazinga NS, Tello JE. Creating performance intelligence for primary health care strengthening in Europe. BMC Health Serv Res. 2019;19(1):1006.

Assefa Y, Hill PS, Gilks CF, Admassu M, Tesfaye D, Van Damme W. Primary health care contributions to universal health coverage, Ethiopia. Bull World Health Organ. 2020;98(12):894.

Van Weel C, Kidd MR. Why strengthening primary health care is essential to achieving universal health coverage. CMAJ. 2018;190(15):E463–466.


Acknowledgements

Not applicable.

Funding

The authors received no funding.

Author information

Authors and Affiliations

School of Public Health, The University of Queensland, Brisbane, Australia

Aklilu Endalamaw, Resham B Khatri, Tesfaye Setegn Mengistu, Daniel Erku & Yibeltal Assefa

College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

Aklilu Endalamaw & Tesfaye Setegn Mengistu

Health Social Science and Development Research Institute, Kathmandu, Nepal

Resham B Khatri

Centre for Applied Health Economics, School of Medicine, Griffith University, Brisbane, Australia

Daniel Erku

Menzies Health Institute Queensland, Griffith University, Brisbane, Australia

International Institute for Primary Health Care in Ethiopia, Addis Ababa, Ethiopia

Eskinder Wolka & Anteneh Zewdie


Contributions

AE conceptualized the study, developed the first draft of the manuscript, and managed feedback from co-authors. YA conceptualized the study, provided feedback, and supervised the whole process. RBK, TSM, DE, EW, and AZ provided feedback throughout. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aklilu Endalamaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable because this research is based on publicly available articles.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Endalamaw, A., Khatri, R.B., Mengistu, T.S. et al. A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact. BMC Health Serv Res 24 , 487 (2024). https://doi.org/10.1186/s12913-024-10828-0


Received : 27 December 2023

Accepted : 05 March 2024

Published : 19 April 2024

DOI : https://doi.org/10.1186/s12913-024-10828-0


  • Continuous quality improvement
  • Quality of Care



  • Open access
  • Published: 27 April 2024

Assessing fragility of statistically significant findings from randomized controlled trials assessing pharmacological therapies for opioid use disorders: a systematic review

  • Leen Naji   ORCID: orcid.org/0000-0003-0994-1109 1 , 2 , 3 ,
  • Brittany Dennis 4 , 5 ,
  • Myanca Rodrigues 2 ,
  • Monica Bawor 6 ,
  • Alannah Hillmer 7 ,
  • Caroul Chawar 8 ,
  • Eve Deck 9 ,
  • Andrew Worster 2 , 4 ,
  • James Paul 10 ,
  • Lehana Thabane 11 , 2 &
  • Zainab Samaan 12 , 2  

Trials volume 25, Article number: 286 (2024)


The fragility index is a statistical measure of the robustness or “stability” of a statistically significant result. It has been adapted to assess the robustness of statistically significant outcomes from randomized controlled trials. By hypothetically switching some non-responders to responders, for instance, this metric measures how many individuals would need to have responded for a statistically significant finding to become non-statistically significant. The purpose of this study is to assess the fragility index of randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder. This will provide an indication as to the robustness of trials in the field and the confidence that should be placed in the trials’ outcomes, potentially identifying ways to improve clinical research in the field. This is especially important as opioid use disorder has become a global epidemic, and the incidence of opioid-related fatalities has climbed 500% in the past two decades.

Six databases were searched from inception to September 25, 2021, for randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder, and meeting the necessary requirements for fragility index calculation. Specifically, we included all parallel arm or two-by-two factorial design RCTs that assessed the effectiveness of any opioid substitution and antagonist therapies using a binary primary outcome and reported a statistically significant result. The fragility index of each study was calculated using methods described by Walsh and colleagues. The risk of bias of included studies was assessed using the Revised Cochrane Risk of Bias tool for randomized trials.

Ten studies with a median sample size of 82.5 (interquartile range (IQR) 58, 179, range 52–226) were eligible for inclusion. Overall risk of bias was deemed to be low in seven studies, have some concerns in two studies, and be high in one study. The median fragility index was 7.5 (IQR 4, 12, range 1–26).

Conclusions

Our results suggest that approximately eight participants are needed to overturn the conclusions of the majority of trials in opioid use disorder. Future work should focus on maximizing transparency in reporting of study results, by reporting confidence intervals, fragility indexes, and emphasizing the clinical relevance of findings.

Trial registration

PROSPERO CRD42013006507. Registered on November 25, 2013.


Introduction

Opioid use disorder (OUD) has become a global epidemic, and the incidence of opioid-related fatalities in North America is unparalleled, having climbed 500% in the past two decades [ 1 , 2 ]. There is a dire need to identify the most effective treatment modality to maintain patient engagement in treatment, mitigate high-risk consumption patterns, and eliminate overdose risk. Numerous studies have aimed to identify the most effective treatment modality for OUD [ 3 , 4 , 5 ]. Unfortunately, this multifaceted disease is complicated by the interplay between neurobiological and social factors, impacting our current body of evidence and clinical decision making. Optimal treatment selection is further challenged by the rising number of pharmacological opioid substitution and antagonist therapies (OSAT) [ 6 ]. Despite this growing body of evidence and available therapies, we have yet to arrive at a consensus regarding the best treatment modality, given the substantial variability in research findings and directly conflicting results [ 6 , 7 , 8 , 9 ]. More concerning, international clinical practice guidelines rely on out-of-date systematic review evidence to inform guideline development [ 10 ]. In fact, these guidelines make strong recommendations based on a fraction of the available evidence, drawing on trials with restrictive eligibility criteria which fail to reflect the typical OUD patients seen in clinical practice [ 10 ].

A major factor hindering our ability to advance the field of addiction medicine is our failure to apply the necessary critical lens to the growing body of evidence used to inform clinical practice. While distinct concerns exist regarding the external validity of randomized trials in addiction medicine, the robustness of the universally recognized “well designed” trials remains unknown [ 10 ]. The reliability of the results of clinical trials rests not only on the sample size of the study but also on the number of outcome events. In fact, a shift in the results of only a few events could in theory render the findings of the trial null, pushing traditional hypothesis tests above the standard threshold accepted as “statistical significance.” A metric of this fragility was first introduced in 1990, known formally as the fragility index (FI) [ 11 ]. In 2014, it was adapted for use as a tool to assess the robustness of findings from randomized controlled trials (RCTs) [ 12 ]. Briefly, the FI determines the minimum number of participants whose outcome would have to change from non-event to event in order for a statistically significant result to become non-significant. Larger FIs indicate more robust findings [ 11 , 13 ]. Additionally, when the number of study participants lost to follow-up exceeds the FI of the trial, the outcomes of these participants alone could have altered the statistical significance and final conclusions of the study. The FI has been applied across multiple fields, often yielding similar results: the change in a small number of outcome events has been powerful enough to overturn the statistical conclusions of many “well-designed” trials [ 13 ].

The concerning state of the OUD literature has left us with guidelines which neither acknowledge the lack of external validity nor temper their recommendations, going so far as to rank the quality of the evidence as good despite the concerning limitations we have raised [ 10 ]. Such alarming practices necessitate vigilance on behalf of methodologists and practitioners, who must be critical and open to a thorough review of the evidence in the field of addiction medicine [ 12 ]. Given the complex nature of OUD treatment and the increasing number of available therapies, concentrated efforts are needed to ensure the reliability and internal validity of the results of clinical trials used to inform guidelines. Application of the FI can provide additional insight into the robustness of the evidence in addiction medicine. The purpose of this study is to assess the fragility of findings of RCTs assessing OSAT for OUD.

Methods

Systematic review protocol

We conducted a systematic review of the evidence surrounding OSATs for OUD [ 5 ]. The study protocol was registered with PROSPERO a priori (PROSPERO CRD42013006507). We searched Medline, EMBASE, PubMed, PsycINFO, Web of Science, and Cochrane Library for relevant studies from inception to September 25, 2021. We included all RCTs evaluating the effectiveness of any OSAT for OUD, which met the criteria required for FI calculation. Specifically, we included all parallel arm or two-by-two factorial design RCTs that allocated patients at a 1:1 ratio, assessed the effectiveness of any OSAT using a binary primary or co-primary outcome, and reported this outcome to be statistically significant ( p < 0.05).

All titles, abstracts, and full texts were screened for eligibility by two reviewers independently and in duplicate. Any discrepancies between the two reviewers were discussed for consensus, and a third reviewer was called upon when needed.

Data extraction and risk of bias assessment (ROB)

Two reviewers extracted the following data from the included studies in duplicate and independently using a pilot-tested Excel data extraction sheet: sample size, whether a sample size calculation was conducted, statistical test used, primary outcome, number of responders and non-responders in each arm, number lost to follow-up, and the p -value. The 2021 Thomson Reuters Journal Impact Factor for each included study was also recorded. The ROB of included studies for the dichotomous outcome used in the FI calculation was assessed using the Revised Cochrane ROB tool for randomized trials [ 14 ]. Two reviewers independently assessed the included studies based on the following domains for potential ROB: randomization process, deviations from the intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results.

Statistical analyses

Study characteristics were summarized using descriptive statistics. Means and standard deviations (SD), as well as medians and interquartile ranges (IQR: Q 25 , Q 75 ), were used as measures of central tendency for continuous outcomes with normal and skewed distributions, respectively. Frequencies and percentages were used to summarize categorical variables. The FI was calculated using a publicly available free online calculator, following the methods described by Walsh et al. [ 12 , 15 ]. In summary, the number of events and non-events in each treatment arm were entered into a two-by-two contingency table for each trial. An event was added to the treatment arm with the smaller number of events, while subtracting a non-event from the same arm, thus keeping the overall sample size the same. Each time this was done, the two-sided p -value for Fisher’s exact test was recalculated. The FI was defined as the number of non-events that needed to be switched to events for the p -value to reach non-statistical significance (i.e., ≥0.05).
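To make the event-switching procedure concrete, here is a minimal Python sketch of the Walsh-style calculation. It is an illustration under the assumptions above, not the authors' code or the online calculator they used; the function name and the example counts are ours.

```python
from scipy.stats import fisher_exact

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Minimum number of non-events that must become events (in the arm
    with fewer events) before Fisher's exact p-value reaches alpha."""
    # 2x2 table: rows are arms, columns are [events, non-events]
    table = [[e1, n1 - e1], [e2, n2 - e2]]
    _, p = fisher_exact(table)
    if p >= alpha:
        return 0  # already non-significant
    arm = 0 if e1 <= e2 else 1  # arm with the smaller number of events
    fi = 0
    while p < alpha and table[arm][1] > 0:
        table[arm][0] += 1  # switch one non-event to an event...
        table[arm][1] -= 1  # ...keeping the arm's sample size fixed
        fi += 1
        _, p = fisher_exact(table)
    return fi

# Hypothetical trial: 40/100 vs 20/100 responders
print(fragility_index(40, 100, 20, 100))
```

Note that the loop keeps each arm's sample size constant, so the FI reflects only the stability of the event split, not any change in trial size.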

We intended to conduct a linear regression and Spearman’s rank correlations to assess the association between FI and journal impact factor, study sample size, and number of events. However, we were not powered to do so given the limited number of eligible studies included in this review and thus refrained from conducting any inferential statistics.
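Had the number of eligible studies permitted, the planned rank correlation is a one-line computation; the sketch below uses hypothetical FI and sample-size vectors, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical FI values and sample sizes for a set of trials
fi = [1, 3, 4, 6, 8, 10, 12, 26]
n = [52, 56, 64, 82, 100, 150, 179, 226]

rho, p = spearmanr(fi, n)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```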

We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for reporting (see Supplementary Material ) [ 16 ].

Results

Study selection

Our search yielded 13,463 unique studies, of which 104 were RCTs evaluating OSAT for OUD. Among these, ten studies met the criteria required for FI calculation and were included in our analyses. Please refer to Fig. 1 for the search results and study inclusion flow diagram, and to Table 1 for details on included studies.

Fig. 1: PRISMA flow diagram delineating study selection

Characteristics of included studies

The included studies were published between 1980 and 2018, in eight different journals with a median impact factor of 8.48 (IQR 6.53–56.27, range 3.77–91.25). Four studies reported on a calculated sample size [ 17 , 18 , 19 , 20 ], and only one study specified that reporting guidelines were used [ 21 ]. Treatment retention was the most commonly reported primary outcome ( k = 8). The median sample size of included studies was 82.5 (IQR 58–179, range 52–226).

Overall ROB was deemed to be low in seven studies [ 17 , 19 , 20 , 21 , 22 , 23 , 24 ], have some concerns in two studies [ 18 , 25 ], and be high in one study [ 26 ] due to a high proportion of missing outcome data that was not accounted for in the analyses. We present a breakdown of the ROB assessment of the included studies for the dichotomous outcome of interest in Table 2 .

Fragility index

The median FI of included studies was 7.5 (IQR 4–12; range 1–26). The FI of individual studies is reported in Table 1 . The number of participants lost to follow-up exceeded the FI in two studies [ 23 , 26 ]. A positive correlation was apparent between the FI and sample size; however, no clear correlation was appreciated between the FI and journal impact factor or the number of events.

Discussion

This is the first study to evaluate the FI in the field of addiction medicine, and more specifically in OUD trials. Among the ten RCTs evaluating OSAT for OUD, we found that, in some cases, changing the outcome of one or two participants could completely alter the study’s conclusions and render the results statistically non-significant.

We compare our findings to those of Holek et al. , who examined the distribution of FIs across all methodological reviews published in PubMed between 2014 and 2019, irrespective of discipline (though none were in addiction medicine) [ 13 ]. Among 24 included reviews with a median sample size of 134 (IQR 82, 207), they found a mean FI of 4 (95% CI 3, 5) [ 13 ]. This is slightly lower than our calculated median FI of 7.5 (IQR 4–12; range 1–26). It is important to note that half of the reviews included in the study by Holek et al. were conducted in surgical disciplines, which are generally subject to more limitations to internal and external validity, as it is often not possible to conceal allocation or to blind participants or operators, and the intervention is operator dependent [ 27 ]. To date, no study has directly applied the FI to the findings of trials in OUD. In the HIV/AIDS literature, however, a population which commonly overlaps with addiction medicine given the prevalence of coexisting comorbidities, the median fragility across all trials assessing anti-retroviral therapies ( n = 39) was 6 (IQR = 1, 11) [ 28 ], which is closer to our calculated FI. Among the trials included in that review, only 3 were deemed to be at high risk of bias, whereas 13 and 20 were deemed to be at low and some risk of bias, respectively.

Loss to follow-up plays an important role in the interpretation of the FI. For instance, when the number of study participants lost to follow-up exceeds the FI of the trial, the outcomes of these participants alone could have altered the statistical significance and final conclusions of the study. While the number of participants lost to follow-up exceeded the FI in only two of the included studies [ 23 , 26 ], this metric is less important in our case given that the primary outcome assessed by the majority of trials was retention in treatment, rendering loss to follow-up an outcome in itself. In our report, we considered participants to be lost to follow-up if they left the study for reasons that were known and not necessarily indicative of treatment failure, such as factors beyond the participants’ control, including incarceration or transfer to another treatment location.

Findings from our analysis of the literature, as well as the application of the FI to the existing clinical trials in the field of addiction medicine, demonstrate significant concerns regarding the robustness of the evidence. This, in conjunction with the large differences between the clinical population of opioid-dependent patients and the trial participants enrolled in addiction medicine trials, raises larger concerns about a growing body of evidence with deficiencies in both internal and external validity. The findings from this study raise important clinical concerns regarding the applicability of the current evidence to treating patients in the context of the opioid epidemic. Are we recommending the appropriate treatments for patients with OUD based on robust and applicable evidence? Are we completing our due diligence and ensuring clinicians and researchers alike understand the critical issues rampant in the literature, including the fragility of the data and misconceptions of p -values? Are we possibly putting our patients at risk by employing such treatments based on fragile data? These questions cannot be answered until the appropriate re-evaluation of the evidence takes place, employing both pragmatic trial designs and transparent metrics that reflect the reliability and robustness of the findings.

Strengths and limitations

Our study is strengthened by a comprehensive search strategy, rigorous and systematic screening of studies, and the use of an objective measure to gauge the robustness of studies (i.e., the FI). The limitations of this study are inherent in the limitations of the FI itself: it can only be calculated for RCTs with a 1:1 allocation ratio, a parallel arm or two-by-two factorial design, and a dichotomous primary outcome. As a result, 94 RCTs evaluating OSAT for OUD were excluded for not meeting these criteria (Fig. 1 ). Nonetheless, the FI provides a general sense of the robustness of the available studies, and our data reflect studies published across almost four decades in journals of varying impact factor.

Future direction

This study serves as further evidence of the need for a shift away from p -values [ 29 , 30 ]. Although statisticians are increasingly moving away from reliance on statistical significance because of its inability to convey clinical importance [ 31 ], the p -value remains the simplest and most commonly reported metric in manuscripts. p -values provide a simple statistical measure with which to confirm or refute a null hypothesis, by quantifying how likely the observed result would be if the null hypothesis were true. An arbitrary cutoff of 5% is traditionally used as the threshold for rejecting the null hypothesis. However, a major drawback of the p -value is that it does not take into account the effect size of the outcome measure: a small incremental change that is not clinically meaningful may still be statistically significant in a large enough trial. Conversely, a very large effect size with biological plausibility may not reach statistical significance if the trial is not large enough [ 29 , 30 ]. This is highly problematic given the common misconceptions surrounding the p -value. Increasing emphasis is being placed on transparency in outcome reporting and on reporting confidence intervals, which allow the reader to gauge the uncertainty in the evidence and make a clinically informed decision about whether a finding is clinically significant. It has also been recommended that studies report the FI where possible to provide readers with a comprehensible way of gauging the robustness of their findings [ 12 , 13 ]. There is also a drive to make all data publicly available, allowing for replication of study findings as well as pooling of data among databases to generate more robust analyses using larger pragmatic samples [ 32 ]. Together, these efforts aim to increase the transparency of research and facilitate data sharing so that stronger and more robust evidence can be produced, allowing for advancements in evidence-based medicine and improvements in the quality of care delivered to patients.

Conclusions

Our results suggest that approximately eight participants are needed to overturn the conclusions of the majority of trials in addiction medicine. Findings from our analysis of the literature and the application of the FI to existing clinical trials in the field demonstrate significant concerns regarding the overall quality, and specifically the robustness and stability, of the evidence and the conclusions of the trials. This work raises larger concerns about a growing body of evidence with deficiencies in both internal and external validity. In order to advance the field of addiction medicine, we must re-evaluate the quality of the evidence and consider employing pragmatic trial designs as well as transparent metrics that reflect the reliability and robustness of the findings. Placing emphasis on clinical relevance and reporting the FI along with confidence intervals may provide researchers, clinicians, and guideline developers with a transparent method for assessing the outcomes of clinical trials, ensuring vigilance in decisions regarding the management and treatment of patients with substance use disorders.

Availability of data and materials

All data generated or analyzed during this study are included in this published article (and its supplementary information files).

Abbreviations

  • IQR: Interquartile range
  • OUD: Opioid use disorder
  • OSAT: Opioid substitution and antagonist therapies
  • RCT: Randomized controlled trial
  • ROB: Risk of bias
  • SD: Standard deviation
  • PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Products - Vital Statistics Rapid Release - Provisional Drug Overdose Data. https://www.cdc.gov/nchs/nvss/vsrr/drug-overdose-data.htm . Accessed April 26, 2020.

Spencer MR, Miniño AM, Warner M. Drug overdose deaths in the United States, 2001–2021. NCHS Data Brief, no 457. Hyattsville, MD: National Center for Health Statistics. 2022. https://doi.org/10.15620/cdc:122556 .

Mattick RP, Breen C, Kimber J, Davoli M. Methadone maintenance therapy versus no opioid replacement therapy for opioid dependence. Cochrane Database Syst Rev. 2009;(3).  https://doi.org/10.1002/14651858.CD002209.PUB2/FULL .

Hedrich D, Alves P, Farrell M, Stöver H, Møller L, Mayet S. The effectiveness of opioid maintenance treatment in prison settings: a systematic review. Addiction. 2012;107(3):501–17. https://doi.org/10.1111/J.1360-0443.2011.03676.X .


Dennis BB, Naji L, Bawor M, et al. The effectiveness of opioid substitution treatments for patients with opioid dependence: a systematic review and multiple treatment comparison protocol. Syst Rev. 2014;3(1):105. https://doi.org/10.1186/2046-4053-3-105 .


Dennis BB, Sanger N, Bawor M, et al. A call for consensus in defining efficacy in clinical trials for opioid addiction: combined results from a systematic review and qualitative study in patients receiving pharmacological assisted therapy for opioid use disorder. Trials. 2020;21(1). https://doi.org/10.1186/s13063-019-3995-y .

British Columbia Centre on Substance Use. (2017). A Guideline for the Clinical Management of Opioid Use Disorder . http://www.bccsu.ca/care-guidance-publications/ . Accessed December 4, 2020.

Kampman  K, Jarvis M. American Society of Addiction Medicine (ASAM) national practice guideline for the use of medications in the treatment of addiction involving opioid use. J Addict Med. 2015;9(5):358–367.

Srivastava A, Wyman J, Fcfp MD, Mph D. Methadone treatment for people who use fentanyl: recommendations. 2021. www.metaphi.ca . Accessed November 14, 2023.

Dennis BB, Roshanov PS, Naji L, et al. Opioid substitution and antagonist therapy trials exclude the common addiction patient: a systematic review and analysis of eligibility criteria. Trials. 2015;16(1):1. https://doi.org/10.1186/s13063-015-0942-4 .


Feinstein AR. The unit fragility index: an additional appraisal of “statistical significance” for a contrast of two proportions. J Clin Epidemiol. 1990;43(2):201–9. https://doi.org/10.1016/0895-4356(90)90186-S .


Walsh M, Srinathan SK, McAuley DF, et al. The statistical significance of randomized controlled trial results is frequently fragile: a case for a fragility index. J Clin Epidemiol. 2014;67(6):622–8. https://doi.org/10.1016/j.jclinepi.2013.10.019 .

Holek M, Bdair F, Khan M, et al. Fragility of clinical trials across research fields: a synthesis of methodological reviews. Contemp Clin Trials. 2020;97. doi: https://doi.org/10.1016/j.cct.2020.106151

Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366. doi: https://doi.org/10.1136/bmj.l4898

Kane SP. Fragility Index Calculator. ClinCalc: https://clincalc.com/Stats/FragilityIndex.aspx . Updated July 19, 2018. Accessed October 17, 2023.

Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71 .


Petitjean S, Stohler R, Déglon JJ, et al. Double-blind randomized trial of buprenorphine and methadone in opiate dependence. Drug Alcohol Depend. 2001;62(1):97–104. https://doi.org/10.1016/S0376-8716(00)00163-0 .

Sees KL, Delucchi KL, Masson C, et al. Methadone maintenance vs 180-day psychosocially enriched detoxification for treatment of opioid dependence: a randomized controlled trial. JAMA. 2000;283(10):1303–10. https://doi.org/10.1001/JAMA.283.10.1303 .

Kakko J, Dybrandt Svanborg K, Kreek MJ, Heilig M. 1-year retention and social function after buprenorphine-assisted relapse prevention treatment for heroin dependence in Sweden: a randomised, placebo-controlled trial. Lancet (London, England). 2003;361(9358):662–8. https://doi.org/10.1016/S0140-6736(03)12600-1 .

Oviedo-Joekes E, Brissette S, Marsh DC, et al. Diacetylmorphine versus methadone for the treatment of opioid addiction. N Engl J Med. 2009;361(8):777–86. https://doi.org/10.1056/NEJMoa0810635 .


Hulse GK, Morris N, Arnold-Reed D, Tait RJ. Improving clinical outcomes in treating heroin dependence: randomized, controlled trial of oral or implant naltrexone. Arch Gen Psychiatry. 2009;66(10):1108–15. https://doi.org/10.1001/ARCHGENPSYCHIATRY.2009.130 .

Krupitsky EM, Zvartau EE, Masalov DV, et al. Naltrexone for heroin dependence treatment in St. Petersburg, Russia. J Subst Abuse Treat. 2004;26(4):285–94. https://doi.org/10.1016/j.jsat.2004.02.002 .

Krook AL, Brørs O, Dahlberg J, et al. A placebo-controlled study of high dose buprenorphine in opiate dependents waiting for medication-assisted rehabilitation in Oslo, Norway. Addiction. 2002;97(5):533–42. https://doi.org/10.1046/J.1360-0443.2002.00090.X .

Hartnoll RL, Mitcheson MC, Battersby A, et al. Evaluation of heroin maintenance in controlled trial. Arch Gen Psychiatry. 1980;37(8):877–84. https://doi.org/10.1001/ARCHPSYC.1980.01780210035003 .

Fischer G, Gombas W, Eder H, et al. Buprenorphine versus methadone maintenance for the treatment of opioid dependence. Addiction. 1999;94(9):1337–47. https://doi.org/10.1046/J.1360-0443.1999.94913376.X .

Yancovitz SR, Des Jarlais DC, Peyser NP, et al. A randomized trial of an interim methadone maintenance clinic. Am J Public Health. 1991;81(9):1185–91. https://doi.org/10.2105/AJPH.81.9.1185 .

Demange MK, Fregni F. Limits to clinical trials in surgical areas. Clinics (Sao Paulo). 2011;66(1):159–61. https://doi.org/10.1590/S1807-59322011000100027 .

Wayant C, Meyer C, Gupton R, Som M, Baker D, Vassar M. The fragility index in a cohort of HIV/AIDS randomized controlled trials. J Gen Intern Med. 2019;34(7):1236–43. https://doi.org/10.1007/S11606-019-04928-5 .

Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature. 2019;567(7748):305–7. https://doi.org/10.1038/D41586-019-00857-9 .

Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124. https://doi.org/10.1371/journal.pmed.0020124 .

Goodman SN. Toward evidence-based medical statistics. 1: the p value fallacy. Ann Intern Med. 1999;130(12):995–1004. https://doi.org/10.7326/0003-4819-130-12-199906150-00008 .

Allison DB, Shiffrin RM, Stodden V. Reproducibility of research: issues and proposed remedies. Proc Natl Acad Sci U S A. 2018;115(11):2561–2. https://doi.org/10.1073/PNAS.1802324115 .


Acknowledgements

The authors received no funding for this work.

Author information

Authors and affiliations.

Department of Family Medicine, David Braley Health Sciences Centre, McMaster University, 100 Main St W, 3rd Floor, Hamilton, ON, L8P 1H6, Canada

Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada

Leen Naji, Myanca Rodrigues, Andrew Worster, Lehana Thabane & Zainab Samaan

Department of Medicine, Montefiore Medical Center, New York, NY, USA

Department of Medicine, McMaster University, Hamilton, ON, Canada

Brittany Dennis & Andrew Worster

Department of Medicine, University of British Columbia, Vancouver, Canada

Brittany Dennis

Department of Medicine, Imperial College Healthcare NHS Trust, London, UK

Monica Bawor

Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada

Alannah Hillmer

Physician Assistant Program, University of Toronto, Toronto, ON, Canada

Caroul Chawar

Department of Family Medicine, Western University, London, ON, Canada

Department of Anesthesia, McMaster University, Hamilton, ON, Canada

Biostatistics Unit, Research Institute at St Joseph’s Healthcare, Hamilton, ON, Canada

Lehana Thabane

Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada

Zainab Samaan


Contributions

LN, BD, MB, LT, and ZS conceived the research question and protocol. LN, BD, MR, and AH designed the search strategy and ran the literature search. LN, BD, MR, AH, CC, and ED contributed to screening studies for eligibility and data extraction. LN and LT analyzed data. All authors contributed equally to the writing and revision of the manuscript. All authors approved the final version of the manuscript.

Corresponding author

Correspondence to Leen Naji .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.



About this article

Cite this article

Naji, L., Dennis, B., Rodrigues, M. et al. Assessing fragility of statistically significant findings from randomized controlled trials assessing pharmacological therapies for opioid use disorders: a systematic review. Trials 25 , 286 (2024). https://doi.org/10.1186/s13063-024-08104-x


Received : 11 December 2022

Accepted : 10 April 2024

Published : 27 April 2024

DOI : https://doi.org/10.1186/s13063-024-08104-x


  • Research methods
  • Critical appraisal
  • Systematic review




What the New Overtime Rule Means for Workers


One of the basic principles of the American workplace is that a hard day’s work deserves a fair day’s pay. Simply put, every worker’s time has value. A cornerstone of that promise is the  Fair Labor Standards Act ’s (FLSA) requirement that when most workers work more than 40 hours in a week, they get paid more. The  Department of Labor ’s new overtime regulation is restoring and extending this promise for millions more lower-paid salaried workers in the U.S.

Overtime protections have been a critical part of the FLSA since 1938 and were established to protect workers from exploitation and to benefit workers, their families and our communities. Strong overtime protections help build America’s middle class and ensure that workers are not overworked and underpaid.

Some workers are specifically exempt from the FLSA’s minimum wage and overtime protections, including bona fide executive, administrative or professional employees. This exemption, typically referred to as the “EAP” exemption, applies when: 

1. An employee is paid a salary,  

2. The salary is not less than a minimum salary threshold amount, and 

3. The employee primarily performs executive, administrative or professional duties.

While the department increased the minimum salary required for the EAP exemption from overtime pay every 5 to 9 years between 1938 and 1975, long periods between increases to the salary requirement after 1975 have caused an erosion of the real value of the salary threshold, lessening its effectiveness in helping to identify exempt EAP employees.

The department’s new overtime rule was developed based on almost 30 listening sessions across the country and the final rule was issued after reviewing over 33,000 written comments. We heard from a wide variety of members of the public who shared valuable insights to help us develop this Administration’s overtime rule, including from workers who told us: “I would love the opportunity to...be compensated for time worked beyond 40 hours, or alternately be given a raise,” and “I make around $40,000 a year and most week[s] work well over 40 hours (likely in the 45-50 range). This rule change would benefit me greatly and ensure that my time is paid for!” and “Please, I would love to be paid for the extra hours I work!”

The department’s final rule, which will go into effect on July 1, 2024, will increase the standard salary level that helps define and delimit which salaried workers are entitled to overtime pay protections under the FLSA. 

Starting July 1, most salaried workers who earn less than $844 per week will become eligible for overtime pay under the final rule. And on Jan. 1, 2025, most salaried workers who make less than $1,128 per week will become eligible for overtime pay. As these changes occur, job duties will continue to determine overtime exemption status for most salaried employees.

Who will become eligible for overtime pay under the final rule?

  • Currently: most salaried workers earning less than $684/week.
  • Starting July 1, 2024: most salaried workers earning less than $844/week.
  • Starting Jan. 1, 2025: most salaried workers earning less than $1,128/week.
  • Starting July 1, 2027: the eligibility thresholds will be updated every three years, based on current wage data.

The rule will also increase the total annual compensation requirement for highly compensated employees (who are not entitled to overtime pay under the FLSA if certain requirements are met) from $107,432 per year to $132,964 per year on July 1, 2024, and then set it equal to $151,164 per year on Jan. 1, 2025.

Starting July 1, 2027, these earnings thresholds will be updated every three years so they keep pace with changes in worker salaries, ensuring that employers can adapt more easily because they’ll know when salary updates will happen and how they’ll be calculated.
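As a rough illustration of how the salary test in this schedule works, the sketch below encodes the weekly standard thresholds described above (the $684 pre-July-2024 level comes from the eligibility summary earlier in the post). It is a simplification, not official guidance: actual overtime status also depends on the duties test and other FLSA exemptions.

```python
from datetime import date

# Standard salary thresholds described in the post (per week, USD),
# newest first. The final entry is the level in effect before July 1, 2024.
THRESHOLDS = [
    (date(2025, 1, 1), 1128.0),
    (date(2024, 7, 1), 844.0),
    (date(1900, 1, 1), 684.0),
]

def overtime_eligible_by_salary(weekly_salary: float, on: date) -> bool:
    """True when the salary alone falls below the standard level in effect
    on the given date, making the worker overtime-eligible under the
    salary test regardless of duties."""
    for effective, level in THRESHOLDS:
        if on >= effective:
            return weekly_salary < level
    return False

print(overtime_eligible_by_salary(900.0, date(2024, 8, 1)))  # False: above $844
print(overtime_eligible_by_salary(900.0, date(2025, 1, 2)))  # True: below $1,128
```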

The final rule will restore and extend the right to overtime pay to many salaried workers, including workers who historically were entitled to overtime pay under the FLSA because of their lower pay or the type of work they performed. 

We urge workers and employers to visit  our website to learn more about the final rule.

Jessica Looman is the administrator for the U.S. Department of Labor’s Wage and Hour Division. Follow the Wage and Hour Division on Twitter at  @WHD_DOL  and  LinkedIn .  Editor's note: This blog was edited to correct a typo (changing "administrator" to "administrative.")

  • Wage and Hour Division (WHD)
  • Fair Labor Standards Act
  • overtime rule




Open Access

Peer-reviewed

Research Article

Scoping review of hearing loss attributed to congenital syphilis

Roles Data curation, Writing – original draft, Writing – review & editing

Affiliation Department of Pediatrics, Queen’s University, Kingston, Ontario, Canada

Roles Data curation, Formal analysis, Writing – review & editing

Affiliation Research Investigator, University of Alberta, Edmonton, Alberta, Canada

Roles Data curation, Writing – review & editing

Affiliation Department of Nursing, University of Alberta, Edmonton, Alberta, Canada


Affiliation Department of Anesthesiology, University of Alberta, Edmonton, Alberta, Canada

Roles Conceptualization, Formal analysis, Supervision, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada

Roles Conceptualization, Data curation, Project administration, Supervision, Writing – original draft, Writing – review & editing

Affiliation Division of Otolaryngology-Head & Neck Surgery, University of Alberta, Edmonton, Alberta, Canada

  • Aleena Amjad Hafeeez, 
  • Karina Cavalcanti Bezerra, 
  • Zaharadeen Jimoh, 
  • Francesca B. Seal, 
  • Joan L. Robinson, 
  • Nahla A. Gomaa


  • Published: April 26, 2024
  • https://doi.org/10.1371/journal.pone.0302452


There are no narrative or systematic reviews of hearing loss in patients with congenital syphilis.

The aim of this study was to perform a scoping review to determine what is known about the incidence, characteristics, prognosis, and therapy of hearing loss in children or adults with presumed congenital syphilis.

Eligibility criteria

PROSPERO, OVID Medline, OVID EMBASE, Cochrane Library (CDSR and Central), Proquest Dissertations and Theses Global, and SCOPUS were searched from inception to March 31, 2023. Articles were included if i) patients with hearing loss were screened for CS, ii) patients with CS were screened for hearing loss, iii) they were case reports or case series that describe the characteristics of hearing loss, or iv) an intervention for hearing loss attributed to CS was studied.

Sources of evidence

Thirty-six articles met the inclusion criteria.

Five studies reported an incidence of CS in 0.3% to 8% of children with hearing loss, but all had a high risk of bias. Seven reported that 0 to 19% of children with CS had hearing loss, but the only one with a control group showed comparable rates in cases and controls. There were 18 case reports/ case series (one of which also reported screening children with hearing loss for CS), reporting that the onset of hearing loss was usually first recognized during adolescence or adulthood. The 7 intervention studies were all uncontrolled and published in 1983 or earlier and reported variable results following treatment with penicillin, prednisone, and/or ACTH.

Conclusions

The current literature is not informative with regard to the incidence, characteristics, prognosis, and therapy of hearing loss in children or adults with presumed congenital syphilis.

Citation: Amjad Hafeeez A, Cavalcanti Bezerra K, Jimoh Z, Seal FB, Robinson JL, Gomaa NA (2024) Scoping review of hearing loss attributed to congenital syphilis. PLoS ONE 19(4): e0302452. https://doi.org/10.1371/journal.pone.0302452

Editor: Bolajoko O. Olusanya, Center for Healthy Start Initiative, NIGERIA

Received: October 6, 2023; Accepted: April 2, 2024; Published: April 26, 2024

Copyright: © 2024 Hafeeez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All data are in the manuscript and/or supporting information files.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Syphilis is a sexually transmitted infection caused by the bacterium Treponema pallidum . If not recognized and treated early in pregnancy, fetal transmission commonly occurs [ 1 ]. According to the international Joint Committee on Infant Hearing, congenital syphilis (CS) is a risk indicator for hearing loss [ 2 , 3 ]. The Centers for Disease Control and Prevention state: “Otosyphilis is caused by an infection of the cochleovestibular system with T . pallidum and typically presents with sensorineural hearing loss, tinnitus, or vertigo. Hearing loss can be unilateral or bilateral, have a sudden onset, and progress rapidly.” ( Neurosyphilis, Ocular Syphilis, and Otosyphilis ( cdc.gov ) ). Almost all cases of CS are treated with penicillin which is not known to be ototoxic.

For decades, congenital syphilis had almost disappeared in Canada and the United States due to low rates of syphilis in the community and universal prenatal screening. The number of cases of confirmed early congenital syphilis born to women aged 15–39 years in Canada rose from 17 in 2018 to 117 in 2022 [ 4 ]. Trends in the United States (US) mirror this, with an increase from 1325 congenital syphilis cases in 2018 to 3755 in 2022 [ 5 ].

The recent resurgence has increased interest in the clinical manifestations and complications of congenital syphilis. There are no published data summarizing the incidence or characteristics of hearing loss due to congenital syphilis. Despite the larger number of cases now occurring in Canada and the US, there are no evidence-based guidelines on screening or management of hearing loss in children with congenital syphilis. We therefore performed a scoping review. Our specific questions were:

  • How often is hearing loss due to congenital syphilis?
  • What is the incidence of hearing loss in children with congenital syphilis?
  • When hearing loss occurs from congenital syphilis, what is the usual age of onset? Is it unilateral or bilateral? How severe is it? How rapidly does it progress?
  • Is there evidence for any interventions for treatment of hearing loss attributed to congenital syphilis?

This will inform the studies that need to be done to determine the incidence and age of onset of hearing loss from CS, the severity of hearing loss, and interventions that warrant further study.

Methods

The methodology was based on the Preferred Reporting Items for a Systematic Review and Meta-analysis Extension for Scoping Reviews: the PRISMA-ScR statement [ 6 ] (see attached S1 Checklist ). A search was executed by a health librarian on the following databases: PROSPERO, OVID Medline, OVID EMBASE, Cochrane Library (CDSR and Central), Proquest Dissertations and Theses Global, and SCOPUS, using controlled vocabulary (e.g., MeSH, Emtree) and key words representing the concepts “congenital syphilis” or “hearing loss” ( S1 Appendix ). Databases were searched from inception to October 17, 2021, with an updated search to March 31, 2023.

Articles were included if they described persons of any age with hearing loss that the authors of the article attributed to congenital syphilis. To delineate the burden and incidence of hearing loss from congenital syphilis, we included any studies that i) screened children with hearing loss for evidence of congenital syphilis or ii) screened children with congenital syphilis for hearing loss. We also included randomized controlled trials (RCTs), cross-sectional studies, case series, and case reports that described the characteristics of hearing loss, the long-term outcomes of hearing loss, or the results of any interventions for hearing loss. We excluded autopsy reports, animal studies, studies focusing solely on acquired syphilis and those published in a language other than English, French, or Portuguese.

Articles published in English were screened by two reviewers independently [AH, KC], and conflicts were resolved by a senior author [JR, NG]. Articles published in French had a single reviewer [FS]. There were no articles published in Portuguese. Because of the small number of recent articles, preprints were included. The protocol has not been published.

Studies were divided into four types: i) those that screened patients with hearing loss for congenital syphilis, ii) those that screened patients with congenital syphilis for hearing loss, iii) case reports or case series that describe the characteristics of hearing loss in patients with congenital syphilis, and iv) studies that describe an intervention for hearing loss attributed to congenital syphilis. Data were collected and managed using Research Electronic Data Capture (REDCap) tools [ 7 ] hosted at the University of Alberta with the extracted data determined by the study type. Data were entered by a single investigator. The JBI critical appraisal tool was used as appropriate to assess all included studies [ 8 – 11 ] ( S2 Appendix ). The critical appraisal and bias risk assessment was completed by a single reviewer [NG], and all studies were rated as high, unclear or low risk of bias.

Results

The search yielded 1983 records, of which 832 were duplicates. Screening led to 159 records for full-text review, of which 36 met inclusion criteria ( Fig 1 ). The figure outlines the reasons for exclusion of other records.

Fig 1. https://doi.org/10.1371/journal.pone.0302452.g001

Screening of patients with hearing loss for congenital syphilis

There were 5 studies where patients with hearing loss were screened for CS. They were published from 1900 to 1990 and all had a high risk of bias (Tables 1 and 2 ). The incidence of CS ranged from 0.3% to 8% in children attending schools for the hearing impaired and was 2% in children seen at a clinic for the hearing impaired.

Table 1. https://doi.org/10.1371/journal.pone.0302452.t001

Table 2. https://doi.org/10.1371/journal.pone.0302452.t002

Screening of patients with CS for hearing loss

There were 7 studies, of which 4 were published from 2016 to 2022 ( Table 3 ). The risk of bias was high for 1, unclear for 3, and low for 3. Hearing loss was reported in 0 to 19% of children with probable or proven CS. One study from the modern era showed an incidence of 6% (22/342) [ 12 ]. However, a small recent study reported no hearing loss for 7 infants treated in utero, a 5% incidence for 37 treated at birth, and a 6% incidence in 49 controls [ 23 ].

Table 3. https://doi.org/10.1371/journal.pone.0302452.t003

Case series and case reports of hearing loss attributed to CS

There were 10 case series (one of which was also included in Table 2 ) ( Table 4A ) and 8 case reports ( Table 4B ) of which all but 6 were published prior to 1980. The risk of bias was high for 5 articles, unclear for 3 and low for 10. In these reports, hearing loss was often first noted in adolescence or adulthood with the youngest being 5 years old at diagnosis. Many cases also had interstitial keratitis. Follow-up was too variable to allow determination of the expected rate of progression of hearing loss. A wide variety of therapies are reported with small numbers of patients and inconsistent results that were often subjective.

Table 4. A ‐ Case series of hearing loss attributed to congenital syphilis. B ‐ Case reports of hearing loss attributed to congenital syphilis.

https://doi.org/10.1371/journal.pone.0302452.t004

Studies with interventions for hearing loss

The 7 studies included a range of 6 to 39 patients with the most recent one being from 1983 ( Table 5 ). All were observational. Most commonly patients were prescribed penicillin with addition of prednisone followed by ACTH if response was poor or transient. Outcomes were often subjective and inconsistent. Risk of bias was unclear for 5 studies and low for 2 studies.

Table 5. https://doi.org/10.1371/journal.pone.0302452.t005

Discussion

The scoping review shows that studies of hearing loss due to congenital syphilis are limited and of low quality. All but one study, reported as a pre-print [ 23 ], were observational, and only 15 of 36 studies (42%) were at low risk of bias. One cannot determine the incidence or characteristics of hearing loss from congenital syphilis, or the efficacy of interventions, from this review. It seems unlikely that a systematic review would find further studies that could answer these questions.

As expected, there were major variations in the study methodologies employed to diagnose hearing loss. In the early 1900s, investigators used basic tuning fork tests and subjective behavioral responses [ 24 ]. Studies performed after the year 2000 used full diagnostic tests or Auditory Brainstem Responses (ABR) for neonates [ 21 ].

A small percentage of children attending schools for the hearing impaired had evidence of congenital syphilis. However, these data are of limited value without a control group from the same jurisdiction. The percentage of hearing loss that is due to congenital syphilis no doubt varies considerably by country and over time.

It is perhaps unexpected that almost all case reports and case series describe recognition of hearing loss only in adolescence or adulthood. It is possible that hearing loss started years prior but was not recognized, particularly, if the hearing loss was slowly progressive. The major problem with all these reports is that they do not exclude the possibility that the patient had acquired syphilis or had another etiology for their hearing loss.

Clearly, there is a paucity of up-to-date literature regarding this important health problem. The majority of articles were published before 1980. The recent surge in congenital syphilis cases in Canada and the United States may lead to further studies. Recent results from neonatal hearing screening programs in low- or middle-income countries, where the incidence of congenital syphilis never waned, are informative. Besen reported screening 21,434 newborns in Brazil from 2017 through 2019 and found a prevalence of test failure in the Universal Neonatal Hearing Screening Program (UNHS) of 1.6% (95% CI: 1.4; 1.8). This study used Otoacoustic Emissions and ABR to identify both cochlear and retrocochlear damage. The authors report that 1.7% (95% CI: 1.5; 1.8) had congenital syphilis but do not report how many with congenital syphilis had hearing loss [ 22 ]. In a follow-up report of 34,801 infants screened from 2017 through 2021, they report that neonates with congenital syphilis were 2.38 times as likely to fail the UNHS as those without congenital syphilis [ 48 ]. However, another small study from Brazil, reported as a pre-print, which examined failed hearing screens at 2 months of life, did not find an association between congenital syphilis and failed hearing screens [ 23 ].
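For readers unfamiliar with how such screening-prevalence intervals are derived, the sketch below computes a Wilson score interval for a proportion. The failure count is a hypothetical figure back-derived from the reported 1.6%, not a number taken from Besen's paper, and the method shown is a common choice rather than necessarily the one the original authors used.

```python
import math

def wilson_ci(events: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# ~343 screen failures among 21,434 newborns (hypothetical count)
# reproduces roughly the reported 1.6% (95% CI 1.4, 1.8).
lo, hi = wilson_ci(343, 21434)
print(f"{343 / 21434:.3f} (95% CI {lo:.3f}, {hi:.3f})")
```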

It is not clear whether there is a treatment for hearing loss due to congenital syphilis. Antibiotics were presumably always given at the time of diagnosis of hearing loss if the patient had not previously been adequately treated. There are no convincing reports that this alone resulted in sustained improved hearing. Uncontrolled studies that included corticosteroids with or without ACTH reported variable response and improvement in hearing was often subjective.

The main limitation of this scoping review is the lack of high-quality studies.

Conclusions

Our scoping review outlines a general map of the trend of publications across the decades and shows that the incidence of hearing loss due to congenital syphilis is essentially unknown. It is not clear whether the stage of maternal syphilis or the age at which infants are treated changes outcomes. The literature does not tell us whether treatment in utero prevents the development of hearing loss. Until there are high-quality long-term observational studies, it is difficult to know what hearing screening to recommend for children with congenital syphilis. Hearing loss attributed to congenital syphilis is often first recognized in adolescence or adulthood; therefore, there is a need to increase awareness that people of all ages with unexplained hearing loss of sudden or gradual onset should be screened for syphilis. Other than treatment of the congenital syphilis itself, no treatments can be recommended until there are RCTs or cohort studies with valid control groups.

Supporting information

S1 Checklist. Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist.

https://doi.org/10.1371/journal.pone.0302452.s001

S1 Appendix. Systematic review search strategy.

https://doi.org/10.1371/journal.pone.0302452.s002

S2 Appendix. Joanna Briggs Institute (JBI) critical appraisal checklist.

https://doi.org/10.1371/journal.pone.0302452.s003

  • 1. Public Health Agency of Canada T. The Chief Public Health Officer’s Report on the State of Public Health in Canada. Canada: Public Health Agency of Canada; 2014.
  • 2. Joint Committee on Infant Hearing T. Year 2019 Position Statement: Principles and Guidelines for Early Hearing Detection and Intervention Programs. Journal of Early Hearing Detection and Intervention. 2019;4(2).
  • 3. Joint Committee on Infant Hearing T. Year 2007 Position Statement: Principles and Guidelines for Early Hearing Detection and Intervention Programs. Pediatrics. 2007;120(4):898–921.
  • 4. Public Health Agency of Canada. Infectious syphilis and congenital syphilis in Canada, 2022. CCDR: Volume 49–10, October 2023.
  • 5. Centers for Disease Control and Prevention. Sexually Transmitted Infections Surveillance, 2022 ( cdc.gov ).
  • 9. Moola S MZ, Tufanaru C, Aromataris E, Sears K, Sfetcu R, Currie M, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E MZ, editor. JBI Manual for Evidence Synthesis. New Zealand: Joanna Briggs Institute; 2020.
  • 10. Joanna Briggs I. Critical Appraisal Checklist for Case Reports. New Zealand: Joanna Briggs Institute; 2017.

Inappropriate use of proton pump inhibitors in clinical practice globally: a systematic review and meta-analysis

  • Amit K Dutta1 (http://orcid.org/0000-0002-5111-7861),
  • Vishal Sharma2 (http://orcid.org/0000-0003-2472-3409),
  • Abhinav Jain3,
  • Anshuman Elhence4,
  • Manas K Panigrahi5,
  • Srikant Mohta6,
  • Richard Kirubakaran7,
  • Mathew Philip8,
  • Mahesh Goenka9 (http://orcid.org/0000-0003-1700-7543),
  • Shobna Bhatia10,
  • Usha Dutta2 (http://orcid.org/0000-0002-9435-3557),
  • D Nageshwar Reddy11,
  • Rakesh Kochhar12,
  • Govind K Makharia4 (http://orcid.org/0000-0002-1305-189X)
  • 1 Gastroenterology, Christian Medical College and Hospital Vellore, Vellore, India
  • 2 Gastroenterology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
  • 3 Gastroenterology, Gastro 1 Hospital, Ahmedabad, India
  • 4 Gastroenterology and Human Nutrition, All India Institute of Medical Sciences, New Delhi, India
  • 5 Gastroenterology, All India Institute of Medical Sciences - Bhubaneswar, Bhubaneswar, India
  • 6 Department of Gastroenterology, Narayana Superspeciality Hospital, Kolkata, India
  • 7 Center of Biostatistics and Evidence Based Medicine, Vellore, India
  • 8 Lisie Hospital, Cochin, India
  • 9 Apollo Gleneagles Hospital, Kolkata, India
  • 10 Gastroenterology, National Institute of Medical Science, Jaipur, India
  • 11 Asian Institute of Gastroenterology, Hyderabad, India
  • 12 Gastroenterology, Paras Hospitals, Panchkula, Chandigarh, India
  • Correspondence to Dr Amit K Dutta, Gastroenterology, Christian Medical College and Hospital Vellore, Vellore, Tamil Nadu, India; akdutta1995@gmail.com

https://doi.org/10.1136/gutjnl-2024-332154


  • PROTON PUMP INHIBITION
  • META-ANALYSIS

We read with interest the population-based cohort studies by Abrahami et al on proton pump inhibitors (PPI) and the risk of gastric and colon cancers.1 2 PPI are used at all levels of healthcare and across different subspecialties for various indications.3 4 A recent systematic review of global trends and practices identified 28 million PPI users across 23 countries and suggested that 23.4% of adults use PPI.5 Inappropriate use of PPI appears to be frequent, although compiled information on its prevalence is lacking. We therefore conducted a systematic review and meta-analysis of the inappropriate overuse of PPI globally.


Overall, 79 studies, including 20 050 patients, reported on the inappropriate overuse of PPI and were included in this meta-analysis. The pooled proportion of inappropriate overuse of PPI was 0.60 (95% CI 0.55 to 0.65; I² = 97%; figure 1). The proportion of inappropriate overuse by dose was 0.17 (0.08 to 0.33) and by duration of use was 0.17 (0.07 to 0.35). Subgroup analysis was done to assess for heterogeneity (figure 2A). No significant differences in the pooled proportion of inappropriate overuse were noted based on study design, setting (inpatient or outpatient), data source, human development index of the country, indication for use, sample size estimation, year of publication or study quality. However, regional differences were noted (p<0.01): Australia 40%, North America 56%, Europe 61%, Asia 62% and Africa 91% (figure 2B). Study quality was good in 27.8%, fair in 62.0% and low in 10.1% of studies.6
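The letter does not restate its statistical methods, but pooled prevalences of this kind are commonly obtained by logit-transforming each study's proportion and combining the transformed estimates under a random-effects model. The sketch below is a minimal illustration of that standard approach, not the authors' actual analysis; the function name and study counts are invented:

```python
import numpy as np

def pool_proportions_dl(events, totals):
    """Pool study proportions with a logit transform and a
    DerSimonian-Laird random-effects model."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)

    p = events / totals
    y = np.log(p / (1 - p))                 # logit of each proportion
    v = 1 / events + 1 / (totals - events)  # approx. variance of the logit

    w = 1 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance (DL)
    i2 = max(0.0, (q - df) / q) * 100       # I-squared (%)

    w_star = 1 / (v + tau2)                 # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    expit = lambda x: 1 / (1 + np.exp(-x))  # back-transform to a proportion
    return (expit(y_pooled),
            expit(y_pooled - 1.96 * se),
            expit(y_pooled + 1.96 * se),
            i2)

# Invented counts for five hypothetical studies (inappropriate users / total)
events = [120, 260, 75, 310, 95]
totals = [200, 400, 150, 500, 140]
est, lo, hi, i2 = pool_proportions_dl(events, totals)
print(f"Pooled proportion {est:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {i2:.0f}%")
```

The logit transform keeps the back-transformed confidence limits inside [0, 1], which is why it is a common default for meta-analyses of proportions.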


Figure 1: Forest plot showing inappropriate overuse of proton pump inhibitors.

Figure 2: (A) Subgroup analysis of inappropriate overuse of proton pump inhibitors (PPI). (B) Prevalence of inappropriate overuse of PPI across different countries of the world. NA, data not available.

This is the first systematic review and meta-analysis of the inappropriate prescribing of PPI globally. The results are concerning and suggest that about 60% of PPI prescriptions in clinical practice lack a valid indication. The overuse of PPI appears to be a global problem across all age groups, including geriatric patients (63%). Overprescription increases patients' costs, pill burden and risk of adverse effects.7–9 The heterogeneity in the outcome data persisted after subgroup analysis and may therefore be inherent to the practice of PPI use rather than related to factors such as study design, setting or study quality.
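For context, I² is conventionally computed from Cochran's Q as I² = (Q − df)/Q. Assuming that definition, an I² of 97% across 79 studies (df = 78) corresponds to Q ≈ 78/(1 − 0.97) = 2600, which gives a sense of how extreme the between-study variability is.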

Several factors, both physician- and patient-related, may contribute to the high magnitude of PPI overuse: a long list of indications, availability of the drug over the counter, an exaggerated sense of safety, and lack of awareness of the correct indications, dose and duration of therapy. A recently published guideline makes detailed recommendations on the accepted indications for PPI use, including dose and duration, and further such documents may help to promote rational use.3 Overall, there is a need for urgent adoption of PPI stewardship practices, as is done for antibiotics. Apart from avoiding prescription when there is no indication, effective deprescription strategies are also required.10 We hope the results of the present systematic review and meta-analysis will create awareness of the current situation and translate into a change in clinical practice globally.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval


Supplementary materials

Supplementary data.


  • Data supplement 1


Contributors AKD: concept, study design, data acquisition and interpretation, drafting the manuscript and approval of the manuscript. VS: study design, data acquisition, analysis and interpretation, drafting the manuscript and approval of the manuscript. AJ, AE, MKP, SM: data acquisition and interpretation, critical revision of the manuscript, and approval of the manuscript. RK: study design, data analysis and interpretation, critical revision of the manuscript and approval of the manuscript. MP, MG, SB, UD, DNR, RK: data interpretation, critical revision of the manuscript and approval of the manuscript. GKM: concept, study design, data interpretation, drafting the manuscript, critical revision and approval of the manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; internally peer reviewed.





COMMENTS

  2. PDF Step'by-step guide to critiquing research. Part 1: quantitative research

    through the literature review, the theoretical framework, the research question, the methodology section, the data analysis, and the findings (Ryan-Wenger, 1992). Literature review The primary purpose of the literature review is to define or develop the research question while also identifying an appropriate method of data collection (Burns and

  3. A tutorial on methodological studies: the what, when, how and why

    Methodological studies - studies that evaluate the design, analysis or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

  4. Methodological Approaches to Literature Review

    A literature review is defined as "a critical analysis of a segment of a published body of knowledge through summary, classification, and comparison of prior research studies, reviews of literature, and theoretical articles." (The Writing Center University of Wisconsin-Madison 2022) A literature review is an integrated analysis, not just a summary of scholarly work on a specific topic.

  5. Critiquing Research Evidence for Use in Practice: Revisited

    The first step is to critique and appraise the research evidence. Through critiquing and appraising the research evidence, dialog with colleagues, and changing practice based on evidence, NPs can improve patient outcomes ( Dale, 2005) and successfully translate research into evidence-based practice in today's ever-changing health care ...

  6. What Is a Research Methodology?

    Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research and your dissertation topic.

  7. A guide to critiquing a research paper: methodological appraisal

    Review methods: A published critiquing tool has been applied. It was chosen because it is pragmatic, clearly laid out and accessible as full text to the people likely to need it. It comprises two stages, the first of which centres on the believability of the research. The second stage is more detailed and examines the research process and ...

  8. PDF Topic 8: How to critique a research paper 1

    1. Use these guidelines to critique your selected research article to be included in your research proposal. You do not need to address all the questions indicated in this guideline, and only include the questions that apply. 2. Prepare your report as a paper with appropriate headings and use APA format 5th edition.

  10. Methodological criticism and critical methodology

    Summary. Methodological criticism may be defined as the critique of scientific practice in the light of methodological principles, and critical methodology as the study of proper methods of criticism; the problem is that of the interaction between the scientific methods which give methodological criticism its methodological character and the ...

  11. Which review is that? A guide to review types.

    "A methodological review is a type of systematic secondary research (i.e., research synthesis) which focuses on summarising the state-of-the-art methodological practices of research in a substantive field or topic" (Chong et al., 2021).

  12. How to Write a Literature Review

    A Review of the Theoretical Literature" (Theoretical literature review about the development of economic migration theory from the 1950s to today.) Example literature review #2: "Literature review as a research methodology: An overview and guidelines" (Methodological literature review about interdisciplinary knowledge acquisition and ...

  14. Literature review as a research methodology: An ...

    This is why the literature review as a research method is more relevant than ever. Traditional literature reviews often lack thoroughness and rigor and are conducted ad hoc, rather than following a specific methodology. Therefore, questions can be raised about the quality and trustworthiness of these types of reviews.

  15. An overview of methodological approaches in systematic reviews

    1. INTRODUCTION. Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search ...

  16. Critical Analysis: The Often-Missing Step in Conducting Literature

    Literature reviews are essential in moving our evidence-base forward. "A literature review makes a significant contribution when the authors add to the body of knowledge through providing new insights" (Bearman, 2016, p. 383). Although there are many methods for conducting a literature review (e.g., systematic review, scoping review, qualitative synthesis), some commonalities in ...

  17. PDF Methodology: What It Is and Why It Is So Important

    ... and desirable) and these are our means (use of theory, methodology, guiding concepts, replication of results). Science is hardly a game because so many of its tasks and topics are so serious; indeed, a matter of life and death (e.g., suicide, risky behavior, cigarette smoking).

  18. Methodology or method? A critical review of qualitative case study

    The value of this review is that it contributes to discussion of whether case study is a methodology or method. We propose possible reasons why researchers might make this misinterpretation. Researchers may interchange the terms methods and methodology, and conduct research without adequate attention to epistemology and historical tradition ...

  19. 8.1: What's a Critique and Why Does it Matter?

    Critiques evaluate and analyze a wide variety of things (texts, images, performances, etc.) based on reasons or criteria. Sometimes, people equate the notion of "critique" to "criticism," which usually suggests a negative interpretation. These terms are easy to confuse, but I want to be clear that critique and criticize don't mean the ...

  20. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  21. Research Methodology

    Qualitative Research Methodology. This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

  22. (PDF) Critique of Research Methodologies and Methods in Educational

    The purpose of this exploratory study was to document aspects of research methodology in educational leadership directed at emerging school leaders and the academic community that supports them ...

  23. Physical Challenge Interventions and the Development of Transferable

    The systematic review adhered to rigorous methods; however, there are limitations that must be acknowledged. Firstly, the findings from the review relied only on published literature and English-language papers. We recognize that the omission of unpublished literature and conference abstracts may have contributed to the observable publication ...

  24. A scoping review of continuous quality improvement in healthcare system

    The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools ...

  25. Assessing fragility of statistically significant findings from

    The fragility index is a statistical measure of the robustness or "stability" of a statistically significant result. It has been adapted to assess the robustness of statistically significant outcomes from randomized controlled trials. By hypothetically switching some non-responders to responders, for instance, this metric measures how many individuals would need to have responded for a ...

  27. Scoping review of hearing loss attributed to congenital syphilis

    Background There are no narrative or systematic reviews of hearing loss in patients with congenital syphilis. Objectives The aim of this study was to perform a scoping review to determine what is known about the incidence, characteristics, prognosis, and therapy of hearing loss in children or adults with presumed congenital syphilis. Eligibility criteria PROSPERO, OVID Medline, OVID EMBASE ...

  28. Inappropriate use of proton pump inhibitors in clinical practice

    We read with interest the population-based cohort studies by Abrahami et al on proton pump inhibitors (PPI) and the risk of gastric and colon cancers.1 2 PPI are used at all levels of healthcare and across different subspecialties for various indications.3 4 A recent systematic review on the global trends and practices of PPI recognised 28 million PPI users from 23 countries, suggesting that ...

  30. Full article: Methodology or method? A critical review of qualitative

    Study design: The critical review method described by Grant and Booth (2009) was used, which is appropriate for the assessment of research quality, and is used for literature analysis to inform research and practice. This type of review goes beyond the mapping and description of scoping or rapid reviews, to include "analysis and conceptual innovation" (Grant & Booth, 2009 ...