- Methodology
- Open access
- Published: 11 October 2016
Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research
- Stephen J. Gentles 1,4,
- Cathy Charles 1,
- David B. Nicholas 2,
- Jenny Ploeg 3 &
- K. Ann McKibbon 1
Systematic Reviews volume 5, Article number: 172 (2016)
Background
Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.
Results
The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process and a rigorous qualitative approach to analysis are necessary features of this review type.
Conclusions
We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.
Background
While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.
The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.
Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to document current methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyzing randomized controlled trial data) that have been offered in the methods literature and to propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.
While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.
The example systematic methods overview on sampling in qualitative research
The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity to, and deepen understanding of, the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.
The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.
For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.
Organization of the guidance into principles and strategies
For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the preceding principle. Thus, generic challenges give rise to principles, which in turn give rise to strategies.
We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.
Results and discussion
Literature identification and selection
The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.
Delimiting a manageable set of publications
One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.
Principle #1:
Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.
Strategy #1:
To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.
We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.
In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).
It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling, where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.
Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.
Searching beyond standard bibliographic databases
An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.
Principle #2:
Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.
Strategy #2:
To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.
In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.
Searching without relevant metadata
Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.
Principle #3:
Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.
Strategy #3:
One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.
In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.
Purposefully selecting literature on conceptual grounds
A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.
Principle #4:
Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.
Strategy #4:
One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.
In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.
At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence , which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.
To operationalize this strategy of sampling for influence , we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of cites for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]
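Because the h-index [ 22 ] is central to this operationalization of influence, a minimal sketch of how it is computed may be helpful. The function and the citation counts below are purely illustrative; in the actual review, bibliometric data came from Publish or Perish and Google Scholar as described above.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has at least
    h publications cited at least h times each (Hirsch's definition [22])."""
    # Rank citation counts from highest to lowest; h is the number of ranks
    # at which the citation count still meets or exceeds the rank itself.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical citation counts for one author's publications
print(h_index([120, 45, 33, 8, 4, 2]))  # -> 4: four papers each cited >= 4 times
```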
In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.
Data abstraction
The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.
Iteratively defining conceptual information to abstract
In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.
Principle #5:
Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.
Strategy #5:
Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.
In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
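As a concrete but purely hypothetical illustration of such an iteratively revised form, the sketch below models one abstraction record as a Python dataclass with a field added mid-review. The actual form in the sampling overview was maintained in Microsoft Word, and its fields differed from those shown here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbstractionRecord:
    """One publication's entry on a (hypothetical) abstraction form."""
    tradition: str    # e.g., grounded theory, phenomenology, case study
    author: str
    publication: str
    sampling_definition: Optional[str] = None  # verbatim quote, if present
    saturation: Optional[str] = None           # verbatim quote, if present
    # Field added mid-review once the concept was first encountered
    # (strategy #5): timing of sampling decisions, recorded as
    # "a priori", "ongoing", or None when unclear.
    sampling_timing: Optional[str] = None
```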
The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.
Accounting for inconsistent terminology
An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.
Principle #6:
Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.
Strategy #6:
An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.
In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling , purposeful sampling , sampling strategy , and saturation (for examples, see Additional file 1 , Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived; this text was recorded in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling .
We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.
This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.
Generating credible and verifiable analytic interpretations
The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.
Principle #7:
Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.
Strategy #7:
We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this method, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward through the successive steps of analysis to allow for easy verification.
The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig. 2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.
Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]
In the second step of the analysis, we constructed topic-specific matrices , or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries , in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing the support for claims by comparing the original text of publications reviewed.
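For readers who prefer to see the matrix structure explicitly, the sketch below is a minimal illustration of how such a topic-specific matrix could be represented programmatically. It is our own rendering, not the authors' tooling (the review used ordinary tables), and the row entries and cell contents are placeholders.

```python
import pandas as pd

# Row headings nested three-deep, as described above:
# methodological tradition > author > publication
rows = pd.MultiIndex.from_tuples(
    [
        ("Grounded theory", "Glaser & Strauss", "The discovery of grounded theory (1967)"),
        ("Phenomenology", "Cohen et al.", "Hermeneutic phenomenological research (2000)"),
    ],
    names=["Tradition", "Author", "Publication"],
)

# One column per concept (abstraction field); cells hold abstracted quotes
matrix = pd.DataFrame(
    {
        "Definition of sampling": ["<abstracted quote>", "<abstracted quote>"],
        "Saturation": ["<abstracted quote>", "<abstracted quote>"],
    },
    index=rows,
)

# Comparing treatment of a concept across traditions, or between authors
# within a tradition, amounts to reading down a single concept column.
print(matrix["Saturation"])
```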
Integrative versus interpretive methods overviews
The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well-defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].
The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.
Conclusions
In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in the future.
As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews remain to be identified. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit and to improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.
A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.
To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.
References
1. Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.
2. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.
3. Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.
4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
5. Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.
6. Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12:114.
7. Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. Integrate-HTA; 2016.
8. Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review. 2nd ed. London: Sage; 2016.
9. Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.
10. Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.
11. Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3:114.
12. Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.
13. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
14. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.
15. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
16. Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.
17. Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.
18. Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.
19. Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.
20. Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.
21. Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi-discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.
22. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.
23. Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Qual Saf. 2015;24(11):700–8.
24. Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.
25. Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.
26. Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.
27. Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.
28. Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synth Methods. 2015;6(4):357–71.
29. Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.
30. Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. UK National Health Service; 2004. p. 1–44.
Acknowledgements
Not applicable.
Funding
There was no funding for this work.
Availability of data and materials
The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .
Authors’ contributions
SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAM, and JP were involved in developing methods for the systematic methods overview on sampling.
Competing interests
The authors declare that they have no competing interests.
Authors and affiliations
Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
Stephen J. Gentles, Cathy Charles & K. Ann McKibbon
Faculty of Social Work, University of Calgary, Alberta, Canada
David B. Nicholas
School of Nursing, McMaster University, Hamilton, Ontario, Canada
Jenny Ploeg
CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada
Stephen J. Gentles
Corresponding author
Correspondence to Stephen J. Gentles.
Additional information
Cathy Charles is deceased.
Additional file
Additional file 1: Analysis_matrices. (DOC 330 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
Cite this article
Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0
Received : 06 June 2016
Accepted : 14 September 2016
Published : 11 October 2016
DOI : https://doi.org/10.1186/s13643-016-0343-0
Keywords
- Systematic review
- Literature selection
- Research methods
- Research methodology
- Overview of methods
- Systematic methods overview
- Review methods
- Open access
- Published: 07 September 2020
A tutorial on methodological studies: the what, when, how and why
- Lawrence Mbuagbaw ORCID: orcid.org/0000-0001-5855-5461 1,2,3,
- Daeria O. Lawson 1,
- Livia Puljak 4,
- David B. Allison 5 &
- Lehana Thabane 1,2,6,7,8
BMC Medical Research Methodology volume 20, Article number: 226 (2020)
44k Accesses
61 Citations
46 Altmetric
Metrics details
Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.
We provide an overview of some of the key aspects of methodological studies, such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity, and is there a way to appraise the quality of methodological studies?
Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
Peer Review reports
The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research grew rapidly before the development of reporting guidance. This was the case, for example, with randomized trials, for which risk of bias tools and reporting guidelines were developed only much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; the same was true for systematic reviews [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).
In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1 .
Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed
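Such counts can also be reproduced programmatically. Below is a minimal sketch using Biopython’s Entrez wrapper for the NCBI E-utilities; the year range is arbitrary and the email address is a placeholder that NCBI requires you to replace.

```python
# Minimal sketch: yearly PubMed hit counts for the two search terms.
# Requires Biopython; the email is a placeholder NCBI asks you to replace.
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; required by NCBI
TERM = '"methodological review"[tiab] OR "meta-epidemiological study"[tiab]'

for year in range(2010, 2020):  # arbitrary window ending December 2019
    handle = Entrez.esearch(db="pubmed", term=TERM, retmax=0,
                            mindate=str(year), maxdate=str(year),
                            datetype="pdat")
    count = int(Entrez.read(handle)["Count"])  # total hits; no records fetched
    handle.close()
    print(year, count)
```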
The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.
The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts, and a proposed framework for categorizing methodological studies in quantitative research.
What is a methodological study?
Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.
Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research as a potentially useful resource for further reading on these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.
When should we conduct a methodological study?
Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.
These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].
How often are methodological studies conducted?
There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.
Why do we conduct methodological studies?
Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].
Where can we find methodological studies?
Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.
Some frequently asked questions about methodological studies
In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.
Q: How should I select research reports for my methodological study?
A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].
The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case, a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used, and we encourage researchers to justify their selected approaches based on the study objective.
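To illustrate these options, the following minimal sketch draws simple random, systematic and stratified samples from a hypothetical sampling frame of retrieved records; the frame, strata and sample sizes are invented for the example.

```python
# Minimal sketch: three ways to sample from a sampling frame of records.
# The frame, strata and sample sizes are hypothetical.
import random

frame = [f"PMID{i:05d}" for i in range(1, 1001)]  # 1000 hypothetical records

# 1. Simple random sample of 100 records
simple = random.sample(frame, k=100)

# 2. Systematic sample: every k-th record after a random start
k = len(frame) // 100
systematic = frame[random.randrange(k)::k]

# 3. Stratified sample: equal numbers from two hypothetical strata
strata = {"cochrane": frame[:200], "non_cochrane": frame[200:]}
stratified = [pmid for group in strata.values()
              for pmid in random.sample(group, k=50)]

print(len(simple), len(systematic), len(stratified))  # 100 100 100
```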
Q: How many databases should I search?
A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least two databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.
Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters to narrow the search to a certain period or to study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
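As an illustration, a journal-restricted query can be assembled programmatically before being pasted into PubMed or passed to an API client; the journal names, date window and publication-type filter below are assumptions made for the example.

```python
# Minimal sketch: assembling a journal-restricted PubMed query.
# Journal names, dates and the publication-type filter are assumptions.
journals = ["Plastic and Reconstructive Surgery",
            "Aesthetic Plastic Surgery"]

journal_clause = " OR ".join(f'"{j}"[Journal]' for j in journals)
query = (f"({journal_clause}) "
         f'AND ("2015"[dp] : "2019"[dp]) '
         f"AND randomized controlled trial[pt]")
print(query)
```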
Q: Should I publish a protocol for my methodological study?
A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.
Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).
Q: How should I appraise the quality of a methodological study?
A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These concerns include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive and reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.
Q: Should I justify a sample size?
A: In all instances where one is not using the entire target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:
Comparing two groups
Determining a proportion, mean or another quantifier
Determining factors associated with an outcome using regression-based analyses
For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
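To make the confidence interval approach concrete, the following sketch computes the number of articles needed to estimate a proportion (e.g. of trials reporting an item of interest) within a chosen margin of error, using the standard normal approximation; the anticipated proportion and precision are assumed values, not recommendations.

```python
# Minimal sketch: articles needed to estimate a proportion within a
# margin of error d, using n = z^2 * p * (1 - p) / d^2.
import math

z = 1.96  # two-sided 95% confidence
p = 0.50  # anticipated proportion; 0.5 is the most conservative choice
d = 0.05  # desired margin of error (half-width of the confidence interval)

n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)
print(n)  # 385 articles under these assumptions
```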
Q: What should I call my study?
A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.
Q: Should I account for clustering in my methodological study?
A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”
A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
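A minimal sketch of such an analysis, in the spirit of the Kosa et al. example, is given below using the GEE implementation in the Python statsmodels package; the data, variable names and exchangeable correlation structure are illustrative assumptions.

```python
# Minimal sketch: GEE logistic model with articles clustered in journals.
# Data, variable names and correlation structure are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "adequate":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],  # 1 = adequate reporting
    "post_2015": [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # example covariate
    "journal":   ["A", "A", "A", "B", "B", "B",
                  "C", "C", "C", "D", "D", "D"],         # cluster identifier
})

model = smf.gee("adequate ~ post_2015", groups="journal", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())  # robust SEs account for within-journal correlation
```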
Q: Should I extract data in duplicate?
A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ] and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
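In practice, duplicate extraction produces two parallel datasets whose disagreements must be flagged and adjudicated, and chance-corrected agreement can also be reported. The sketch below assumes binary item-level judgments from two hypothetical extractors.

```python
# Minimal sketch: reconciling two extractors' binary judgments.
# The extraction sheets and item values are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

extractor_a = pd.Series([1, 1, 0, 1, 0, 1, 1, 0])  # 1 = item reported
extractor_b = pd.Series([1, 0, 0, 1, 0, 1, 1, 1])

to_adjudicate = extractor_a[extractor_a != extractor_b].index.tolist()
print("records needing adjudication:", to_adjudicate)
print("Cohen's kappa:", round(cohen_kappa_score(extractor_a, extractor_b), 2))
```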
Q: Should I assess the risk of bias of research reports included in my methodological study?
A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].
Q: What variables are relevant to methodological studies?
A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:
Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better reported [ 60 ]. Khan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].
Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].
Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.
Q: Should I focus only on high impact journals?
A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.
Q: Can I conduct a methodological study of qualitative research?
A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.
Q: What reporting guidelines should I use for my methodological study?
A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, however, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.
Q: What are the potential threats to validity and how can I avoid them?
A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.
Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for whether the journals endorse reporting guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
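A minimal sketch of adjustment at the analysis stage, loosely modeled on the funding example above, is shown below; the variables and data are hypothetical.

```python
# Minimal sketch: logistic regression adjusting for a confounder
# (journal guideline endorsement). Variables and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "complete": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # 1 = complete reporting
    "funded":   [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0],  # exposure of interest
    "endorses": [1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0],  # potential confounder
})

fit = smf.logit("complete ~ funded + endorses", data=df).fit()
print(fit.params)  # log-odds for funding, adjusted for endorsement
```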
With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate and justified, randomly selected sample of research reports.
Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.
Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.
A proposed framework
In order to inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:
What is the aim?
Methodological studies that investigate bias
A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].
Methodological studies that investigate quality (or completeness) of reporting
Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
Methodological studies that investigate the consistency of reporting
Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].
Methodological studies that investigate factors associated with reporting
In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].
Methodological studies that investigate methods
Methodological studies may also be used to describe or compare methods, and the factors associated with those methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].
Methodological studies that summarize other methodological studies
Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].
Methodological studies that investigate nomenclature and terminology
Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].
Other types of methodological studies
In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.
What is the design?
Methodological studies that are descriptive
Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
Methodological studies that are analytical
Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
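For a comparison of this kind, the hypothesis test might look like the following sketch, which uses a two-proportion z-test; the counts are invented for illustration and are not the published results of Tricco et al.

```python
# Minimal sketch: two-proportion z-test comparing positive-conclusion
# rates between two groups of reviews. Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

positive = [58, 37]  # reviews with positive conclusions per group
totals = [100, 100]  # reviews examined per group

stat, p_value = proportions_ztest(count=positive, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```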
What is the sampling strategy?
Methodological studies that include the target population
Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n = 103) [ 30 ].
Methodological studies that include a sample of the target population
Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a certain topic. Systematic sampling can also be used when random sampling may be challenging to implement.
What is the unit of analysis?
Methodological studies with a research report as the unit of analysis
Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.
Methodological studies with a design, analysis or reporting item as the unit of analysis
Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
This framework is outlined in Fig. 2 .
Fig. 2 A proposed framework for methodological studies
Conclusions
Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.
In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.
Availability of data and materials
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Abbreviations
CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial
References
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.
Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.
Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.
Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.
Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.
Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.
Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.
Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.
Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.
Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.
Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.
Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.
Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.
Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.
Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.
Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.
Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.
The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.
Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.
Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.
Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.
Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.
Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.
Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.
Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.
The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.
Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.
Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.
Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.
Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.
Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.
De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. Ann Intern Med. 2005;143(2):146–8.
Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.
Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.
Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.
Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.
El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.
Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.
Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.
Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.
Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.
Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.
Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.
Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.
Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.
Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.
Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.
Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.
Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.
Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.
Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.
Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.
Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.
de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.
Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.
Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.
Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.
Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.
Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:MR000047.
Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.
Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.
Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.
Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.
Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.
Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.
Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.
Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.
Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.
METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.
Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.
Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.
Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.
Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.
Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.
Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.
Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.
Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.
Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.
Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.
Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.
Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.
Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.
Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.
Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.
Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.
Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.
Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.
Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.
Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.
Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.
Acknowledgements
This work did not receive any dedicated funding.
Author information
Authors and Affiliations
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane
Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada
Lawrence Mbuagbaw & Lehana Thabane
Centre for the Development of Best Practices in Health, Yaoundé, Cameroon
Lawrence Mbuagbaw
Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia
Livia Puljak
Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA
David B. Allison
Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
Lehana Thabane
Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada
Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
Contributions
LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.
Corresponding author
Correspondence to Lawrence Mbuagbaw.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.
About this article
Cite this article
Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7
Received: 27 May 2020
Accepted: 27 August 2020
Published: 07 September 2020
Keywords
- Methodological study
- Meta-epidemiology
- Research methods
- Research-on-research
Introduction to qualitative research methods – Part I
Shagufta Bhangu, Fabien Provost, Carlo Caduff
Address for correspondence: Prof. Carlo Caduff, Department of Global Health and Social Medicine, King's College London, Strand, London WC2R 2LS, United Kingdom. E-mail: [email protected]
Received 2022 Nov 28; Accepted 2022 Nov 29; Issue date 2023 Jan-Mar.
Qualitative research methods are widely used in the social sciences and the humanities, but they can also complement quantitative approaches used in clinical research. In this article, we discuss the key features and contributions of qualitative research methods.
Keywords: Qualitative research, social sciences, sociology
INTRODUCTION
Qualitative research methods refer to techniques of investigation that rely on nonstatistical and nonnumerical methods of data collection, analysis, and evidence production. Qualitative research techniques provide a lens for learning about nonquantifiable phenomena such as people's experiences, languages, histories, and cultures. In this article, we describe the strengths and role of qualitative research methods and how these can be employed in clinical research.
Although frequently employed in the social sciences and humanities, qualitative research methods can complement clinical research. These techniques can contribute to a better understanding of the social, cultural, political, and economic dimensions of health and illness. Social scientists and scholars in the humanities rely on a wide range of methods, including interviews, surveys, participant observation, focus groups, oral history, and archival research to examine both structural conditions and lived experience [ Figure 1 ]. Such research can not only provide robust and reliable data but can also humanize and add richness to our understanding of the ways in which people in different parts of the world perceive and experience illness and how they interact with medical institutions, systems, and therapeutics.
Figure 1. Examples of qualitative research techniques
Qualitative research methods should not be seen as tools that can be applied independently of theory; they must rest on more than method alone. In their research, social scientists and scholars in the humanities emphasize social theory. Departing from a reductionist psychological model of individual behavior that often blames people for their illness, social theory focuses on relations – disease happens not simply in people but between people. This type of theoretically informed and empirically grounded research thus examines not just patients but interactions among a wide range of actors (e.g., patients, family members, friends, neighbors, local politicians, medical practitioners at all levels and from many systems of medicine, researchers, and policymakers) to give voice to the lived experiences, motivations, and constraints of all those who are touched by disease.
PHILOSOPHICAL FOUNDATIONS OF QUALITATIVE RESEARCH METHODS
In identifying the factors that contribute to the occurrence and persistence of a phenomenon, it is paramount that we begin by asking two questions: what do we know about this reality? How have we come to know this reality? These two questions, which we can refer to as the “what” question and the “how” question, are ones that all scientists (natural and social) grapple with in their research. We refer to these as the ontological and epistemological questions a research study must address. Together, they help us create a suitable methodology for any research study[ 1 ] [ Figure 2 ]. Therefore, as with quantitative methods, there must be a justifiable and logical method for understanding the world even for qualitative methods. By engaging with these two dimensions, the ontological and the epistemological, we open a path for learning that moves away from commonsensical understandings of the world and the perpetuation of stereotypes, and toward robust scientific knowledge production.
Figure 2. Developing a research methodology
Every discipline has a distinct research philosophy and way of viewing the world and conducting research. Philosophers and historians of science have extensively studied how these divisions and specializations have emerged over centuries.[ 1 , 2 , 3 ] The most important distinction between quantitative and qualitative research techniques lies in the nature of the data they study and analyze. While the former focus on statistical, numerical, and quantitative aspects of phenomena and employ the same in data collection and analysis, qualitative techniques focus on humanistic, descriptive, and qualitative aspects of phenomena.[ 4 ]
For the findings of any research study to be reliable, it must employ appropriate research techniques that are uniquely tailored to the phenomena under investigation. To do so, researchers must choose techniques based on their specific research questions and understand the strengths and limitations of the different tools available to them. Since clinical work lies at the intersection of natural and social phenomena, clinical research must study both: biological and physiological phenomena (natural, quantitative, and objective phenomena) and behavioral and cultural phenomena (social, qualitative, and subjective phenomena). Therefore, clinical researchers can gain from both sets of techniques in their efforts to produce medical knowledge and bring forth scientifically informed change.
KEY FEATURES AND CONTRIBUTIONS OF QUALITATIVE RESEARCH METHODS
In this section, we discuss the key features and contributions of qualitative research methods [ Figure 3 ]. We describe the specific strengths and limitations of these techniques and discuss how they can be deployed in scientific investigations.
Figure 3. Key features of qualitative research methods
One of the most important contributions of qualitative research methods is that they provide rigorous, theoretically sound, and rational techniques for the analysis of subjective, nebulous, and difficult-to-pin-down phenomena. We are aware, for example, of the role that social factors play in health care but find it hard to qualify and quantify these in our research studies. Often, we find researchers basing their arguments on “common sense,” developing research studies based on assumptions about the people that are studied. Such commonsensical assumptions are perhaps among the greatest impediments to knowledge production. For example, in trying to understand stigma, surveys often make assumptions about its reasons and frequently associate it with vague and general common-sense notions of “fear” and “lack of information.” While these may be at work, making such assumptions on commonsensical grounds, without conducting research, inhibits us from exploring the multiple social factors that are at work under the guise of stigma.
In unpacking commonsensical understandings and researching experiences, relationships, and other phenomena, qualitative researchers are assisted by their methodological commitment to open-ended research. By open-ended research, we mean that these techniques take an unbiased and exploratory approach in which learnings from the field and from research participants are recorded and analyzed to learn about the world.[ 5 ] This orientation is made possible by qualitative research techniques that are particularly effective in learning about specific social, cultural, economic, and political milieus.
Second, qualitative research methods equip us to study complex phenomena. Qualitative research methods provide scientific tools for exploring and identifying the numerous contributing factors to an occurrence. Rather than establishing one or the other factor as more important, qualitative methods are open-ended, inductive (ground-up), and empirical. They allow us to understand the object of our analysis from multiple vantage points and in its dispersion, and they caution against predetermined notions of the object of inquiry. They encourage researchers instead to discover a reality that is not yet given, fixed, and predetermined by the methods that are used and the hypotheses that underlie the study.
Once the multiple factors at work in a phenomenon have been identified, we can employ quantitative techniques to embark on processes of measurement, establish patterns and regularities, and analyze the causal and correlated factors at work through statistical techniques. For example, a doctor may observe a high rate of patient drop-out from treatment. Before carrying out a study that relies on quantitative techniques, qualitative research methods such as conversation analysis, interviews, surveys, or even focus group discussions may prove more effective in learning about all the factors that are contributing to patient default. After identifying the multiple, intersecting factors, quantitative techniques can be deployed to measure each of these factors through techniques such as correlational or regression analyses. Here, the use of quantitative techniques without identifying the diverse factors influencing patient decisions would be premature. Qualitative techniques thus have a key role to play in investigations of complex realities and in conducting rich exploratory studies while embracing rigorous and philosophically grounded methodologies.
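To make this qualitative-then-quantitative sequence concrete, the sketch below regresses a simulated drop-out outcome on three factors of the kind such exploratory work might surface. It is a minimal illustration under stated assumptions, not the authors' procedure: the factor names, effect sizes, and data are all invented, and numpy and statsmodels are assumed to be available.

```python
# Hedged sketch: qualitative work first surfaces candidate factors behind
# treatment drop-out; each factor is then measured and entered into a
# regression. All names, coefficients, and data below are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Factors hypothetically identified through interviews or focus groups
distance_km = rng.gamma(2.0, 10.0, n)       # travel distance to clinic
cost = rng.gamma(2.0, 50.0, n)              # out-of-pocket cost per visit
family_priority = rng.binomial(1, 0.3, n)   # competing family priorities

# Simulated drop-out outcome driven by all three factors
log_odds = -2.0 + 0.03 * distance_km + 0.005 * cost + 1.0 * family_priority
dropout = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Logistic regression: quantify each factor's association with drop-out
X = sm.add_constant(np.column_stack([distance_km, cost, family_priority]))
result = sm.Logit(dropout, X).fit(disp=False)
print(result.summary(xname=["const", "distance_km", "cost", "family_priority"]))
```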
Third, apart from subjective, nebulous, and complex phenomena, qualitative research techniques are also effective in making sense of irrational, illogical, and emotional phenomena. These play an important role in understanding the logics at work among patients, their families, and societies. Qualitative research techniques are aided by their ability to shift focus away from the individual as a unit of analysis to the larger social, cultural, political, economic, and structural forces at work in health. For health-care practitioners and researchers focused on biological, physiological, disease, and therapeutic processes, sociocultural, political, and economic conditions are often peripheral or ignored in day-to-day clinical work. However, it is within these latter processes that both health-care practices and patient lives are entrenched. Qualitative researchers are particularly adept at identifying the structural conditions (social, cultural, political, local, and economic) which contribute to health care and experiences of disease and illness.
For example, the decision to delay treatment by a patient may be understood as an irrational choice impacting their chances of survival, but the same may be a result of the patient treating their child's education as a financial priority over their own health. While this appears to be an “emotional” choice, qualitative researchers try to understand the social and cultural factors that structure, inform, and justify such choices. Rather than assuming that it is an irrational choice, qualitative researchers try to understand the norms and logical grounds on which the patient is making this decision. By foregrounding such logics, stories, fears, and desires, qualitative research expands our analytic precision in learning about complex social worlds, recognizing reasons for medical successes and failures, and interrogating our assumptions about human behavior. These, in turn, can prove useful in arriving at conclusive, actionable findings which can inform institutional and public health policies, and they have an important role to play in any change and transformation we may wish to bring to the societies in which we work.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
1. Shapin S, Schaffer S. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton: Princeton University Press; 1985.
2. Uberoi JP. Science and Culture. Delhi: Oxford University Press; 1978.
3. Poovey M. A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. Chicago, IL: University of Chicago Press; 1998.
4. Creswell JW. Qualitative Inquiry and Research Design: Choosing among Five Approaches. 2nd ed. Thousand Oaks, CA: Sage Publications; 2007.
5. Bhangu S, Bisshop A, Engelmann S, Meulemans G, Reinert H, Thibault-Picazo Y. Feeling/Following: Creative Experiments and Material Play. The Anthropocene Issue, Anthropocene Curriculum, Haus der Kulturen der Welt; Max Planck Institute for the History of Science; 2016.
Criteria for Good Qualitative Research: A Comprehensive Review
- Regular Article
- Open access
- Published: 18 September 2021
- Volume 31 , pages 679–689, ( 2022 )
Cite this article
- Drishti Yadav ORCID: orcid.org/0000-0002-2974-0323 1
109k Accesses
53 Citations
69 Altmetric
This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit readers to pass judgment on its quality and to commend it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.
Introduction
“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)
To decide what represents good qualitative research is highly debatable. There are numerous methods that are contained within qualitative research and that are established on diverse philosophical perspectives. Bryman et al., ( 2008 , p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. There are many areas of science and technology wherein these debates on the assessment of qualitative research have taken place. Examples include various areas of psychology: general psychology (Madill et al., 2000 ); counseling psychology (Morrow, 2005 ); and clinical psychology (Barker & Pistrang, 2005 ), and other disciplines of social sciences: social policy (Bryman et al., 2008 ); health research (Sparkes, 2001 ); business and management research (Johnson et al., 2006 ); information systems (Klein & Myers, 1999 ); and environmental studies (Reid & Gough, 2000 ). In the literature, these debates are enthused by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are based on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000 ; Schwandt, 1996 ). The existence of methodological diversity led to the formulation of different sets of criteria applicable to qualitative research.
Among qualitative researchers, the dilemma of settling on measures to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004 ) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008 ; Lather, 2004 ). Indeed, Howe ( 2004 ) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the importance of matching the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose of the research. Researchers have been admonished about this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000 ).
In general, qualitative research tends to come from a very different paradigmatic stance and intrinsically demands distinctive and out-of-the-ordinary criteria for evaluating good research and the varieties of research contributions that can be made. This review attempts to present a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). This review aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research based on the concept of diverse paradigmatic stances. Contrary to the existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research in different epistemological and ontological perspectives. This review is also intended to provide guidelines for the acceleration of future developments and dialogues among qualitative researchers in the context of assessing qualitative research.
The rest of this review article is structured in the following fashion: the section Methods describes the method followed for performing this review. The section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of the strategies to improve the quality of qualitative research in the section Improving Quality: Strategies. The section How to Assess the Quality of the Research Findings? provides details on how to assess the quality of the research findings. After that, some of the quality checklists (as tools to evaluate quality) are discussed in the section Quality Checklists: Tools for Assessing the Quality. The review ends with the concluding remarks, together with some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community, presented in the section Conclusions, Future Directions, and Outlook.
For this review, a comprehensive literature search was performed across several databases using generic search terms such as Qualitative Research , Criteria , etc. The following databases were chosen for the literature search based on the high number of results: IEEE Explore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1 ). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to restrict the search to publications focused on criteria for good qualitative research, an inclusion criterion was applied in the search string.
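As a rough illustration of how such keyword groups can be combined with Boolean connectives, the snippet below ORs the synonyms within a group and ANDs the groups together. The synonym lists are placeholders, not the exact terms of Table 1 (which is not reproduced here), so this is a sketch of the technique rather than the review's actual query.

```python
# Sketch of building a Boolean search string from keyword synonym groups.
# Synonyms within a group are ORed; the groups are ANDed together.
# The terms below are illustrative placeholders only.

synonym_groups = [
    ["qualitative research", "qualitative study", "qualitative methods"],
    ["criteria", "standards", "benchmarks"],
    ["quality", "rigour", "trustworthiness", "validity"],
]

def build_query(groups: list[list[str]]) -> str:
    """OR the synonyms within each group, AND the groups together."""
    ored_groups = ["(" + " OR ".join(f'"{term}"' for term in group) + ")"
                   for group in groups]
    return " AND ".join(ored_groups)

print(build_query(synonym_groups))
# ("qualitative research" OR "qualitative study" OR ...) AND ("criteria" OR ...) AND ...
```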
From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.
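The deduplication and title/abstract screening steps just described can be viewed as successive filters over the retrieved records. Below is a minimal Python sketch under that reading; the Record fields and the relevance test are simplified, invented stand-ins for the full inclusion and exclusion criteria of Table 2.

```python
# Minimal sketch of the screening pipeline: deduplicate retrieved records,
# then screen titles/abstracts against simplified inclusion criteria.

from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    title: str
    abstract: str
    year: int

def is_relevant(rec: Record) -> bool:
    """Toy test: must concern criteria/quality AND qualitative research."""
    text = (rec.title + " " + rec.abstract).lower()
    return ("criteria" in text or "quality" in text) and "qualitative" in text

def screen(records: list[Record]) -> list[Record]:
    unique = list(dict.fromkeys(records))         # drop exact duplicates
    return [r for r in unique if is_relevant(r)]  # title/abstract screening

records = [
    Record("Criteria for rigorous qualitative research", "A review.", 2010),
    Record("Criteria for rigorous qualitative research", "A review.", 2010),
    Record("A survey of sleep duration", "A quantitative survey.", 2015),
]
print(len(screen(records)))  # 1 -> one duplicate removed, one record excluded
```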
Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA, i.e., “preferred reporting items for systematic reviews and meta-analyses,” is employed in systematic reviews to refine the quality of reporting.
Figure 1. PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.
Criteria for Evaluating Qualitative Studies
Fundamental Criteria: General Research Quality
Various researchers have put forward criteria for evaluating qualitative research, which are summarized in Table 3 . Also, the criteria outlined in Table 4 effectively capture the various approaches to evaluating and assessing the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010 ). Tracy argues that high-quality qualitative work should satisfy criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020 ). Nassaji ( 2020 ) argues that good qualitative research should be robust, well informed, and thoroughly documented.
Qualitative Research: Interpretive Paradigms
All qualitative researchers follow highly abstract principles which bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net which encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure, a “basic set of beliefs that guides action” (Guba, 1990 ). Four major interpretive paradigms structure qualitative research: positivist and postpositivist, constructivist interpretive, critical (Marxist, emancipatory), and feminist poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research, and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity have proven to be incompatible (Horsburgh, 2003 ). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003 ). Hammersley ( 2007 ) also suggests that guiding principles for qualitative research are advantageous, but methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale ( 1999 ) also points out the significance of methodological cognizance in research studies.
Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.
Improving Quality: Strategies
Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba ( 1986 ) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016 ; Shenton, 2004 ) have also presented such strategies. A brief description of these strategies is shown in Table 6 .
It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability speaks about inducing and comprehending knowledge to synthesize interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.
Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and applications of results to future thinking, study, and practice (Johnson et al., 2020 ). The synergy and interrelationship of these components signifies their role to different stances of a qualitative research study.
Figure 2. Essential elements of a conceptual framework
In a nutshell, to assess the rationale of a study, its conceptual framework, and its research question(s), quality criteria must take into account the following: a lucid context for the problem statement in the introduction; well-articulated research problems and questions; a precise conceptual framework; a distinct research purpose; and clear presentation and investigation of the paradigms. Meeting these criteria would enhance the quality of qualitative research.
How to Assess the Quality of the Research Findings?
The inclusion of quotes or similar research data enhances the confirmability of the write-up of the findings. The use of expressions such as “80% of all respondents agreed that” or “only one of the interviewees mentioned that” may also quantify qualitative findings (Stenfors et al., 2020 ). On the other hand, persuasive reasons why such quantification may not strengthen the research have also been provided (Monrouxe & Rees, 2020 ). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8 .
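As a minimal sketch of the kind of quantified summary quoted above, the snippet below counts how many respondents' coded interviews mention each theme. The respondent IDs and themes are invented; in practice such codes would come from qualitative analysis of the transcripts.

```python
# Toy counting example for statements such as
# "80% of all respondents agreed that ...". All data are invented.

from collections import Counter

coded_interviews = {
    "R01": {"stigma", "cost"},
    "R02": {"stigma", "family support"},
    "R03": {"cost"},
    "R04": {"stigma", "cost", "family support"},
    "R05": {"stigma"},
}

n = len(coded_interviews)
theme_counts = Counter(t for codes in coded_interviews.values() for t in codes)
for theme, count in theme_counts.most_common():
    print(f"{count}/{n} respondents ({100 * count / n:.0f}%) mentioned '{theme}'")
# e.g. 4/5 respondents (80%) mentioned 'stigma'
```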
Quality Checklists: Tools for Assessing the Quality
Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and recklessly, without regard to the research context, these checklists may be counterproductive. I recommend that such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research. However, considering the enormous variation in authors’ theoretical and philosophical contexts, I would emphasize that heavy dependence on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below, followed by a toy sketch of recording checklist completion:
The most commonly used framework is Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). This framework is recommended by some journals to be followed by the authors during article submission.
Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).
Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.
Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.
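In keeping with the caution above, a checklist is best used to flag missing reporting elements rather than to score research quality. The sketch below records which items a manuscript addresses; the items are loosely paraphrased from COREQ-style reporting domains, not the official item wording.

```python
# Toy sketch of recording checklist completion for a manuscript.
# Items are illustrative paraphrases, not official checklist wording;
# the ratio flags missing reporting elements, not research quality.

checklist = {
    "Interviewer characteristics reported": True,
    "Sampling strategy described": True,
    "Data saturation discussed": False,
}

addressed = sum(checklist.values())
print(f"{addressed}/{len(checklist)} items addressed "
      f"({100 * addressed / len(checklist):.0f}%)")
for item, done in checklist.items():
    print(("[x] " if done else "[ ] ") + item)
```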
Conclusions, Future Directions, and Outlook
This work presents a broad review of the criteria for good qualitative research. In addition, this article presents an exploratory analysis of the essential elements in qualitative research that can enable the readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieve rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal and versatile one-size-fits-all guideline for evaluating the quality of qualitative research does not exist. In other words, this review also emphasizes the non-existence of a set of common guidelines among qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Owing to the sensitivity of the worth of qualitative research towards the specific context and the type of paradigmatic stance, researchers should themselves analyze what approaches can be and must be tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not assert to put forward a magic bullet and to provide a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist the researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at, when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers as well as reviewers to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, theoretical foundations, and from approaches of collecting data to analyzing the results, to deriving inferences) frequently enhances the quality of a qualitative research study.
Prospects: A Road Ahead for Qualitative Research
Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to the sprouting and varied features of qualitative research, no consensus has been reached to date. Researchers have reflected on various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021 ; McGinley et al., 2021 ). Following are some prospects and a few recommendations put forward towards the maturation of qualitative research and its quality evaluation:
In general, most manuscript and grant reviewers are not qualitative experts. Hence, it is more likely that they would prefer to adopt a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria to evaluate qualitative research from within the suitable theoretical and methodological context.
There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ, SRQR) to interrogate their applicability on different aspects (along with their epistemological ramifications).
Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.
Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.
It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.
Frequent debates on the use of evaluative criteria are required to resolve some outstanding issues (including the applicability of a single set of criteria in multi-disciplinary aspects). Such debates would not only benefit the group of qualitative researchers themselves, but primarily assist in augmenting the well-being and vivacity of the entire discipline.
To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialog and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and, therefore, help improve the quality of a qualitative study. Further, I anticipate that this review will assist the researchers to contemplate on the quality of their own research, to substantiate research design and help the reviewers to review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) by the cohesive efforts of qualitative researchers of different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework (of guiding principles) paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the impending prospects of socio-techno-educational research.
Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.
Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.
Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.
Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.
CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.
Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.
Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.
Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445
Guba, E. G. (1990). The paradigm dialog. In Alternative Paradigms Conference, March 1989, Indiana University, School of Education. San Francisco, CA: Sage Publications, Inc.
Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.
Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.
Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.
Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.
Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.
Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.
Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.
Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.
Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.
Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.
Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.
Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.
Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15
McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.
Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.
Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.
Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.
Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.
Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.
O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.
O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.
Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.
Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959
Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.
Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.
Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.
Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.
Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.
Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.
Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.
Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.
Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.
Funding
Open access funding provided by TU Wien (TUW).
Author information
Authors and Affiliations
Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria
Drishti Yadav
Corresponding author
Correspondence to Drishti Yadav.
Ethics declarations
Conflict of interest
The author declares no conflict of interest.
About this article
Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0
Accepted: 28 August 2021
Published: 18 September 2021
Issue Date: December 2022
Keywords
- Qualitative research
- Evaluative criteria
REVIEW article
The use of research methods in psychological research: A systematised review
- 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
- 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa
Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the conducted methodology is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into: a lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.
Introduction
Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research on which to base health decisions ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method ( Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to the literature, what research method is used and why is complex, as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).
The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).
Due to this lack of clarity about method use, and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Reviews of qualitative methods, such as those by Lee et al. (1999) and Bluhm et al. (2011), have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in industrial and organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research, with longitudinal studies underrepresented. In a similar study, O'Neil and Koekemoer (2016) found that qualitative studies made up 21% of the articles published from 1995 to 2015. Mixed-methods research in health psychology has also reportedly been growing in popularity (O'Cathain, 2009).
A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective on methods used in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.
Grant and Booth (2009) describe systematised reviews as the review of choice for postgraduate studies: such reviews employ some, but not all, elements of a systematic review, typically using no more than one or two databases to catalogue studies after a comprehensive literature search. The aspects of a systematic review adopted in this systematised review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009).
Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. According to Lee (2015), the American Psychological Association (APA) recommends using the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.
Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, which Scimago developed from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the journals had to focus on topics in psychology in general, with no preference for specific research methods, and provide full-text access to articles.
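Since the SJR indicator is described as deriving from Google's PageRank™ algorithm, a brief illustration may help. Below is a minimal sketch of PageRank-style power iteration on a toy citation graph; the link structure, damping factor, and function are illustrative assumptions, not the actual SJR computation, which uses a weighted, Scopus-specific variant of the idea.

```python
# Minimal PageRank-style power iteration on a hypothetical citation graph.
# Entry [i, j] = 1 means journal j cites journal i.
import numpy as np

def pagerank(adjacency: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Return a stationary rank vector for the given citation matrix."""
    n = adjacency.shape[0]
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # guard against journals that cite nothing
    transition = adjacency / col_sums  # each journal splits its rank among its citations
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank / rank.sum()

# Toy example: journal 0 is cited by journals 1 and 2; journal 1 by journal 2.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]], dtype=float)
print(pagerank(links))  # journal 0 ends up with the highest rank
```

The key design point is that a citation from a highly ranked journal counts for more than one from an obscure journal, which is what distinguishes PageRank-style indicators from raw citation counts.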
The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and (5) Journal of Psychology Applied and Interdisciplinary.
Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).
The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.
Figure 1 . Systematised review procedure.
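To make the coding step above concrete, the following is a minimal sketch, under stated assumptions, of how manually derived article codes might be rolled up into themes and tallied for cataloguing. The article records, codes, and code-to-theme map are hypothetical stand-ins, not the study's actual codebook.

```python
# Roll hypothetical article codes up into themes and tally theme frequencies.
from collections import Counter

articles = [
    {"id": "A001", "codes": ["attitudes", "group behaviour"]},
    {"id": "A002", "codes": ["memory", "attention"]},
    {"id": "A003", "codes": ["group behaviour"]},
]

# Hypothetical code-to-theme map, as agreed with a co-coder.
theme_map = {
    "attitudes": "social psychology",
    "group behaviour": "social psychology",
    "memory": "cognitive psychology",
    "attention": "cognitive psychology",
}

theme_counts = Counter(
    theme_map[code] for article in articles for code in article["codes"]
)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

A real codebook would of course be far larger (the study reports 84 codes and 10 themes), but the roll-up logic is the same.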
According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected of systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.
Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol, outlined according to each review stage before data collection began (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and of articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim (i.e., interview articles, discussions, etc.) were excluded.
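As an illustration of the electronic evidence trail described above, the sketch below logs hypothetical screening decisions to a spreadsheet with pandas (writing to Excel additionally requires openpyxl). The column names and records are assumptions, not the authors' actual template.

```python
# Record screening decisions in a spreadsheet to preserve an audit trail.
import pandas as pd

screening_log = pd.DataFrame([
    {"article_id": "A001", "journal": "Journal X", "decision": "include", "reason": ""},
    {"article_id": "A002", "journal": "Journal X", "decision": "exclude", "reason": "interview article"},
    {"article_id": "A003", "journal": "Journal Y", "decision": "exclude", "reason": "discussion piece"},
])

screening_log.to_excel("screening_log.xlsx", index=False)
```

Keeping the reason for every exclusion in the log is what makes the trail auditable by an independent co-coder.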
To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title and aims of each article. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.
The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).
Figure 2 . Journal article frequency.
Table 1 . Codes used to form themes (research topics).
These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under a particular theme/topic if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of those disciplines.
Figure 3 . Topic frequency (international sample).
In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology, though not the only theme in which experimental research is used, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.
Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our presentation. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles: some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.
What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.
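For readers who want to reproduce this style of reporting, here is a minimal sketch of the percentage calculation. The counts are hypothetical values chosen only to be consistent with the percentages reported below; the denominator exceeds 999 because, as noted above, some articles employed more than one method.

```python
# Compute method percentages, rounded to two decimals, from occurrence counts.
method_counts = {
    "quantitative": 922,   # hypothetical counts, not taken from the article
    "qualitative": 49,
    "review": 40,
    "mixed methods": 10,
    "multi-method": 1,
}
total = sum(method_counts.values())  # 1,022 occurrences across 999 articles
for method, count in method_counts.items():
    print(f"{method}: {100 * count / total:.2f}%")
```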
Firstly, with regard to the research methods used, our results show that researchers are far more likely to use quantitative research methods (90.22%) than any other research method. Qualitative research was the second most common research method but made up only 4.79% of the general method usage. Reviews, the third most popular method, occurred almost as often as qualitative studies (3.91%). Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.
Table 2 . Research methods in psychology.
Figure 4 . Research method frequency in topics.
Secondly, in the case of how these research methods are employed , our study indicated the following.
Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.
Table 3 . Sampling use in the field of psychology.
Figure 5 . Sampling method frequency in topics.
Designs were categorised based on the articles' own statements thereof. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often assigned where an article contained no experiment and no other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as they could include correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.
From the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being the systematic review (0.57%). The few mixed-method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative components. The one study that identified itself as a multi-method study used a longitudinal design. Table 4 and Figure 6 show how these designs were employed in each specific topic.
Table 4 . Design use in the field of psychology.
Figure 6 . Design frequency in topics.
Data collection and analysis: data collection included 30 methods, the most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established tasks or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into approximately 188 data analysis techniques, shown in Table 6. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed-method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.
Table 5 . Data collection in the field of psychology.
Figure 7 . Data collection frequency in topics.
Table 6 . Data analysis in the field of psychology.
Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/ . This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.
This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use, and by no means as a comprehensive representation of the aforementioned aim, given the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).
With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicated openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation, with a different sample, into the proclaimed growth of qualitative research may provide different results.
Review studies were found to outnumber multi-method and mixed-method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote the popularity of reviews. The low frequency of mixed-method studies contradicts the view in the literature that this is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in dedicated mixed-methods journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed-methods design was methodologically clear in all cases, which was not true of the remaining research methods.
Additionally, a substantial number of studies used a combination of methodologies without being mixed-method or multi-method studies. According to the literature, perceived fixed boundaries are often set aside in order to pursue the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015); this result confirms that observation. According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi- and mixed methods, is recommended.
Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, adds to existing debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience, and students as participants especially, raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight, and is encouraged by this study.
Additionally, as indicated in Table 6, articles seldom reported the research designs used, which highlights a pressing concern: the lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process from publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied methods and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report on only certain aspects of a research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and design is a requirement of the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to reported samples and designs only, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.
Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.
With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may reflect the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology likewise identifies an increased focus on how social determinants influence people's health (Deangelis, 2017).
This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address its aim; despite the general aims of these journals (as stated on their websites), this selection biased the results towards the research methods published in these specific journals and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review; however, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure for applying this design.
Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to give researchers insight into the general use of methods, and not only the most frequently used ones. The lack of a rigorous account of research methods in articles was documented in depth for each step in the research process, and addressing it can be of vital importance in responding to the current replication crisis within the field of psychology. The recommendations for future research aim to motivate inquiry into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.
Ethics Statement
This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzL~WxpdmU%3d#AN=93871882&db=ers
American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191
Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708
Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109
Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318
Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x
Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481
Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.
Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223
Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6
Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011
Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227
Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074
Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9
Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.
Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049
Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747
Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621
Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693
Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002
Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8
Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x
Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173
Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996
Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256
Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29
Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and approaches to mixed methods research,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 306–335.
Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x
Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030
Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843
Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121
Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707
Leech, N. L., Anthony, J., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Qual. Quant. 43, 265–275. doi: 10.1007/s11135-007-9105-3
Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082
Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85
Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.
Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.
Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .
Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.
Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628
O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272
O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350
Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0
Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3
Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237
Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010
Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, eds J. Ritchie and J. Lewis (London: Sage), 1–23.
Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437
Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210
Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.
Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
Scott Jones, J., and Goldring, J. E. (2015). “I'm not a quants person”: key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623
Smith, B., and McGannon, K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357
Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.
Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.
Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029
Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950
Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.
Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic
Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001
Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.
Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za
Methodology or method? A critical review of qualitative case study reports
Nerida Hyett, Amanda Kenny, Virginia Dickson-Swift
Correspondence: N. Hyett, La Trobe Rural Health School, La Trobe University, P.O. Box 199, Bendigo, Victoria 3550, Australia. E-mail: [email protected]
Accepted 2014 Apr 7; Collection date 2014.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Despite on-going debate about credibility, and reported limitations in comparison to other approaches, case study is an increasingly popular approach among qualitative researchers. We critically analysed the methodological descriptions of published case studies. Three high-impact qualitative methods journals were searched to locate case studies published in the past 5 years; 34 were selected for analysis. Articles were categorized as health and health services (n = 12), social sciences and anthropology (n = 7), or methods (n = 15) case studies. The articles were reviewed using an adapted version of established criteria to determine whether adequate methodological justification was present, and whether study aims, methods, and reported findings were consistent with a qualitative case study approach. Findings were grouped into five themes outlining key methodological issues: case study methodology or method, case of something particular and case selection, contextually bound case study, researcher and case interactions and triangulation, and study design inconsistent with methodology reported. Improved reporting of case studies by qualitative researchers will advance the methodology for the benefit of researchers and practitioners.
Keywords: Case studies, health research, research design, interdisciplinary research, qualitative research, literature review
Case study research is an increasingly popular approach among qualitative researchers (Thomas, 2011 ). Several prominent authors have contributed to methodological developments, which has increased the popularity of case study approaches across disciplines (Creswell, 2013b ; Denzin & Lincoln, 2011b ; Merriam, 2009 ; Ragin & Becker, 1992 ; Stake, 1995 ; Yin, 2009 ). Current qualitative case study approaches are shaped by paradigm, study design, and selection of methods, and, as a result, case studies in the published literature vary. Differences between published case studies can make it difficult for researchers to define and understand case study as a methodology.
Experienced qualitative researchers have identified case study research as a stand-alone qualitative approach (Denzin & Lincoln, 2011b ). Case study research has a level of flexibility that is not readily offered by other qualitative approaches such as grounded theory or phenomenology. Case studies are designed to suit the case and research question and published case studies demonstrate wide diversity in study design. There are two popular case study approaches in qualitative research. The first, proposed by Stake ( 1995 ) and Merriam ( 2009 ), is situated in a social constructivist paradigm, whereas the second, by Yin ( 2012 ), Flyvbjerg ( 2011 ), and Eisenhardt ( 1989 ), approaches case study from a post-positivist viewpoint. Scholarship from both schools of inquiry has contributed to the popularity of case study and development of theoretical frameworks and principles that characterize the methodology.
The diversity of case studies reported in the published literature, and on-going debates about credibility and the use of case study in qualitative research practice, suggests that differences in perspectives on case study methodology may prevent researchers from developing a mutual understanding of practice and rigour. In addition, discussion about case study limitations has led some authors to query whether case study is indeed a methodology (Luck, Jackson, & Usher, 2006 ; Meyer, 2001 ; Thomas, 2010 ; Tight, 2010 ). Methodological discussion of qualitative case study research is timely, and a review is required to analyse and understand how this methodology is applied in the qualitative research literature. The aims of this study were to review methodological descriptions of published qualitative case studies, to review how the case study methodological approach was applied, and to identify issues that need to be addressed by researchers, editors, and reviewers. An outline of the current definitions of case study and an overview of the issues proposed in the qualitative methodological literature are provided to set the scene for the review.
Definitions of qualitative case study research
Case study research is an investigation and analysis of a single or collective case, intended to capture the complexity of the object of study (Stake, 1995 ). Qualitative case study research, as described by Stake ( 1995 ), draws together “naturalistic, holistic, ethnographic, phenomenological, and biographic research methods” in a bricoleur design, or in his words, “a palette of methods” (Stake, 1995 , pp. xi–xii). Case study methodology maintains deep connections to core values and intentions and is “particularistic, descriptive and heuristic” (Merriam, 2009 , p. 46).
As a study design, case study is defined by interest in individual cases rather than the methods of inquiry used. The selection of methods is informed by researcher and case intuition and makes use of naturally occurring sources of knowledge, such as people or observations of interactions that occur in the physical space (Stake, 1998 ). Thomas ( 2011 ) suggested that “analytical eclecticism” is a defining factor (p. 512). Multiple data collection and analysis methods are adopted to further develop and understand the case, shaped by context and emergent data (Stake, 1995 ). This qualitative approach “explores a real-life, contemporary bounded system (a case ) or multiple bounded systems (cases) over time, through detailed, in-depth data collection involving multiple sources of information … and reports a case description and case themes ” (Creswell, 2013b , p. 97). Case study research has been defined by the unit of analysis, the process of study, and the outcome or end product, all essentially the case (Merriam, 2009 ).
The case is an object to be studied for an identified reason that is peculiar or particular. Classification of the case and case selection procedures informs development of the study design and clarifies the research question. Stake ( 1995 ) proposed three types of cases and study design frameworks. These include the intrinsic case, the instrumental case, and the collective instrumental case. The intrinsic case is used to understand the particulars of a single case, rather than what it represents. An instrumental case study provides insight on an issue or is used to refine theory. The case is selected to advance understanding of the object of interest. A collective refers to an instrumental case which is studied as multiple, nested cases, observed in unison, parallel, or sequential order. More than one case can be simultaneously studied; however, each case study is a concentrated, single inquiry, studied holistically in its own entirety (Stake, 1995 , 1998 ).
Researchers who use case study are urged to seek out what is common and what is particular about the case. This involves careful and in-depth consideration of the nature of the case, historical background, physical setting, and other institutional and political contextual factors (Stake, 1998 ). An interpretive or social constructivist approach to qualitative case study research supports a transactional method of inquiry, where the researcher has a personal interaction with the case. The case is developed in a relationship between the researcher and informants, and presented to engage the reader, inviting them to join in this interaction and in case discovery (Stake, 1995 ). A postpositivist approach to case study involves developing a clear case study protocol with careful consideration of validity and potential bias, which might involve an exploratory or pilot phase, and ensures that all elements of the case are measured and adequately described (Yin, 2009 , 2012 ).
Current methodological issues in qualitative case study research
The future of qualitative research will be influenced and constructed by the way research is conducted, and by what is reviewed and published in academic journals (Morse, 2011 ). If case study research is to further develop as a principal qualitative methodological approach, and make a valued contribution to the field of qualitative inquiry, issues related to methodological credibility must be considered. Researchers are required to demonstrate rigour through adequate descriptions of methodological foundations. Case studies published without sufficient detail for the reader to understand the study design, and without rationale for key methodological decisions, may lead to research being interpreted as lacking in quality or credibility (Hallberg, 2013 ; Morse, 2011 ).
There is a level of artistic license that is embraced by qualitative researchers and distinguishes practice, which nurtures creativity, innovation, and reflexivity (Denzin & Lincoln, 2011b ; Morse, 2009 ). Qualitative research is “inherently multimethod” (Denzin & Lincoln, 2011a , p. 5); however, with this creative freedom, it is important for researchers to provide adequate description for methodological justification (Meyer, 2001 ). This includes paradigm and theoretical perspectives that have influenced study design. Without adequate description, study design might not be understood by the reader, and can appear to be dishonest or inaccurate. Reviewers and readers might be confused by the inconsistent or inappropriate terms used to describe case study research approach and methods, and be distracted from important study findings (Sandelowski, 2000 ). This issue extends beyond case study research, and others have noted inconsistencies in reporting of methodology and method by qualitative researchers. Sandelowski ( 2000 , 2010 ) argued for accurate identification of qualitative description as a research approach. She recommended that the selected methodology should be harmonious with the study design, and be reflected in methods and analysis techniques. Similarly, Webb and Kevern ( 2000 ) uncovered inconsistencies in qualitative nursing research with focus group methods, recommending that methodological procedures must cite seminal authors and be applied with respect to the selected theoretical framework. Incorrect labelling using case study might stem from the flexibility in case study design and non-directional character relative to other approaches (Rosenberg & Yates, 2007 ). Methodological integrity is required in design of qualitative studies, including case study, to ensure study rigour and to enhance credibility of the field (Morse, 2011 ).
Case study has been unnecessarily devalued by comparisons with statistical methods (Eisenhardt, 1989 ; Flyvbjerg, 2006 , 2011 ; Jensen & Rodgers, 2001 ; Piekkari, Welch, & Paavilainen, 2009 ; Tight, 2010 ; Yin, 1999 ). It is reputed to be “the weak sibling” in comparison to other, more rigorous, approaches (Yin, 2009 , p. xiii). Case study is not an inherently comparative approach to research. The objective is not statistical research, and the aim is not to produce outcomes that are generalizable to all populations (Thomas, 2011 ). Comparisons between case study and statistical research do little to advance this qualitative approach, and fail to recognize its inherent value, which can be better understood from the interpretive or social constructionist viewpoint of other authors (Merriam, 2009 ; Stake, 1995 ). Building on discussions relating to “fuzzy” (Bassey, 2001 ) or naturalistic generalizations (Stake, 1978 ), or the transference of concepts and theories (Ayres, Kavanaugh, & Knafl, 2003 ; Morse et al., 2011 ), would have more relevance.
Case study research has been used as a catch-all design to justify or add weight to fundamental qualitative descriptive studies that do not fit with other traditional frameworks (Merriam, 2009 ). A case study has been a “convenient label for our research—when we ‘can't think of anything better'—in an attempt to give it [qualitative methodology] some added respectability” (Tight, 2010 , p. 337). Qualitative case study research is a pliable approach (Merriam, 2009 ; Meyer, 2001 ; Stake, 1995 ), and has been likened to a “curious methodological limbo” (Gerring, 2004 , p. 341) or “paradigmatic bridge” (Luck et al., 2006 , p. 104) that sits on the borderline between postpositivist and constructionist interpretations. This has resulted in inconsistency in application, which indicates that flexibility comes with limitations (Meyer, 2001 ), and the open nature of case study research might be off-putting to novice researchers (Thomas, 2011 ). The development of a well-(in)formed theoretical framework to guide a case study should improve consistency, rigour, and trust in studies published in qualitative research journals (Meyer, 2001 ).
Assessment of rigour
The purpose of this study was to analyse the methodological descriptions of case studies published in qualitative methods journals. To do this we needed to develop a suitable framework, which used existing, established criteria for appraising the rigour of qualitative case study research (Creswell, 2013b ; Merriam, 2009 ; Stake, 1995 ). A number of qualitative authors have developed concepts and criteria that are used to determine whether a study is rigorous (Denzin & Lincoln, 2011b ; Lincoln, 1995 ; Sandelowski & Barroso, 2002 ). The criteria proposed by Stake ( 1995 ) provide a framework for readers and reviewers to make judgements regarding case study quality, and identify key characteristics essential for good methodological rigour. Although each of the factors listed in Stake's criteria could enhance the quality of a qualitative research report, in Table I we present the adapted set of criteria used in this study, which integrates more recent work by Merriam ( 2009 ) and Creswell ( 2013b ). Stake's ( 1995 ) original criteria were separated into two categories. The first list of general criteria is “relevant for all qualitative research.” The second list, “high relevance to qualitative case study research,” contains the criteria that we decided had higher relevance to case study research. This second list was the main criteria used to assess the methodological descriptions of the case studies reviewed. The complete table has been preserved so that the reader can determine how the original criteria were adapted.
Table I. Framework for assessing quality in qualitative case study research. Adapted from Stake (1995, p. 131).
Study design
The critical review method described by Grant and Booth (2009) was used, which is appropriate for assessing research quality and for analysing a literature to inform research and practice. This type of review goes beyond the mapping and description of scoping or rapid reviews to include “analysis and conceptual innovation” (Grant & Booth, 2009, p. 93). A critical review is used to develop existing, or produce new, hypotheses or models; this differs from systematic reviews, which answer clinical questions. It is used to evaluate existing research and competing ideas, and to provide a “launch pad” for conceptual development and “subsequent testing” (Grant & Booth, 2009, p. 93).
Qualitative methods journals were located by searching the 2011 ISI Journal Citation Reports in Social Science, via the Web of Knowledge database (see m.webofknowledge.com). No “qualitative research methods” category existed in the citation reports; therefore, a search of all categories was performed using the term “qualitative.” In Table II, we present the qualitative methods journals located, ranked by impact factor. The highest-ranked journals were selected for searching. We acknowledge that the impact factor ranking system might not be the best measure of journal quality (Cheek, Garnham, & Quan, 2006); however, it was the most appropriate and accessible method available.
Table II. Qualitative methods journals located, ranked by impact factor (including the International Journal of Qualitative Studies on Health and Well-being).
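To make the journal-selection step reproducible, the ranking logic can be scripted. The sketch below is a minimal illustration, assuming the citation-report results have been exported to a CSV file; the file name (`jcr_2011_social_science.csv`) and the column headings (`title`, `impact_factor`) are hypothetical, not an actual Web of Knowledge export format.

```python
import csv

def top_journals(path, term="qualitative", n=3):
    """Filter journal records whose title contains the search term,
    then rank them by impact factor in descending order."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [r for r in csv.DictReader(f) if term in r["title"].lower()]
    rows.sort(key=lambda r: float(r["impact_factor"]), reverse=True)
    return rows[:n]

# Print the highest-ranked qualitative methods journals.
for row in top_journals("jcr_2011_social_science.csv"):
    print(row["title"], row["impact_factor"])
```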
Search strategy
In March 2013, searches of the journals Qualitative Health Research, Qualitative Research, and Qualitative Inquiry were completed to retrieve studies with “case study” in the abstract field. The search was limited to the past 5 years (1 January 2008 to 1 March 2013). The objective was to locate published qualitative case studies suitable for assessment using the adapted criteria. Viewpoints, commentaries, and other article types were excluded from review. The titles and abstracts of the 45 retrieved articles were read by the first author, who identified 34 empirical case studies for review. All authors reviewed the 34 studies to confirm selection and categorization. In Table III, we present the 34 case studies grouped by journal and categorized by research topic: health sciences, social sciences and anthropology, and methods research. There was a discrepancy in the categorization of one article on pedagogy and a new teaching method published in Qualitative Inquiry (Jorrín-Abellán, Rubia-Avi, Anguita-Martínez, Gómez-Sánchez, & Martínez-Mones, 2008); the consensus was to allocate it to the methods category.
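The machine-checkable parts of these inclusion criteria (the publication window, the abstract keyword, and the excluded article types) can be applied automatically before manual reading. A minimal sketch follows, assuming the retrieved records sit in a CSV with `date`, `type`, and `abstract` columns; these field names, like the file name, are hypothetical. The manual title-and-abstract screening that reduced the 45 records to 34 would still have to follow.

```python
import csv
from datetime import date

WINDOW = (date(2008, 1, 1), date(2013, 3, 1))
EXCLUDED_TYPES = {"viewpoint", "commentary"}

def screen(records):
    """Keep records published inside the search window whose abstract
    mentions 'case study' and whose article type is not excluded."""
    kept = []
    for rec in records:
        published = date.fromisoformat(rec["date"])  # expects YYYY-MM-DD
        if not WINDOW[0] <= published <= WINDOW[1]:
            continue
        if rec.get("type", "").lower() in EXCLUDED_TYPES:
            continue
        if "case study" in rec.get("abstract", "").lower():
            kept.append(rec)
    return kept

with open("retrieved_records.csv", newline="", encoding="utf-8") as f:
    candidates = screen(list(csv.DictReader(f)))
print(f"{len(candidates)} records pass automated screening")
```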
Table III. Outcomes of the search of qualitative methods journals.
In Table III, the number of studies located and the final numbers selected for review are reported. Qualitative Health Research published the most empirical case studies (n = 16). In the health category, there were 12 case studies of health conditions, health services, and health policy issues, all published in Qualitative Health Research. Seven case studies were categorized as social sciences and anthropology research, which combined case study with biography and ethnography methodologies. All three journals published case studies on methods research that illustrated a data collection or analysis technique, a methodological procedure, or a related issue.
The methodological descriptions of the 34 case studies were critically reviewed using the adapted criteria. All of the articles reviewed contained a description of study methods; however, the length, amount of detail, and position of the description in the article varied. Few studies provided an accurate description of, and rationale for using, a qualitative case study approach. Of the 34 case studies reviewed, three described a theoretical framework informed by Stake (1995), two by Yin (2009), and three provided a mixed framework informed by various authors, which might have included both Yin and Stake. Few studies described their case study design or included a rationale explaining why they excluded or added procedures, and whether this was done to enhance the study design or to better suit the research question. In 26 of the studies, no reference was provided to principal case study authors. In reviewing the descriptions of methods, we found that few authors provided a description or justification of case study methodology that demonstrated how their study was informed by the methodological literature on this approach.
The methodological descriptions of each study were reviewed using the adapted criteria, and the following issues were identified: case study methodology or method; case of something particular and case selection; contextually bound case study; researcher and case interactions and triangulation; and study design inconsistent with methodology. An outline of how the issues were developed from the critical review is provided below, followed by a discussion of how these issues relate to the current methodological literature.
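One way to keep such an appraisal auditable is to record a judgement per study against each criterion. The sketch below is a simplified illustration only: the criterion labels paraphrase the five issues named above rather than reproducing the full adapted criteria of Table I, and the boolean judgements flatten what were, in the study itself, qualitative indicators.

```python
from dataclasses import dataclass, field

# Labels paraphrase the five issues identified in this review;
# they are not the full adapted criteria of Table I.
CRITERIA = (
    "methodology, not just method, described and cited",
    "case of something particular; selection justified",
    "case contextually bounded",
    "researcher-case interactions and triangulation described",
    "study design consistent with stated methodology",
)

@dataclass
class Appraisal:
    study: str
    judgements: dict = field(default_factory=dict)  # criterion -> bool

    def record(self, criterion: str, met: bool) -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        self.judgements[criterion] = met

    def unmet(self) -> list:
        return [c for c, met in self.judgements.items() if not met]

# Hypothetical usage for one reviewed study.
appraisal = Appraisal("Example study (hypothetical)")
appraisal.record(CRITERIA[0], met=False)
appraisal.record(CRITERIA[2], met=True)
print(appraisal.unmet())
```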
Case study methodology or method
A third of the case studies reviewed appeared to use a case report method, not case study methodology as described by principal authors (Creswell, 2013b; Merriam, 2009; Stake, 1995; Yin, 2009). These case studies were identified as case reports because of missing methodological detail and from review of the study aims and purpose. The reports presented data for small samples of no more than three people, places, or phenomena. Four studies, or “case reports,” were single cases selected retrospectively from larger studies (Bronken, Kirkevold, Martinsen, & Kvigne, 2012; Coltart & Henwood, 2012; Hooghe, Neimeyer, & Rober, 2012; Roscigno et al., 2012). Case reports were not a case of something; instead, they were a case demonstration or an example presented in a report. These reports presented outcomes and reported on how the case could be generalized. Descriptions focussed on the phenomena rather than the case itself, and did not appear to study the case in its entirety.
Case reports had minimal in-text references to case study methodology, and were informed by other qualitative traditions or secondary sources (Adamson & Holloway, 2012; Buzzanell & D'Enbeau, 2009; Nagar-Ron & Motzafi-Haller, 2011). This does not suggest that case study methodology cannot be multimethod; however, the methodology should be consistent in design and clearly described (Meyer, 2001; Stake, 1995), and should maintain focus on the case (Creswell, 2013b).
To demonstrate how case reports were identified, three examples are provided. In the first, Yeh (2013) described the study as follows: “the examination of the emergence of vegetarianism in Victorian England serves as a case study to reveal the relationships between boundaries and entities” (p. 306). The findings were a historical case report that resulted from an ethnographic study of vegetarianism. In the second, Cunsolo Willox, Harper, Edge, ‘My Word’: Storytelling and Digital Media Lab, and Rigolet Inuit Community Government (2013) used “a case study that illustrates the usage of digital storytelling within an Inuit community” (p. 130). This case study reported how digital storytelling can be used with indigenous communities as a participatory method, illuminating the benefits of the method for other studies. The “case study was conducted in the Inuit community,” but it did not include the Inuit community in the case analysis (Cunsolo Willox et al., 2013, p. 130). In the third, Bronken et al. (2012) provided a single case report to demonstrate issues observed in a larger clinical study of aphasia and stroke, without adequate case description or analysis.
Case study of something particular and case selection
Case selection is a precursor to case analysis, and the selection needs to be presented as a convincing argument (Merriam, 2009). Descriptions of the case were often not adequate to ascertain why the case was selected, or whether it was a particular exemplar or outlier (Thomas, 2011). In a number of case studies in the health and social science categories, it was not explicit whether the case was of something particular, or peculiar to the authors' discipline or field (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson, Botelho, Welch, Joseph, & Tennstedt, 2012; Mawn et al., 2010; Snyder-Young, 2011). There were exceptions in the methods category (Table III), where cases were selected by researchers to report on a new or innovative method. These cases emerged through heuristic study and were reported to be particular relative to the existing methods literature (Ajodhia-Andrews & Berman, 2009; Buckley & Waring, 2013; Cunsolo Willox et al., 2013; De Haene, Grietens, & Verschueren, 2010; Gratton & O'Donnell, 2011; Sumsion, 2013; Wimpenny & Savin-Baden, 2012).
Case selection processes were sometimes insufficient for understanding why the case was selected from the global population of cases, or what study of this case would contribute to knowledge compared with other possible cases (Adamson & Holloway, 2012; Bronken et al., 2012; Colón-Emeric et al., 2010; Jackson et al., 2012; Mawn et al., 2010). In two studies, local cases were selected (Barone, 2010; Fourie & Theron, 2012) because the researcher was familiar with and had access to the case; the possible limitations of a convenience sample were not acknowledged. In one study, purposeful sampling was used to recruit participants within the case, but not to select the case itself (Gallagher et al., 2013). Random sampling was used for case selection in two studies (Colón-Emeric et al., 2010; Jackson et al., 2012), which has limited meaning in interpretive qualitative research.
To demonstrate how researchers provided a good justification for their case selection, four examples are given. In the first, cases of residential care homes were selected because of reported occurrences of mistreatment, which included residents being locked in rooms at night (Rytterström, Unosson, & Arman, 2013). Roscigno et al. (2012) selected cases of parents who were admitted for early hospitalization in neonatal intensive care with a threatened preterm delivery before 26 weeks. Hooghe et al. (2012) used random sampling to select 20 couples who had experienced the death of a child; however, the case study was of one couple and a particular metaphor described only by them. In the final example, Coltart and Henwood (2012) provided a detailed account of how they selected two cases from a sample of 46 fathers based on personal characteristics and beliefs, and described how the analysis of the two cases would contribute to their larger study on first-time fathers and parenting.
Contextually bound case study
The limits or boundaries of the case are a defining factor of case study methodology (Merriam, 2009; Ragin & Becker, 1992; Stake, 1995; Yin, 2009). Adequate contextual description is required to understand the setting or context in which the case is revealed. In the health category, case studies were used to illustrate a clinical phenomenon or issue, such as compliance and health behaviour (Colón-Emeric et al., 2010; D'Enbeau, Buzzanell, & Duckworth, 2010; Gallagher et al., 2013; Hooghe et al., 2012; Jackson et al., 2012; Roscigno et al., 2012). In these case studies, contextual boundaries, such as physical and institutional descriptions, were not sufficient to understand the case as a holistic system; examples include the general practitioner (GP) clinic in Gallagher et al. (2013) and the nursing home in Colón-Emeric et al. (2010). Similarly, in the social science and methods categories, attention was paid to some components of the case context but not others, missing important information required to understand the case as a holistic system (Alexander, Moreira, & Kumar, 2012; Buzzanell & D'Enbeau, 2009; Nairn & Panelli, 2009; Wimpenny & Savin-Baden, 2012).
In two studies, vicarious experience or vignettes (Nairn & Panelli, 2009) and images (Jorrín-Abellán et al., 2008) were effective in supporting the description of context, and might have been a useful addition to other case studies. Missing contextual boundaries suggest that the case might not be adequately defined. Additional information, such as the physical, institutional, political, and community context, would improve understanding of the case (Stake, 1998). In Boxes 1 and 2, we present brief synopses of two reviewed studies that demonstrated a well-bounded case. In Box 1, Ledderer (2011) used a qualitative case study design informed by Stake's tradition; in Box 2, Gillard, Witt, and Watts (2011) were informed by Yin's tradition. These outlines demonstrate how effective case boundaries can be constructed and reported, which may be of particular interest to prospective case study researchers.
Box 1. Article synopsis of case study research using Stake's tradition.
Ledderer (2011) used a qualitative case study research design informed by modern ethnography. The study was bounded to 10 general practice clinics in Denmark, which had received federal funding to implement preventative care services based on a Motivational Interviewing intervention. The research question focussed on “why is it so difficult to create change in medical practice?” (Ledderer, 2011, p. 27). The study context was adequately described, providing detail on the general practitioner (GP) clinics and relevant political and economic influences. Methodological decisions were described in a first-person narrative, providing insight into the researcher's perspectives and interaction with the case. Forty-four interviews were conducted, which focussed on how GPs conducted consultations, and on their form, nature, and content, rather than asking for opinions or experiences (Ledderer, 2011, p. 30). The duration and intensity of researcher immersion in the case enhanced the depth of description and the trustworthiness of the study findings. The analysis was consistent with Stake's tradition, and the researcher provided examples of inquiry techniques used to challenge assumptions about emerging themes; several other seminal qualitative works were cited. The themes and typology constructed are rich in narrative data and storytelling by clinic staff, demonstrating individual clinic experiences as well as shared meanings and understandings about changing from a biomedical to a psychological approach to preventative health intervention. The conclusions note social and cultural meanings and lessons learned, which might not have been uncovered using a different methodology.
Box 2. Article synopsis of case study research using Yin's tradition.
Gillard et al.'s (2011) study of camps for adolescents living with HIV/AIDS provided a good example of Yin's case study approach. The context of the case was bounded by three summer camps with which the researchers had prior professional involvement. A case study protocol was developed that used multiple methods to gather information at three data collection points coinciding with three youth camps (Teen Forum, Discover Camp, and Camp Strong). Gillard and colleagues followed Yin's (2009) principles, using a consistent data protocol that enhanced cross-case analysis. The data described the young people, the camp physical environment, the camp schedule, objectives and outcomes, and the staff of the three youth camps. The findings provided a detailed description of the context, with less detail on individual participants, and included insight into the researchers' interpretations and methodological decisions throughout the data collection and analysis process. The findings gave the reader a sense of “being there,” and were discovered through constant comparison of the case with the research issues; the case was the unit of analysis. There was evidence of researcher immersion in the case, and Gillard reported spending significant time in the field in a naturalistic and integrated youth-mentor role.
This case study was not intended to have a significant impact on broader health policy, although it does have implications for health professionals working with adolescents. The study conclusions will inform future camps for young people with chronic disease, and practitioners are able to compare similarities between this case and their own practice (for knowledge translation). No limitations of the study were reported in the article. A limitation related to the publication of this case study was its length: 20 pages and three tables were needed to provide sufficient description of the camp and program components and their relationship to the research issue.
Researcher and case interactions and triangulation
Researcher and case interactions and transactions are a defining feature of case study methodology (Stake, 1995). Narrative stories, vignettes, and thick description are used to provoke vicarious experience and a sense of being there with the researcher in their interaction with the case. Few of the case studies reviewed provided details of the researcher's relationship with the case, of researcher–case interactions, and of how these influenced the development of the case study (Buzzanell & D'Enbeau, 2009; D'Enbeau et al., 2010; Gallagher et al., 2013; Gillard et al., 2011; Ledderer, 2011; Nagar-Ron & Motzafi-Haller, 2011). The role and position of the researcher need to be self-examined and understood by readers, in order to understand how they influenced interactions with participants and to determine what triangulation is needed (Merriam, 2009; Stake, 1995).
Gillard et al. (2011) provided a good example of triangulation, comparing data sources in a table (p. 1513). Triangulation of sources was used to reveal as much depth as possible in the study by Nagar-Ron and Motzafi-Haller (2011), while also enhancing confirmation validity. Several other case studies would have benefited from an improved range and use of data sources, and from descriptions of researcher–case interactions (Ajodhia-Andrews & Berman, 2009; Bronken et al., 2012; Fincham, Scourfield, & Langer, 2008; Fourie & Theron, 2012; Hooghe et al., 2012; Snyder-Young, 2011; Yeh, 2013).
Study design inconsistent with methodology
Good, rigorous case studies require a strong methodological justification (Meyer, 2001) and a logical and coherent argument that defines the paradigm, methodological position, and selection of study methods (Denzin & Lincoln, 2011b). Methodological justification was insufficient in several of the studies reviewed (Barone, 2010; Bronken et al., 2012; Hooghe et al., 2012; Mawn et al., 2010; Roscigno et al., 2012; Yeh, 2013); this was judged by the absence of, or inadequate or inconsistent in-text reference to, case study methodology.
In six studies, the methodological justification provided did not relate to case study, and common issues were identified. Secondary sources were used as primary methodological references, indicating that the study design might not have been theoretically sound (Colón-Emeric et al., 2010; Coltart & Henwood, 2012; Roscigno et al., 2012; Snyder-Young, 2011). Authors and sources cited in methodological descriptions were inconsistent with the actual study design and practices used (Fourie & Theron, 2012; Hooghe et al., 2012; Jorrín-Abellán et al., 2008; Mawn et al., 2010; Rytterström et al., 2013; Wimpenny & Savin-Baden, 2012). This occurred when researchers cited Stake or Yin, or both (Mawn et al., 2010; Rytterström et al., 2013), but did not follow their paradigmatic or methodological approach. In 26 studies, there were no citations for a case study methodological approach.
The findings of this study highlight a number of issues for researchers. A considerable number of the case studies reviewed were missing key elements that define qualitative case study methodology and the tradition cited. A significant number of studies did not provide a clear methodological description or justification relevant to case study. Case studies in the health and social sciences did not provide sufficient information for the reader to understand case selection, and why the case was chosen above others. The contexts of the cases were not described in adequate detail to understand all relevant elements of the case context, which indicated that cases may not have been contextually bounded. There were inconsistencies between the reported methodology, study design, and paradigmatic approach in the case studies reviewed, which made it difficult to understand the study methodology and theoretical foundations. These issues have implications for methodological integrity and honesty when reporting study design, which are values of the qualitative research tradition and are ethical requirements (Wager & Kleinert, 2010a). Poor methodological descriptions may lead the reader to misinterpret or discredit study findings, which limits the impact of a study and, collectively, hinders advancement of the broader qualitative research field.
The issues highlighted in our review build on current debates in the case study literature, and on queries about the value of this methodology. Case study research can be situated within different paradigms or designed with an array of methods. In order to maintain the creativity and flexibility that are valued in this methodology, clearer descriptions of paradigm, theoretical position, and methods should be provided so that study findings are not undervalued or discredited. Case study research is an interdisciplinary practice, which means that clear methodological descriptions might be more important for this approach than for methodologies that are predominantly driven by fewer disciplines (Creswell, 2013b).
Authors frequently omit elements of methodologies and include others to strengthen study design, and we do not propose a rigid or purist ideology in this paper. On the contrary, we encourage new ideas about using case study, together with adequate reporting, which will advance the value and practice of case study. The implication of unclear methodological descriptions in the studies reviewed was that study design appeared to be inconsistent with the reported methodology, and key elements required for making judgements of rigour were missing. It was not clear whether the deviations from methodological tradition were made by researchers to strengthen the study design, or because of misinterpretation. Morse (2011) recommended that innovations and deviations from practice are best made by experienced researchers, as a novice might be unaware of the issues involved in making these changes. To perpetuate the tradition of case study research, applications in the published literature should be consistent with traditional methodological constructions, and deviations should be described with a rationale that is inherent in the study conduct and findings. Providing methodological descriptions that demonstrate a strong theoretical foundation and a coherent study design will add credibility to a study, while ensuring that the intrinsic meaning of case study is maintained.
The value of this review is that it contributes to the discussion of whether case study is a methodology or a method, and we propose possible reasons why researchers might make this misinterpretation. Researchers may use the terms methods and methodology interchangeably, and conduct research without adequate attention to epistemology and historical tradition (Carter & Little, 2007; Sandelowski, 2010). If the rich meaning that naming a qualitative methodology brings to a study is not recognized, a case study might appear to be inconsistent with the traditional approaches described by principal authors (Creswell, 2013a; Merriam, 2009; Stake, 1995; Yin, 2009). If case studies are not methodologically and theoretically situated, they might appear to be case reports.
Case reports are promoted by universities and medical journals as a means of reporting on medical or scientific cases, and guidelines for case reports are publicly available online ( http://www.hopkinsmedicine.org/institutional_review_board/guidelines_policies/guidelines/case_report.html ). The various case report guidelines provide general criteria for case reports, describing a form of report that does not meet the criteria of research, is used for retrospective analysis of up to three clinical cases, and is primarily illustrative and educational in purpose. Case reports can be published in academic journals, but do not require approval from a human research ethics committee. Traditionally, case reports describe a single case to explain how and what occurred in a selected setting, for example, to illustrate a new phenomenon that has emerged from a larger study. A case report is not necessarily particular, nor the study of a case in its entirety, and the larger study would usually be guided by a different research methodology.
This description of a case report is similar to what was provided in some of the studies reviewed. This form of report lacks methodological grounding and the qualities of research rigour. The case report has publication value in demonstrating an example and in disseminating knowledge (Flanagan, 1999). However, case reports have a different meaning and purpose from case study, and the two need to be distinguished. The findings of our review suggest that the medical understanding of a case report has been confused with qualitative case study approaches.
In this review, a number of case studies did not have methodological descriptions that included the key characteristics of case study listed in the adapted criteria, and several issues have been discussed. There have been calls for improvements in the publication quality of qualitative research (Morse, 2011), and for improvements in the peer review of submitted manuscripts (Carter & Little, 2007; Jasper, Vaismoradi, Bondas, & Turunen, 2013). The challenging nature of editors' and reviewers' responsibilities is acknowledged in the literature (Hames, 2013; Wager & Kleinert, 2010b); however, the review of case study methodology should be prioritized because of disputes over its methodological value.
Authors using case study approaches are recommended to describe their theoretical framework and methods clearly, and to seek and follow specialist methodological advice when needed (Wager & Kleinert, 2010a). Adequate page space for case study description would contribute to better publications (Gillard et al., 2011), and capitalizing on journals' ability to publish complementary resources should also be considered.
Limitations of the review
There is a level of subjectivity involved in this type of review, and this should be considered when interpreting the study findings. Qualitative methods journals were selected because the aims and scope of these journals are to publish studies that contribute to methodological discussion and the development of qualitative research. Generalist health and social science journals, which might have contained good-quality case studies, were excluded. Journals in business and education were also excluded, although a review of case studies in international business journals has been published elsewhere (Piekkari et al., 2009).
The criteria used to assess the quality of the case studies were a set of qualitative indicators; a numerical or ranking system might have produced different results. Stake's (1995) criteria have been referenced elsewhere and were deemed the best available (Creswell, 2013b; Crowe et al., 2011). Not all qualitative studies are reported in a consistent way, and some authors choose to report findings in a narrative form rather than in a typical biomedical report style (Sandelowski & Barroso, 2002); if misinterpretations were made as a result, this may have affected the review.
Case study research is an increasingly popular approach among qualitative researchers, providing methodological flexibility through the incorporation of different paradigmatic positions, study designs, and methods. However, while this flexibility can be an advantage, the myriad of resulting interpretations has led critics to question the use of case study as a methodology. Using an adaptation of established criteria, we aimed to identify and assess the methodological descriptions of case studies in high-impact qualitative methods journals. Few of the articles identified applied the qualitative case study approaches described by experts in case study design. There were inconsistencies in methodology and study design, which indicated that researchers were confused about whether case study is a methodology or a method. Commonly, there appeared to be confusion between case studies and case reports. Without clear understanding and application of the principles and key elements of case study methodology, there is a risk that the flexibility of the approach will result in haphazard reporting, and will limit its global application as a valuable, theoretically supported methodology that can be rigorously applied across disciplines and fields.
Conflict of interest and funding
The authors have not received any funding or benefits from industry or elsewhere to conduct this study.
References
- Adamson S, Holloway M. Negotiating sensitivities and grappling with intangibles: Experiences from a study of spirituality and funerals. Qualitative Research. 2012;12(6):735–752. doi: 10.1177/1468794112439008.
- Ajodhia-Andrews A, Berman R. Exploring school life from the lens of a child who does not use speech to communicate. Qualitative Inquiry. 2009;15(5):931–951. doi: 10.1177/1077800408322789.
- Alexander B. K, Moreira C, Kumar H. S. Resisting (resistance) stories: A tri-autoethnographic exploration of father narratives across shades of difference. Qualitative Inquiry. 2012;18(2):121–133. doi: 10.1177/1077800411429087.
- Austin W, Park C, Goble E. From interdisciplinary to transdisciplinary research: A case study. Qualitative Health Research. 2008;18(4):557–564. doi: 10.1177/1049732307308514.
- Ayres L, Kavanaugh K, Knafl K. A. Within-case and across-case approaches to qualitative data analysis. Qualitative Health Research. 2003;13(6):871–883. doi: 10.1177/1049732303013006008.
- Barone T. L. Culturally sensitive care 1969–2000: The Indian Chicano Health Center. Qualitative Health Research. 2010;20(4):453–464. doi: 10.1177/1049732310361893.
- Bassey M. A solution to the problem of generalisation in educational research: Fuzzy prediction. Oxford Review of Education. 2001;27(1):5–22. doi: 10.1080/03054980123773.
- Bronken B. A, Kirkevold M, Martinsen R, Kvigne K. The aphasic storyteller: Coconstructing stories to promote psychosocial well-being after stroke. Qualitative Health Research. 2012;22(10):1303–1316. doi: 10.1177/1049732312450366.
- Broyles L. M, Rodriguez K. L, Price P. A, Bayliss N. K, Sevick M. A. Overcoming barriers to the recruitment of nurses as participants in health care research. Qualitative Health Research. 2011;21(12):1705–1718. doi: 10.1177/1049732311417727.
- Buckley C. A, Waring M. J. Using diagrams to support the research process: Examples from grounded theory. Qualitative Research. 2013;13(2):148–172. doi: 10.1177/1468794112472280.
- Buzzanell P. M, D'Enbeau S. Stories of caregiving: Intersections of academic research and women's everyday experiences. Qualitative Inquiry. 2009;15(7):1199–1224. doi: 10.1177/1077800409338025.
- Carter S. M, Little M. Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research. 2007;17(10):1316–1328. doi: 10.1177/1049732307306927.
- Cheek J, Garnham B, Quan J. What's in a number? Issues in providing evidence of impact and quality of research(ers). Qualitative Health Research. 2006;16(3):423–435. doi: 10.1177/1049732305285701.
- Colón-Emeric C. S, Plowman D, Bailey D, Corazzini K, Utley-Smith Q, Ammarell N, et al. Regulation and mindful resident care in nursing homes. Qualitative Health Research. 2010;20(9):1283–1294. doi: 10.1177/1049732310369337.
- Coltart C, Henwood K. On paternal subjectivity: A qualitative longitudinal and psychosocial case analysis of men's classed positions and transitions to first-time fatherhood. Qualitative Research. 2012;12(1):35–52. doi: 10.1177/1468794111426224.
- Creswell J. W. Five qualitative approaches to inquiry. In: Creswell J. W, editor. Qualitative inquiry and research design: Choosing among five approaches. 3rd ed. Thousand Oaks, CA: Sage; 2013a. pp. 53–84.
- Creswell J. W. Qualitative inquiry and research design: Choosing among five approaches. 3rd ed. Thousand Oaks, CA: Sage; 2013b.
- Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Medical Research Methodology. 2011;11(1):1–9. doi: 10.1186/1471-2288-11-100.
- Cunsolo Willox A, Harper S. L, Edge V. L, ‘My Word’: Storytelling and Digital Media Lab, & Rigolet Inuit Community Government. Storytelling in a digital age: Digital storytelling as an emerging narrative method for preserving and promoting indigenous oral wisdom. Qualitative Research. 2013;13(2):127–147. doi: 10.1177/1468794112446105.
- De Haene L, Grietens H, Verschueren K. Holding harm: Narrative methods in mental health research on refugee trauma. Qualitative Health Research. 2010;20(12):1664–1676. doi: 10.1177/1049732310376521.
- D'Enbeau S, Buzzanell P. M, Duckworth J. Problematizing classed identities in fatherhood: Development of integrative case studies for analysis and praxis. Qualitative Inquiry. 2010;16(9):709–720. doi: 10.1177/1077800410374183.
- Denzin N. K, Lincoln Y. S. Introduction: Disciplining the practice of qualitative research. In: Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011a. pp. 1–6.
- Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011b.
- Edwards R, Weller S. Shifting analytic ontology: Using I-poems in qualitative longitudinal research. Qualitative Research. 2012;12(2):202–217. doi: 10.1177/1468794111422040.
- Eisenhardt K. M. Building theories from case study research. The Academy of Management Review. 1989;14(4):532–550. doi: 10.2307/258557.
- Fincham B, Scourfield J, Langer S. The impact of working with disturbing secondary data: Reading suicide files in a coroner's office. Qualitative Health Research. 2008;18(6):853–862. doi: 10.1177/1049732307308945.
- Flanagan J. Public participation in the design of educational programmes for cancer nurses: A case report. European Journal of Cancer Care. 1999;8(2):107–112. doi: 10.1046/j.1365-2354.1999.00141.x.
- Flyvbjerg B. Five misunderstandings about case-study research. Qualitative Inquiry. 2006;12(2):219–245. doi: 10.1177/1077800405284363.
- Flyvbjerg B. Case study. In: Denzin N. K, Lincoln Y. S, editors. The SAGE handbook of qualitative research. 4th ed. Thousand Oaks, CA: Sage; 2011. pp. 301–316.
- Fourie C. L, Theron L. C. Resilience in the face of fragile X syndrome. Qualitative Health Research. 2012;22(10):1355–1368. doi: 10.1177/1049732312451871.
- Gallagher N, MacFarlane A, Murphy A. W, Freeman G. K, Glynn L. G, Bradley C. P. Service users' and caregivers' perspectives on continuity of care in out-of-hours primary care. Qualitative Health Research. 2013;23(3):407–421. doi: 10.1177/1049732312470521.
- Gerring J. What is a case study and what is it good for? American Political Science Review. 2004;98(2):341–354. doi: 10.1017/S0003055404001182.
- Gillard A, Witt P. A, Watts C. E. Outcomes and processes at a camp for youth with HIV/AIDS. Qualitative Health Research. 2011;21(11):1508–1526. doi: 10.1177/1049732311413907.
- Grant M, Booth A. A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal. 2009;26:91–108. doi: 10.1111/j.1471-1842.2009.00848.x.
- Gratton M.-F, O'Donnell S. Communication technologies for focus groups with remote communities: A case study of research with First Nations in Canada. Qualitative Research. 2011;11(2):159–175. doi: 10.1177/1468794110394068.
- Hallberg L. Quality criteria and generalization of results from qualitative studies. International Journal of Qualitative Studies on Health and Well-being. 2013;8:1. doi: 10.3402/qhw.v8i0.20647.
- Hames I. COPE ethical guidelines for peer reviewers. Committee on Publication Ethics; 2013, March. Retrieved April 7, 2013, from http://publicationethics.org/resources/guidelines .
- Hooghe A, Neimeyer R. A, Rober P. “Cycling around an emotional core of sadness”: Emotion regulation in a couple after the loss of a child. Qualitative Health Research. 2012;22(9):1220–1231. doi: 10.1177/1049732312449209.
- Jackson C. B, Botelho E. M, Welch L. C, Joseph J, Tennstedt S. L. Talking with others about stigmatized health conditions: Implications for managing symptoms. Qualitative Health Research. 2012;22(11):1468–1475. doi: 10.1177/1049732312450323.
- Jasper M, Vaismoradi M, Bondas T, Turunen H. Validity and reliability of the scientific review process in nursing journals—time for a rethink? Nursing Inquiry. 2013. doi: 10.1111/nin.12030.
- Jensen J. L, Rodgers R. Cumulating the intellectual gold of case study research. Public Administration Review. 2001;61(2):235–246. doi: 10.1111/0033-3352.00025.
- Jorrín-Abellán I. M, Rubia-Avi B, Anguita-Martínez R, Gómez-Sánchez E, Martínez-Mones A. Bouncing between the dark and bright sides: Can technology help qualitative research? Qualitative Inquiry. 2008;14(7):1187–1204. doi: 10.1177/1077800408318435.
- Ledderer L. Understanding change in medical practice: The role of shared meaning in preventive treatment. Qualitative Health Research. 2011;21(1):27–40. doi: 10.1177/1049732310377451.
- Lincoln Y. S. Emerging criteria for quality in qualitative and interpretive research. Qualitative Inquiry. 1995;1(3):275–289. doi: 10.1177/107780049500100301.
- Luck L, Jackson D, Usher K. Case study: A bridge across the paradigms. Nursing Inquiry. 2006;13(2):103–109. doi: 10.1111/j.1440-1800.2006.00309.x.
- Mawn B, Siqueira E, Koren A, Slatin C, Devereaux Melillo K, Pearce C, et al. Health disparities among health care workers. Qualitative Health Research. 2010;20(1):68–80. doi: 10.1177/1049732309355590.
- Merriam S. B. Qualitative research: A guide to design and implementation. 3rd ed. San Francisco, CA: Jossey-Bass; 2009.
- Meyer C. B. A case in case study methodology. Field Methods. 2001;13(4):329–352. doi: 10.1177/1525822x0101300402.
- Morse J. M. Mixing qualitative methods. Qualitative Health Research. 2009;19(11):1523–1524. doi: 10.1177/1049732309349360.
- Morse J. M. Molding qualitative health research. Qualitative Health Research. 2011;21(8):1019–1021. doi: 10.1177/1049732311404706.
- Morse J. M, Dimitroff L. J, Harper R, Koontz A, Kumra S, Matthew-Maich N, et al. Considering the qualitative–quantitative language divide. Qualitative Health Research. 2011;21(9):1302–1303. doi: 10.1177/1049732310392386.
- Nagar-Ron S, Motzafi-Haller P. “My life? There is not much to tell”: On voice, silence and agency in interviews with first-generation Mizrahi Jewish women immigrants to Israel. Qualitative Inquiry. 2011;17(7):653–663. doi: 10.1177/1077800411414007.
- Nairn K, Panelli R. Using fiction to make meaning in research with young people in rural New Zealand. Qualitative Inquiry. 2009;15(1):96–112. doi: 10.1177/1077800408318314.
- Nespor J. The afterlife of “teachers' beliefs”: Qualitative methodology and the textline. Qualitative Inquiry. 2012;18(5):449–460. doi: 10.1177/1077800412439530.
- Piekkari R, Welch C, Paavilainen E. The case study as disciplinary convention: Evidence from international business journals. Organizational Research Methods. 2009;12(3):567–589. doi: 10.1177/1094428108319905.
- Ragin C. C, Becker H. S. What is a case?: Exploring the foundations of social inquiry. Cambridge: Cambridge University Press; 1992.
- Roscigno C. I, Savage T. A, Kavanaugh K, Moro T. T, Kilpatrick S. J, Strassner H. T, et al. Divergent views of hope influencing communications between parents and hospital providers. Qualitative Health Research. 2012;22(9):1232–1246. doi: 10.1177/1049732312449210.
- Rosenberg J. P, Yates P. M. Schematic representation of case study research designs. Journal of Advanced Nursing. 2007;60(4):447–452. doi: 10.1111/j.1365-2648.2007.04385.x.
- Rytterström P, Unosson M, Arman M. Care culture as a meaning-making process: A study of a mistreatment investigation. Qualitative Health Research. 2013;23:1179–1187. doi: 10.1177/1049732312470760.
- Sandelowski M. Whatever happened to qualitative description? Research in Nursing & Health. 2000;23(4):334–340. doi: 10.1002/1098-240X.
- Sandelowski M. What's in a name? Qualitative description revisited. Research in Nursing & Health. 2010;33(1):77–84. doi: 10.1002/nur.20362.
- Sandelowski M, Barroso J. Reading qualitative studies. International Journal of Qualitative Methods. 2002;1(1):74–108.
- Snyder-Young D. “Here to tell her story”: Analyzing the autoethnographic performances of others. Qualitative Inquiry. 2011;17(10):943–951. doi: 10.1177/1077800411425149.
- Stake R. E. The case study method in social inquiry. Educational Researcher. 1978;7(2):5–8.
- Stake R. E. The art of case study research. Thousand Oaks, CA: Sage; 1995.
- Stake R. E. Case studies. In: Denzin N. K, Lincoln Y. S, editors. Strategies of qualitative inquiry. Thousand Oaks, CA: Sage; 1998. pp. 86–109.
- Sumsion J. Opening up possibilities through team research: Investigating infants' experiences of early childhood education and care. Qualitative Research. 2013;14(2):149–165. doi: 10.1177/1468794112468471.
- Thomas G. Doing case study: Abduction not induction, phronesis not theory. Qualitative Inquiry. 2010;16(7):575–582. doi: 10.1177/1077800410372601.
- Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qualitative Inquiry. 2011;17(6):511–521. doi: 10.1177/1077800411409884.
- Tight M. The curious case of case study: A viewpoint. International Journal of Social Research Methodology. 2010;13(4):329–339. doi: 10.1080/13645570903187181.
- Wager E, Kleinert S. Responsible research publication: International standards for authors. A position statement developed at the 2nd World Conference on Research Integrity, Singapore, July 22–24, 2010. In: Mayer T, Steneck N, editors. Promoting research integrity in a global environment. Singapore: Imperial College Press/World Scientific; 2010a. pp. 309–316.
- Wager E, Kleinert S. Responsible research publication: International standards for editors. A position statement developed at the 2nd World Conference on Research Integrity, Singapore, July 22–24, 2010. In: Mayer T, Steneck N, editors. Promoting research integrity in a global environment. Singapore: Imperial College Press/World Scientific; 2010b. pp. 317–328.
- Webb C, Kevern J. Focus groups as a research method: A critique of some aspects of their use in nursing research. Journal of Advanced Nursing. 2000;33(6):798–805. doi: 10.1046/j.1365-2648.2001.01720.x.
- Wimpenny K, Savin-Baden M. Exploring and implementing participatory action synthesis. Qualitative Inquiry. 2012;18(8):689–698. doi: 10.1177/1077800412452854.
- Yeh H.-Y. Boundaries, entities, and modern vegetarianism: Examining the emergence of the first vegetarian organization. Qualitative Inquiry. 2013;19(4):298–309. doi: 10.1177/1077800412471516.
- Yin R. K. Enhancing the quality of case studies in health services research. Health Services Research. 1999;34(5 Pt 2):1209–1224.
- Yin R. K. Case study research: Design and methods. 4th ed. Thousand Oaks, CA: Sage; 2009.
- Yin R. K. Applications of case study research. 3rd ed. Thousand Oaks, CA: Sage; 2012.