Qualitative Research: Characteristics, Design, Methods & Examples

Lauren McCall

MSc Health Psychology Graduate

MSc, Health Psychology, University of Nottingham

Lauren obtained an MSc in Health Psychology from The University of Nottingham with a distinction classification.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


“Not everything that can be counted counts, and not everything that counts can be counted” (attributed to Albert Einstein)

Qualitative research is a process used for the systematic collection, analysis, and interpretation of non-numerical data (Punch, 2013). 

Qualitative research can be used to: (i) gain deep contextual understandings of the subjective social reality of individuals and (ii) to answer questions about experience and meaning from the participant’s perspective (Hammarberg et al., 2016).

Unlike quantitative research, which focuses on gathering and analyzing numerical data for statistical analysis, qualitative research focuses on thematic and contextual information.

Characteristics of Qualitative Research 

Reality is socially constructed.

Qualitative research aims to understand how participants make meaning of their experiences – individually or in social contexts. It assumes there is no single objective reality and that the social world is interpreted (Yilmaz, 2013).

The primacy of subject matter 

The primary aim of qualitative research is to understand the perspectives, experiences, and beliefs of individuals who have experienced the phenomenon selected for research rather than the average experiences of groups of people (Minichiello, 1990).

Variables are complex, interwoven, and difficult to measure

Factors such as experiences, behaviors, and attitudes are complex and interwoven, so they cannot be reduced to isolated variables, making them difficult to measure quantitatively.

However, a qualitative approach enables participants to describe what, why, or how they were thinking and feeling during the phenomenon being studied (Yilmaz, 2013).

Emic (insider’s point of view)

The phenomenon being studied is centered on the participants’ point of view (Minichiello, 1990).

Emic is used to describe how participants interact, communicate, and behave in the context of the research setting (Scarduzio, 2017).

Why Conduct Qualitative Research? 

In order to gain a deeper understanding of how people experience the world, individuals are studied in their natural setting. This enables the researcher to understand a phenomenon close to how participants experience it. 

Qualitative research allows researchers to gain an in-depth understanding, which is difficult to attain using quantitative methods. 

An in-depth understanding is attained since qualitative techniques allow participants to freely disclose their experiences, thoughts, and feelings without constraint (Tenny et al., 2022). 

This helps to further investigate and understand quantitative data by discovering reasons for the outcome of a study – answering the why question behind statistics. 

The exploratory nature of qualitative research helps to generate hypotheses that can then be tested quantitatively (Busetto et al., 2020).

Before hypotheses can be designed, qualitative methods can be used to explore a topic and identify what is important to those involved.

For example, researchers may conduct interviews or focus groups with key stakeholders to discover what matters to them.

Examples of qualitative research questions include: 

  • How does stress influence young adults’ behavior?
  • What factors influence students’ school attendance rates in developed countries?
  • How do adults interpret binge drinking in the UK?
  • What are the psychological impacts of cervical cancer screening in women?
  • How can mental health lessons be integrated into the school curriculum? 

Collecting Qualitative Data

There are four main research design methods used to collect qualitative data: observations, interviews, focus groups, and ethnography.

Observations

This method involves watching and recording phenomena as they occur in nature. Observation can be divided into two types: participant and non-participant observation.

In participant observation, the researcher actively participates in the situation/events being observed.

In non-participant observation, the researcher is not an active part of the observation and tries not to influence the behaviors they are observing (Busetto et al., 2020). 

Observations can be covert (participants are unaware that a researcher is observing them) or overt (participants are aware of the researcher’s presence and know they are being observed).

However, awareness of an observer’s presence may influence participants’ behavior. 

Interviews

Interviews give researchers a window into the world of a participant by seeking their account of an event, situation, or phenomenon. They are usually conducted on a one-to-one basis and can be distinguished according to the level at which they are structured (Punch, 2013). 

Structured interviews involve predetermined questions and sequences to ensure replicability and comparability. However, they are unable to explore emerging issues.

Informal interviews consist of spontaneous, casual conversations, which may come closer to how participants talk about a phenomenon in everyday life. However, information is gathered through quick notes made by the researcher and is therefore subject to recall bias.

Semi-structured interviews have a flexible structure, phrasing, and placement so emerging issues can be explored (Denny & Weckesser, 2022).

The use of probing questions and clarification can lead to a detailed understanding, but semi-structured interviews can be time-consuming and subject to interviewer bias. 

Focus groups 

Similar to interviews, focus groups elicit a rich and detailed account of an experience. However, focus groups are more dynamic since participants with shared characteristics construct this account together (Denny & Weckesser, 2022).

A shared narrative is built between participants to capture a group experience shaped by a shared context. 

The researcher takes on the role of a moderator, who will establish ground rules and guide the discussion by following a topic guide to focus the group discussions.

Typically, focus groups have 4-10 participants as a discussion can be difficult to facilitate with more than this, and this number allows everyone the time to speak.

Ethnography

Ethnography is a methodology used to study a group of people’s behaviors and social interactions in their environment (Reeves et al., 2008).

Data are collected using methods such as observations, field notes, and structured or unstructured interviews.

The aim of ethnography is to provide detailed, holistic insights into people’s behavior and perspectives within their natural setting. In order to achieve this, researchers immerse themselves in a community or organization. 

Due to the flexibility and real-world focus of ethnography, researchers are able to gather an in-depth, nuanced understanding of people’s experiences, knowledge and perspectives that are influenced by culture and society.

To develop a representative picture of a particular culture or context, researchers must conduct extensive fieldwork.

This can be time-consuming, as researchers may need to immerse themselves in a community or culture for anywhere from a few days to several years.

Qualitative Data Analysis Methods

Different methods can be used for analyzing qualitative data. The researcher chooses based on the objectives of their study. 

The researcher plays a key role in the interpretation of data, making decisions about the coding, theming, decontextualizing, and recontextualizing of data (Starks & Trinidad, 2007). 

Grounded theory

Grounded theory is a qualitative method specifically designed to inductively generate theory from data. It was developed by Glaser and Strauss in 1967 (Glaser & Strauss, 2017).

This methodology aims to develop theories (rather than test hypotheses) that explain a social process, action, or interaction (Petty et al., 2012). To inform the developing theory, data collection and analysis run simultaneously.

There are three key types of coding used in grounded theory: initial (open), intermediate (axial), and advanced (selective) coding. 

Throughout the analysis, memos should be created to document methodological and theoretical ideas about the data. Data should be collected and analyzed until data saturation is reached and a theory is developed. 

Content analysis

Content analysis was first used in the early twentieth century to analyze textual materials such as newspapers and political speeches.

Content analysis is a research method used to identify and analyze the presence and patterns of themes, concepts, or words in data (Vaismoradi et al., 2013). 

This research method can be used to analyze data in different formats, which can be written, oral, or visual. 

The goal of content analysis is to develop themes that capture the underlying meanings of data (Schreier, 2012). 

Qualitative content analysis can be used to validate existing theories, support the development of new models and theories, and provide in-depth descriptions of particular settings or experiences.

The following six steps provide a guideline for how to conduct qualitative content analysis.
  • Define a Research Question: To start content analysis, a clear research question should be developed.
  • Identify and Collect Data: Establish the inclusion criteria for your data, then find the relevant sources to analyze.
  • Define the Unit or Theme of Analysis: Decide what will be categorized. Units of analysis can be a word, phrase, or sentence.
  • Develop Rules for Coding Your Data: Define a set of coding rules to ensure that all data are coded consistently.
  • Code the Data: Follow the coding rules to categorize data into themes.
  • Analyze the Results and Draw Conclusions: Examine the coded data to identify patterns and draw conclusions in relation to your research question.
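For illustration, the coding and analysis steps above can be sketched in Python. Everything here is invented for the example: the interview excerpts, the keyword-based coding rules, and the category names. Real content analysis relies on researcher judgment rather than simple keyword matching; this is only a minimal sketch of the workflow.

```python
from collections import Counter

# Step 4 (coding rules): each category is triggered by a set of keywords.
coding_rules = {
    "stress": {"stressed", "pressure", "overwhelmed"},
    "support": {"friends", "family", "helped"},
}

# Hypothetical interview excerpts.
excerpts = [
    "I felt overwhelmed by the pressure of exams.",
    "My friends and family helped me cope.",
    "The pressure never really went away.",
]

# Step 5 (code the data): apply the rules consistently to every excerpt.
def code_excerpt(text, rules):
    words = set(text.lower().replace(".", "").replace(",", "").split())
    return [category for category, keywords in rules.items() if words & keywords]

coded = {text: code_excerpt(text, coding_rules) for text in excerpts}

# Step 6 (analyze): count how often each category appears across the data.
counts = Counter(cat for cats in coded.values() for cat in cats)
print(counts)  # Counter({'stress': 2, 'support': 1})
```

In practice, a qualitative researcher would code for meaning rather than exact words, but the same record-keeping logic (consistent rules, coded segments, category counts) applies.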

Discourse analysis

Discourse analysis is a research method used to study written or spoken language in relation to its social context (Wood & Kroger, 2000).

In discourse analysis, the researcher interprets the details of the language material and the context in which it is situated.

Discourse analysis aims to understand the functions of language (how language is used in real life) and how meaning is conveyed by language in different contexts. Researchers use discourse analysis to investigate social groups and how language is used to achieve specific communication goals.

Different methods of discourse analysis can be used depending on the aims and objectives of a study. However, the following steps provide a guideline on how to conduct discourse analysis.
  • Define the Research Question: Develop a relevant research question to frame the analysis.
  • Gather Data and Establish the Context: Collect research materials (e.g., interview transcripts, documents). Gather factual details and review the literature to construct a theory about the social and historical context of your study.
  • Analyze the Content: Closely examine various components of the text, such as the vocabulary, sentences, paragraphs, and structure of the text. Identify patterns relevant to the research question to create codes, then group these into themes.
  • Review the Results: Reflect on the findings to examine the function of the language, and the meaning and context of the discourse.

Thematic analysis

Thematic analysis is a method used to identify, interpret, and report patterns in data, such as commonalities or contrasts. 

Although the origins of thematic analysis can be traced back to the early twentieth century, its contemporary formulation is largely attributed to Braun and Clarke (2006).

Thematic analysis aims to develop themes (patterns of meaning) across a dataset to address a research question. 

In thematic analysis, qualitative data are gathered using techniques such as interviews, focus groups, and questionnaires. Audio recordings are transcribed, and the dataset is then explored and interpreted by the researcher to identify patterns.

This occurs through a rigorous process of data familiarization, coding, theme development, and revision. The identified patterns provide a summary of the dataset and can be used to address a research question.

Themes are developed by exploring the implicit and explicit meanings within the data. Two different approaches are used to generate themes: inductive and deductive. 

An inductive approach allows themes to emerge from the data. In contrast, a deductive approach uses existing theories or knowledge to apply preconceived ideas to the data.

Phases of Thematic Analysis

Braun and Clarke (2006) outline six phases of thematic analysis: (1) familiarizing yourself with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report. These phases can be applied flexibly to fit the research question and data.

Template analysis

Template analysis refers to a specific method of thematic analysis which uses hierarchical coding (Brooks et al., 2014).

Template analysis is used to analyze textual data, for example, interview transcripts or open-ended responses on a written questionnaire.

To conduct template analysis, a coding template must be developed (usually from a subset of the data) and subsequently revised and refined. This template represents the themes identified by researchers as important in the dataset. 

Codes are ordered hierarchically within the template, with the highest-level codes demonstrating overarching themes in the data and lower-level codes representing constituent themes with a narrower focus.

A guideline for the main procedural steps for conducting template analysis is outlined below.
  • Familiarization with the Data: Read (and reread) the dataset in full. Engage, reflect, and take notes on data that may be relevant to the research question.
  • Preliminary Coding: Identify initial codes, guided by any a priori codes identified before the analysis as likely to be relevant.
  • Organize Themes: Organize themes into meaningful clusters. Consider the relationships between the themes both within and between clusters.
  • Produce an Initial Template: Develop an initial template. This may be based on a subset of the data.
  • Apply and Develop the Template: Apply the initial template to further data and make any necessary modifications. Refinements may include adding themes, removing themes, or changing the scope or title of themes.
  • Finalize the Template: Finalize the template, then apply it to the entire dataset.
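As a rough illustration, a hierarchical coding template can be represented in Python as a nested structure, with overarching themes at the top level and narrower constituent codes beneath them. The themes and codes below are invented for a hypothetical study on workplace wellbeing.

```python
# Top-level keys are overarching themes; the lists hold lower-level codes.
initial_template = {
    "work demands": ["workload", "time pressure"],
    "coping": ["social support"],
}

def add_code(template, theme, code):
    """Refine the template by adding a lower-level code under a theme."""
    template.setdefault(theme, [])
    if code not in template[theme]:
        template[theme].append(code)

# Applying the template to further data reveals a new constituent code...
add_code(initial_template, "coping", "exercise")
# ...and a new overarching theme not present in the initial template.
add_code(initial_template, "organizational culture", "management style")

for theme, codes in initial_template.items():
    print(theme, "->", codes)
```

The point of the sketch is that the template is revised as it is applied to further data, rather than fixed in advance.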

Frame analysis

Frame analysis is a comparative form of thematic analysis which systematically analyzes data using a matrix output.

Ritchie and Spencer (1994) developed this set of techniques to analyze qualitative data in applied policy research. Frame analysis aims to generate theory from data.

Frame analysis encourages researchers to organize and manage their data using summarization.

This results in a flexible and unique matrix output, in which individual participants (or cases) are represented by rows and themes are represented by columns. 

Each intersecting cell is used to summarize findings relating to the corresponding participant and theme.
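The matrix output can be sketched in Python as a simple nested mapping, with participants as rows and themes as columns. The participants, themes, and cell summaries below are invented for illustration.

```python
# Columns of the matrix: the themes identified in the analysis.
themes = ["barriers", "motivation"]

# Rows of the matrix: one entry per participant, with a brief summary
# of that participant's account for each theme.
matrix = {
    "P1": {"barriers": "cost of travel", "motivation": "family encouragement"},
    "P2": {"barriers": "lack of time", "motivation": "health scare"},
}

# Print the matrix as a simple table.
print("participant | " + " | ".join(themes))
for participant, cells in matrix.items():
    print(participant + " | " + " | ".join(cells[t] for t in themes))
```

Reading down a column shows how a theme plays out across participants; reading along a row shows one participant's account across themes.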

Frame analysis has five distinct phases which are interrelated, forming a methodical and rigorous framework.
  • Familiarization with the Data: Familiarize yourself with all the transcripts. Immerse yourself in the details of each transcript and start to note recurring themes.
  • Develop a Theoretical Framework: Identify recurrent and important themes and add them to a chart, providing a framework and structure for the analysis.
  • Indexing: Apply the framework systematically to the entire study data.
  • Summarize Data in the Analytical Framework: Reduce the data into brief summaries of participants’ accounts.
  • Mapping and Interpretation: Compare themes and subthemes and check them against the original transcripts. Group the data into categories and provide an explanation for them.

Preventing Bias in Qualitative Research

To evaluate qualitative studies, the CASP (Critical Appraisal Skills Programme) checklist for qualitative studies can be used to ensure all aspects of a study have been considered (CASP, 2018).

The quality of research can be enhanced and assessed using criteria such as checklists, reflexivity, co-coding, and member-checking. 

Co-coding 

Relying on only one researcher to interpret rich and complex data may risk key insights and alternative viewpoints being missed. Therefore, coding is often performed by multiple researchers.

A common strategy must be defined at the beginning of the coding process (Busetto et al., 2020). This includes establishing a useful coding list and finding a common definition of individual codes.

Transcripts are initially coded independently by researchers and then compared and consolidated to minimize error or bias and to bring confirmation of findings. 
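As an illustration, a simple percent-agreement check between two coders can be sketched in Python. The code labels and transcript segments are invented, and real projects often use more formal reliability statistics (e.g., Cohen's kappa); this sketch only shows the comparison step.

```python
# Each list holds one coder's label for the same five transcript segments.
coder_a = ["stress", "support", "stress", "coping", "support"]
coder_b = ["stress", "support", "coping", "coping", "support"]

# Percent agreement: the share of segments where both coders chose the same code.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a) * 100
print(f"{percent_agreement:.0f}% agreement")  # 80% agreement

# Disagreements are flagged for discussion and consolidation.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("segments to discuss:", disagreements)  # [2]
```

The flagged segments would then be discussed by the coders until a shared interpretation is agreed.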

Member checking

Member checking (or respondent validation) involves checking back with participants to see if the research resonates with their experiences (Russell & Gregory, 2003).

Data can be returned to participants after data collection or when results are first available. For example, participants may be provided with their interview transcript and asked to verify whether this is a complete and accurate representation of their views.

Participants may then clarify or elaborate on their responses to ensure they align with their views (Shenton, 2004).

This feedback becomes part of data collection and ensures accurate descriptions and interpretations of phenomena (Mays & Pope, 2000).

Reflexivity in qualitative research

Reflexivity typically involves examining your own judgments, practices, and belief systems during data collection and analysis. It aims to identify any personal beliefs which may affect the research. 

Reflexivity is essential in qualitative research to ensure methodological transparency and complete reporting. This enables readers to understand how the interaction between the researcher and participant shapes the data.

Depending on the research question and the population being studied, relevant factors include the researcher’s experience, how contact with participants was established and maintained, and characteristics such as age, gender, and ethnicity.

These details are important because, in qualitative research, the researcher is a dynamic part of the research process and actively influences the outcome of the research (Boeije, 2014). 

Reflexivity Example

Who you are and your characteristics influence how you collect and analyze data. Here is an example of a reflexivity statement for research on smoking:

I am a 30-year-old white female from a middle-class background. I live in the southwest of England and have been educated to master’s level. I have been involved in two research projects on oral health. I have never smoked, but I have witnessed how smoking can cause ill health through my volunteering in a smoking cessation clinic. My research aspirations are to help develop interventions that help smokers quit.

Establishing Trustworthiness in Qualitative Research

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability.

Credibility in Qualitative Research

Credibility refers to how accurately the results represent the reality and viewpoints of the participants.

To establish credibility in research, participants’ views and the researcher’s representation of their views need to align (Tobin & Begley, 2004).

To increase the credibility of findings, researchers may use data source triangulation, investigator triangulation, peer debriefing, or member checking (Lincoln & Guba, 1985). 

Transferability in Qualitative Research

Transferability refers to how generalizable the findings are: whether the findings may be applied to another context, setting, or group (Tobin & Begley, 2004).

Transferability can be enhanced by giving thorough and in-depth descriptions of the research setting, sample, and methods (Nowell et al., 2017). 

Dependability in Qualitative Research

Dependability is the extent to which the study could be replicated under similar conditions and the findings would be consistent.

Researchers can establish dependability using methods such as audit trails so readers can see the research process is logical and traceable (Koch, 1994).

Confirmability in Qualitative Research

Confirmability is concerned with establishing a clear link between the data and the researcher’s interpretations and findings.

Researchers can achieve confirmability by demonstrating how conclusions and interpretations were arrived at (Nowell et al., 2017).

This enables readers to understand the reasoning behind the decisions made. 

Audit Trails in Qualitative Research

An audit trail provides evidence of the decisions made by the researcher regarding theory, research design, and data collection, as well as the steps they have chosen to manage, analyze, and report data. 

The researcher must provide a clear rationale to demonstrate how conclusions were reached in their study.

A clear description of the research path must be provided to enable readers to trace through the researcher’s logic (Halpren, 1983).

Researchers should maintain records of the raw data, field notes, transcripts, and a reflective journal in order to provide a clear audit trail. 

Advantages

Discovery of unexpected data

Open-ended questions in qualitative research mean the researcher can probe an interview topic and enable the participant to elaborate on responses in an unrestricted manner.

This allows unexpected data to emerge, which can lead to further research into that topic. 

Flexibility

Data collection and analysis can be modified and adapted to take the research in a different direction if new ideas or patterns emerge in the data.

This enables researchers to investigate new opportunities while firmly maintaining their research goals. 

Naturalistic settings

The behaviors of participants are recorded in real-world settings. Studies that use real-world settings have high ecological validity since participants behave more authentically. 

Limitations

Time-consuming

Qualitative research results in large amounts of data which often need to be transcribed and analyzed manually.

Even when software is used, transcription can be inaccurate, and using software for analysis can result in many codes which need to be condensed into themes. 

Subjectivity 

The researcher has an integral role in collecting and interpreting qualitative data. Therefore, the conclusions reached are from their perspective and experience.

Consequently, interpretations of data from another researcher may vary greatly. 

Limited generalizability

The aim of qualitative research is to provide a detailed, contextualized understanding of an aspect of the human experience from a relatively small sample size.

Despite rigorous analysis procedures, conclusions drawn cannot be generalized to the wider population since data may be biased or unrepresentative.

Therefore, results are only applicable to a small group of the population. 

Extraneous variables

Qualitative research is often conducted in real-world settings. This may cause results to be unreliable since extraneous variables may affect the data, for example:

  • Situational variables: Different environmental conditions may influence participants’ behavior in a study. Random variation in factors such as noise or lighting may be difficult to control in real-world settings.
  • Participant characteristics: Any characteristics that may influence how a participant answers or behaves in a study, such as mood, gender, age, ethnicity, sexual identity, or IQ.
  • Experimenter effect: A researcher’s unintentional influence can change the outcome of a study. This occurs when (i) their interactions with participants unintentionally change participants’ behaviors or (ii) errors occur in observation, interpretation, or analysis.

What sample size should qualitative research be?

A minimum of 12 participants has been recommended for qualitative studies to reach data saturation (Braun & Clarke, 2013).

Are surveys qualitative or quantitative?

Surveys can be used to gather information from a sample qualitatively or quantitatively. Qualitative surveys use open-ended questions to gather detailed information from a large sample using free text responses.

The use of open-ended questions allows for unrestricted responses where participants use their own words, enabling the collection of more in-depth information than closed-ended questions.

In contrast, quantitative surveys consist of closed-ended questions with multiple-choice answer options. Quantitative surveys are ideal to gather a statistical representation of a population.

What are the ethical considerations of qualitative research?

Before conducting a study, you must think about any risks that could occur and take steps to prevent them.

  • Participant Protection: Researchers must protect participants from physical and mental harm. This means you must not embarrass, frighten, offend, or harm participants.
  • Transparency: Researchers are obligated to clearly communicate how they will collect, store, analyze, use, and share the data.
  • Confidentiality: You need to consider how to maintain the confidentiality and anonymity of participants’ data.

What is triangulation in qualitative research?

Triangulation refers to the use of several approaches in a study to comprehensively understand phenomena. This method helps to increase the validity and credibility of research findings. 

Types of triangulation include method triangulation (using multiple methods to gather data), investigator triangulation (using multiple researchers to collect or analyze data), theory triangulation (comparing several theoretical perspectives to explain a phenomenon), and data source triangulation (using data from various times, locations, and people) (Carter et al., 2014).

Why is qualitative research important?

Qualitative research allows researchers to describe and explain the social world. The exploratory nature of qualitative research helps to generate hypotheses that can then be tested quantitatively.

In qualitative research, participants are able to express their thoughts, experiences, and feelings without constraint.

Additionally, researchers are able to follow up on participants’ answers in real-time, generating valuable discussion around a topic. This enables researchers to gain a nuanced understanding of phenomena which is difficult to attain using quantitative methods.

What is coding data in qualitative research?

Coding is a qualitative data analysis strategy in which a section of text is assigned a label that describes its content.

These labels may be words or phrases which represent important (and recurring) patterns in the data.

This process enables researchers to identify related content across the dataset. Codes can then be used to group similar types of data to generate themes.
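As a sketch, this coding-to-themes process can be represented in Python: segments are labeled with codes, and related codes are grouped into broader themes. The segments, code labels, and theme names below are invented for illustration.

```python
# Each tuple pairs a text segment with the code label a researcher assigned.
coded_segments = [
    ("I couldn't sleep before the screening.", "anxiety"),
    ("The nurse explained everything clearly.", "reassurance"),
    ("I kept imagining the worst result.", "worry"),
]

# Codes capturing related content are grouped under broader themes.
code_to_theme = {
    "anxiety": "emotional distress",
    "worry": "emotional distress",
    "reassurance": "role of staff",
}

# Collect segments under each theme.
themes = {}
for segment, code in coded_segments:
    themes.setdefault(code_to_theme[code], []).append(segment)

for theme, segments in themes.items():
    print(theme, "->", len(segments), "segment(s)")
```

In a real analysis, the code-to-theme grouping emerges from (or is revised against) the data rather than being fixed in advance as it is here.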

What is the difference between qualitative and quantitative research?

Qualitative research involves the collection and analysis of non-numerical data in order to understand experiences and meanings from the participant’s perspective.

This can provide rich, in-depth insights on complicated phenomena. Qualitative data may be collected using interviews, focus groups, or observations.

In contrast, quantitative research involves the collection and analysis of numerical data to measure the frequency, magnitude, or relationships of variables. This can provide objective and reliable evidence that can be generalized to the wider population.

Quantitative data may be collected using closed-ended questionnaires or experiments.

What is trustworthiness in qualitative research?

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability. 

Credibility refers to how accurately the results represent the reality and viewpoints of the participants. Transferability refers to whether the findings may be applied to another context, setting, or group.

Dependability is the extent to which the findings are consistent and reliable. Confirmability refers to the objectivity of findings (not influenced by the bias or assumptions of researchers).

What is data saturation in qualitative research?

Data saturation is a methodological principle used to guide the sample size of a qualitative research study.

Data saturation is proposed as a necessary methodological component in qualitative research (Saunders et al., 2018) as it is a vital criterion for discontinuing data collection and/or analysis. 

The intention of data saturation is to find “no new data, no new themes, no new coding, and ability to replicate the study” (Guest et al., 2006). At that point, enough data have been gathered to draw conclusions.
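One possible operationalization of this criterion, stopping once several consecutive interviews yield no new codes, can be sketched in Python. The interview code sets and the stopping window below are invented for illustration; in practice, saturation judgments also involve meaning and interpretation, not just code counts.

```python
# Each set holds the codes identified in one interview, in order of collection.
interviews = [
    {"stress", "coping"},    # interview 1
    {"stress", "support"},   # interview 2: "support" is new
    {"coping", "support"},   # interview 3: nothing new
    {"stress"},              # interview 4: nothing new
]

def saturation_point(interviews, window=2):
    """Return the interview number at which `window` consecutive
    interviews have produced no new codes, or None if not reached."""
    seen, since_new = set(), 0
    for i, codes in enumerate(interviews, start=1):
        new = codes - seen
        seen |= codes
        since_new = 0 if new else since_new + 1
        if since_new >= window:
            return i
    return None

print(saturation_point(interviews))  # 4
```

With a stricter window of three interviews, the same data would not yet count as saturated, which illustrates why the stopping rule itself is a methodological choice.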

Why is sampling in qualitative research important?

In quantitative research, large sample sizes are used to provide statistically significant quantitative estimates.

This is because quantitative research aims to provide generalizable conclusions that represent populations.

However, the aim of sampling in qualitative research is to gather data that will help the researcher understand the depth, complexity, variation, or context of a phenomenon. The small sample sizes in qualitative studies support the depth of case-oriented analysis.

Boeije, H. (2014). Analysis in qualitative research. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Brooks, J., McCluskey, S., Turley, E., & King, N. (2014). The utility of template analysis in qualitative psychology research. Qualitative Research in Psychology, 12(2), 202–222. https://doi.org/10.1080/14780887.2014.955224

Busetto, L., Wick, W., & Gumbinger, C. (2020). How to use and assess qualitative research methods. Neurological Research and Practice, 2(1), 14. https://doi.org/10.1186/s42466-020-00059-z

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547. https://doi.org/10.1188/14.ONF.545-547

Critical Appraisal Skills Programme. (2018). CASP checklist: 10 questions to help you make sense of a qualitative research. Retrieved March 15, 2023, from https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf

Clarke, V., & Braun, V. (2013). Successful qualitative research: A practical guide for beginners. Sage.

Denny, E., & Weckesser, A. (2022). How to do qualitative research?: Qualitative research methods. BJOG: An International Journal of Obstetrics and Gynaecology, 129(7), 1166–1167. https://doi.org/10.1111/1471-0528.17150

Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory (pp. 1–18). https://doi.org/10.4324/9780203793206-1

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. https://doi.org/10.1177/1525822X05279903

Halpern, E. S. (1983). Auditing naturalistic inquiries: The development and application of a model (Unpublished doctoral dissertation). Indiana University, Bloomington.

Hammarberg, K., Kirkman, M., & de Lacey, S. (2016). Qualitative research methods: When to use them and how to judge them. Human Reproduction, 31(3), 498–501. https://doi.org/10.1093/humrep/dev334

Koch, T. (1994). Establishing rigour in qualitative research: The decision trail. Journal of Advanced Nursing, 19, 976–986. https://doi.org/10.1111/j.1365-2648.1994.tb01177.x

Lincoln, Y., & Guba, E. G. (1985). Naturalistic inquiry. Sage.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1). https://doi.org/10.1177/1609406917733847

Petty, N. J., Thomson, O. P., & Stew, G. (2012). Ready for a paradigm shift? Part 2: Introducing qualitative research methodologies and methods. Manual Therapy, 17(5), 378–384. https://doi.org/10.1016/j.math.2012.03.004

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. Sage.

Reeves, S., Kuper, A., & Hodges, B. D. (2008). Qualitative research methodologies: Ethnography. BMJ, 337, a1020. https://doi.org/10.1136/bmj.a1020

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence-Based Nursing, 6(2), 36–40.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality & Quantity, 52(4), 1893–1907. https://doi.org/10.1007/s11135-017-0574-8

Scarduzio, J. A. (2017). Emic approach to qualitative research. The International Encyclopedia of Communication Research Methods, 1–2. https://doi.org/10.1002/9781118901731.iecrm0082

Schreier, M. (2012). Qualitative content analysis in practice. Sage.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22(2), 63–75.

Starks, H., & Trinidad, S. B. (2007). Choose your method: A comparison of phenomenology, discourse analysis, and grounded theory. Qualitative Health Research, 17(10), 1372–1380. https://doi.org/10.1177/1049732307307031

Tenny, S., Brannan, J. M., & Brannan, G. D. (2022). Qualitative study. In StatPearls. StatPearls Publishing.

Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 48, 388–396. https://doi.org/10.1111/j.1365-2648.2004.03207.x

Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15(3), 398–405. https://doi.org/10.1111/nhs.12048

Wood, L. A., & Kroger, R. O. (2000). Doing discourse analysis: Methods for studying action in talk and text. Sage.

Yilmaz, K. (2013). Comparison of quantitative and qualitative research traditions: Epistemological, theoretical, and methodological differences. European Journal of Education, 48(2), 311–325. https://doi.org/10.1111/ejed.12014


Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)


  • Drishti Yadav, ORCID: orcid.org/0000-0002-2974-0323


This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit the readers to pass judgment on its quality and to commend it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.


Introduction

"… It is important to regularly dialogue about what makes for good qualitative research" (Tracy, 2010, p. 837)

What counts as good qualitative research is highly debatable. Qualitative research encompasses numerous methods grounded in diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that "it is widely assumed that whereas quality criteria for quantitative research are well-known and widely agreed, this is not the case for qualitative research." Hence, the question of how to evaluate the quality of qualitative research has been continuously debated. These debates have taken place across many areas of science and technology, including several areas of psychology: general psychology (Madill et al., 2000); counseling psychology (Morrow, 2005); and clinical psychology (Barker & Pistrang, 2005), as well as other social science disciplines: social policy (Bryman et al., 2008); health research (Sparkes, 2001); business and management research (Johnson et al., 2006); information systems (Klein & Myers, 1999); and environmental studies (Reid & Gough, 2000). In the literature, these debates are driven by the view that the blanket application of criteria for good qualitative research developed around the positivist paradigm is inappropriate, given the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of deciding how to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can undermine the worth and usefulness of qualitative research by neglecting the importance of matching the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose at hand. Researchers have been cautioned about this in "Paradigmatic controversies, contradictions, and emerging confluences" (Lincoln & Guba, 2000).

In general, qualitative research proceeds from a very different paradigmatic stance and therefore demands distinctive criteria for evaluating good research and the varieties of research contributions that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that the choice of criteria must be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to help researchers identify some of the indispensable features, or markers, of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze existing knowledge on high-quality qualitative research and to identify studies that critically assess qualitative research from diverse paradigmatic stances. Unlike existing reviews, this review also suggests some critical directions for improving the quality of qualitative research across different epistemological and ontological perspectives, and it provides guidelines intended to accelerate future developments and dialogue among qualitative researchers on assessing qualitative research.

The rest of this review article is structured as follows. The "Methods" section describes the method followed for performing this review. "Criteria for Evaluating Qualitative Studies" provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of strategies to improve the quality of qualitative research in "Improving Quality: Strategies". "How to Assess the Quality of the Research Findings?" provides details on how to assess the quality of research findings. Some quality checklists (as tools to evaluate quality) are then discussed in "Quality Checklists: Tools for Assessing the Quality". Finally, the review ends with concluding remarks, along with prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community, in "Conclusions, Future Directions, and Outlook".

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as qualitative research and criteria. The following databases were chosen based on the high number of results they returned: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. Synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 to 2021 were considered, along with additional articles extracted from the references of the papers identified in the electronic search. Because the initial screening retrieved a large number of publications on qualitative research, an inclusion criterion focusing on criteria for good qualitative research was built into the search string.
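The keyword combination described above can be sketched programmatically. The synonym groups below are illustrative placeholders only, not the review's actual Table 1:

```python
def build_query(synonym_groups):
    """Join each synonym group with OR, then join the groups with AND,
    producing a Boolean search string of the kind used in database searches."""
    clauses = []
    for terms in synonym_groups:
        clauses.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(clauses)

# Hypothetical synonym groups (the review's real Table 1 is not reproduced here)
groups = [
    ["qualitative research", "qualitative study"],
    ["criteria", "standards"],
    ["quality", "rigor"],
]

print(build_query(groups))
# → ("qualitative research" OR "qualitative study") AND ("criteria" OR "standards") AND ("quality" OR "rigor")
```

OR within a group widens recall across synonyms, while AND across groups narrows results to records matching every concept, which is the usual structure of a systematic-review search string.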

From the selected databases, the search retrieved a total of 765 publications. After duplicate records were removed, the remaining 426 publications were screened for relevance based on title and abstract, using the inclusion and exclusion criteria shown in Table 2: publications focusing on evaluation criteria for good qualitative research were included, whereas works that delivered only theoretical concepts on qualitative research were excluded. Based on this screening and eligibility assessment, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA ("Preferred Reporting Items for Systematic Reviews and Meta-Analyses") is employed in systematic reviews to improve the quality of reporting.

Figure 1. PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which are summarized in Table 3. The criteria outlined in Table 4 cover the various approaches to evaluating and assessing the quality of qualitative work; its entries are based on Tracy's "eight big-tent criteria for excellent qualitative research" (Tracy, 2010). Tracy argues that high-quality qualitative work should be judged on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and on the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles for assessing the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net encompassing the researcher's epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure: a "basic set of beliefs that guides action" (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist; constructivist-interpretive; critical (Marxist, emancipatory); and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, quantitative conceptualizations of reliability and validity have proven incompatible with the evaluation of qualitative research (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer proficient in qualitative methods in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) suggests that guiding principles for qualitative research are advantageous but that methodological pluralism should not simply be assumed across all qualitative approaches, and Seale (1999) likewise points out the significance of methodological awareness in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is: how can qualitative researchers ensure that the abovementioned quality criteria are met? Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness, and other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have presented similar strategies. A brief description of these strategies is shown in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021). In general, the guiding principle for generalizability concerns inducing and comprehending knowledge so as to synthesize the interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to establish generalizability in qualitative research.

Figure 2 shows the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and the application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their role in the different stances of a qualitative research study.

Figure 2. Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework, and its research question(s), quality criteria must take account of the following: a lucid context for the problem statement in the introduction; well-articulated research problems and questions; a precise conceptual framework; a distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would improve the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances confirmability in the write-up of the findings. Expressions such as "80% of all respondents agreed that" or "only one of the interviewees mentioned that" may also be used to quantify qualitative findings (Stenfors et al., 2020); on the other hand, a persuasive argument has also been made for why such quantification may not strengthen the research (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8.

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically, without regard for the research context, these checklists may be counterproductive. Such lists and guiding principles can assist in pinpointing the markers of high-quality qualitative research; however, given the enormous variation in authors' theoretical and philosophical contexts, heavy reliance on checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007), which some journals recommend authors follow during article submission.

The Standards for Reporting Qualitative Research (SRQR) is another checklist, created particularly for medical education (O'Brien et al., 2014).

Also, Tracy (2010) and the Critical Appraisal Skills Programme (CASP, 2021) offer criteria for qualitative research that are relevant across methods and approaches.

Further, researchers have outlined other criteria as hallmarks of high-quality qualitative research; for instance, the "Road Trip Checklist" (Epp & Otnes, 2021) provides a quick reference to specific questions addressing different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research, along with an exploratory analysis of the essential elements that enable readers of qualitative work to judge it as good research when objectively and adequately applied. Some of the essential markers that indicate high-quality qualitative research have been highlighted; I scope them narrowly to achieving rigor and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, there is no single set of guidelines shared among all qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely, on account of its own distinctive features, across different epistemological and disciplinary positions. Because the worth of qualitative research is sensitive to the specific context and the type of paradigmatic stance, researchers should themselves analyze which approaches can, and must, be tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not claim to offer a magic bullet or a one-stop solution to dilemmas about how, why, or whether to evaluate the "goodness" of qualitative research, it offers a platform to help researchers improve their qualitative studies: an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to consider when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being.

By bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers and reviewers alike to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of the research to its objectives, questions, and design; from its theoretical foundations to its approaches to collecting data, analyzing results, and deriving inferences) greatly improves the quality of a qualitative study.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vivacious and evolving discipline in which different epistemological and disciplinary positions have their own characteristics and importance. Not surprisingly, owing to its evolving and varied features, no consensus has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations put forward toward the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts; hence, they are likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and conventions to all qualitative research. Future work should therefore focus on educating researchers and reviewers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ and SRQR) to interrogate their applicability to different aspects of research (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debates on the use of evaluative criteria are required to resolve some potentially unresolved issues (including the applicability of a single set of criteria in multi-disciplinary contexts). Such debates would not only benefit qualitative researchers themselves but, more importantly, help augment the well-being and vitality of the entire discipline.

To conclude, I speculate that these criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate about the criteria for excellent qualitative research and the underpinnings of the discipline more broadly, and thereby help improve the quality of qualitative studies. Further, I anticipate that this review will help researchers reflect on the quality of their own research, substantiate their research designs, and assist reviewers in evaluating qualitative research for journals. Finally, I pinpoint the need for qualitative researchers from different disciplines, with different theoretic-paradigmatic origins, to jointly formulate a framework encompassing the prerequisites of a qualitative study. I believe that tailoring such a framework of guiding principles would pave the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open-science debate. Dialogue on this issue across different approaches is crucial for the future prospects of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16(10), 1472–1482.

Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35(3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11(4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). 'Clear as mud': Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2(2), 1–13.

CASP. (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6(4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 1–32). Sage.

Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38(3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research. https://doi.org/10.1177/1094670520961445

Guba, E. G. (1990). The paradigm dialog. Sage.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30(3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99(2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12(2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10(1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84(1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8(3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23(1), 67–93.

Lather, P. (2004). This is your father's paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10(1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68(3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986(30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91(1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care. https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62(1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. Jossey-Bass.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26(1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54(3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52(2), 250.

Morse, J. M. (2003). A review committee's guide for evaluating qualitative proposals. Qualitative Health Research, 13(6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24(4), 427–431.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.

O'Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6(1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International. https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43(1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2(1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5(4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22(2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11(4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17(6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33(2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight "big-tent" criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res, 31, 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0

Accepted: 28 August 2021 | Published: 18 September 2021 | Issue date: December 2022

Open access | Published: 09 May 2024

Examining the feasibility of assisted index case testing for HIV case-finding: a qualitative analysis of barriers and facilitators to implementation in Malawi

  • Caroline J. Meek,
  • Tiwonge E. Mbeya Munkhondya,
  • Mtisunge Mphande,
  • Tapiwa A. Tembo,
  • Mike Chitani,
  • Milenka Jean-Baptiste,
  • Dhrutika Vansia,
  • Caroline Kumbuyo,
  • Jiayu Wang,
  • Katherine R. Simon,
  • Sarah E. Rutstein,
  • Clare Barrington,
  • Maria H. Kim,
  • Vivian F. Go &
  • Nora E. Rosenberg

BMC Health Services Research, volume 24, Article number: 606 (2024)


Background

Assisted index case testing (ICT), in which health care workers take an active role in referring at-risk contacts of people living with HIV for HIV testing services, has been widely recognized as an evidence-based intervention with high potential to increase status awareness in people living with HIV. While the available evidence from eastern and southern Africa suggests that assisted ICT can be an effective, efficient, cost-effective, acceptable, and low-risk strategy to implement in the region, it reveals that feasibility barriers to implementation exist. This study aims to inform the design of implementation strategies to mitigate these feasibility barriers by examining “assisting” health care workers’ experiences of how barriers manifest throughout the assisted ICT process, as well as their perceptions of potential opportunities to facilitate feasibility.

Methods

In-depth interviews were conducted with 26 lay health care workers delivering assisted ICT in Malawian health facilities. Interviews explored health care workers’ experiences counseling index clients and tracing these clients’ contacts, aiming to inform development of a blended learning implementation package. Transcripts were inductively analyzed using Dedoose coding software to identify and describe key factors influencing feasibility of assisted ICT. Analysis included multiple rounds of coding and iteration with the data collection team.

Results

Participants reported a variety of barriers to feasibility of assisted index case testing implementation, including sensitivities around discussing ICT with clients, privacy concerns, limited time for assisted index case testing amid high workloads, poor quality contact information, and logistical obstacles to tracing. Participants also reported several health care worker characteristics that facilitate feasibility (knowledge, interpersonal skills, non-stigmatizing attitudes and behaviors, and a sense of purpose), as well as identified process improvements with the potential to mitigate barriers.

Conclusions

Maximizing assisted ICT’s potential to increase status awareness in people living with HIV requires equipping health care workers with effective training and support to address and overcome the many feasibility barriers that they face in implementation. Findings demonstrate the need for, as well as inform the development of, implementation strategies to mitigate barriers and promote facilitators to feasibility of assisted ICT.

Trial registration

NCT05343390. Date of registration: April 25, 2022.


Introduction

To streamline progress towards its goal of ending AIDS as a public health threat by 2030, the Joint United Nations Programme on HIV/AIDS (UNAIDS) launched a set of HIV testing and treatment targets [ 1 ]. Adopted by United Nations member states in June 2021, the targets call for 95% of all people living with HIV (PLHIV) to know their HIV status, 95% of all PLHIV to be accessing sustained antiretroviral therapy (ART), and 95% of all people receiving ART to achieve viral suppression by 2025 [ 2 ]. Eastern and southern Africa has seen promising regional progress towards these targets in recent years, and the region is approaching the first target related to status awareness in PLHIV: in 2022, 92% of PLHIV in the region were aware of their status [ 3 ]. However, several countries in the region lag behind [ 4 ], and as 2025 approaches, it is critical to scale up adoption of evidence-based interventions to sustain and accelerate progress.

Index case testing (ICT), which targets provision of HIV testing services (HTS) for sexual partners, biological children, and other contacts of known PLHIV (“index clients”), is a widely recognized evidence-based intervention used to identify PLHIV by streamlining testing efforts to populations most at risk [ 5 , 6 , 7 ]. Traditional approaches to ICT rely on passive referral, in which index clients invite their contacts for testing [ 5 ]. However, the World Health Organization (WHO) and the President’s Emergency Plan for HIV/AIDS Relief (PEPFAR) have both recommended assisted approaches to ICT [ 6 , 8 , 9 , 10 ], in which health care workers (HCWs) take an active role in referral of at-risk contacts for testing, due to evidence of improved effectiveness in identifying PLHIV compared to passive approaches [ 10 , 11 , 12 , 13 , 14 ]. As a result, there have been several efforts to scale assisted ICT throughout eastern and southern Africa in recent years [ 15 , 16 , 17 , 18 , 19 , 20 ]. In addition to evidence indicating that assisted ICT can be effective in increasing HIV testing and case-finding [ 16 , 17 , 21 , 22 , 23 , 24 ], implementation evidence [ 25 ] from the region suggests that assisted ICT can be an efficient [ 14 ], acceptable [ 5 , 13 , 15 , 18 , 20 , 21 , 26 ], cost-effective [ 27 ], and low-risk [ 21 , 22 , 24 , 28 , 29 ] strategy to promote PLHIV status awareness. However, the few studies that focus on feasibility, or the extent to which HCWs can successfully carry out assisted ICT [ 25 ], suggest that barriers exist to feasibility of effective implementation [ 18 , 19 , 20 , 30 , 31 , 32 ]. Developing informed implementation strategies to mitigate these barriers requires more detailed examination of how these barriers manifest throughout the assisted ICT process, as well as of potential opportunities to facilitate feasibility, from the perspective of the HCWs who are doing the “assisting”.

This qualitative analysis addresses this need for further detail by exploring “assisting” HCWs’ perspectives of factors that influence the feasibility of assisted ICT, with a unique focus on informing development of effective implementation strategies to best support assisted ICT delivery in the context of an implementation science trial in Malawi.

This study was conducted in the Machinga and Balaka districts of Malawi. Malawi is a country in southeastern Africa in which 7.1% of the population lives with HIV and 94% of PLHIV know their status [ 4 ]. Machinga and Balaka are two relatively densely populated districts in the southern region of Malawi [ 33 ] with HIV prevalence rates similar to the national average [ 34 ]. We selected Machinga and Balaka because they are prototypical of districts in Malawi implementing Ministry of Health programs with support from an implementing partner.

Malawi has a long-established passive ICT program, and in 2019 the country also adopted an assisted component, known as voluntary assisted partner notification, as part of its national HIV testing policy [ 32 ]. In Malawi, ICT is conducted through one of four methods, voluntarily selected by the index client: 1) passive referral, in which HCWs encourage the index client to refer partners for voluntary HTS; 2) contract referral, in which HCWs establish an informal ‘contract’ with the index client, agreeing on a date after which the HCW may contact the contact clients if they have not yet presented for HTS; 3) provider referral, in which HCWs contact and offer voluntary HTS to contact clients; and 4) dual referral, in which HCWs accompany and support index clients in disclosing their status and offering HTS to their partners [ 8 ].
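The four referral methods form a small decision structure that the index client chooses from. As a schematic illustration only (the enum and helper below are hypothetical and simply paraphrase the policy summarized above, not an official data model), the options can be represented as:

```python
from enum import Enum

class ReferralMethod(Enum):
    """The four ICT referral methods in Malawi's national HIV testing
    policy, as summarized above. Descriptions are paraphrased."""
    PASSIVE = "index client refers partners for voluntary HTS themselves"
    CONTRACT = ("HCW and index client agree on a date after which the HCW "
                "may contact contacts who have not yet presented for HTS")
    PROVIDER = "HCW contacts and offers voluntary HTS to contact clients"
    DUAL = ("HCW accompanies the index client in disclosing their status "
            "and offering HTS to their partner")

def describe(method: ReferralMethod) -> str:
    # The method is voluntarily selected by the index client;
    # HCWs are expected to explain each option clearly.
    return f"{method.name.lower()} referral: {method.value}"
```

A counseling aid (e.g., the flipcharts or videos participants suggest later in this article) could iterate over such a structure to present every option consistently.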

While Malawi has one of the lowest densities of qualified clinical HCWs globally (< 5 clinicians per 100,000 people) [ 35 ], the country has a strong track record of mitigating this limited medical workforce capacity by shifting HTS tasks to lay HCWs: workers informally trained to perform certain health care delivery functions but without a formal professional/para-professional certification or tertiary education degree [ 32 , 36 ]. In Malawi, lay HCW roles include HIV Diagnostic Assistants (primarily responsible for HIV testing and counseling, including index case counseling) and community health workers (responsible for a wider variety of tasks, including index case counseling and contact tracing) [ 32 ]. Non-governmental organization implementing partners, such as the Tingathe Program, play a critical role in harnessing Malawian lay HCW capacity to rapidly and efficiently scale up HTS, including assisted ICT [ 32 , 37 , 38 , 39 ].

Study design

Data for this analysis were collected as part of formative research for a two-arm cluster randomized controlled trial examining a blended learning implementation package as a strategy for building HCW capacity in assisted ICT [ 40 ]. Earlier work [ 32 ] established the theoretical basis for testing the blended learning implementation package, which combines individual asynchronous modules with synchronous small-group interactive sessions to enhance training and foster continuous quality improvement. The formative research presented in this paper aimed to further explore factors influencing feasibility of assisted ICT from the perspective of HCWs in order to inform development of the blended learning implementation package.
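In the parent trial, randomization happens at the facility (cluster) level rather than the individual HCW level. The sketch below is purely illustrative of 1:1 cluster randomization of 34 facilities into two arms; the arm labels, facility names, and seed are hypothetical, and the trial protocol [ 40 ] defines the actual procedure (which may, for example, stratify by district or facility type):

```python
import random

def randomize_clusters(facilities, seed=2021):
    """Illustrative 1:1 cluster randomization: shuffle the facilities
    and split them into two equal arms. Real trials typically use
    stratification or restricted randomization; this sketch does not."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    shuffled = list(facilities)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "blended_learning": shuffled[:half],   # hypothetical arm label
        "standard_training": shuffled[half:],  # hypothetical arm label
    }

facilities = [f"facility_{i:02d}" for i in range(1, 35)]  # 34 clusters
arms = randomize_clusters(facilities)
```

Because whole facilities are assigned to an arm, every HCW sampled from a given facility shares that facility's allocation, which is why the qualitative baseline sampled facilities from both arms.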

Prior to the start of the trial (October–December 2021), the research team conducted 26 in-depth interviews (IDIs) with lay HCWs at 14 of the 34 facilities included in the parent trial. We purposively selected different types of facilities (hospitals, health centers, and dispensaries) in both districts and from both randomization arms, as this served as a qualitative baseline for a randomized trial. Within these facilities, we worked with facility supervisors to purposively select HCWs who were actively engaged in Malawi’s ICT program from the larger sample of HCWs eligible for the parent trial (at least 18 years old, employed full-time at one of the included health facilities, and involved in counseling index clients and/or tracing their contacts). The parent trial enrolled 306 HCWs, who were primarily staff hired by the Tingathe Program to support facilities implementing Malawi’s national HIV program.

Data collection

IDIs were conducted by three trained Malawian interviewers in a private setting using a semi-structured guide. IDIs were conducted over the phone when possible ( n  = 18) or in-person at sites with limited phone service ( n  = 8). The semi-structured guide was developed for this study through a series of rigorous, iterative discussions among the research team (Additional file 1 ). The questions used for this analysis were a subset of a larger interview. The interview guide questions for this analysis explored HCWs’ experiences with assisted ICT, including barriers and facilitators to implementation. Probing separately about the processes of counseling index clients and tracing their contacts, interviewers asked questions such as “What is the first thing that comes to mind when you think of counseling index clients/tracing contacts?”, “What aspects do you [like/not like] about…?” and “What do your colleagues say about…?”. When appropriate, interviewers probed further about how specific factors mentioned by the participant facilitate or impede the ICT implementation experience.

The IDIs lasted 60–90 min and were conducted in Chichewa, a local language of Malawi. Eleven audio recordings were transcribed verbatim in Chichewa before being translated into English, and 15 recordings were directly translated and transcribed into English. Interviewers summarized each IDI after it was completed, and these summaries were discussed with the research team routinely.

Data analysis

The research team first reviewed all of the interview summaries individually and then met multiple times to discuss initial observations, refining the research question and scope of analysis. A US-based analyst (CJM) with training in qualitative analysis used an inductive approach to develop a codebook, deriving broad codes from the implementation factors mentioned by participants throughout their interviews. Along with focused examination of the transcripts, she consulted team members who had conducted the IDIs with questions or clarifications. CJM regularly met with Malawian team members (TEMM, MM, TAT) who possess the contextual expertise necessary to verify and enhance meaning. She used the Dedoose (2019) web application to engage in multiple rounds of coding, starting with codes representing broad implementation factors and then further refining the codebook as needed to capture the nuanced manifestations of these barriers and facilitators. Throughout codebook development and refinement, the analyst engaged in memoing to track first impressions, thought processes, and coding decisions. The analyst presented the codebook and multiple rounds of draft results to the research team. All transcripts and applied codes were also reviewed in detail by additional team members (MJB, DV). Additional refinements to the codebook and results interpretations were iteratively made based on team feedback.
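Iterative codebook refinement of the kind described above is often supported by simple descriptive checks on the coded excerpts. The sketch below is a hypothetical illustration only, not Dedoose's actual export format or API: it assumes coded excerpts are available as (transcript_id, code) pairs and tallies how often each code was applied and in how many distinct transcripts it appears:

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts, as (transcript_id, code) pairs that
# might be exported from a qualitative coding tool.
excerpts = [
    ("IDI_01", "privacy_concerns"),
    ("IDI_01", "high_workload"),
    ("IDI_02", "privacy_concerns"),
    ("IDI_02", "logistical_obstacles"),
    ("IDI_03", "privacy_concerns"),
]

def code_summary(excerpts):
    """Map each code to (total applications, number of distinct
    transcripts in which it appears)."""
    applications = Counter(code for _, code in excerpts)
    transcripts = defaultdict(set)
    for tid, code in excerpts:
        transcripts[code].add(tid)
    return {code: (applications[code], len(transcripts[code]))
            for code in applications}

summary = code_summary(excerpts)
```

Counts like these do not replace interpretive analysis, but they can flag codes that are rarely applied or concentrated in a single transcript, informing decisions to merge or refine codes across coding rounds.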

Ethical clearance

Ethical clearance was provided by UNC’s IRB, Malawi’s National Health Sciences Research Committee, and the Baylor College of Medicine IRB. Written informed consent was obtained from all participants in the main study and interviewers confirmed verbal consent before starting the IDIs.

Results

Participant characteristics are described in Table 1 below.

Factors influencing feasibility of assisted ICT: barriers and facilitators

Participants described a variety of barriers and facilitators to feasibility of assisted ICT, manifesting across the index client counseling and contact client tracing phases of the implementation process. Identified barriers included sensitivities around discussing ICT with clients, privacy concerns, limited time for ICT amid high workloads, poor quality contact information, and logistical obstacles to tracing. In addition to these barriers, participants also described several HCW characteristics that facilitated feasibility: knowledge about ICT, interpersonal skills, non-stigmatizing attitudes and behaviors, and a sense of purpose. Barriers and facilitators are mapped to the ICT process in Fig. 1 and described in greater detail in further sections.

Fig. 1 Conceptual diagram mapping feasibility barriers and facilitators to the ICT process

Feasibility barriers

Sensitivities around discussing ICT with clients

Participants described ICT as a highly sensitive topic to approach with clients. Many expressed a feeling of uncertainty around how open index clients will be to sharing information about their contacts, as well as how contacts will react when approached for HTS. When asked about difficult aspects of counseling index clients, many HCWs mentioned clients’ hesitance or refusal to participate in assisted ICT and share their contacts. Further, several HCWs mentioned that some index clients would provide false contact information. These index client behaviors were often attributed to confidentiality concerns, fear of unwanted status disclosure, and fear of the resulting implications of status disclosure: “They behave that way because they think you will be telling other people about their status…they also think that since you know it means their life is done, you will be looking at them differently.” Populations commonly identified as particularly likely to hesitate, refuse, or provide false information included youth (described as “shy,” “thinking they know a lot,” and “difficult to reveal their contacts”) and newly diagnosed clients (“it may be hard for them to accept [their HIV diagnosis]”). One participant suggested that efforts to pair index clients with same-sex HCWs could make them more comfortable to discuss their contacts.

When asked about the first things that come to mind when starting to trace contacts, many participants discussed wondering how they will be received by the contact and preparing themselves to approach the contact. When conducting provider or contract referral, HCWs described a variety of challenging reactions that can occur when they approach a contact for HTS, including delay or refusal of testing, excessive questioning about the identity of the index client who referred them for testing, and even anger or aggression. Particularly mentioned in the context of male clients, these kinds of reactions can lead to stress and uncertain next steps for HCWs: “I was very tensed up. I was wondering to myself what was going to happen…he was talking with anger.”

Participants also noted the unique sensitivities inherent in conducting dual referral and interacting with sexual partners of index clients, explaining that HIV disclosure can create acute conflict in couples due to perceived blame and assumptions of infidelity. They recounted these scenarios as particularly difficult to navigate, with high stakes that require high-quality counseling skills: “sometimes if you do not have good counseling the marriage happens to get to an end.” Some participants discussed concern about index client risk of intimate partner violence (IPV) upon partner disclosure: “they think that if they go home and [disclose their HIV status], the marriage will end right there, or for some getting to a point of [being] beaten.”

Privacy concerns

Participants also reported that clients highly value privacy, which can be difficult to secure throughout the ICT process. In the facility, while participants largely indicated that counseling index clients was much more successful when conducted in a private area, many reported limited availability of private counseling space. One participant described this challenge: “if I’m counseling an index client and people keep coming into the room…this compromises the whole thing because the client becomes uncomfortable in the end.” Some HCWs mentioned working around this issue through use of screens, “do-not-disturb” signs, outdoor spots, and tents.

Participants also noted maintaining privacy as a challenge when tracing contact clients in the field, as they sometimes find clients in a situation that is not conducive to private conversations. One participant described: “we get to the house and find that there are 4, 5 people with our [contact client]…it doesn’t go well…That is a mission gone wrong.” Participants also noted that HCWs are often easily recognizable in the community due to their bikes and cars, which exacerbates the risk of compromising privacy. To address privacy challenges in the community, participants reported strategies to increase discretion, including dressing to blend in with the community, preparing an alternate reason to be looking for the client, and offering HTS to multiple people or households to avoid singling out one person.

Limited time for ICT amid high workloads

Some participants indicated that strained staffing capacity leads HCWs to have to perform multiple roles, expressing challenges in balancing their ICT work with their other tasks. As one participant described, “Sometimes it is found that you are assigned a task here at the hospital to screen anyone who comes for blood testing, but you are also supposed to follow up [with] the contacts the same day- so it becomes a problem…you fail to follow up [with] the contacts.” Some also described being the only staff member, or one of few, responsible for ICT: “You’re doing this work alone, so you can see that it is a big task to do it single-handedly.” The need to counsel each index client individually, as a result of confidentiality concerns, further increases workload for the limited staff assigned to this work. Further, HCWs often described contact tracing in the field as time-consuming and physically taxing, which leaves them less time and energy for counseling. Many HCWs noted the need to hire more staff dedicated to ICT work.

High workloads also resulted in shorter appointments and less time to counsel index clients, which participants reported limits the opportunity to build rapport that facilitates openness or to probe for detailed information about sexual partners. Participants emphasized the importance of having enough time to meaningfully engage with index clients: “For counseling you cannot have a limit to say, ‘I will talk to him for 5 min only.’…That is not counseling then. You are supposed to stay up until…you feel that this [person] is fulfilled.” In addition, high workload can reduce the capacity of HCWs to deliver quality counseling: “So you find that as you go along with the counseling, you can do better with the first three clients but the rest, you are tired and you do short cuts.”

High workloads also lead to longer queues, which may deter clients from coming into the clinic or cause them to leave before receiving services: “Sometimes because of shortage of staff, it happens that you have been assigned a certain task that you were supposed to do but at the same time there are clients who were supposed to be counseled. As a result, because you spent more time on the other task as a result you lose out some of the clients because you find that they have gone.” In response to long queues, several participants described ‘fast-tracking’ contact clients who come in for HTS in an effort to maximize case-finding by prioritizing those who have been identified as at risk of HIV.

Poor quality contact information

Participants repeatedly discussed the importance of eliciting accurate information about a person’s sexual partners, including where, when, and how to best contact them. As one participant said, “Once the index has given us the wrong information then everything cannot work, it becomes wrong…if he gives us full information [with] the right details then everything becomes successful and happens without a problem.” Adequate information is a critical component of the ICT process, and incorrect or incomplete information delays or prevents communication with contact clients.

Inadequate information, which can include incorrect or incomplete names, phone numbers, physical addresses, and contextual details, can arise from a variety of scenarios. Most participants mentioned index clients providing incorrect information as a concern. This occurred either intentionally to avoid disclosure or unintentionally if information was not known. Poor quality contact information also results from insufficient probing and poor documentation, which is often exacerbated by aforementioned HCW time and energy constraints. In one participant’s words, “The person who has enlisted the contact…is the key person who can make sure that our tracing is made easy.” Participants noted the pivotal role of the original HCW who first interacts with the index client in not only eliciting correct locator information but also eliciting detailed contextual information. For example, details about a contact client’s profession are helpful to trace the client at a time when they will likely be at home. Other helpful information included nicknames, HIV testing history, and notes about confidentiality concerns.

Logistical obstacles to tracing

Some contact clients are reached by phone whereas others must be physically traced in the community. Some participants reported difficulty with tracing via phone, frequently citing network problems and lack of sufficient airtime allocated by the facility. Participants also reported that some clients were unreachable by phone, necessitating physical tracing. Physically tracing a contact client requires a larger investment of resources than phone tracing, especially when the client lives far from the clinic. Participants frequently discussed having to travel long distances to reach contact clients, an issue some saw as exacerbated by clients who seek care at distant facilities due to privacy concerns.

While most participants reported walking or biking to reach contact clients in the community, some mentioned using a motorcycle or Tingathe vehicle. However, access to vehicles is often limited and these transportation methods require additional expenses for fuel. Walking or biking was also reported to expose HCWs to inclement weather, including hot or rainy seasons, and potential safety risks such as violence.

Participants reported that traveling far distances can be physically taxing and time-consuming, sometimes rendering them too tired or busy to attend to other tasks. Frequent travel influenced HCW morale, particularly when a tracing effort did not result in successfully recruiting a contact client. Participants frequently described this perception of wasted time and energy as “painful,” with the level of distress often portrayed as increasing with the distance travelled. As one HCW said, “You [can] find out that he gave a false address. That is painful because it means you have done nothing for the person, you travelled for nothing.”

HCWs described multiple approaches used to strategically allocate limited resources for long distances. These approaches included waiting to physically trace until there are multiple clients in a particular area, reserving vehicle use for longer trips, and coordinating across HCWs to map out contact client locations. HCWs also mentioned provision of rain gear and sun protection to mitigate uncomfortable travel. Another approach involved allocating contact tracing to HCWs based in the same communities as the contact clients.

Feasibility facilitators

HCW knowledge about ICT

Participants reported that HCWs with a thorough understanding of ICT’s rationale and purpose can facilitate client openness. Clients were more likely to engage with HCWs about assisted ICT if they understood the benefits to themselves and their loved ones. One HCW stated, “If the person understands why we need the information, they will give us accurate information.”

Participants also discussed the value of deep HCW familiarity with ICT procedures and processes, particularly regarding screening clients for IPV and choosing a referral method. One participant described the importance of clearly explaining various referral methods to clients: “So…people come and choose the method they like…when you explain things clearly it is like the index client is free to choose a method which the contact can use for testing”. Thorough knowledge of available referral methods allows HCWs to actively engage with index clients to discuss strategies to refer contacts in a way that fits their unique confidentiality needs, which was framed as particularly important when IPV is identified as a concern. Multiple participants suggested the use of flipcharts or videos, saying these would save limited HCW time and energy, fill information gaps, and provide clients with a visual aid to supplement the counseling. Others suggested recurring opportunities for training, to continuously “refresh” their ICT knowledge in order to facilitate implementation.

HCW interpersonal skills

In addition, HCWs’ ability to navigate sensitive conversations about HIV was noted as a key facilitator of successful implementation. Interpersonal skills were mentioned as mitigating the role’s day-to-day uncertainty by preparing HCWs to engage with clients, especially newly diagnosed clients: “I need to counsel them skillfully so that they understand what I mean regardless that they have just tested positive for HIV.”

When discussing strategies to build HCW skills in counseling index clients and tracing contact clients, participants suggested establishing regular opportunities to discuss challenges and share approaches to address these challenges: “I think that there should be much effort on the [HCWs] doing [ICT]. For example, what do I mean, they should be having a meeting with the facility people to ask what challenges are you facing and how can we end them?” Another participant further elaborated, saying “We should be able to share experiences with our [colleagues] so that we can all learn from one another. And also, there are other people who are really brilliant at their job. Those people ought to come visit us and see how we are doing. That is very motivating.”

HCW non-stigmatizing attitudes and behaviors

Participants also highlighted the role of empathy and non-judgement in building trust with clients: “Put yourself in that other person’s shoes. In so doing, the counseling session goes well. Understanding that person, that what is happening to them can also happen to you.” Participants viewed trust-building as critical to facilitating client comfort and openness: “if they trust you enough, they will give you the right information.” Further, participants associated HCW assurance of confidentiality with promoting trust and greater information sharing: “Also assuring them on the issue of confidentiality because confidentiality is a paramount. If there will not be confidentiality then the clients will not reveal.”

HCW sense of purpose

Lastly, several participants reported that a sense of purpose and desire to help people motivated them to overcome the challenges of delivering assisted ICT. One participant said, “Some of these jobs are a ministry. Counseling is not easy. You just need to tell yourself that you are there to help that person.” Many seemed to take comfort in the knowledge that their labors, however taxing, would ultimately allow people to know their status, take control of their health, and prevent the spread of HIV. Participants framed the sense of fulfillment from successful ICT implementation as a mitigating factor amidst challenges: “If [the contact client] has accepted it then I feel that mostly I have achieved the aim of being in the health field…that is why it is appealing to me”.

Discussion

Participants described a variety of barriers to assisted ICT implementation, including sensitivities around discussing ICT with clients, privacy concerns, limited time for ICT amid high workloads, poor quality contact information, and logistical obstacles to tracing. These barriers manifested across each step of the process of counseling index clients and tracing contacts. However, participants also identified HCW characteristics and process improvements that can mitigate these barriers.

Further, participants’ descriptions of the assisted ICT process revealed the intimately interconnected nature of factors that influence the feasibility of assisted ICT. Sensitivities around HIV, privacy limitations, time constraints, and HCW characteristics all contribute to the extent to which counseling index clients elicits adequate information to facilitate contact tracing. Information quality has implications for HCW capacity, as inadequate information can lead to wasted resources, including HCW time and energy, on contact tracing. The opportunity cost of these wasted efforts, which grows with the distance between the contact client’s home and the clinic, depletes HCW morale. The resulting acceleration of burnout, which is already fueled by busy workloads and the inherent uncertainty of day-to-day ICT work, further impairs HCW capacity to engage in quality counseling that elicits adequate information from index clients. This interconnectedness suggests that efforts to mitigate barriers at any step of the assisted ICT process may ripple across the whole process.

Participants’ descriptions of client confidentiality and privacy concerns, as well as fear of the consequences of disclosure, align with previous studies that emphasize stigma as a key barrier to assisted ICT [15, 18, 19, 20, 30, 31] and to the overall HIV testing and treatment cascade [41]. Our findings suggest that anticipated stigma, or the fear of discrimination upon disclosure [42], drives several key barriers to the feasibility of assisted ICT implementation. Previous studies also highlight the role of HCWs in mitigating barriers related to anticipated stigma, noting that HCW ICT knowledge, interpersonal skills, and non-stigmatizing attitudes and behaviors are central to securing informed consent from clients for ICT, tailoring the referral strategy to minimize risk to client confidentiality and safety, building trust and rapport with the client, and eliciting accurate contact information from index clients to facilitate contact tracing [18, 19, 20, 30].

Our findings also reflect previous evidence of logistical challenges related to limited time, space, and resources that can present barriers to feasibility for HCWs [18, 19, 20, 30, 31]. Participants in the current study described these logistical challenges as perpetuating HCW burnout, making it harder for them to engage in effective counseling. Cumulative evidence of barriers across different settings (further validated by this study) suggests that assisted ICT implementation may pose a greater burden on HCWs than previously thought [7]. However, our findings also suggest that strategic investment in targeted implementation strategies has the potential to help overcome these feasibility barriers.

In our own work, these findings affirmed the rationale for and informed the development of the blended learning implementation package tested in our trial [40, 43]. Findings indicated the need for evidence-based training and support to promote HCW capacity to foster facilitating characteristics. Participants discussed the value of “refresher” opportunities in building knowledge, as well as the value of learning from others’ experiences. The blended learning implementation package balances both needs by providing time for HCWs to master ICT knowledge and skills through a combination of asynchronous, digitally delivered content (which allows for continuous review as a “refresher”) and in-person sessions (which allow for sharing, practicing, and feedback). Our findings also highlight the value of flexible referral methods that align with the client’s needs, so our training content includes a detailed description of each referral method process. Further, our training content emphasizes client-centered, non-judgmental counseling, as our findings add to cumulative evidence of stigma as a key barrier to assisted ICT implementation [41].

In addition, participants frequently mentioned informal workarounds currently in use to mitigate barriers or offered ideas for potential solutions. Our blended learning implementation package streamlines these problem-solving processes by offering monthly continuous quality improvement sessions at each facility in our enhanced arm. These sessions allow structured time to discuss identified barriers, share ideas for mitigating them, and develop solutions for sustained process improvement tailored to each facility’s specific setting. Initial focus areas for continuous quality improvement discussions include use of space, staffing, allocation of airtime and vehicles, and documentation, which were identified as barriers to feasibility in the current study.

Our study provides a uniquely in-depth examination of HCWs’ experiences implementing assisted ICT, exploring how barriers can manifest and interact with each other at each step of the process to hinder successful implementation. Further, our study has a highly actionable focus on informing development of implementation strategies to support HCWs implementing assisted ICT. Our study also has limitations. Firstly, while our sole focus on HCWs allowed for deeper exploration of assisted ICT from the perspective of those actually implementing it on the ground, this meant that our analysis did not include perspectives of index or contact clients. In addition, we did not conduct sub-group analyses as interpretation of results would be limited by our small sample size.

Conclusions

Assisted ICT has been widely recognized as an evidence-based intervention with high promise to increase PLHIV status awareness [5, 6, 7, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 26, 27, 28, 29], which is important as countries in eastern and southern Africa strive to reach global UNAIDS targets. Study findings support cumulative evidence that HCWs face a variety of feasibility barriers to assisted ICT implementation in the region; further, the study’s uniquely in-depth focus on the experiences of those doing the “assisting” enhances understanding of how these barriers manifest and informs the development of implementation strategies to mitigate them. Maximizing assisted ICT’s potential to increase HIV testing requires equipping HCWs with effective training and support to address and overcome the many feasibility barriers they face in implementation. Findings demonstrate the need for, and inform the development of, implementation strategies to mitigate barriers and promote facilitators to the feasibility of assisted ICT.

Availability of data and materials

Qualitative data on which this analysis is based, as well as data collection materials and codebooks, are available from the last author upon reasonable request. The interview guide is included as an additional file.

Abbreviations

AIDS: Acquired Immunodeficiency Syndrome

ART: Antiretroviral Therapy

HCW: Health Care Worker

HIV: Human Immunodeficiency Virus

HTS: HIV Testing Services

ICT: Index Case Testing

IDI: In-Depth Interview

IPV: Intimate Partner Violence

IRB: Institutional Review Board

PEPFAR: President’s Emergency Plan for HIV/AIDS Relief

PLHIV: People Living With HIV

UNAIDS: Joint United Nations Programme on HIV/AIDS

WHO: World Health Organization

References

UNAIDS. Prevailing against pandemics by putting people at the centre. Geneva: UNAIDS; 2020.

Frescura L, Godfrey-Faussett P, Feizzadeh AA, El-Sadr W, Syarif O, Ghys PD, et al. Achieving the 95 95 95 targets for all: A pathway to ending AIDS. PLoS One. 2022;17(8):e0272405.

UNAIDS. UNAIDS global AIDS update 2023: The path that ends AIDS. New York: United Nations; 2023.

UNAIDS. UNAIDS data 2023. Geneva: Joint United Nations Programme on HIV/AIDS; 2023.

Kahabuka C, Plotkin M, Christensen A, Brown C, Njozi M, Kisendi R, et al. Addressing the first 90: A highly effective partner notification approach reaches previously undiagnosed sexual partners in Tanzania. AIDS Behav. 2017;21(8):2551–60.

Lasry A, Medley A, Behel S, Mujawar MI, Cain M, Diekman ST, et al. Scaling up testing for human immunodeficiency virus infection among contacts of index patients - 20 countries, 2016–2018. MMWR Morb Mortal Wkly Rep. 2019;68(21):474–7.

Onovo A, Kalaiwo A, Agweye A, Emmanuel G, Keiser O. Diagnosis and case finding according to key partner risk populations of people living with HIV in Nigeria: A retrospective analysis of community-led index partner testing services. EClinicalMedicine. 2022;43:101265.

World Health Organization (WHO). Guidelines on HIV self-testing and partner notification: supplement to Consolidated guidelines on HIV testing services. 2016. https://apps.who.int/iris/bitstream/handle/10665/251655/9789241549868-eng.pdf?sequence=1. Accessed 19 Apr 2024.

Watts H. Why PEPFAR is going all in on partner notification services. 2019. https://programme.ias2019.org/PAGMaterial/PPT/1934_117/Why%20PEPFAR%20is%20all%20in%20for%20PNS%2007192019%20rev.pptx. Accessed 19 Apr 2024.

Dalal S, Johnson C, Fonner V, Kennedy CE, Siegfried N, Figueroa C, et al. Improving HIV test uptake and case finding with assisted partner notification services. AIDS. 2017;31(13):1867–76.

Mathews C, Coetzee N, Zwarenstein M, Lombard C, Guttmacher S, Oxman A, et al. A systematic review of strategies for partner notification for sexually transmitted diseases, including HIV/AIDS. Int J STD AIDS. 2002;13(5):285–300.

Hogben M, McNally T, McPheeters M, Hutchinson AB. The effectiveness of HIV partner counseling and referral services in increasing identification of HIV-positive individuals: a systematic review. Am J Prev Med. 2007;33(2 Suppl):S89-100.

Brown LB, Miller WC, Kamanga G, Nyirenda N, Mmodzi P, Pettifor A, et al. HIV partner notification is effective and feasible in sub-Saharan Africa: opportunities for HIV treatment and prevention. J Acquir Immune Defic Syndr. 2011;56(5):437–42.

Sharma M, Ying R, Tarr G, Barnabas R. Systematic review and meta-analysis of community and facility-based HIV testing to address linkage to care gaps in sub-Saharan Africa. Nature. 2015;528(7580):S77-85.

Edosa M, Merdassa E, Turi E. Acceptance of index case HIV testing and its associated factors among HIV/AIDS Clients on ART follow-up in West Ethiopia: A multi-centered facility-based cross-sectional study. HIV AIDS (Auckl). 2022;14:451–60.

Williams D, MacKellar D, Dlamini M, Byrd J, Dube L, Mndzebele P, et al. HIV testing and ART initiation among partners, family members, and high-risk associates of index clients participating in the CommLink linkage case management program, Eswatini, 2016–2018. PLoS ONE. 2021;16(12):e0261605.

Remera E, Nsanzimana S, Chammartin F, Semakula M, Rwibasira GN, Malamba SS, et al. Brief report: Active HIV case finding in the city of Kigali, Rwanda: Assessment of voluntary assisted partner notification modalities to detect undiagnosed HIV infections. J Acquir Immune Defic Syndr. 2022;89(4):423–7.

Quinn C, Nakyanjo N, Ddaaki W, Burke VM, Hutchinson N, Kagaayi J, et al. HIV partner notification values and preferences among sex workers, fishermen, and mainland community members in Rakai, Uganda: A qualitative study. AIDS Behav. 2018;22(10):3407–16.

Monroe-Wise A, Maingi Mutiti P, Kimani H, Moraa H, Bukusi DE, Farquhar C. Assisted partner notification services for patients receiving HIV care and treatment in an HIV clinic in Nairobi, Kenya: a qualitative assessment of barriers and opportunities for scale-up. J Int AIDS Soc. 2019;22 Suppl 3(Suppl Suppl 3):e25315.

Liu W, Wamuti BM, Owuor M, Lagat H, Kariithi E, Obong’o C, et al. “It is a process” - a qualitative evaluation of provider acceptability of HIV assisted partner services in western Kenya: experiences, challenges, and facilitators. BMC Health Serv Res. 2022;22(1):616.

Myers RS, Feldacker C, Cesar F, Paredes Z, Augusto G, Muluana C, et al. Acceptability and effectiveness of assisted human immunodeficiency virus partner services in Mozambique: Results from a pilot program in a public, urban clinic. Sex Transm Dis. 2016;43(11):690–5.

Rosenberg NE, Mtande TK, Saidi F, Stanley C, Jere E, Paile L, et al. Recruiting male partners for couple HIV testing and counselling in Malawi’s option B+ programme: an unblinded randomised controlled trial. Lancet HIV. 2015;2(11):e483–91.

Mahachi N, Muchedzi A, Tafuma TA, Mawora P, Kariuki L, Semo BW, et al. Sustained high HIV case-finding through index testing and partner notification services: experiences from three provinces in Zimbabwe. J Int AIDS Soc. 2019;22 Suppl 3(Suppl Suppl 3):e25321.

Cherutich P, Golden MR, Wamuti B, Richardson BA, Asbjornsdottir KH, Otieno FA, et al. Assisted partner services for HIV in Kenya: a cluster randomised controlled trial. Lancet HIV. 2017;4(2):e74–82.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

Kamanga G, Brown L, Jawati P, Chiwanda D, Nyirenda N. Maximizing HIV partner notification opportunities for index patients and their sexual partners in Malawi. Malawi Med J. 2015;27(4):140–4.

Rutstein SE, Brown LB, Biddle AK, Wheeler SB, Kamanga G, Mmodzi P, et al. Cost-effectiveness of provider-based HIV partner notification in urban Malawi. Health Policy Plan. 2014;29(1):115–26.

Wamuti BM, Welty T, Nambu W, Chimoun FT, Shields R, Golden MR, et al. Low risk of social harms in an HIV assisted partner services programme in Cameroon. J Int AIDS Soc. 2019;22 Suppl 3(Suppl Suppl 3):e25308.

Henley C, Forgwei G, Welty T, Golden M, Adimora A, Shields R, et al. Scale-up and case-finding effectiveness of an HIV partner services program in Cameroon: an innovative HIV prevention intervention for developing countries. Sex Transm Dis. 2013;40(12):909–14.

Klabbers RE, Muwonge TR, Ayikobua E, Izizinga D, Bassett IV, Kambugu A, et al. Health worker perspectives on barriers and facilitators of assisted partner notification for HIV for refugees and Ugandan nationals: A mixed methods study in West Nile Uganda. AIDS Behav. 2021;25(10):3206–22.

Mugisha N, Tirera F, Coulibaly-Kouyate N, Aguie W, He Y, Kemper K, et al. Implementation process and challenges of index testing in Cote d’Ivoire from healthcare workers’ perspectives. PLoS One. 2023;18(2):e0280623.

Rosenberg NE, Tembo TA, Simon KR, Mollan K, Rutstein SE, Mwapasa V, et al. Development of a Blended Learning Approach to Delivering HIV-Assisted Contact Tracing in Malawi: Applied Theory and Formative Research. JMIR Form Res. 2022;6(4):e32899.

Government of Malawi National Statistical Office. 2018 Malawi population and housing census: main report. 2019. https://malawi.unfpa.org/sites/default/files/resource-pdf/2018%20Malawi%20Population%20and%20Housing%20Census%20Main%20Report%20%281%29.pdf. Accessed 19 Apr 2024.

Wolock TM, Flaxman S, Chimpandule T, Mbiriyawanda S, Jahn A, Nyirenda R, et al. Subnational HIV incidence trends in Malawi: large, heterogeneous declines across space. medRxiv (PREPRINT). 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9915821/. Accessed 19 Apr 2024.

World Health Organization (WHO). Medical doctors (per 10,000). 2020. https://www.who.int/data/gho/data/indicators/indicator-details/GHO/medical-doctors-(per-10-000-population . Accessed 19 Apr 2024.

Flick RJ, Simon KR, Nyirenda R, Namachapa K, Hosseinipour MC, Schooley A, et al. The HIV diagnostic assistant: early findings from a novel HIV testing cadre in Malawi. AIDS. 2019;33(7):1215–24.

Kim MH, Ahmed S, Buck WC, Preidis GA, Hosseinipour MC, Bhalakia A, et al. The Tingathe programme: a pilot intervention using community health workers to create a continuum of care in the prevention of mother to child transmission of HIV (PMTCT) cascade of services in Malawi. J Int AIDS Soc. 2012;15(Suppl 2):17389.

Simon KR, Hartig M, Abrams EJ, Wetzel E, Ahmed S, Chester E, et al. The Tingathe Surge: a multi-strategy approach to accelerate HIV case finding in Malawi. Public Health Action. 2019;9(3):128–34.

Ahmed S, Kim MH, Dave AC, Sabelli R, Kanjelo K, Preidis GA, et al. Improved identification and enrolment into care of HIV-exposed and -infected infants and children following a community health worker intervention in Lilongwe, Malawi. J Int AIDS Soc. 2015;18(1):19305.

Tembo TA, Mollan K, Simon K, Rutstein S, Chitani MJ, Saha PT, et al. Does a blended learning implementation package enhance HIV index case testing in Malawi? A protocol for a cluster randomised controlled trial. BMJ Open. 2024;14(1):e077706.

Nyblade L, Mingkwan P, Stockton MA. Stigma reduction: an essential ingredient to ending AIDS by 2030. Lancet HIV. 2021;8(2):e106–13.

Nyblade L, Stockton M, Nyato D, Wamoyi J. Perceived, anticipated and experienced stigma: exploring manifestations and implications for young people’s sexual and reproductive health and access to care in North-Western Tanzania. Cult Health Sex. 2017;19(10):1092–107.

Tembo TA, Simon KR, Kim MH, Chikoti C, Huffstetler HE, Ahmed S, et al. Pilot-Testing a Blended Learning Package for Health Care Workers to Improve Index Testing Services in Southern Malawi: An Implementation Science Study. J Acquir Immune Defic Syndr. 2021;88(5):470–6.

Acknowledgements

We are grateful to the Malawian health care workers who shared their experiences through in-depth interviews, as well as to the study team members in Malawi and the United States for their contributions.

Funding

Research reported in this publication was funded by the National Institutes of Health (R01 MH124526) with support from the University of North Carolina at Chapel Hill Center for AIDS Research (P30 AI50410) and the Fogarty International Center of the National Institutes of Health (D43 TW010060 and R01 MH115793-04). The funders had no role in trial design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and affiliations

RTI International, Research Triangle Park, NC, USA

Caroline J. Meek

Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Caroline J. Meek, Milenka Jean-Baptiste, Jiayu Wang, Clare Barrington, Vivian F. Go & Nora E. Rosenberg

Kamuzu University of Health Sciences, Blantyre, Malawi

Tiwonge E. Mbeya Munkhondya

Baylor College of Medicine Children’s Foundation, Lilongwe, Malawi

Mtisunge Mphande, Tapiwa A. Tembo, Mike Chitani, Dhrutika Vansia, Caroline Kumbuyo, Katherine R. Simon & Maria H. Kim

Department of Medicine, Division of Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Sarah E. Rutstein

Contributions

TAT, KRS, SER, MHK, VFG, and NER contributed to overall study conceptualization, with CJM, CB, and NER leading conceptualization of the analysis presented in this study. Material preparation and data collection were performed by TEMM, MM, TAT, MC, and CK. Analysis was led by CJM with support from MJB and DV. The first draft of the manuscript was written by CJM with consultation from NER, TEMM, MM, TAT, MJB, and DV. JW provided quantitative analysis support for participant characteristics. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Caroline J. Meek.

Ethics declarations

Ethics approval and consent to participate

Ethical clearance was provided by the Malawi National Health Science Research Committee (NHSRC; #20/06/2566), the University of North Carolina Institutional Review Board (UNC IRB; #20–1810), and the Baylor College of Medicine Institutional Review Board (Baylor IRB; H-48800). The procedures used in this study adhere to the tenets of the Declaration of Helsinki. Written informed consent for participation was obtained from all study participants prior to enrollment in the parent study. Interviewers also engaged in informal verbal discussion of consent immediately ahead of in-depth interviews.

Consent for publication

Not applicable. No identifying information is included in the manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Meek, C.J., Munkhondya, T.E.M., Mphande, M. et al. Examining the feasibility of assisted index case testing for HIV case-finding: a qualitative analysis of barriers and facilitators to implementation in Malawi. BMC Health Serv Res 24, 606 (2024). https://doi.org/10.1186/s12913-024-10988-z

Received : 31 August 2023

Accepted : 12 April 2024

Published : 09 May 2024

DOI : https://doi.org/10.1186/s12913-024-10988-z

Keywords

  • HIV testing and counseling
  • Index case testing
  • Assisted partner notification services
  • Implementation science
  • Health care workers

BMC Health Services Research

ISSN: 1472-6963
