Types of Interviews in Research | Guide & Examples

Published on March 10, 2022 by Tegan George. Revised on June 22, 2023.

An interview is a qualitative research method that relies on asking questions in order to collect data. Interviews involve two or more people, one of whom is the interviewer asking the questions.

There are several types of interviews, often differentiated by their level of structure.

  • Structured interviews have predetermined questions asked in a predetermined order.
  • Unstructured interviews are more free-flowing.
  • Semi-structured interviews fall in between.

Interviews are commonly used in market research, social science, and ethnographic research.

Table of contents

  • What is a structured interview?
  • What is a semi-structured interview?
  • What is an unstructured interview?
  • What is a focus group?
  • Examples of interview questions
  • Advantages and disadvantages of interviews
  • Frequently asked questions about types of interviews

What is a structured interview?

Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.

Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate research biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.

A structured interview is the best choice for you if:

  • You feel very comfortable with your topic. This will help you formulate your questions most effectively.
  • You have limited time or resources. Structured interviews are a bit more straightforward to analyze because of their closed-ended nature, and can be a doable undertaking for an individual.
  • Your research question depends on holding environmental conditions between participants constant.

What is a semi-structured interview?

Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.

Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered “the best of both worlds.”

However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalizability and validity of your results.

A semi-structured interview is the best choice for you if:

  • You have prior interview experience. It’s easier than you think to accidentally ask a leading question when coming up with questions on the fly. Overall, spontaneous questions are much more difficult than they may seem.
  • Your research question is exploratory in nature. The answers you receive can help guide your future research.

What is an unstructured interview?

An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.

Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.

However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.

An unstructured interview is the best choice for you if:

  • You have a solid background in your research topic and have conducted interviews before.
  • Your research question is exploratory in nature, and you are seeking descriptive data that will deepen and contextualize your initial hypotheses.
  • Your research necessitates forming a deeper connection with your participants, encouraging them to feel comfortable revealing their true opinions and emotions.

What is a focus group?

A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behavior, or controversial topics.

Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organize than experiments or large surveys. However, their small size leads to low external validity and the temptation as a researcher to “cherry-pick” responses that fit your hypotheses.

A focus group is the best choice for you if:

  • Your research focuses on the dynamics of group discussion or real-time responses to your topic.
  • Your questions are complex and rooted in feelings, opinions, and perceptions that cannot be answered with a “yes” or “no.”
  • Your topic is exploratory in nature, and you are seeking information that will help you uncover new questions or future research ideas.

Examples of interview questions

Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.

Here are some examples.

Structured

  • Do you like dogs? Yes/No
  • Do you associate dogs with feeling: happy; somewhat happy; neutral; somewhat unhappy; unhappy

Semi-structured

  • If yes, name one attribute of dogs that you like.
  • If no, name one attribute of dogs that you don’t like.

Unstructured

  • What feelings do dogs bring out in you?

Focus group

  • When you think more deeply about this, what experiences would you say your feelings are rooted in?

Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.

However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally leading questions.

Here are some advantages and disadvantages of each type of interview that can help you decide if you’d like to utilize this research method.

Advantages and disadvantages of interviews
Type of interview | Advantages | Disadvantages
Structured interview | Responses are easy to compare; higher reliability and validity | Inflexible; limited in scope and detail
Semi-structured interview | Flexible, yet guided by a thematic framework | Harder to find patterns if questions differ between participants
Unstructured interview | Rich, detailed data; builds rapport | Challenging to conduct; risk of leading questions
Focus group | Nuanced, unfiltered feedback; reveals group dynamics | Low external validity; confidentiality is harder to guarantee, since there are multiple people present

Frequently asked questions about types of interviews

The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by carefully writing high-quality interview questions.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen based on predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

Cite this Scribbr article


George, T. (2023, June 22). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/interviews-research/


The Interview Method In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Interviews involve a conversation with a purpose, but have some distinct features compared to ordinary conversation, such as being scheduled in advance, having an asymmetry in outcome goals between interviewer and interviewee, and often following a question-answer format.

Interviews are different from questionnaires as they involve social interaction. Unlike questionnaire methods, researchers need training in interviewing (which costs money).


How Do Interviews Work?

Researchers can ask different types of questions, generating different types of data. For example, closed questions provide people with a fixed set of responses, whereas open questions allow people to express what they think in their own words.

The researcher will often record interviews, and the data will be written up as a transcript (a written account of interview questions and answers) which can be analyzed later.

It should be noted that interviews may not be the best method for researching sensitive topics (e.g., truancy in schools, discrimination, etc.) as people may feel more comfortable completing a questionnaire in private.

There are different types of interviews, with a key distinction being the extent of structure. Semi-structured is most common in psychology research. Unstructured interviews have a free-flowing style, while structured interviews involve preset questions asked in a particular order.

Structured Interview

A structured interview is a quantitative research method where the interviewer asks a set of prepared closed-ended questions in the form of an interview schedule, which they read out exactly as worded.

Interview schedules have a standardized format, meaning the same questions are asked of each interviewee in the same order (see Fig. 1).


   Figure 1. An example of an interview schedule

The interviewer will not deviate from the interview schedule (except to clarify the meaning of the question) or probe beyond the answers received.  Replies are recorded on a questionnaire, and the order and wording of questions, and sometimes the range of alternative answers, is preset by the researcher.

A structured interview is also known as a formal interview (like a job interview).

Strengths

  • Structured interviews are easy to replicate, as a fixed set of closed questions is used, which is easy to quantify – this means it is easy to test for reliability.
  • Structured interviews are fairly quick to conduct, which means that many interviews can take place within a short amount of time. This means a large sample can be obtained, resulting in findings that are representative and generalizable to a larger population.

Limitations

  • Structured interviews are not flexible. This means new questions cannot be asked impromptu (i.e., during the interview), as an interview schedule must be followed.
  • The answers from structured interviews lack detail as only closed questions are asked, which generates quantitative data . This means a researcher won’t know why a person behaves a certain way.

Unstructured Interview

Unstructured interviews do not use any set questions. Instead, the interviewer asks open-ended questions based on a specific research topic and tries to let the interview flow like a natural conversation. The interviewer modifies their questions to suit the participant’s specific experiences.

Unstructured interviews are sometimes referred to as ‘discovery interviews’ and are more like a ‘guided conversation’ than a strictly structured interview. They are sometimes called informal interviews.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values. Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective points of view.

Interviewer Self-Disclosure

Interviewer self-disclosure involves the interviewer revealing personal information or opinions during the research interview. This may increase rapport but risks changing dynamics away from a focus on facilitating the interviewee’s account.

In unstructured interviews, the informal conversational style may deliberately include elements of interviewer self-disclosure, mirroring ordinary conversation dynamics.

Interviewer self-disclosure risks changing the dynamics away from facilitation of interviewee accounts. It should not be ruled out entirely but requires skillful handling informed by reflection.

  • An informal interviewing style with some interviewer self-disclosure may increase rapport and participant openness. However, it also increases the chance of the participant converging opinions with the interviewer.
  • Complete interviewer neutrality is unlikely. However, excessive informality and self-disclosure risk the interview becoming more of an ordinary conversation and producing consensus accounts.
  • Overly personal disclosures could also be seen as irrelevant and intrusive by participants. They may invite increased intimacy on uncomfortable topics.
  • The safest approach seems to be to avoid interviewer self-disclosures in most cases. Where an informal style is used, disclosures require careful judgment and substantial interviewing experience.
  • If asked for personal opinions during an interview, the interviewer could highlight the defined roles and defer that discussion until after the interview.
Strengths

  • Unstructured interviews are more flexible, as questions can be adapted and changed depending on the respondents’ answers. The interview can deviate from the interview schedule.
  • Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.
  • They also have increased validity because the interviewer has the opportunity to probe for a deeper understanding, ask for clarification, and allow the interviewee to steer the direction of the interview. Interviewers also have the chance to clarify any questions for participants during the interview.
Limitations

  • It can be time-consuming to conduct an unstructured interview and analyze the qualitative data (using methods such as thematic analysis).
  • Employing and training interviewers is expensive, and not as cheap as collecting data via questionnaires. Certain skills may be needed by the interviewer, including the ability to establish rapport and knowing when to probe.
  • Interviews inevitably co-construct data through researchers’ agenda-setting and question-framing. Techniques like open questions provide only limited remedies.

Focus Group Interview

A focus group interview is a qualitative approach in which a group of respondents is interviewed together, used to gain an in-depth understanding of social issues.

This type of interview is often referred to as a focus group because the job of the interviewer (or moderator) is to bring the group to focus on the issue at hand. Initially, the goal was to reach a consensus among the group, but with the development of techniques for analyzing group qualitative data, there is less emphasis on consensus building.

The method aims to obtain data from a purposely selected group of individuals rather than from a statistically representative sample of a broader population.

The role of the interview moderator is to make sure the group interacts with each other and does not drift off-topic. Ideally, the moderator will be similar to the participants in terms of appearance, have adequate knowledge of the topic being discussed, and exercise mild, unobtrusive control over dominant talkers and shy participants.

A researcher must be highly skilled to conduct a focus group interview. For example, the moderator may need certain skills, including the ability to establish rapport and know when to probe.

Strengths

  • Group interviews generate qualitative narrative data through the use of open questions. This allows the respondents to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation. Qualitative data also includes observational data, such as body language and facial expressions.
  • Group responses are helpful when you want to elicit perspectives on a collective experience, encourage diversity of thought, reduce researcher bias, and gather a wider range of contextualized views.
  • They also have increased validity because some participants may feel more comfortable being with others as they are used to talking in groups in real life (i.e., it’s more natural).
  • When participants have common experiences, focus groups allow them to build on each other’s comments to provide richer contextual data representing a wider range of views than individual interviews.
  • Focus groups are a type of group interview method used in market research and consumer psychology that is cost-effective for gathering the views of consumers.
Limitations

  • The researcher must ensure that they keep all the interviewees’ details confidential and respect their privacy. This is difficult when using a group interview. For example, the researcher cannot guarantee that the other people in the group will keep information private.
  • Group interviews are less reliable as they use open questions and may deviate from the interview schedule, making them difficult to repeat.
  • There are some potential pitfalls of focus groups, such as conformity, social desirability, and oppositional behavior, that can reduce the usefulness of the data collected. For example, group interviews may sometimes lack validity, as participants may lie to impress the other group members, or conform to peer pressure and give false answers.

To avoid these pitfalls, the interviewer needs to have a good understanding of how people function in groups as well as how to lead the group in a productive discussion.

Semi-Structured Interview

Semi-structured interviews lie between structured and unstructured interviews. The interviewer prepares a set of the same questions to be answered by all interviewees. Additional questions might be asked during the interview to clarify or expand on certain issues.

In semi-structured interviews, the interviewer has more freedom to digress and probe beyond the answers. The interview guide contains a list of questions and topics that need to be covered during the conversation, usually in a particular order.

Semi-structured interviews are most useful to address the ‘what’, ‘how’, and ‘why’ research questions. Both qualitative and quantitative analyses can be performed on data collected during semi-structured interviews.

Strengths

  • Semi-structured interviews allow respondents to answer more on their own terms in an informal setting, yet provide uniform information, making them ideal for qualitative analysis.
  • The flexible nature of semi-structured interviews allows ideas to be introduced and explored during the interview based on the respondents’ answers.
  • Semi-structured interviews can provide reliable and comparable qualitative data. They allow the interviewer to probe answers, asking the interviewee to clarify or expand on the responses provided.
Limitations

  • The data generated remain fundamentally shaped by the interview context itself. Analysis rarely acknowledges this endemic co-construction.
  • They are more time-consuming (to conduct, transcribe, and analyze) than structured interviews.
  • The quality of findings is more dependent on the individual skills of the interviewer than in structured interviews. Skill is required to probe effectively while avoiding biasing responses.

The Interviewer Effect

Face-to-face interviews raise methodological problems. These stem from the fact that interviewers are themselves role players, and their perceived status may influence the replies of the respondents.

Because an interview is a social interaction, the interviewer’s appearance or behavior may influence the respondent’s answers. This is a problem as it can bias the results of the study and make them invalid.

For example, the gender, ethnicity, body language, age, and social status of the interviewer can all create an interviewer effect. If there is a perceived status disparity between the interviewer and the interviewee, the results of interviews have to be interpreted with care. This is pertinent for sensitive topics such as health.

For example, if a researcher were investigating sexism amongst males, would a female interviewer be preferable to a male one? It is possible that if a female interviewer were used, male participants might lie (i.e., pretend they are not sexist) to impress the interviewer, thus creating an interviewer effect.

Flooding Interviews with the Researcher’s Agenda

The interactional nature of interviews means the researcher fundamentally shapes the discourse, rather than just neutrally collecting it. This shapes what is talked about and how participants can respond.
  • The interviewer’s assumptions, interests, and categories don’t just shape the specific interview questions asked. They also shape the framing, task instructions, recruitment, and ongoing responses/prompts.
  • This flooding of the interview interaction with the researcher’s agenda makes it very difficult to separate out what comes from the participant vs. what is aligned with the interviewer’s concerns.
  • So the participant’s talk ends up being fundamentally shaped by the interviewer rather than being a more natural reflection of the participant’s own orientations or practices.
  • This effect is hard to avoid because interviews inherently involve the researcher setting an agenda. But it does mean the talk extracted may say more about the interview process than the reality it is supposed to reflect.

Interview Design

First, you must choose whether to use a structured or non-structured interview.

Characteristics of Interviewers

Next, you must consider who will be the interviewer, and this will depend on what type of person is being interviewed. There are several variables to consider:

  • Gender and age: This can greatly affect respondents’ answers, particularly on personal issues.
  • Personal characteristics: Some people are easier to get on with than others. Also, the interviewer’s accent and appearance (e.g., clothing) can affect the rapport between the interviewer and interviewee.
  • Language: The interviewer’s language should be appropriate to the vocabulary of the group of people being studied. For example, the researcher must adapt the questions’ language to match the respondents’ social background: age, educational level, social class, ethnicity, etc.
  • Ethnicity: People may have difficulty interviewing people from different ethnic groups.
  • Interviewer expertise: Expertise should match the sensitivity of the research – inexperienced students should avoid interviewing highly vulnerable groups.

Interview Location

The location of a research interview can influence the way in which the interviewer and interviewee relate and may exaggerate a power dynamic in one direction or another. It is usual to offer interviewees a choice of location as part of facilitating their comfort and encouraging participation.

However, the safety of the interviewer is an overriding consideration and, as mentioned, a minimal requirement should be that a responsible person knows where the interviewer has gone and when they are due back.

Remote Interviews

The COVID-19 pandemic necessitated remote interviewing for research continuity. However, online interview platforms provide increased flexibility even under normal conditions.

They enable access to participant groups across geographical distances without travel costs or arrangements. Online interviews can be efficiently scheduled to align with researcher and interviewee availability.

There are practical considerations in setting up remote interviews. Interviewees require internet access and an online platform, such as Zoom, Microsoft Teams, or Skype, through which to connect.

Certain modifications help build initial rapport in the remote format. Allowing time at the start of the interview for casual conversation while testing audio/video quality helps participants settle in. Minor delays can disrupt turn-taking flow, so alerting participants to speak slightly slower than usual minimizes accidental interruptions.

Keeping remote interviews under an hour avoids fatigue from staring at a screen. Seeking ethical clearance in advance for verbal consent at the start of the interview saves participant time. Adapting to the remote context shows care for interviewees and aids rich discussion.

However, it remains important to critically reflect on how removing in-person dynamics may shape the co-created data. Perhaps some nuances of trust and disclosure differ over video.

Vulnerable Groups

The interviewer must ensure that they take special care when interviewing vulnerable groups, such as children. For example, children have a limited attention span, so lengthy interviews should be avoided.

Developing an Interview Schedule

An interview schedule is a list of pre-planned, structured questions that serves as a guide for interviewers, researchers, and investigators in collecting information about a specific topic or issue.

  • List the key themes or topics that must be covered to address your research questions. This will form the basic content.
  • Organize the content logically, such as chronologically following the interviewee’s experiences. Place more sensitive topics later in the interview.
  • Develop the list of content into actual questions and prompts. Carefully word each question – keep them open-ended, non-leading, and focused on examples.
  • Add prompts to remind you to cover areas of interest.
  • Pilot test the interview schedule to check it generates useful data and revise as needed.
  • Be prepared to refine the schedule throughout data collection as you learn which questions work better.
  • Practice skills like asking follow-up questions to get depth and detail. Stay flexible to depart from the schedule when needed.
  • Keep questions brief and clear. Avoid multi-part questions that risk confusing interviewees.
  • Listen actively during interviews to determine which pre-planned questions can be skipped based on information the participant has already provided.
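The steps above could be sketched, loosely, as a small data structure: themes kept in a logical order, sensitive topics placed later, and prompts attached to each question. All theme names, question wordings, and the `order_schedule` helper below are illustrative assumptions, not part of any real tool.

```python
# Hypothetical sketch: an interview schedule as ordered themes with prompts.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    prompts: list = field(default_factory=list)  # reminders to probe for depth

@dataclass
class Theme:
    name: str
    sensitive: bool  # sensitive themes should come later in the interview
    questions: list

def order_schedule(themes):
    """Keep the logical order, but move sensitive themes toward the end.
    Python's sort is stable, so non-sensitive themes keep their order."""
    return sorted(themes, key=lambda t: t.sensitive)

schedule = [
    Theme("Background", False, [Question("Can you tell me about your role?")]),
    Theme("Health history", True, [Question("How has your health changed?",
                                            prompts=["Ask for a specific example"])]),
    Theme("Daily routine", False, [Question("Walk me through a typical day.")]),
]

ordered = order_schedule(schedule)
print([t.name for t in ordered])  # → ['Background', 'Daily routine', 'Health history']
```

Representing the schedule as data rather than free text makes it easy to pilot, reorder, and refine between interviews, which mirrors the iterative revision the steps above recommend.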

The key is balancing preparation with the flexibility to adapt questions based on each interview interaction. With practice, you’ll gain skills to conduct productive interviews that obtain rich qualitative data.

The Power of Silence

Strategic use of silence is a key technique to generate interviewee-led data, but it requires judgment about appropriate timing and duration to maintain mutual understanding.
  • Unlike ordinary conversation, the interviewer aims to facilitate the interviewee’s contribution without interrupting. This often means resisting the urge to speak at the end of the interviewee’s turn construction units (TCUs).
  • Leaving a silence after a TCU encourages the interviewee to provide more material without being led by the interviewer. However, this simple technique requires confidence, as silence can feel socially awkward.
  • Allowing longer silences (e.g., 2–4 seconds) later in interviews can work well, but early on even short silences may disrupt rapport if they cause misalignment between speakers.
  • Silence also allows interviewees time to think before answering. Rushing to re-ask or amend questions can limit responses.
  • Brief backchannels like “mm hm” also avoid interrupting flow. Interruptions, especially to finish an interviewee’s turn, are problematic as they make the ownership of perspectives unclear.
  • If interviewers incorrectly complete turns, an upside is it can produce extended interviewee narratives correcting the record. However, silence would have been better to let interviewees shape their own accounts.

Recording & Transcription

Design choices around recording and engaging closely with transcripts influence analytic insights, as well as practical feasibility. Weighing up relevant tradeoffs is key.
  • Audio recording is standard, but video better captures contextual details, which is useful for some topics/analysis approaches. Participants may find video invasive for sensitive research.
  • Digital formats enable the sharing of anonymized clips. Additional microphones reduce audio issues.
  • Doing all transcription yourself is time-consuming. Outsourcing can save researcher effort but requires confidentiality assurances. Always check outsourced transcripts carefully.
  • Online platform auto-captioning can facilitate rapid analysis, but accuracy limitations mean full transcripts remain ideal. Software cleans up caption file formatting.
  • Verbatim transcripts best capture nuanced meaning, but the level of detail needed depends on the analysis approach. Referring back to recordings is still advisable during analysis.
  • Transcripts and recordings highlight different elements of the interaction. Transcripts make overt disagreements clearer through the wording itself; recordings better convey tone and affiliativeness.
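By way of illustration, the caption clean-up mentioned above can be sketched in a few lines of Python. This is a minimal sketch assuming WebVTT-style caption files; the sample captions and the function name are invented for illustration:

```python
def vtt_to_text(vtt: str) -> str:
    """Strip WebVTT headers, cue numbers and timestamp lines, keeping only speech."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT"
                or line.isdigit()        # cue numbers like "1", "2"
                or "-->" in line):       # timing lines like 00:00:01.000 --> 00:00:04.000
            continue
        kept.append(line)
    return " ".join(kept)

sample = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
So, could you tell me about

2
00:00:04.000 --> 00:00:07.000
your experience of the clinic?
"""

print(vtt_to_text(sample))
# prints: So, could you tell me about your experience of the clinic?
```

Even with clean-up like this, auto-generated captions still need checking against the recording, as the accuracy limitations noted above remain.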

Transcribing Interviews & Focus Groups

Here are the steps for transcribing interviews:
  • Play back audio/video files to develop an overall understanding of the interview
  • Format the transcription document:
  • Add line numbers
  • Separate interviewer questions and interviewee responses
  • Use formatting like bold, italics, etc. to highlight key passages
  • Provide sentence-level clarity in the interviewee’s responses while preserving their authentic voice and word choices
  • Break longer passages into smaller paragraphs to help with coding
  • If translating the interview to another language, use qualified translators and back-translate where possible
  • Select a notation system to indicate pauses, emphasis, laughter, interruptions, etc., and adapt it as needed for your data
  • Insert screenshots, photos, or documents discussed in the interview at the relevant point in the transcript
  • Read through multiple times, revising formatting and notations
  • Double-check the accuracy of transcription against audio/videos
  • De-identify transcript by removing identifying participant details

The goal is to produce a formatted written record of the verbal interview exchange that captures the meaning and highlights important passages ready for the coding process. Careful transcription is the vital first step in analysis.
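Several of the formatting steps above (line numbering, separating speakers, de-identification) can be sketched in Python. This is a minimal illustration rather than a complete transcription workflow; the speaker turns and the name list are invented:

```python
def format_transcript(turns, names_to_redact):
    """Number each turn, label speakers, and mask identifying names."""
    out = []
    for n, (speaker, text) in enumerate(turns, start=1):
        for name in names_to_redact:
            text = text.replace(name, "[name removed]")  # simple de-identification
        out.append(f"{n:03d}  {speaker}: {text}")
    return "\n".join(out)

turns = [
    ("Interviewer", "Can you tell me about brushing your child's teeth?"),
    ("Participant", "Well, Anna usually brushes before bed."),
]

print(format_transcript(turns, names_to_redact=["Anna"]))
# 001  Interviewer: Can you tell me about brushing your child's teeth?
# 002  Participant: Well, [name removed] usually brushes before bed.
```

In practice, simple string replacement will miss nicknames, misspellings and indirect identifiers, so de-identification still requires a careful manual read-through.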

Coding Transcripts

The goal of transcription and coding is to systematically transform interview responses into a set of codes and themes that capture key concepts, experiences and beliefs expressed by participants. Taking care with transcription and coding procedures enhances the validity of qualitative analysis .
  • Read through the transcript multiple times to become immersed in the details
  • Identify manifest/obvious codes and latent/underlying meaning codes
  • Highlight insightful participant quotes that capture key concepts (in vivo codes)
  • Create a codebook to organize and define codes with examples
  • Use an iterative cycle of inductive (data-driven) coding and deductive (theory-driven) coding
  • Refine codebook with clear definitions and examples as you code more transcripts
  • Collaborate with other coders to establish the reliability of codes
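A codebook can be represented as a simple data structure, and deductive keyword matching can provide a rough first pass over data segments before interpretive coding. This is only an illustrative sketch; the codes, definitions, keywords and segments are invented, and keyword matching is no substitute for reading each segment in context:

```python
from collections import Counter

# A toy codebook: code name -> definition plus example keywords (all invented).
codebook = {
    "fear_of_pain": {
        "definition": "Participant expresses fear or anticipation of pain.",
        "keywords": ["pain", "hurt", "scared"],
    },
    "cost_concern": {
        "definition": "Participant raises financial barriers to treatment.",
        "keywords": ["cost", "afford", "expensive"],
    },
}

def keyword_code(segments, codebook):
    """First-pass deductive tagging: count segments matching each code's keywords.

    A crude aid for navigating data; the researcher still codes each
    segment interpretively, in context.
    """
    counts = Counter()
    for seg in segments:
        low = seg.lower()
        for code, spec in codebook.items():
            if any(kw in low for kw in spec["keywords"]):
                counts[code] += 1
    return counts

segments = [
    "I was scared it would hurt, to be honest.",
    "Implants are just too expensive for us.",
]
print(keyword_code(segments, codebook))
# Counter({'fear_of_pain': 1, 'cost_concern': 1})
```

Storing definitions and examples alongside each code, as here, mirrors the codebook refinement step above and helps multiple coders apply codes consistently.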

Ethical Issues

Informed consent.

The participant information sheet must give potential interviewees a clear idea of what taking part in the research involves.

This will include the general topics covered in the interview, where the interview might take place, how long it is expected to last, how it will be recorded, the ways in which participants’ anonymity will be managed, and incentives offered.

It can be considered good practice for informed consent in interview research to involve two distinct stages:

  • Consent to undertake and record the interview; and
  • Consent to use the material in research after the interview has been conducted and the content known, or even after the interviewee has seen a copy of the transcript and has had a chance to remove sections, if desired.

Power and Vulnerability

  • Early feminist views that sensitivity could equalize power differences are likely naive. The interviewer and interviewee inhabit different knowledge spheres and social categories, indicating structural disparities.
  • Power fluctuates within interviews. Researchers rely on participation, yet interviewees control openness and can undermine data collection. Assumptions should be avoided.
  • Interviews on sensitive topics may feel like quasi-counseling. Interviewers must refrain from dual roles, instead supplying support service details to all participants.
  • Interviewees recruited for trauma experiences may reveal more than anticipated. While generating analytic insights, this risks leaving them feeling exposed.
  • Ultimately, power imbalances resist full reconciliation. But reflexively analyzing the operations of power serves to qualify rather than nullify situated qualitative accounts.

Some groups, like those with mental health issues, extreme views, or criminal backgrounds, risk being discredited – treated skeptically by researchers.

This creates tension with qualitative approaches, which often have an empathetic ethos that seeks to center subjective perspectives. Analysis should balance openness to the accounts offered with critical examination of the stakes and motivations behind them.

Potter, J., & Hepburn, A. (2005). Qualitative interviews in psychology: Problems and possibilities. Qualitative Research in Psychology, 2(4), 281–307.

Houtkoop-Steenstra, H. (2000). Interaction and the standardized survey interview: The living questionnaire. Cambridge University Press.

Madill, A. (2011). Interaction in the semi-structured interview: A comparative analysis of the use of and response to indirect complaints. Qualitative Research in Psychology, 8(4), 333–353.

Maryudi, A., & Fisher, M. (2020). The power in the interview: A practical guide for identifying the critical role of actor interests in environment research. Forest and Society, 4(1), 142–150.

O’Key, V., Hugh-Jones, S., & Madill, A. (2009). Recruiting and engaging with people in deprived locales: Interviewing families about their eating patterns. Social Psychological Review, 11(20), 30–35.

Puchta, C., & Potter, J. (2004). Focus group practice. Sage.

Schaeffer, N. C. (1991). Conversation with a purpose— Or conversation? Interaction in the standardized interview. In P. P. Biemer, R. M. Groves, L. E. Lyberg, & N. A. Mathiowetz (Eds.), Measurement errors in surveys (pp. 367–391). Wiley.

Silverman, D. (1973). Interview talk: Bringing off a research instrument. Sociology, 7(1), 31–48.


Harvard Library Research Guides: Library Support for Qualitative Research – Interview Research

General Handbooks and Overviews

  • Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research.  
  • InterViews by Steinar Kvale  Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of the process. After examining the role of the interview in the research process, Steinar Kvale considers some of the key philosophical issues relating to interviewing: the interview as conversation, hermeneutics, phenomenology, concerns about ethics as well as validity, and postmodernism. Having established this framework, the author then analyzes the seven stages of the interview process - from designing a study to writing it up.  
  • Practical Evaluation by Michael Quinn Patton  Surveys different interviewing strategies, from a) informal/conversational, to b) interview guide approach, to c) standardized and open-ended, to d) closed/quantitative. Also discusses strategies for wording questions that are open-ended, clear, sensitive, and neutral, while supporting the speaker. Provides suggestions for probing and maintaining control of the interview process, as well as suggestions for recording and transcription.  
  • The SAGE Handbook of Interview Research by Amir B. Marvasti (Editor); James A. Holstein (Editor); Jaber F. Gubrium (Editor); Karyn D. McKinney (Editor)  The new edition of this landmark volume emphasizes the dynamic, interactional, and reflexive dimensions of the research interview. Contributors highlight the myriad dimensions of complexity that are emerging as researchers increasingly frame the interview as a communicative opportunity as much as a data-gathering format. The book begins with the history and conceptual transformations of the interview, which is followed by chapters that discuss the main components of interview practice. Taken together, the contributions to The SAGE Handbook of Interview Research: The Complexity of the Craft encourage readers simultaneously to learn the frameworks and technologies of interviewing and to reflect on the epistemological foundations of the interview craft.

Qualitative research communities.

  • International Congress of Qualitative Inquiry They host an annual conference at the University of Illinois at Urbana-Champaign, which aims to facilitate the development of qualitative research methods across a wide variety of academic disciplines, among other initiatives.
  • METHODSPACE An online home of the research methods community, where practicing researchers share how to make research easier.
  • Social Research Association, UK The SRA is the membership organisation for social researchers in the UK and beyond. It supports researchers via training, guidance, publications, research ethics, events, branches, and careers.
  • Social Science Research Council The SSRC administers fellowships and research grants that support the innovation and evaluation of new policy solutions. They convene researchers and stakeholders to share evidence-based policy solutions and incubate new research agendas, produce online knowledge platforms and technical reports that catalog research-based policy solutions, and support mentoring programs that broaden problem-solving research opportunities.



  • Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

P. Gill & J. Baillie

British Dental Journal, volume 225, pages 668–672 (2018)


Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.


Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involves the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription; the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the interviewer and participant to see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also help to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1 . The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open-ended questions that elicit in-depth information and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research, where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend them if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; whether participants within each group should know each other; and how many focus groups are needed within a single study. These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), but groups can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, because the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and the use, storage of and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Avoid, therefore, participating, leading, expressing personal opinions or correcting participants' knowledge 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinions are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via an online medium is often more challenging than face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder, should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. The contributions of qualitative research to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. However, digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

1. Gussy M, Dickson-Swift V, Adams J. A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11: 174–179.

2. Burnard P, Gill P, Stewart K, Treasure E, Chadwick B. Analysing and presenting qualitative data. Br Dent J 2008; 204: 429–432.

3. Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204: 291–295.

4. Gill P, Stewart K, Treasure E, Chadwick B. Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204: 371–374.

5. Stewart K, Gill P, Chadwick B, Treasure E. Qualitative research in dentistry. Br Dent J 2008; 204: 235–239.

6. Masood M, Thaliath E, Bower E, Newton J. An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39: 193–203.

7. Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28: 62–68.

8. Macfarlane S, Bucknall T. Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. 7th edition. pp 71–86. Oxford: Wiley Blackwell, 2015.

9. Lee R, Fielding N, Blank G. Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp 3–16. London: Sage Publications, 2016.

10. Creswell J. Qualitative Inquiry and Research Design: Choosing Among Five Designs. Thousand Oaks, CA: Sage, 1998.

11. Guest G, Namey E, Mitchell M. Qualitative research: Defining and designing. In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual for Applied Research. pp 1–40. London: Sage Publications, 2013.

12. Pope C, Mays N. Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311: 42–45.

13. Giddings L, Grant B. A Trojan horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30: 52–60.

14. Hammersley M, Atkinson P. Ethnography: Principles in Practice. London: Routledge, 1995.

15. Oltmann S. Qualitative interviews: A methodological discussion of the interviewer and respondent contexts. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 2016; 17: Art. 15.

16. Patton M. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: Sage, 2002.

17. Wang M, Vinall-Collier K, Csikar J, Douglas G. A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66: 45–51.

18. Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P. Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30: 113–118.

19. Gallagher J, Clarke W, Wilson N. Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12: 89–98.

20. Tod A. Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing. Oxford: Blackwell Publishing, 2006.

21. Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N. A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214: 10.1038/sj.bdj.2012.1178.

22. Farmer J, Peressini S, Lawrence H. Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

23. McElhinney E, Cheater F, Kidd L. Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70: 1267–1275.

24. Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

25. Baillie J, Gill P, Courtenay P. Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

26. Kvale S. Interviews. Thousand Oaks, CA: Sage, 1996.

27. Spradley J. The Ethnographic Interview. New York: Holt, Rinehart and Winston, 1979.

28. Goodman C, Evans C. Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. pp 401–412. Oxford: Wiley Blackwell, 2015.

29. Shaha M, Wenzell J, Hill E. Planning and conducting focus group research with nurses. Nurse Res 2011; 18: 77–87.

30. Wang G, Gao X, Edward C. Public perception of dental implants: a qualitative study. J Dent 2015; 43: 798–805.

31. Bailey E. Contemporary views of dental practitioners on patient safety. Br Dent J 2015; 219: 535–540.

32. Abrams K, Gaiser T. Online Focus Groups. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp 435–450. London: Sage Publications, 2016.

33. Poynter R. The Handbook of Online and Social Media Research. West Sussex: John Wiley & Sons, 2010.

34. Kevern J, Webb C. Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21: 323–333.

35. Kitzinger J, Barbour R. Introduction: The Challenge and Promise of Focus Groups. In Barbour R, Kitzinger J (editors) Developing Focus Group Research. pp 1–20. London: Sage Publications, 1999.

36. Krueger R, Casey M. Focus Groups: A Practical Guide for Applied Research. 4th edition. Thousand Oaks, CA: Sage, 2009.


Author information

Authors and affiliations.

Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University,

Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University,


Corresponding author

Correspondence to P. Gill .

About this article

Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225 , 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Accepted: 02 July 2018

Published: 05 October 2018

Issue date: 12 October 2018



Research Methods Guide: Interview Research


Goals of Interview Research

  • They help you explain, better understand, and explore research subjects' opinions, behaviors, experiences, phenomena, etc.
  • Interview questions are usually open-ended so that in-depth information can be collected.

Mode of Data Collection

There are several types of interviews, including:

  • Face-to-Face
  • Online (e.g. Skype, Google Hangouts, etc.)

FAQ: Conducting Interview Research

What are the important steps involved in interviews?

  • Think about who you will interview
  • Think about what kind of information you want to obtain from interviews
  • Think about why you want to pursue in-depth information around your research topic
  • Introduce yourself and explain the aim of the interview
  • Devise your questions so interviewees can help answer your research question
  • Have a sequence to your questions / topics by grouping them in themes
  • Make sure you can easily move back and forth between questions / topics
  • Make sure your questions are clear and easy to understand
  • Do not ask leading questions
  • Do you want to bring a second interviewer with you?
  • Do you want to bring a notetaker?
  • Do you want to record interviews? If so, do you have time to transcribe interview recordings?
  • Where will you interview people? Where is the setting with the least distraction?
  • How long will each interview take?
  • Do you need to address terms of confidentiality?

Do I have to choose either a survey or interviewing method?

No. In fact, many researchers use a mixed-method approach: interviews can be a useful follow-up with certain survey respondents, for example, to further investigate their responses.

Is training an interviewer important?

Yes. Since the interviewer can control the quality of the results, training the interviewer is crucial. If more than one interviewer is involved in your study, it is important to ensure that every interviewer understands the interviewing procedure and rehearses the interviewing process before beginning the formal study.

Last Updated: Aug 21, 2023

Interviews in Social Research: Advantages and Disadvantages


Last Updated on September 11, 2023 by Karl Thompson

An interview involves an interviewer asking questions verbally to a respondent, and entails a more direct interaction between the researcher and the respondent than a questionnaire does. Interviews can be conducted face to face, or via phone, video link or social media.

This post has primarily been written for students studying the Research Methods aspect of A-level sociology, but it should also be useful for students studying methods for psychology, business studies and maybe other subjects too!

Types of interview

Structured or formal interviews are those in which the interviewer asks the same questions, in the same way, of every respondent. This typically involves reading out questions from a pre-written and pre-coded structured questionnaire, which forms the interview schedule. The most familiar example is market research, where you may have been stopped on the street by a researcher ticking boxes based on your responses.

Unstructured or informal interviews (also called discovery interviews) are more like a guided conversation. Here the interviewer has a list of topics they want the respondent to talk about, but complete freedom to vary the specific questions from respondent to respondent, so they can follow whatever lines of enquiry seem most appropriate, depending on the responses given.

Semi-structured interviews are those in which the interviewer has a list of questions but is free to ask further, differentiated questions based on the responses given. This allows more flexibility than the structured interview, yet more structure than the informal interview.

Group interviews – interviews can be conducted either one to one (individual interviews) or in a group, in which the interviewer interviews two or more respondents at a time. Group discussion among respondents may lead to deeper insight than interviewing people alone, as respondents 'encourage' each other.

Focus groups are a type of group interview in which respondents are asked to discuss certain topics.

Interviews: key terms

The Interview Schedule – a list of questions or topic areas the interviewer wishes to ask or cover in the course of the interview. The more structured the interview, the more rigid the interview schedule will be. Before conducting an interview, the researcher usually knows something about the topic area and the respondents themselves, and so will have at least some idea of the questions they are likely to ask: even in 'unstructured' interviews, an interviewer will have some kind of interview schedule, even if it is just a list of broad topic areas to discuss, or an opening question.

The Strengths and Limitations of Unstructured Interviews 


The strengths of unstructured interviews

The key strength of unstructured interviews is good validity, but for this to happen, questioning should be as open-ended as possible to elicit genuine, spontaneous information rather than 'rehearsed responses', and probing enough to elicit in-depth answers rather than glib, easy ones.

Rapport and empathy – unstructured interviews encourage a good rapport between interviewee and interviewer. Because of their informal nature, like guided conversations, unstructured interviews are more likely to make respondents feel at ease than with the more formal setting of a structured questionnaire or experiment. This should encourage openness, trust and empathy.

Checking understanding – unstructured interviews also allow the interviewer to check understanding. If an interviewee doesn’t understand a question, the interviewer is free to rephrase it, or to ask follow up questions to clarify aspects of answers that were not clear in the first instance.

They are good for finding out why respondents do not do certain things. For example, postal surveys asking why people do not claim benefits get very low response rates, but informal interviews are well suited to researching people who may have low literacy skills.

The Limitations of unstructured interviews

The main theoretical disadvantage is the lack of reliability – unstructured interviews lack reliability because each interview is unique: a variety of different questions are asked, and phrased in a variety of different ways, to different respondents.

We also need to keep in mind that interviews can only tap into what people SAY about their values, beliefs and actions; we don't actually get to see these in action, as we would with observational methods such as participant observation. This has been a particular problem with self-report studies of criminal behaviour. When tested against polygraphs and follow-up studies of school and criminal records, responses were found to be so lacking in validity that victim surveys, rather than self-report studies, have become the standard method for measuring crime.

Sudman and Bradburn (1974) conducted a review of the literature and found that responses varied depending on the relative demographics of the interviewer and respondent. For example, white interviewers received more socially acceptable responses from black respondents than they did from white respondents. Similar effects have been found for ethnicity, age, social class and religion.

Practical disadvantages – unstructured interviews may take a relatively long time to conduct; some can take hours. They also need to be taped and transcribed, and in the analysis phase there may be a lot of information that is not directly relevant to one's research topic and needs to be sifted through.

There are few ethical problems, assuming that informed consent is gained and confidentiality ensured. Having said this, the fact that the researcher is getting more in-depth data, and more of an insight into who the person really is, means the information could do more harm to the respondent if it got into the wrong hands (though this in turn depends on the topics discussed and the exact content of the interviews).

Sociological perspectives on interviews

For Interactionists, interviews are based on mutual participant observation. The context of the interview is intrinsic to understanding responses, and no distinction between research interviews and other social interaction is recognised. Data are valid when mutual understanding between interviewer and respondent is agreed.

Related Posts

For more posts on research methods please see my research methods page.


Research Design Review: A Discussion of Qualitative & Quantitative Research Design

Strengths & Limitations of the In-depth Interview Method: An Overview

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 56-57).


An additional strength of the IDI method is the flexibility of the interview format, which allows the interviewer to tailor the order in which questions are asked, modify the question wording as appropriate, ask follow-up questions to clarify interviewees’ responses, and use indirect questions (e.g., the use of projective techniques ) to stimulate subconscious opinions or recall. It should be noted, however, that “flexibility” does not mean a willy-nilly approach to interviewing, and, indeed, the interviewer should employ quality measures such as those outlined in “Applying a Quality Framework to the In-depth Interview Method.”

A third key strength of the IDI method—analyzability of the data—is a byproduct of the interviewer–interviewee relationship and the depth of interviewing techniques, which produce a granularity in the IDI data that is rich in fine details and serves as the basis for deciphering the narrative within each interview. These details also enable researchers to readily identify where they agree or disagree with the meanings of codes and themes associated with specific responses, which ultimately leads to the identification of themes and connections across interview participants.

Limitations

The IDI method also presents challenges and limitations that deserve the researcher’s attention. The most important, from a Total Quality Framework standpoint, has to do with what is also considered a key strength of the IDI method: the interviewer–interviewee relationship. There are two key aspects of the relationship that can potentially limit (or even undermine) the effectiveness of the IDI method: the interviewer and the social context. The main issue with respect to the interviewer is his/her potential for biasing the information that is gathered. This can happen due to  (a) personal characteristics such as gender, age, race, ethnicity, and education (e.g., a 60-year-old Caucasian male interviewer may stifle or skew responses from young, female, African American participants); (b) personal values or beliefs (e.g., an interviewer with strongly held beliefs about global warming and its damaging impact on the environment may “tune out” or misconstrue the comments from interviewees who believe global warming is a myth); and/or (c) other factors (e.g., an interviewer’s stereotyping, misinterpreting, and/or presumptions about the interviewee based solely on the interviewee’s outward appearance). Any of these characteristics may negatively influence an interviewee’s responses to the researcher’s questions and/or the accuracy of the interviewer’s data gathering. A result of these interviewer effects may be the “difficulty of seeing the people as complex, and . . . a reduction of their humanity to a stereotypical, flat, one-dimensional paradigm” (Krumer-Nevo, 2002, p. 315).

The second key area of concern with the IDI method is related to the broader social context of the relationship, particularly what Kvale (2006) calls the “power dynamics” within the interview environment, characterized by the possibility of “a one-way dialogue” whereby “the interviewer rules the interview” (p. 484). It is important, therefore, for the researcher to carefully consider the social interactions that are integral to the interviewing process and the possible impact these interactions may have on the credibility of an IDI study. For example, the trained interviewer will maximize the social interaction by utilizing positive engagement techniques such as establishing rapport (i.e., being approachable), asking thoughtful questions that indicate the interviewer is listening carefully to the interviewee, and knowing when to stay silent and let the interviewee talk freely.

Krumer-Nevo, M. (2002). The arena of othering: A life-story study with women living in poverty and social marginality. Qualitative Social Work , 1 (3), 303–318.

Kvale, S. (2006). Dominance through interviews and dialogues. Qualitative Inquiry , 12 (3), 480–500.




Chapter 10: Qualitative Data Collection & Analysis Methods

10.7 Strengths and Weaknesses of Qualitative Interviews

As the preceding sections have suggested, qualitative interviews are an excellent way to gather detailed information. Whatever topic is of interest to the researcher can be explored in much greater depth with this method than with almost any other. Not only are participants given the opportunity to elaborate in a way that is not possible with methods such as survey research, but they are also able to share information with researchers in their own words and from their own perspectives, rather than attempting to fit those perspectives into the perhaps limited response options provided by the researcher. Because qualitative interviews are designed to elicit detailed information, they are especially useful when a researcher's aim is to study social processes, or the "how" of various phenomena. Yet another, and sometimes overlooked, benefit of qualitative interviews conducted in person is that researchers can make observations beyond those that a respondent is orally reporting. A respondent's body language, and even her or his choice of time and location for the interview, might provide a researcher with useful data.

As with quantitative survey research, qualitative interviews rely on respondents’ ability to accurately and honestly recall whatever details about their lives, circumstances, thoughts, opinions, or behaviors are being examined. Qualitative interviewing is also time-intensive and can be quite expensive. Creating an interview guide, identifying a sample, and conducting interviews are just the beginning of the process. Transcribing interviews is labor-intensive, even before coding begins. It is also not uncommon to offer respondents some monetary incentive or thank-you for participating, because you are asking for more of the participants’ time than if you had mailed them a questionnaire containing closed-ended questions. Conducting qualitative interviews is not only labor intensive but also emotionally taxing. Researchers embarking on a qualitative interview project with a subject that is sensitive in nature should keep in mind their own abilities to listen to stories that may be difficult to hear.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Advantages and Disadvantages of Interview in Research

Approaching the Respondent- according to the Interviewer’s Manual, the introductory tasks of the interviewer are: telling the respondent who the interviewer is and whom he or she represents, and telling him what the study is about in a way that stimulates his interest. The interviewer must also assure the respondent at this stage that his answers are confidential, tell the respondent how he was chosen, and use letters and clippings of surveys to show the importance of the study to the respondent. The interviewer must be adaptable, friendly, and responsive, and should make the respondent feel at ease to say anything, even if it seems irrelevant.

Dealing with Refusal- there can be plenty of reasons for refusing an interview; for example, a respondent may feel that surveys are a waste of time, or may express anti-government feeling. It is the interviewer’s job to determine the reason for the refusal and attempt to overcome it.

Conducting the Interview- the questions should be asked as worded for all respondents in order to avoid misinterpretation. Clarification of a question should also be avoided for the same reason; however, a question can be repeated in case of misunderstanding. The questions should be asked in the same order as they appear in the questionnaire, as a particular question might not make sense if the questions before it are skipped. The interviewer must be careful to remain neutral throughout the interview so as not to lead the respondent, hence minimizing bias.

There are certain advantages of interview studies, which are noted below:

There are certain disadvantages of interview studies as well, which are:

INTERVIEW AS SOCIAL INTERACTION

The interview is subject to the same rules and regulations as other instances of social interaction. Interview studies are thought to be open to all sorts of bias, inconsistency, and inaccuracy, and hence many researchers are critical of surveys and interviews. T.R. William says that in certain societies there may be patterns of people saying one thing but doing another. He also believes that responses should be interpreted in context and that two social contexts should not be compared to each other. Derek L. Phillips says that the survey method itself can manipulate the data and show results that do not actually exist in the population. Social research becomes very difficult due to the variability in human behavior and attitude. Other errors that can occur in social research include:

Apart from the errors caused by the respondent, there are also certain errors made by the interviewers, which may include:

Bailey, K. (1994). “Interview Studies,” in Methods of Social Research, 4th ed. Simon and Schuster / The Free Press, New York, NY 10020. Ch. 8, pp. 173–213.


Interviews in Research: Advantages and Disadvantages

Interviews are a widely used research method that allows researchers to gather valuable information directly from participants. This article explores the advantages and disadvantages of conducting interviews in research, providing insights into the strengths and weaknesses of this approach.

Advantages of Interviews in Research

1. Rich and In-Depth Data:

Interviews provide researchers with the opportunity to delve deep into a topic and obtain detailed information from participants. Through open-ended questions, researchers can explore various aspects and gain a comprehensive understanding of the subject matter.

2. Flexibility:

Interviews offer flexibility in terms of location, timing, and format. Researchers can choose to conduct interviews face-to-face, over the phone, or even through video conferencing. This flexibility allows for convenience and increases the likelihood of participation.

3. Probing and Clarification:

Unlike other research methods, interviews allow for immediate clarification and probing. Researchers can ask follow-up questions, seek elaboration, or request examples during the interview, ensuring a clearer understanding of the participant’s responses.

4. Personal Connection:

Interviews foster a personal connection between the researcher and the participant. This connection often leads to a greater level of trust, resulting in participants sharing more detailed and honest responses. It also provides an opportunity to observe non-verbal cues, gestures, and emotions that may contribute to the research findings.

5. Adaptability:

Researchers can adapt their interviews based on the participant’s background, knowledge, or cultural context. This adaptability allows for a tailored approach that enhances the quality and relevance of the data obtained.

Advantages | Disadvantages
Rich and in-depth data | Potential for bias
Flexibility | Time-consuming
Probing and clarification | Difficulty in generalizing findings
Personal connection | Interviewer influence
Adaptability | Resource-intensive

Disadvantages of Interviews in Research

1. Potential for Bias:

Interviews may introduce bias as the researcher’s personal presence and interaction can influence the participant’s responses. Researchers must remain impartial and minimize any potential bias or leading questions.

2. Time-consuming:

Conducting interviews can be time-consuming as it requires scheduling, preparation, execution, and transcription of the recorded data. Researchers must allocate ample time and resources to ensure thorough data collection and analysis.

3. Difficulty in Generalizing Findings:

While interviews provide rich and detailed data, it can be challenging to generalize the findings to a larger population. The sample size is often limited, making it difficult to draw broad conclusions from interview-based research.

4. Interviewer Influence:

The presence and behavior of the interviewer may impact the participant’s responses. Participants might alter their answers based on their perception of the researcher’s expectations, potentially leading to skewed or inaccurate data.

5. Resource-Intensive:

Conducting interviews requires significant resources, including time, manpower, and financial investment. Expenses may include travel costs, transcription services, and compensation for participants, making interviews a more resource-intensive research method.

Benefits of Knowing the Interviews in Research Advantages and Disadvantages

Understanding the advantages and disadvantages of interviews in research can significantly benefit researchers in several ways:

  • Improved Research Design: Knowledge of the strengths and limitations of interviews helps researchers design studies that leverage the advantages while mitigating potential drawbacks.
  • Informed Decision-Making: Researchers can make informed choices about when to use interviews as a research method and when to employ other techniques better suited to their objectives.
  • Data Quality Enhancement: Awareness of the disadvantages allows researchers to implement strategies to minimize bias and increase the reliability and validity of the data collected through interviews.
  • Ethical Considerations: Understanding the advantages and disadvantages helps researchers navigate potential ethical dilemmas during the interview process and ensures the protection of participants’ rights and well-being.

In conclusion, interviews offer valuable advantages in research, including rich and in-depth data, flexibility, probing capabilities, personal connection, and adaptability. However, there are also disadvantages to consider, such as the potential for bias, time consumption, difficulty in generalizing findings, interviewer influence, and resource intensiveness. By understanding these advantages and disadvantages, researchers can make more informed decisions, enhance their research methodologies, and ensure the validity and integrity of their findings.


Research-Methodology

Interviews can be defined as a qualitative research technique which involves “conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program or situation.” [1]

There are three different formats of interviews: structured, semi-structured and unstructured.

Structured interviews consist of a series of pre-determined questions that all interviewees answer in the same order. Data analysis usually tends to be more straightforward because the researcher can compare and contrast the different answers given to the same questions.

Unstructured interviews are usually the least reliable from a research viewpoint, because no questions are prepared prior to the interview and data collection is conducted in an informal manner. Unstructured interviews can be associated with a high level of bias, and comparison of answers given by different respondents tends to be difficult due to differences in the formulation of questions.

Semi-structured interviews contain components of both structured and unstructured interviews. In semi-structured interviews, the interviewer prepares the same set of questions to be answered by all interviewees. At the same time, additional questions might be asked during the interview to clarify and/or further expand on certain issues.

Advantages of interviews include the possibility of collecting detailed information about research questions. Moreover, in this type of primary data collection the researcher has direct control over the flow of the process and has a chance to clarify certain issues during the process if needed. Disadvantages, on the other hand, include longer time requirements and difficulties associated with arranging an appropriate time with prospective sample group members to conduct interviews.

When conducting interviews you should keep an open mind and refrain from displaying disagreement in any form when viewpoints expressed by interviewees contradict your own ideas. Moreover, the timing and environment for interviews need to be scheduled effectively. Specifically, interviews need to be conducted in a relaxed environment, free of any form of pressure on interviewees whatsoever.

Respected scholars warn that “in conducting an interview the interviewer should attempt to create a friendly, non-threatening atmosphere. Much as one does with a cover letter, the interviewer should give a brief, casual introduction to the study; stress the importance of the person’s participation; and assure anonymity, or at least confidentiality, when possible.” [2]

There is a risk of interviewee bias during the primary data collection process and this would seriously compromise the validity of the project findings. Some interviewer bias can be avoided by ensuring that the interviewer does not overreact to responses of the interviewee. Other steps that can be taken to help avoid or reduce interviewer bias include having the interviewer dress inconspicuously and appropriately for the environment and holding the interview in a private setting.  [3]

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: A Step-by-Step Assistance, offers practical assistance for completing a dissertation with minimum or no stress. The e-book covers all stages of writing a dissertation, starting from the selection of the research area to submitting the completed version of the work within the deadline. John Dudovskiy


[1] Boyce, C. & Neale, P. (2006) “Conducting in-depth Interviews: A Guide for Designing and Conducting In-Depth Interviews”, Pathfinder International Tool Series

[2] Connaway, L.S. & Powell, R.P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO

[3] Connaway, L.S. & Powell, R.P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO


Best Practices for Reducing Bias in the Interview Process

Ilana Bergelson

Department of Urology, University of Iowa, Iowa City, USA

Elizabeth Takacs

Purpose of Review

Objective measures of residency applicants do not correlate with success within residency. While industry and business utilize standardized interviews with blinding and structured questions, residency programs have yet to uniformly incorporate these techniques. This review focuses on an in-depth evaluation of these practices and how they impact interview formatting and resident selection.

Recent Findings

Structured interviews use standardized questions that are behaviorally or situationally anchored. This requires careful creation of a scoring rubric and interviewer training, ultimately leading to improved interrater agreement and reduced bias compared to traditional interviews. Blinded interviews eliminate further biases, such as halo, horn, and affinity bias. Similar benefits have been seen with multiple interviewers, as in the multiple mini-interview format, which also contributes to increased diversity in programs. These structured formats can be adapted to virtual interviews as well.

There is growing literature that using structured interviews reduces bias, increases diversity, and recruits successful residents. Further research to measure the extent of incorporating this method into residency interviews will be needed in the future.

Introduction

Optimizing the criteria to rank residency applicants is a difficult task. The National Residency Matching Program (NRMP) is designed to be applicant-centric, with the overarching goal to provide favorable outcomes to the applicant while providing opportunity for programs to match high-quality candidates. From a program’s perspective, the NRMP is composed of three phases: the screening of applicants, the interview, and the creation of the rank list. While it is easy to compare candidates based on objective measures, these do not always reflect qualities required to be a successful resident or physician. Prior studies have demonstrated that objective measures such as Alpha Omega Alpha status, United States Medical Licensing Exams (USMLE), and class rank do not correlate with residency performance measures [ 1 ]. Due to the variability of these factors to predict success and recognition of the importance of the non-cognitive traits, most programs place increased emphasis on candidate interviews to assess fit [ 2 ].

Unfortunately, the interview process lacks standardization across residency programs. Industry and business have more standardized interviews and utilize best practices that include blinded interviewers, use of structured questions (situational and/or behavioral anchored questions), and skills testing. Due to residency interview heterogeneity, studies evaluating the interview as a predictor of success have failed to reliably predict who will perform well during residency. Additionally, resident success has many components, such that isolating any one factor, such as the interview, may be problematic and argues for a more holistic approach to resident selection [ 3 ]. Nevertheless, there are multiple ways the application review and interview can be standardized to promote transparency and improve resident selection.

Residency programs have begun adopting best practices from business models for interviewing, which include standardized questions, situational and/or behavioral anchored questions, blinded interviewers, and use of the multiple mini-interview (MMI) model. The focus of this review is to take a more in-depth look at practices that have become standard in business and to review the available data on the impact of these practices in resident selection.

Unstructured Versus Structured Interviews

Unstructured interviews are those in which questions are not set in advance and represent a free-flowing discussion that is conversational in nature. The course of an unstructured interview often depends on the candidate’s replies and may offer opportunities to divert away from topics that are important to applicant selection. While unstructured interviews may involve specific questions such as “tell me about a recent book you read” or “tell me about your research,” the questions do not seek to determine specific applicant attributes and may vary significantly between applicants. Due to their free-form nature, unstructured interviews may be prone to biased or illegal questions. Additionally, due to a lack of a specific scoring rubric, unstructured interviews are open to multiple biases in answer interpretation and as such generally show limited validity [ 4 ]. For the applicant, unstructured interviews allow more freedom to choose a response, with some studies reporting higher interviewee satisfaction with these questions [ 5 ].

In contrast to the unstructured interview, structured interviews use standardized questions that are written prior to an interview, are asked of every candidate, and are scored using an established rubric. Standardized questions may be behaviorally or situationally anchored [ 5 ]. Due to their uniformity, standardized interviews have higher interrater reliability and are less prone to biased or illegal questions.
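The interrater reliability mentioned above can be quantified directly from rubric scores. As a minimal illustration (using hypothetical scores, not data from the studies cited here), the sketch below computes Cohen's kappa, a common chance-corrected agreement statistic for two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of applicants on whom the two raters agree exactly
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal score frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (1 = poor ... 5 = excellent) for ten applicants
rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
rater_b = [3, 4, 2, 4, 3, 4, 1, 3, 5, 2]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.74
```

Kappa near 1 indicates strong agreement beyond chance; values computed before and after interviewer training could show whether training actually improved consistency.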

Behavioral questions ask the candidate to discuss a specific response to a prior experience, which can provide insight into how an applicant may behave in the future [ 5 ]. Not only does the candidate’s response reflect a possible prediction of future behavior, it can also demonstrate the knowledge, priorities, and values of the candidate [ 5 ]. Questions are specifically targeted to reflect qualities the program is searching for (Table 1) [ 5 – 7 ].

Behavioral questions and character traits [ 5 – 7 ]

Behavioral question example | Trait evaluated
Tell me about a time in which you had to use your spoken communication skills to get a point across that was important to you. | Communication, patience
Can you tell me a time during one of your rotations where you needed to take a leadership role in the case workup or care of the patient? How did this occur and what was the outcome? | Drive, determination
Tell us about a time when you made a major mistake. How did you handle it? | Integrity
What is the most difficult experience you have had in medical school? | Recognition of own limitations

Situational questions require an applicant to predict how they would act in a hypothetical situation and are intended to reflect a realistic scenario the applicant may encounter during residency; this can provide insight into priorities and values [ 5 ]. For example, asking what an applicant would do when receiving sole credit for something they worked on with a colleague can provide insight into the integrity of a candidate [ 4 ]. These types of questions can be especially helpful for fellowships, as applicants would already have the clinical experience of residency to draw from [ 5 ].

Using standardized questions provides a method to recruit candidates with characteristics that ultimately correlate to resident success and good performance. Indeed, structured interview scores have demonstrated an ability to predict which students perform better with regard to communication skills, patient care, and professionalism in surgical and non-surgical specialties [ 8 •]. In fields such as radiology, non-cognitive abilities that can be evaluated in behavioral questions, such as conscientiousness or confidence, are thought to critically influence success in residency and even influence cognitive performance [ 1 ]. This has also been demonstrated in obstetrics and gynecology, where studies have shown that resident clinical performance after 1 year had a positive correlation with the rank list percentile that was generated using a structured interview process [ 9 ].

Creating Effective Structured Interviews

To be effective, standardized interview questions should be designed in a methodical manner. The first step in standardizing the interview process is determining which core values predict resident success in a particular program. To that end, educational leaders and faculty within the department should come to a consensus on the main qualities they seek in a resident. From there, questions can be formatted to elicit those traits during the interview process. Some programs have used personality assessment inventories to establish these qualities. Examples include openness to experience, humility, conscientiousness, and honesty. Further program-specific additions can be included, such as potential for success in an urban versus rural environment [ 10 ].

Once key attributes have been chosen and questions have been selected, a scoring rubric can be created. The scoring of each question is important as it helps define what makes a high-performing versus low-performing answer. Once a scoring system is determined, interviewers can be trained to review the questions, score applicant responses, and ensure they do not revise the questions during the interview [ 11 ]. Questions and the grading rubric should be further scrutinized through mock interviews with current residents, including discussing responses of the mock interviewee and modifying the questions and rubric prior to formal implementation [ 12 ]. Interviewer training itself is critical, as adequate training leads to improved interrater agreements [ 13 ]. Figure  1 demonstrates the steps to develop a behavioral interview question.

[Figure 1: Example of a standardized question to evaluate communication, with scoring criteria]

Rating the responses of the applicants can come with errors that ultimately reduce validity. For example, central tendency error involves interviewers not rating students at the extremes of a scale but rather placing all applicants in the middle; leniency versus severity refers to interviewers who either give all applicants high marks or give everyone low marks; contrast effects involve comparing one applicant to another rather than solely focusing on the rubric for each interviewee. These rating errors reflect the importance of training and providing feedback to interviewers [ 4 ].
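As a rough illustration of how such rating errors might be screened for when giving interviewers feedback, the sketch below flags score distributions suggestive of central tendency, leniency, or severity. The scores and thresholds are hypothetical, not criteria from the cited literature:

```python
import statistics

def flag_rating_errors(scores, scale_min=1, scale_max=5):
    """Heuristic screen for common rating errors in one interviewer's rubric scores."""
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores)
    midpoint = (scale_min + scale_max) / 2
    flags = []
    if spread < 0.5:  # arbitrary threshold: almost no use of the scale's range
        flags.append("central tendency (scores cluster tightly)")
    if mean > midpoint + 1:
        flags.append("leniency (scores run high)")
    if mean < midpoint - 1:
        flags.append("severity (scores run low)")
    return flags

print(flag_rating_errors([3, 3, 3, 3, 3]))  # every applicant rated at the midpoint
print(flag_rating_errors([5, 4, 5, 3, 5]))  # scores consistently near the top
```

Flags like these would only prompt a closer look and feedback, since a skewed distribution can also reflect a genuinely strong or weak applicant pool.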

Blinded Interviewers

Blinding the interviewers to the application prior to meeting with a candidate is intended to eliminate various biases within the interview process (Table 2) [ 14 , 15 ]. In addition to grades and test scores, aspects of the application that can either introduce or exacerbate bias include photographs, demographics, letters of recommendation, selection to medical honor societies, and even hobbies. Impressions of candidates can be formed prematurely, with the interview then serving to simply confirm (or contradict) those impressions [ 16 •]. Importantly, application blinding may also decrease implicit bias against applicants who identify as underrepresented in medicine [ 17 ].

Examples of bias [ 14 , 15 ]

Type of bias | Definition
Halo | Taking someone’s positive characteristic and ignoring any other information that may contradict this positive perception
Horn | Taking someone’s negative characteristic and ignoring any other information that may contradict this negative perception
Affinity | Increased affinity with those who have shared experiences, such as hometown or education
Conformity | When the view of the majority can push one individual to also feel similarly about a candidate, regardless of whether this reflects their true feelings; can occur when there are multiple interviewers on one panel
Confirmation | Making an initial opinion and then looking for specific information to support that opinion

Despite the proven success of these various interview tactics, their use in resident selection remains limited, with only 5% of general surgery programs using standardized interview questions and less than 20% using even a limited amount of blinding (e.g., blinding of photograph) [ 2 ]. Some programs have continued to rely on unblinded interviews and prioritize USMLE scores and course grades in ranking [ 18 ]. Due to their potential benefits and ability to standardize the interview process, it is critical that programs become familiar with the various interview practices so that they can select the best applicants while minimizing the significant bias in traditional interview formats.

Multiple Mini-interview (MMI)

The use of multiple interviews by multiple interviewers provides an opportunity to ask the applicant more varied questions and also allows for the averaging out of potential interviewer bias leading to more consistent applicant scoring and ability to predict applicant success [ 7 ]. Training of the interviewers in interviewing techniques, scoring, and avoiding bias is also likely to decrease scoring variability. Similarly, the use of the same group of interviewers for all candidates should be encouraged in order to limit variance in scoring amongst certain faculty [ 19 ].
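The averaging effect described above can be shown numerically. In the minimal sketch below (with hypothetical station scores, not data from any cited study), one unusually generous station score shifts an applicant's mean far less than it would shift a single-interviewer rating:

```python
import statistics

# Hypothetical MMI results: each applicant is rated (1-10) at six stations,
# each staffed by a different interviewer.
applicants = {
    "A": [7, 8, 6, 7, 9, 7],
    "B": [5, 6, 9, 5, 6, 5],  # one outlier score of 9 among otherwise 5-6 ratings
    "C": [8, 8, 7, 8, 8, 7],
}

# Averaging across stations dilutes any single interviewer's bias: applicant B's
# outlier raises the mean by well under one point, whereas in a one-interviewer
# format that same rater would have set B's entire score.
for name, scores in applicants.items():
    print(name, round(statistics.mean(scores), 2))
```

The same logic extends to using a consistent panel of trained interviewers for all candidates, since the variance contributed by each rater is both reduced and spread evenly across the applicant pool.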

One interview method that incorporates multiple interviewers and has had growing frequency in medical school interviews as well as residency interviews is the MMI model. This system provides multiple interviews in the form of 6–12 stations, each of which evaluates a non-medical question designed to assess specific non-academic applicant qualities [ 20 ]. While the MMI format can intimidate some candidates, others find that it provides an opportunity to demonstrate traits that would not be observed in an unstructured interview, such as multitasking, efficiency, flexibility, interpersonal skills, and ethical decision-making [ 21 ]. Furthermore, the MMI has demonstrated increased reliability: in a study of five California medical schools, inter-interviewer consistency was higher for MMIs than for traditional interviews, which were unstructured and had a 1:1 ratio of interviewer to applicant [ 22 ].

The MMI format is also versatile enough to incorporate technical competencies even through a virtual platform. In general surgery interviews, MMI platforms have been designed to test traits such as communication and empathy but also clinical knowledge and surgical aptitude through anatomy questions and surgical skills (knot tying and suturing). Thus, MMIs are not only versatile, but also have an ability to evaluate cognitive traits and practical skills [ 23 ].

MMI also has the potential to reduce resident attrition. For example, in evaluating students applying to midwifery programs in Australia, attrition rates and grades were compared for admitted students using academic rank and MMI scores obtained before and after the incorporation of MMIs into their selection program. The authors found that when using MMIs, enrolled students had not only higher grades but significantly lower attrition rates. MMI was better suited to show applicants’ passion and commitment, which then led to similar mindsets of accepted applicants as well as a support network [ 24 ]. Furthermore, attrition rates have been found to be higher in female residents in general surgery programs [ 25 ]. Perhaps with greater diversity, which is associated with use of standardized interviews, the number of women can increase in surgical specialties and thus reduce attrition rate in this setting as well.

Impact of Interview Best Practices on Bias and Diversity

An imperative of all training programs is to produce a cohort of physicians with broad and diverse experiences representative of the patient populations they treat. To better address diversity within surgical residencies, particularly regarding women and those who are underrepresented in medicine, it is important that interviews be designed to minimize bias against any one portion of the applicant pool. Diverse backgrounds and cultures within a program enhance research, innovation, and collaboration as well as benefit patients [ 26 ]. Patients have shown greater satisfaction and reception when they share ethnicity or background with their provider, and underrepresented minorities in medicine often go on to work in underserved communities [ 27 ].

All interviewers undoubtedly have elements of implicit bias; Table 2 describes the common subtypes of implicit bias [ 14 ]. While it is difficult to eliminate bias in the interview process, unstructured or “traditional” interviews are more likely to risk bias toward candidates than structured interviews. Studies have demonstrated that Hispanic and Black applicants receive scores one quarter of a standard deviation lower than Caucasian applicants [ 28 ]. “Like me” bias is just one example of increased subjectivity with unstructured interviews, where interviewers prefer candidates who may look like, speak like, or share personal experiences with the interviewer [ 29 ].

Furthermore, unstructured interviews provide opportunities to ask inappropriate or illegal questions, including those that center on religion, child planning, and sexual orientation [ 30 ]. Inappropriate questions tend to be disproportionately directed toward certain groups, with women more likely to get questions regarding marital status and to be questioned and interrupted than male counterparts [ 28 , 31 ].

Structured interviews, conversely, have been shown to decrease bias in the application process. Faculty trained in behavior-based interviews for fellowship applications demonstrated that there were reduced racial biases in candidate evaluations due to scoring rubrics [ 12 ]. Furthermore, as structured questions are determined prior to the interview and involve training of interviewers, structured interviews are less prone to illegal and inappropriate questions [ 32 ]. Interviewers can ask additional questions such as “could you be more specific?” with the caveat that probing should be minimized and kept consistent between applications. This way the risk of prompting the applicant toward a response is reduced [ 4 ].

Implementing Interview Types During the Virtual Interview Process

An added complexity to creating standardized interviews is incorporating a virtual platform. Even before the move to virtual interviews forced by the COVID-19 pandemic, studies showed that virtual interviews offer several advantages over in-person interviews, including lower cost, less time away from commitments for applicants and staff, and the ability to interview at more programs. A significant limitation, for applicants and programs alike, is the loss of informal interaction, which lets applicants evaluate the environment of the hospital and the surrounding community [ 33 •]. Following their abrupt implementation in 2020, virtual interviews have persisted and will likely remain in some form given their benefits in reducing applicant cost and improving interview efficiency. Although these interviews are in their relative infancy in the resident selection process, studies have found that the standardized questions and scoring rubrics used in person can be applied to a virtual setting without degrading interview quality [ 34 ].

The virtual format may also allow further innovation in the form of standardized video interviews. For medical student applicants, the Association of American Medical Colleges (AAMC) trialed a standardized video interview (SVI) in which applicant responses were recorded, scored, and released to the Electronic Residency Application Service (ERAS) application. Though early pilot data were promising, the program was discontinued after the 2020 cycle due to lack of interest [ 35 ]. There is limited evidence supporting this type of interview in residency selection; one study found that the interviews added little benefit, as the scores did not associate with other candidate attributes such as professionalism [ 32 ]. Similarly, a separate study found no correlation between standardized video interviews and faculty scores on traits such as communication and professionalism, though the faculty questions were not standardized and the faculty were not blinded to applicants’ academic performance [ 36 ]. An evaluation of six emergency medicine programs did demonstrate a positive linear correlation between the SVI score and the traditional interview score, but the correlation coefficient was very low, and the authors concluded that the SVI could not replace the interview itself [ 37 ].

Conclusions: Future Steps in Urology and Beyond

The shift to structured interviews in urology has been slow. Within the last decade, studies consistent with other specialties demonstrated that urology program directors prioritized USMLE scores, reference letters, and away rotations at the program director’s institution as the key factors in choosing applicants [ 38 ]. More recently, a survey of urology programs found < 10% blinded the recruitment team at the screening step, with < 20% blinding the recruitment team during the interview itself [ 39 ]. In 2020 our program began using structured interview questions and blinded interviewers to all but the personal statement and letters of recommendation. After querying faculty and interviewees, we have found that most interviewers do not miss the additional information, and applicants feel that they are able to have more eye contact with faculty who are not looking down at the application during the interview. Structured behavioral interview questions have allowed us to focus on the key attributes important to our program. With time we hope to see that inclusion of these metrics helps diversify our resident cohort, improve resident satisfaction with the training program, and produce successful future urologists.

Despite the slow transition in urology and other fields, there is a growing body of literature in support of standardized interviews for evaluating key candidate traits that ultimately lead to resident success and reducing bias while increasing diversity. With time, the hope is that programs will continue incorporating these types of interviews in the resident selection process.

Compliance with Ethical Standards

The authors have no financial or non-financial interests to disclose.

This article does not contain any studies with human or animal subjects performed by any of the authors.

This article is part of Topical Collection on Education

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Papers of particular interest, published recently, have been highlighted as: • Of importance

Qualitative Interview Pros and Cons

Interviews with members and nonmembers can help tell the story behind your quantitative research data, but only if done right. Find out how to make interviews effective and what pitfalls to avoid.

The proliferation of cheap, high-quality online survey tools has revolutionized our ability to conduct surveys to obtain quick snapshots of what our members are thinking. For all but the most complex projects, it is possible to begin and conclude a well-defined study of a subject of interest with the participation of a representative group of members within a 10-day timeframe. Perhaps more than any other development, this technology has made it possible for us to become "data driven" associations, as was put forth in 7 Measures of Success: What Remarkable Associations Do That Others Don't .

At the same time, we need to balance this easy source of quantitative data with a similar easy source of qualitative data. Why? Survey tools do a great job of providing us with definitive numbers and visuals to help tell our story, to ensure that our colleagues or committees understand and buy into our key findings. Databases also provide much better snapshots in the form of statistical reports and charts to help us document usage and sales baselines and trends. But to obtain greater insight into what actually determines these levels and drives changes, we often need to go deeper and directly engage representative members and customers in two-way dialogue. 

Structure and Method for Member Interviews

There are entire books regarding methodology for qualitative interviews, but as associations we often benefit by simplifying them considerably. Here are some suggestions:

Conduct only telephone interviews (rather than face-to-face), prescheduling from a small random sample of the members or other targeted constituency. This allows them to speak at their convenience, and speaking to them at home or on a cellphone rarely undermines the quality of conversation.

Prepare a discussion guide in advance. Rather than treat this as an agenda or survey form, keep it broad and flexible. After all, an interview is two-way communication. The majority of each interview will probably consist of follow-up questions to probe initial responses more deeply.

Use (or be) good interviewers. Effective interviewers (and facilitators) are friendly and open, and they know how to probe effectively. Through active listening, surface level discussions rapidly give way to deeper motivations, and if the interviewer can demonstrate objectivity and candor, he or she can quickly establish a trusting relationship in the interview. You should welcome digressions, and don't worry if every interview is unique. The end product of aggregating all the interviews will be far more robust as a result.

Guarantee confidentiality. Assure participants that no individual information or attribution will be released to others in transcripts or written reports.

Allow interviews to run long. Even with the shortest guides and most focused of objectives, we often find that interviews run 30 minutes or more. Members rarely get a chance to speak directly with their association. A member who begins an interview emphasizing their time constraints inevitably is the one who will speak the longest.

Don't do too many interviews. Since time is money, structure the interviews as a discrete project, with a limited number of conversations. For any specific topic, we find most issues converge within 10 to 15 interviews—that is, we begin to hear repeated comments and similar thinking so we are not learning much new information from each new conversation. As with all qualitative research, we are generally not trying to establish or force a consensus; instead we want to hear the widest range of perspectives possible and understand why members feel that way.
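This convergence heuristic can even be made concrete. A minimal sketch, assuming each interview is reduced to a set of coded themes and treating three consecutive interviews with nothing new as the stopping signal (both assumptions are ours, not the article's):

```python
# Hypothetical sketch: detecting thematic saturation across interviews.
# "Themes" stand in for the coded comments an analyst extracts per call.

def saturation_point(theme_sets, quiet_run=3):
    """Return the 1-based index of the interview that completes
    `quiet_run` consecutive interviews with no new theme, or None."""
    seen, run = set(), 0
    for i, themes in enumerate(theme_sets, start=1):
        new = set(themes) - seen   # themes not heard in any earlier call
        seen |= set(themes)
        run = 0 if new else run + 1
        if run == quiet_run:
            return i
    return None

interviews = [
    {"price", "support"}, {"price", "docs"}, {"events"},
    {"price"}, {"docs"}, {"support"},  # three calls add nothing new
    {"price"},
]
print(saturation_point(interviews))  # prints 6
```

In this toy run the sixth interview is where the last three calls have only repeated earlier themes, matching the "10 to 15 interviews" experience described above at a smaller scale.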

Strengths of Interviews

Often we think of focus groups when considering qualitative research. Group dynamics are sometimes important to measure, and focus groups have also migrated online to a certain extent, but there are several reasons why in-depth interviews are superior.

Relative absence of bias. Interviews generally have less observer or participant bias. Even a trained moderator will encounter subtle bias in membership focus groups. In associations, people often know each other, which can lead to conscious or subconscious posturing or suppression of some comments. Groups may seem to have homogeneous participants, yet some factor differentiates them once they are in the room. For example, while discussing a service, two people who have had negative experiences may be overly eager to share. If they speak first, they can color the perceptions of those who speak later and have had no experiences, or only positive ones. In interviews, the member is rarely trying to impress the interviewer except by trying to be as articulate and well understood as possible.

Built-in flexibility. Although you lose some rapport and communication through phone contact, it is far more cost-effective, allowing you to conduct interviews back-to-back efficiently and to let members who miss an appointment call back at their convenience. Too often we are constrained in focus groups by having members gathered at a conference or in their local area, which yields a sampling of only our most motivated “super-users” and cognoscenti, or group dynamics reflecting participants who are very familiar with one another.

"Feed" your survey. Often we design surveys based on our assumptions regarding what matters, drawn from internal management perspectives, questions from the last survey, or good ideas a consultant brought in. However, it is harder to get a candid take on current issues that are of greatest concern to members. Conducting interviews as part of the process of designing the survey helps provide timely, titillating observations, unproven hypotheses, and possible hidden connections between attitudes and behavior that you will want to quantify in the survey work.

Enough talk time for members. A 90-minute focus group allows each participant to speak perhaps eight to 10 minutes. Online surveys generally take between five and 15 minutes. Many members may have only a few minutes of thought to share, but for subjects that warrant more in-depth discussion and a clear understanding of their background, a 20- to 30-minute period for one person's feedback is more appropriate. Interviews lose a group dynamic, but they also spare interviewees from spending time listening to others—helpful particularly if your members have type A personalities and tend to equate "listening" with "waiting to speak again." (Yes, we all have many of them in our databases!)

Candor and intimacy. Even if you have never spoken to members regarding their inner feelings, don't worry—they will make it easy for you. Often members are flattered to be asked. They make the time to speak with you and they reward you with candor. Sometimes you may not like what you hear, but the more the interviewer plays the role of objective outsider, the better the process will be. As a market research director, I often introduced myself as "acting as an independent researcher today" and that's often all you need in order to pull yourself out of the equation and to put the focus of conversation where it belongs—on the member or customer you're interviewing.

Low-cost and easy interpretation. Even surveys that are easy to administer online require some statistical knowledge to properly interpret. To conduct and analyze, interviews require a finger to dial, an ear to listen, a telephone, and a keyboard or notepad. Like surveys today, interviews can launch in real time, and it is easy to share top-line reports in a day for time-sensitive projects.

Weaknesses of Interviews

Of course, interviews also have inherent weaknesses. These are a few of their  limitations:

Missing objectivity. There is a potential for observer bias in just about all qualitative research. If the people conducting the interviews are staff or service providers who can't maintain a strong sense of objectivity inside and out, the interviewee will pull their punches and not tell the whole truth, or the interpretation of the end results starts to resemble a process of hearing what you want to hear. Be on the lookout for what can be an almost subliminal bias.

Negative reactions. I often found that associations need to be prepared to accept what they hear. Not all of it is pleasant. The kneejerk reaction to negative feedback often can be outright rejection—a belief that the method just wasn't reliable enough. This may be true, but it is important to balance a sudden keen interest in valid methodology with an urgent need to cover one's backside. We are often politically sensitive and very PC, and when interviewees take advantage of glasnost to say exactly what they feel, it can be jarring. Sometimes you will need to smooth off the rough edges and edit the unadulterated stream of feedback, unless you are a big fan of Impromptu Job Loss or like being perceived as a traitor when you're only the messenger.

Open-endedness. Digressions and lack of standardization across interviews can be a good or a bad thing. When you try to make interviews "sum up" to a consensus or quantify them, you'll be disappointed. To push for consensus is to force interviews to do something they don't do well. It is best to accept this limitation, even to the point of managing your interviewee's expectations upfront. Sometimes an interviewee will refer to our "phone survey" and we gently correct them, since survey implies a rigid format. Interviews often yield digressions into arcane specialties, heretical opinions, conspiracy theories, and wildly inventive suggestions that each represent a unique viewpoint.

Subject to these caveats, qualitative interviews can be a valuable tool to help inform most association problems. Like some people we know, the feedback we receive may be amorphous, messy, and sometimes contradictory. However, regular use of the method can improve member and customer relations and provide a critical additional source of intelligence that we rarely obtain otherwise.

Editor’s Note: This article, originally published in 2009, has been updated.

Kevin Whorton is principal of Whorton Marketing & Research in Silver Spring, Maryland.

Email: [email protected]                       


Your Article Library

The Interview Method: Advantages and Limitations | Social Research

After reading this article you will learn about the advantages and disadvantages of the interview method of conducting social research.

Advantages of the Interview Method :

(1) Personal interviews, compared especially to questionnaires, usually yield a high percentage of returns.

(2) The interview method can be made to yield an almost perfect sample of the general population because practically everyone can be reached by and can respond to this approach. It will be remembered that the questionnaire approach is severely limited by the fact that only the literate persons can be covered by it.

Again, the observational approach is also subject to limitations because many things or facts cannot be observed on the spot.

(3) The information secured through interviews is likely to be more correct compared to that secured through other techniques. The interviewer who is present on the spot can clear up the seemingly inaccurate or irrelevant answers by explaining the questions to the informant. If the informant deliberately falsifies replies, the interviewer is able to effectively check them and use special devices to verify the replies.

(4) The interviewer can collect supplementary information about the informant’s personal characteristics and environment which is often of great value in interpreting results. Interview is a much more flexible approach, allowing for posing of new questions or check-questions if such a need arises.

Its flexibility makes the interview a superior technique for the exploration of areas where there is little basis for knowing what questions to ask and how to formulate them.

(5) In as much as the interviewer is present on the spot, he can observe the facial expressions and gestures etc., of the informants as also the existing pressures obtaining in the interview-situation. The facility of such observations helps the interviewer to evaluate the meaning of the verbal replies given by informants.

For example, hesitation, particular inhibitive reactions etc., may give rise to certain doubts about the reliability of the responses and the interviewer may then ask indirect questions to verify his doubts.

(6) Scoring and test-devices can be used, the interviewer acting as experimenter. At the same time, visual stimuli to which the informant may react can be presented.

(7) The use of the interview method ensures a greater number of usable returns compared to other methods. Return visits to complete items on the schedule or to correct mistakes can usually be made without annoying the informant.

(8) The interviewer can usually control which person or persons will answer the questions. This is not possible in the mailed questionnaire approach. If so desired and warranted, group discussions may also be held.

(9) A personal interview may take long enough to allow the informant to become oriented to the topic under investigation. Thus, recall of relevant information is facilitated. The informant can be made to devote more time if, as is the case, the interviewer is present on the spot to elicit and record the information.

The interviewer’s presence is a double-edged sword, the advantageous aspect being that face-to-face contact provides enough stimulation for the respondent to probe deeper within himself. As we have suggested, the interviewer acts as a catalyst.

(10) The interviewer may catch the informant off his guard and thus secure more spontaneous reactions than would be the case if a mailed questionnaire were used.

(11) The interview method allows for many facilities which aid on-the-spot adjustments and thus ensure rich response material. For example, the interviewer can carefully sandwich in the questions about which the informant is likely to be sensitive.

The interviewer can also change the subject by observing informant’s reactions or give explanations if the interviewee needs them. In other words, a delicate situation can usually be handled more effectively by personal interview method.

(12) The language of the interview can be adapted to the ability or educational level of the person interviewed. Therefore, it is comparatively easy to avoid misinterpretations or misleading questions.

(13) The interview is a more appropriate technique for revealing information about complex, emotionally-laden subjects or for probing the sentiments underlying an expressed opinion.

Major Limitations of the Interview Method :

(1) In terms of cost, energy, and time, the interview approach poses a heavy demand. The transportation cost and the time required to cover addresses in a large area, as also the possibility of non-availability or ‘not at home’ responses, may make the interview method uneconomical and often inoperable.

(2) The efficacy of interviews depends on a thorough training and skill of interviewers as also on a rigorous supervision over them. Failing this, data recorded may be inaccurate and incomplete.

(3) The human equation may distort the returns. If an interviewer has a certain bias, he may unconsciously devise questions so as to secure confirmation of his views.

(4) The presence of the interviewer on the spot may overstimulate the respondent, sometimes even to the extent that he may give imaginary information just to make it interesting. He may tell things about which he may not himself be very sure.

He may also get emotionally involved with the interviewer and give answers that he anticipates would please the interviewer. It is also possible that the interviewer’s presence may inhibit free responses because there is no anonymity. The respondent may hesitate to give correct answers for the fear that it would adversely affect his image. Some fear of this information being used against him may grip him.

(5) In the interview method, the organization required for selecting, training and supervising a field staff is more complex.

(6) It is the usual experience that costs per interview are higher when field investigators are employed. This is especially so when the area to be covered is widely spread out.

(7) The personal interview usually takes more time. Sometimes, the interview lasts for hours on end and the interviewer cannot check the free flow of the respondent’s replies for fear that it may disrupt the ‘rapport.’ Added to this is the time spent for journeys to and fro to the addresses and the possibility of not always being able to meet them.

(8) Effective interview presupposes proper rapport with the respondent and controlling of interview atmosphere in a manner that would facilitate free and frank responses. This is often a very difficult requirement, it needs time, skills and often resources.

Secondly, it is not always possible for the interviewer to judge whether the interview atmosphere is how it should ideally be and whether or not ‘rapport’ has been established.

(9) Interviewing may also introduce systematic errors. For example, if the interviews are conducted at their homes during the day, a majority of informants will be housewives. Now if the information is to be obtained from the male members, most of the field-work will have to be done in the evening or on holidays. If this be the case, only a few hours can be used per week for interviewing.

(10) Many actions human beings carry out are not easily verbalized, but easily observed. Through observation a social process may be followed as it develops. Verbal techniques such as interview may give valuable reports, but post hoc, unless one is dealing with rather unusual respondents capable of acting and being interviewed at the same time.

Let us now consider some of the prerequisites that ensure successful interviewing. The quality of interviewing depends, firstly, on a proper study-design. It should be noted that even the most skilled interviewer will not be able to collect valid and useful data if the schedule of questions is inadequate or unrelated to the objectives of the research.

If a well-designed, standardized schedule can elicit the required information, a staff of ordinary men and women, properly selected, and trained, can serve well enough.

Within the limits of a study-design, there is some room for the art of interviewing to come into play. Interviewing is an art governed by certain scientific principles. The interviewer’s art consists in creating a situation wherein the respondents’ answers will be reliable and valid.

This ideal requires a permissive situation in which the respondent is encouraged to voice his frank opinion without the fear of his attitudes being revealed to others.

The basic requirement of successful interviewing, understandably, is to create a friendly atmosphere; one of trust and confidence that will put the respondent at ease. Through subsequent stages, the interviewer’s art consists in asking questions properly and intelligently, in obtaining a valid and meaningful response, and in recording the responses accurately and completely.

Let us consider how the interviewer can create a friendly interview atmosphere. It is the interviewer’s approach that really does the trick. The interviewer should introduce himself briefly and explain clearly the purpose of his study.

Interviewer’s approach should be positive. His aim should be to interview everyone included in the sample. It is possible that a small proportion of respondents will be suspicious or hostile and the large number may require a little encouragement and persuasion.

Many people are flattered to be selected for an interview. The interviewer should answer any legitimate questions and clear any doubt the respondent has. He should also, if need be, explain that the respondent should not be afraid of being identified, that the interview is not a test, and that the interviewer just wants to know how people feel about certain issues, and the only way to find out is to ask them.

The interviewer’s manners should be friendly, courteous, conversational and unbiased. He should represent the golden mean — neither too grim, nor too effusive, neither too talkative nor too timid. The main idea should be to put the respondent at ease so that he will talk freely and fully.

It helps if the interview starts with the casual conversation about weather, pets or children. An informal conversational interview, above all, is dependent upon a thorough mastery by the interviewer over the actual questions in the schedule.

He should be able to ask them conversationally rather than read them stiffly. He should know what questions are coming next so that there will be no awkward disruption of smooth interaction. Fundamentally, the interviewer’s job is that of a reporter.

He should not act as an adviser, custodian of morality, curio-seeker, or debater. He should not show surprise or disapproval at a respondent’s answer. He should show an interested disposition toward the respondent’s opinions, but never divulge his own. The interviewer must keep the direction of the interview in his own hands, discouraging irrelevant conversation and trying to keep the respondent on track.

Next, we turn to consider how the interviewer should ask his questions. The interviewer must be alert to the importance of asking each question exactly as it is worded unless the interview is unstructured. Interviewers should remember that even a slight rewording of a question can so change the stimulus as to elicit answers in a different frame of reference.

The interviewer should refrain from giving unwarranted explanation of questions because this also may change the frame of reference, or inject bias into the response. If each interviewer were permitted to vary the questions according to his sweet will, the resulting responses recorded by different interviewers may not be comparable.

If at all the interviewer has to offer any explanation to the respondents, he should offer only those that he has specifically been authorized to do. Should the respondent fail to understand the question, the interviewer may advisedly repeat it slowly and with proper emphasis.

Questions must be asked in the sequence they appear on the schedule. Varying this order will change the respondent’s frame of reference since each question sets up a frame of reference for the following questions. Thus, if the sequences vary from interviewer to interviewer, the responses will not be comparable. The interviewer must make it a point to ask every question, unless directions permit skipping a few.

It may seem that the respondent has given his opinion on a subsequent question in answering an earlier question, but he must nevertheless ask the question in order to be sure.

A question may appear to be naive or inapplicable but the interviewer should never omit asking it. Wherever necessary and appropriate, the interviewer should preface the question with certain conversational phrases to maintain continuity and tempo.

We shall now consider another important requirement of successful interviewing. It is often difficult, as interviewers have often experienced, to obtain a specific, complete response. This is perhaps the most difficult part of the job. Respondents often qualify or hedge their opinions.

They often answer, ‘do not know’ just to avoid thinking about the question, they misinterpret the question, divert the process of interview by launching off an irrelevant discussion or they give self-contradictory answers. In all these cases the interviewer has to probe deeper.

The test of a good interviewer is that he is alert to incomplete or nonspecific answers. Each interviewer must understand fully the overall objective of each question and what it is precisely trying to measure. A pre-test on the interviewers helps to equip them with such understanding.

The interviewer should be able to ask himself after every reply the respondent gives whether the question is completely answered. If the respondent’s answer is vague or diffuse or incomplete, effective probe questions should be asked.

The interviewer must be careful at every stage not to suggest a possible reply; that is, the interviewer should not ask leading questions (i.e., put words into the subject’s mouth). The “don’t know” reply is another problem for the interviewer.

Sometimes, this response may be due to a genuine lack of opinion or knowledge, but at other times it may be a cloak, wittingly or unwittingly used by the interviewee to hide attitudes such as fear, reluctance, vague opinions, or lack of understanding. The interviewer should distinguish between the different types of ‘don’t-know’ response and repeat the questions with suitable assurances.

An important consideration in successful interviewing relates to recording the responses of interviewees. There are two chief means of recording responses during the interview. If the question is a fixed alternative one, the interviewer need only mark or check an appropriate category. But if the question is open-ended, the interviewer is expected to record the response, verbatim.

On pre-coded schedules, errors and omissions in recording the replies are a frequent source of interview-error. In the midst of various tasks that the interviewer is supposed to perform in the course of interviews, viz., trying to pin the respondent down to a specific answer, remembering the sequence of questions, observing facial expressions etc., the interviewer may sometimes neglect to indicate the respondent’s reply to some item, overlook a particular question or check the wrong category, etc.

Even the best interviewer should, therefore, make it a habit to inspect each interview to make sure that it is filled in accurately and completely.

If any information is lacking he should go back and ask the respondent for it. He should correct the errors and omissions in the schedules on the spot. If he has recorded verbatim replies only sketchily, he should correct the weakness right there. It is not at all proper to wait until later in the day or until he returns home at night, since by then he may have forgotten quite a few crucial circumstances of the interview.

The interviewer should understand that the omission or inaccurate reporting of a single answer can make the entire interview worthless since the schedule is designed as an integral whole.

In reporting responses to open-ended or free answer questions, the interviewer should give complete, verbatim reporting. It may often be difficult to fulfill this requirement, but apart from obvious irrelevancies and repetitions, this should be the goal.

It is necessary that the interviewers have some idea of the coding process. This will ensure that they are able to record responses in such a manner that the coders will be able to reconstruct the whole set of responses correctly in a codified form.

The interviewer should ideally quote the respondents exactly. Paraphrasing the replies, summarizing them in one’s own words or “polishing up” any slang or cursing may not only distort the respondent’s meanings and emphases but also miss the tenor of his replies.

Although it is frequently difficult to record responses verbatim without using shorthand, a few simple techniques can greatly increase the interviewer’s speed and the faithfulness of his reproduction.

The interviewer can ask the subject to wait until he has written down the last thought, but this may slow down the interview and may have certain adverse effects. To avoid slowing the interview, the interviewer should be prepared to write while the respondent talks.

This may prevent him from watching the respondent’s expressions, but some adjustments have to be made. The interviewer may also use common abbreviations or a telegraphic style of recording. In doing so, he must not make the recording incomprehensible to the coders.

One final point related to successful interviewing is how to minimize bias introduced by the interviewer. Known as “interviewer bias”, this refers to systematic differences from interviewer to interviewer, or occasionally systematic errors on the part of interviewers, in the selection of samples (e.g., in quota sampling, where the selection of interviewees is left to the interviewers), in asking questions, and in eliciting and recording responses.

Much of what we call interviewer bias can be more correctly described as interviewer differences, which are inherent in the fact that interviewers are human beings, not machines, and thus do not work identically or infallibly.

The fact that respondents too are human beings with differing perceptions, judgements, etc., simply enhances the differences that would occur if different interviewers were dealing with physical rather than human materials. It is too much to expect, therefore, that the interviewers will return complete, comparable and valid reports.

Even assuming an unbiased selection of respondents, bias in the interview-situation may stem from two sources:

(a) Respondent’s perception of the interviewer.

(b) Interviewer’s perception of the respondent.

‘Perception’ here points to the manner in which the relation between the interviewer and respondent is influenced and modified by their wishes, expectations and personality structure.

There is sizable experimental evidence that bias may result under certain conditions in spite of anything the interviewer may do to eliminate it. Respondents have frequently been shown to answer differently when interviewed by people from a different social stratum, ethnic group or nationality group. For example, working-class respondents are less likely to talk freely to middle-class interviewers.

The magnitude of these effects naturally varies with the way in which the respondents perceive the situation. The biasing effects can often be reduced by altering the respondent’s perception of the situation, e.g., by assuring him that his identity will not be revealed but these effects can seldom be completely eliminated.

The interviewers should dress inconspicuously so that their appearance will not adversely sensitize certain categories of respondents.

The staff in a large-scale research project should be instructed to interview the respondent privately (unless the whole group is to be interviewed) so that his opinions will not be affected by the presence of some third person, and to adopt an informal and conversational attitude in an effort to achieve the best possible rapport.

It should be noted that not all interview-biasing effects operate through the respondent’s perception of the interviewer. Some respondents may be totally immune to even the most crucial biasing characteristics of the interviewer. The other dimension we must consider in this context is the interviewer’s perception of the respondent.

This is as important a source of bias as the respondent’s perception of the interviewer. No matter how standardized the schedule and how rigidly the interviewer is instructed, he still has much opportunity to exercise freedom of choice during the actual interview.

Thus, it is often his perception of the respondent that determines the manner in which he asks questions, the way in which he probes, his classification of responses to pre-coded questions and his recording of verbatim answers.

The interviewers often have strong expectations about respondents, and as such, stereotypes are likely to come into play during the interview. On the basis of their past experience, judgements or prior answers received from other respondents, interviewers may often quite unconsciously assume that the respondents are inferior, or that they are hostile, deceptive or ignorant, etc.

Such expectations will affect their performance. For example, a ‘No response’ from an educated well-to-do respondent may be probed into on the assumption that an opinion may be lurking somewhere or the interviewer may think that the respondents do not mean what they say.

Experiments have shown that the interviewers tend to select from long verbatim answers those parts that most closely conform to their expectations, beliefs or opinions and discard the rest.

An important source of bias arises from the interviewer’s perception of the situation. If he sees the results of the study as a possible threat to his interests, he is likely to introduce bias. Such difficulties can be overcome by proper motivation and supervision.

The interviewers being human, such biasing factors can never be overcome completely. Of course, their effects can be reduced by standardizing the interview so that the interviewer has as little free choice as possible. If interviewers are given clear and standard instructions on questioning procedures, on the classification of responses, etc., their biases will have less chance of operating.

It should not be overlooked, however, that if the interviewer’s freedom is restricted, the opportunities for effective use of his insight are correspondingly restricted too. The more responsibility the interviewer is given for questioning and evaluating the respondent’s opinions, the more likely it is that bias will result. This calls for a careful selection of some middle course.

In so far as bias, in the sense of different interviewers not returning exactly the same responses from equivalent respondents, can never be totally eliminated, the main responsibility of the director of the research project is to select, train and supervise his staff in such a way that any net result of bias will be at a minimum.

He must be aware of the possibilities of bias at various points so that he is in a position to discount their effects in his analysis.


  • Open access
  • Published: 02 September 2024

“I am there just to get on with it”: a qualitative study on the labour of the patient and public involvement workforce

Stan Papoulias (ORCID: orcid.org/0000-0002-7891-0923) & Louca-Mai Brady

Health Research Policy and Systems, volume 22, Article number: 118 (2024)

Background

Workers tasked with specific responsibilities around patient and public involvement (PPI) are now routinely part of the organizational landscape for applied health research in the United Kingdom. Even as the National Institute for Health and Care Research (NIHR) has had a pioneering role in developing a robust PPI infrastructure for publicly funded health research in the United Kingdom, considerable barriers remain to embedding substantive and sustainable public input in the design and delivery of research. Notably, researchers and clinicians report a tension between funders’ orientation towards deliverables and the resources and labour required to embed public involvement in research. These and other tensions require further investigation.

Methods

This was a qualitative study with participatory elements. Using purposive and snowball sampling and attending to regional and institutional diversity, we conducted 21 semi-structured interviews with individuals holding NIHR-funded formal PPI roles across England. Interviews were analysed through reflexive thematic analysis with coding and framing presented and adjusted through two workshops with study participants.

Results

We generated five overarching themes which signal a growing tension between expectations put on staff in PPI roles and the structural limitations of these roles: (i) the instability of support; (ii) the production of invisible labour; (iii) PPI work as more than a job; (iv) accountability without control; and (v) delivering change without changing.

Conclusions

The NIHR PPI workforce has enabled considerable progress in embedding patient and public input in research activities. However, the role has led not to a resolution of the tension between performance management priorities and the labour of PPI, but rather to its displacement and – potentially – its intensification. We suggest that the expectation to “deliver” PPI hinges on a paradoxical demand to deliver a transformational intervention that is fundamentally divorced from any labour of transformation. We conclude that ongoing efforts to transform health research ecologies so as to better respond to the needs of patients will need to grapple with the force and consequences of this paradoxical demand.


Introduction – the labour of PPI

The inclusion of patients, service users and members of the public in the design, delivery and governance of health research is increasingly embedded in policy internationally, as partnerships with the beneficiaries of health research are seen to increase its relevance, acceptability and implementability. In this context, a growing number of studies have sought to evaluate the impact of public participation on research, including identifying the barriers and facilitators of good practice [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 ]. Some of this inquiry has centred on power, control and agency. Attention has been drawn, for example, to the scarcity of user or community-led research and to the low status of experiential knowledge in the hierarchies of knowledge production guiding evidence-based medicine [ 9 ]. Such hierarchies, authors have argued, constrain the legitimacy that the experiential knowledge of patients can achieve within academic-led research [ 10 ], may block the possibility of equitable partnerships such as those envisioned in co-production [ 11 ] and may function as a pull back against more participatory or emancipatory models of research [ 12 , 13 , 14 ]. In this way, patient and public inclusion in research may become less likely to aim towards inclusion of public and patient-led priorities, acting instead as a kind of “handmaiden” to research, servicing and validating institutionally pre-defined research goals [ 15 , 16 , 17 ].

Research on how public participation-related activities function as a form of labour within a research ecosystem, however, is scarce [ 18 ]. In this paper, we examine the labour of embedding such participation, with the aim of understanding how such labour fits within the regimes of performance management underpinning current research systems. We argue that considering this “fit” is crucial for a broader understanding of the implementation of public participation and therefore its potential impact on research delivery. To this end, we present findings from a UK study of the labour of an emerging professional cadre: “patient and public involvement” leads, managers and co-ordinators (henceforth PPI, the term routinely used for public participation in the United Kingdom). We concentrate specifically on staff working on research partnerships and centres funded by the National Institute for Health and Care Research (NIHR). This focus on the NIHR is motivated by the organization’s status as the centralized research and development arm of the National Health Service (NHS), with an important role in shaping health research systems in the United Kingdom since 2006. NIHR explicitly installed PPI in research as a foundational part of its mission and is currently considered a global leader in the field [ 19 ]. We contend that exploring the labour of this radically under-investigated workforce is crucial for understanding what we see as the shifting tensions – outlined in later sections – that underpin the key policy priority of embedding patients as collaborators in applied health research. 
To contextualize our study, we first consider how the requirement for PPI in research relates to the overall policy rationale underpinning the organizational mission of the NIHR as the NHS’s research arm, then consider existing research on tensions identified in efforts to embed PPI in a health system governed through regimes of performance management and finally articulate the ways in which dedicated PPI workers’ responsibilities have been developed as a way to address these tensions.

The NIHR as a site of “reformed managerialism”

The NIHR was founded in 2006 with the aim of centralizing and rationalizing NHS research and development activities. Its foundation instantiated the then Labour government’s efforts to strengthen and consolidate health research in the UK while also tackling some of the problems associated with the earlier introduction of new public management (NPM) principles in the governance of public services. NPM had been introduced in the UK public sector by Margaret Thatcher’s government, in line with similar trends in much of the Global North [ 20 ]. The aim was to curb what the Conservatives saw as excesses in both public spending and professional autonomy. NPM consisted in management techniques adapted from the private sector: in the NHS this introduction was formalized via the 1990 National Health Service and Community Care Act, which created an internal market for services, with local authorities purchasing services from local health providers (NHS Trusts) [ 21 ]; top-down management control; an emphasis on cost-efficiency; a focus on targets and outputs over process; an intensification of metrics for performance management; and a positioning of patients and the public as consumers of health services with a right to choose [ 22 , 23 ]. In the context of the NHS, cost-efficiency meant concentrating on services and on research which would have the greatest positive impact on population health while preventing research waste [ 24 ]. By the mid-1990s, however, considerable criticism had been directed towards this model, including concerns that NPM techniques resulted in silo-like operations and public sector fragmentation, which limited the capacity for collaboration between services essential for effective policy. Importantly, there was also a sense that an excessive managerialism had resulted in a disconnection of public services from public and civic aims, that is, from the values, voices and interests of the public [ 25 , 26 ].

In this context, the emergence of the NIHR can be contextualized through the succeeding Labour government’s much publicized reformed managerialism, announced in their 1997 white paper “The New NHS: Modern, Dependable” [ 27 ]. Here, the reworking of NPM towards “network governance” meant that the silo-like effects of competition and marketization were to be attenuated through a turn to cross-sector partnerships and a renewed attention to quality standards and to patients’ voices [ 28 ]. It has been argued, however, that the new emphasis on partnerships did not undermine the dominance of performance management, while the investment in national standards for quality and safety resulted in an intensified metricization, with the result that this reform may have been more apparent than real, amounting to “NPM with a human face” [ 29 , 30 , 31 ]. Indeed, the NIHR can be seen as an exemplary instantiation of this model: as a centralized commissioner of research for the NHS, the NIHR put in place reporting mechanisms and performance indicators to ensure transparent and cost-efficient use of funds, with outputs and impact measured, managed and ranked [ 24 ]. At the same time, the founding document of the NIHR, Best Research for Best Health, articulates the redirection of such market-oriented principles towards a horizon of public good and patient benefit. The document firmly and explicitly positioned patients and the public as both primary beneficiaries of and important partners in the delivery of health research. People (patients) were to be placed “at the centre of a research system that focuses on quality, transparency and value for money” [ 32 ], a mission implemented through the installation of “structures and mechanisms to facilitate increased involvement of patients and the public in all stages of NHS Research & Development” [ 33 ]. This involvement would be supported by the advisory group INVOLVE, a key part of the new centralized health research system. 
INVOLVE, which had started life in 1996 as Consumers in NHS Research, funded by the Department of Health, testified to the Labour administration’s investment in championing “consumer” involvement in NHS research as a means of increasing research relevance [ 34 ]. The foundation of the NIHR then exemplified the beneficent alignment of NPM with public benefit, represented through the imaginary of a patient-centred NHS, performing accountability to the consumers/taxpayers through embedding PPI in all its activities. In this context, “public involvement” functioned as the lynchpin through which such alignment could be effected.

PPI work and the “logic of deliverables”: a site of tension

Existing research on the challenges of embedding PPI has typically focussed on the experiences of academics tasked with doing so within university research processes. For example, Pollard and Evans, in a 2013 paper, argue that undertaking PPI work in mental health research can be arduous, emotionally taxing and time consuming, and as such, can be in tension with expectations for cost-efficient and streamlined delivery of research outputs [ 35 ]. Similarly, Papoulias and Callard found that the “logic of deliverables” governing research funding can militate against undertaking PPI or even constitute PPI as “out of sync” with research timelines [ 36 ]. While recent years have seen a deepening operationalization of PPI in the NIHR and beyond, there are indications that this process, rather than removing these tensions, may have recast them in a different form. For example, when PPI is itself set up as performance-based obligation, researchers, faced with the requirement to satisfy an increasing number of such obligations, may either engage in “surface-level spectacles” to impress the funder while eschewing the long-term commitment necessary for substantive and ongoing PPI, or altogether refuse to undertake PPI, relegating the responsibility to others [ 37 , 38 ]. Such refusals may then contribute to a sharpening of workplace inequalities: insofar as PPI work is seen as “low priority” for more established academic staff, it can be unevenly distributed within research organizations, with precariously employed junior researchers and women typically assigned PPI responsibilities with the assumption that they possess the “soft skills” necessary for these roles [ 39 ].

Notably, the emergence of a dedicated PPI workforce is intended as a remedy for this tension by providing support, expertise and ways of negotiating the challenges associated with undertaking PPI responsibilities. In the NIHR, this workforce is part of a burgeoning infrastructure for public involvement which includes national standards, training programmes, payment guidelines, reporting frameworks and impact assessments [ 40 , 41 , 42 , 43 , 44 , 45 ]. By 2015, an INVOLVE review of PPI activities during the first 10 years of the NIHR attested to “a frenzy of involvement activity…across the system”, including more than 200 staff in PPI-related roles [ 40 ]. As NIHR expectations regarding PPI have become more extensive, responsibilities of PPI workers have proliferated, with INVOLVE organizing surveys and national workshops to identify their skills and support needs [ 41 , 42 ]. In 2019, the NIHR mandated the inclusion of a “designated PPI lead” in all funding applications, listing an extensive and complex roster of responsibilities. These now included delivery and implementation of long-term institutional strategies and objectives, thus testifying to the assimilation of involvement activities within the roster of “performance-based obligations” within research delivery systems [ 43 ]. Notably however, this formalization of PPI responsibilities is ambiguous: the website states that the role “should be a budgeted and resourced team member” and that they should have “the relevant skills, experience and authority”, but it does not specify whether this should be a researcher with skills in undertaking PPI or indeed someone hired specifically for their skills in PPI, that is, a member of the PPI workforce. Equally, the specifications, skills and support needs, which have been brought together into a distinct role, have yet to crystallize into a distinct career trajectory.

Case studies and evaluations of PPI practice often reference the skills and expertise required in leading and managing PPI. Chief among them are relational and communication skills: PPI workers have been described as “brokers” who mediate and enable learning between research and lay spaces [ 44 , 45 ]; skilled facilitators enabling inclusive practice [ 46 , 47 , 48 ]; “boundary spanners” navigating the complexities of bridging researchers with public contributors and undertaking community engagement through ongoing relational work [ 49 ]. While enumerating the skillset required for PPI work, some of these studies have identified a broader organizational devaluation of PPI workers: Brady and colleagues write of PPI roles as typically underfunded with poor job security, which undermines the continuity necessary for generating trust in PPI work [ 46 ], while Mathie and colleagues report that many PPI workers describe their work as “invisible”, a term which the authors relate to the sociological work on women’s labour (particularly housework and care labour) which is unpaid and rendered invisible insofar as it is naturalized as “care” [ 50 ]. Research on the neighbouring role of public engagement professionals in UK universities, which has been more extensive than that on PPI roles, can be instructive in fleshing out some of these points: public engagement professionals (PEPs) are tasked with mediating between academics and various publics in the service of a publicly accountable university. In a series of papers on the status of PEPs in university workplaces, Watermeyer and colleagues argue that, since public engagement labour is relegated to non-academic forms of expertise which lack recognition, PEPs’ efforts in boundary spanning do not confer prestige. This lack of prestige can, in effect, function as a “boundary block” obstructing PEPs’ work [ 51 , 52 ]. 
Furthermore, like Mathie and Brady, Watermeyer and colleagues also argue that the relational and facilitative nature of engagement labour constitutes such labour as feminized and devalued, with PEPs also reporting that their work remains invisible to colleagues and institutional audit instruments alike [ 50 , 53 ].

The present study seeks to explore further these suggestions that PPI labour, like that of public engagement professionals, lacks recognition and is constituted as invisible. However, we maintain that there are significant differences between the purpose and moral implications of involvement and engagement activities. PPI constitutes an amplification of the moral underpinnings of engagement policies: while public engagement seeks to showcase the public utility of academic research, public involvement aims to directly contribute to optimizing and personalizing healthcare provision by minimizing research waste, ensuring that treatments and services tap into the needs of patient groups, and delivering the vision of a patient-centred NHS. Therefore, even as PPI work may be peripheral to other auditable research activities, it is nevertheless central to the current rationale for publicly funded research ecosystems: by suturing performance management and efficiency metrics onto a discourse of public benefit, such work constitutes the moral underpinnings of performance management in health research systems. Therefore, an analysis of the labour of the dedicated PPI workforce is crucial for understanding how this suturing of performance management and “public benefit” works over the conjured figures of patients in need of benefit. This issue lies at the heart of our research study.

Our interview study formed the first phase of a multi-method qualitative inquiry into the working practices of NIHR-funded PPI leads. While PPI lead posts are in evidence in most NIHR-funded research, we decided to focus on NIHR infrastructure funding specifically: these are 5-year grants absorbing a major tranche of NIHR funds (over £600 million annually in 2024). They function as “strategic investments” embodying the principles outlined in Best Research for Best Health: they are awarded to research organizations and NHS Trusts for the purposes of developing and consolidating capacious environments for early stage and applied clinical research, including building a research delivery workforce and embedding a regional infrastructure of partnerships with industry, the third sector and patients and communities [ 55 ]. We believe that understanding the experience of the PPI workforce funded by these grants may give better insights into NIHR’s ecosystem and priorities, since they are specifically set up to support the development of sustainable partnerships and embed the translational pipeline into clinical practice.

The study used purposive sampling with snowball elements. In 2020–2021, we mapped all 72 NIHR infrastructure grants, identified the PPI teams working in each of these using publicly available information (found on the NIHR website and the websites and PPI pages of every organization awarded infrastructure grants) and sent out invitation emails to all teams. Where applicable, we also sent invitations to mailing lists of PPI-lead national networks connected to these grants. Inclusion criteria were that potential participants should have oversight roles and/or be tasked with cross-programme/centre responsibilities, meaning that their facilitative and strategy-building roles should cover the entirety of activities funded by one (and sometimes more than one) NIHR infrastructure grant or centre, including advisory roles over most or all research projects associated with the centre or grant, and that they had worked in this or a comparable environment for 2 years.

The individuals who showed interest received detailed information sheets. Once they agreed to participate, they were sent a consent form and a convenient interview time was agreed. We conducted 21 semi-structured interviews online, between March and June 2021, lasting 60–90 min. The interview topic guide was developed in part through a review of organizational documents outlining the role and through a consideration of existing research on the labour of PPI within health research environments. It focussed on how PPI workers fit within the organization, and on the relationship between the actual work undertaken and the way this work is represented to both the organization and the funder. Interview questions included how participants understand their role; how they fit in the organization; how their actual work relates to the job description; how their work is understood by both colleagues and public contributors; the relationship between the work they undertake and how this is represented in reports to the funder and presentations; and what they find challenging about their work. Information about participants’ background and what brought them to their present role was also gathered. Audio files were checked, transcribed and the transcripts fully de-identified. All participants were given the opportunity to check transcripts and withdraw them at any point until December 2021. None withdrew.

We analysed the interviews using reflexive thematic analysis with participatory elements [ 54 , 55 ]. Reflexive thematic analysis emphasizes the interpretative aspects of the analytical process, including the data “collection” process itself, which this approach recognizes as a generative act, where meaning is co-created between interviewer and participant and the discussion may be guided by the participant rather than strictly adhering to the topic guide [ 56 ]. We identified patterns of meaning through sustained and immersive engagement with the data. NVivo 12 was used for coding, while additional notes and memos on the Word documents themselves mitigated the over-fragmentation that might potentially limit NVivo as a tool for qualitative analysis. Once we had developed themes which gave a thorough interpretation of the data, we presented these to participants in two separate workshops to test for credibility and ensure that participants felt ownership of the process [ 57 ].

As the population from which the sample was taken is quite small, with some teams working across different infrastructure grants, confidentiality and anonymity were important concerns for participants. We therefore decided neither to collect nor to present extensive demographic information, to preserve confidentiality and avoid deductive disclosure [ 58 ]. Of our 21 participants, 20 were women; there was some diversity in age, ethnicity and heritage, with a significant majority identifying as white (British or other European). Participants had diverse employment histories: many had come from other university or NHS posts, often in communications, programme management or human resources; a significant minority had come from the voluntary sector; and a small minority from the private sector. As there was no accredited qualification in PPI at the time this study was undertaken, participants had all learned their skills on their present or previous jobs. A total of 13 participants were on full-time contracts, although in several cases funding for these posts was finite and fragmented, often coming from different budgets.

In this paper we present five inter-related themes drawing on the conceptual architecture we outlined in the first half of this paper to explore how PPI workers navigate a research ecosystem of interlocking institutional spaces that is governed by “NPM with a human face”, while striving to align patients and the public with the imaginary of the patient-centred NHS that mobilizes the NIHR mission. These five themes are: (i) the instability of support; (ii) the production of invisible labour; (iii) PPI as moral imperative; (iv) accountability without control; and (v) delivering change without changing.

“There to grease the cogs rather than be the cogs”: the instability of “support”

Infrastructure grants act as a hub for large numbers of studies, often in diverse health fields, most of which should, ideally, include PPI activities. Here, dedicated PPI staff typically fulfil a cross-cutting role: they are meant to oversee, provide training and advise on embedding PPI activities across the grant and, in so doing, support researchers in undertaking PPI. On paper, support directed at the institution, in the form of training and of delivering strategy for and evaluating PPI, is associated with more senior roles (designated manager or lead), whereas support directed at so-called public contributors is the remit of more junior roles (designated co-ordinator or officer) and can include doing outreach, facilitating, attending to access needs and developing payment and compensation procedures. However, these distinctions rarely applied in practice: participants typically reported that their work did not neatly fit into these categories and that they often had to fulfil both roles regardless of their title. Some were the only person in the team specifically tasked with PPI, and so their “lead” or “manager” designation was more symbolic than actual:

I have no person to manage, although sometimes I do get a little bit of admin support, but I don’t have any line management responsibility. It is really about managing my workload, working with people and managing the volunteers that I work with and administrating those groups and supporting them (P11).

P11’s title was manager but, as they essentially worked alone, shuttling between junior and senior role responsibilities, they justified and made sense of their title by reframing their support work with public contributors as “management”. Furthermore, other participants reported that researchers often misunderstood PPI workers’ cross-cutting role and expected them both to advise on and to deliver PPI activities themselves, even in the context of multiple projects, thus relieving researchers of such responsibility altogether.

As a PPI lead, it is very difficult to define what your role is in different projects….and tasks … So, for example, I would imagine in [some cases] we are seen as the go-to if they have questions. [..] whereas, in [other cases], it is like, “Well, that’s your job because you’re the PPI lead” […] there is not a real understanding that PPI is everyone’s responsibility and that the theme leads are there to facilitate and to grease the cogs rather than be the cogs (P20).

Furthermore, participants reported that the NIHR requirement for a PPI lead in all funding applications might in fact have facilitated this slippage. As already mentioned, the NIHR requirement does not differentiate between someone hired specifically to undertake PPI and a researcher tasked with PPI activities. The presence of a member of staff with a “PPI lead” title thus meant that PPI responsibilities in individual research studies could continue to accrue on that worker:

The people who have been left with the burden of implementing [the NIHR specified PPI lead role] are almost exclusively people like me, though, because now researchers expect me to allow myself to be listed on their project as the PPI lead, and I actually wrote a document about what they can do for the PPI lead that more or less says, “Please don’t list me as your PPI lead. Please put aside funds to buy a PPI lead and I will train them, because there is only one me; I can’t be the PPI lead for everyone” (P10).

This expectation that core members of staff with responsibilities for PPI would also be able to act as PPI leads for numerous research projects suggests that this role lacks firm organizational co-ordinates and boundaries. Here, the presence of a PPI workforce does not, in fact, constitute an appropriate allocation of PPI labour but rather testifies to a continuing institutional misapprehension of the nature of such labour, particularly in terms of its duration, location and value.

Conjuring PPI: the production of invisible labour

Participants consistently emphasized the invisibility of the kinds of labour, both administrative and relational, specific to public involvement as a process, confirming the findings of Mathie and colleagues [ 50 ]. This invisibility took different forms and had different justifications. Some argued that key aspects of their work, which are foundational to involvement, such as the process of relationship building, do not lend themselves to recognition as a performance indicator: “There is absolutely no measure for that because how long is a piece of string” (P11). In addition, relationship building necessitated a considerably greater time investment than was institutionally acceptable, and this was particularly evident when it came to outreach. Participants who did their work in community spaces told stories of uncomprehending line managers, or of annoyed colleagues who wondered where the PPI worker went and what they did all day:

There is very little understanding from colleagues about what I do on a day-to-day basis, and it has led to considerable conflict …. I would arrive at the office and then I would be disappearing quite promptly out into the community, because that is where I belong […] So, it is actually quite easy to become an absent person (P3).

Once again, the NIHR requirement for designated PPI leads in funding applications, intended to raise the visibility of PPI work by formalizing it as costed labour, could instead further consolidate its invisibility:

I am constantly shoved onto bids as 2% of my full-time equivalent and I think I worked out for a year that would be about 39 hours a year. For a researcher, popping the statistician down and all these different people on that bid, “Everyone is 2% and we need the money to run the trial, so 2% is fine”. And if I said to them, “Well, what do you think I would do in those 39 hours?” they wouldn’t have a clue, not a clue (P17).

The 2% of a full-time allocation is accorded to the PPI worker because 2–5% is the time typically costed for leadership roles or for roles with a circumscribed remit (e.g. statisticians). However, this allocation, in making PPI workers’ labour visible either as oversight (what project leads do) or as methodological expertise (what statisticians do), ends up producing the wrong kind of visibility: the 39 hours mentioned here might make sense when the role mainly involves chairing weekly meetings or delivering statistical models, but they are in no way sufficient for the intense and ongoing labour of trust-building and alignment between institutions and public contributors in PPI.

Indeed, such costings, by eliding the complexity and duration of involvement, may reinforce expectations that PPI can be simply conjured up at will and delivered on demand:

A researcher will say to us, “I would really like you to help me to find some people with lived experience, run a focus group and then I’ll be away”. To them, that is the half-hour meeting to talk about this request, maybe 10 minutes to draft a tweet and an email to a charity that represents people with that condition […] the reality is it is astronomically more than that, because there is all this hidden back and forth. […] [researchers] expect to be able to hand over their protocol and then I will find them patients and those patients will be … representative and I will be able to talk to all of those patients and … write them up a report and …send it all back and they will be able to be like, “Thanks for the PPI”, and be on their merry way (P13).

What P13 communicates in this story is the researcher’s failure to perceive the difference between PPI work and institutional norms for project delivery: the researcher who asks for “some people with lived experience” is not simply underestimating how long this process will take. Rather, involvement work is perceived as homologous to metricized and institutionally recognizable activities (for example, recruitment to trials or producing project reports) for which there already exist standard procedures. Here, the relational complexity and improvised dynamic of involvement are turned into a deliverable (“the PPI”) that can be produced by following an appropriate procedure. When PPI workers are expected to instantly deliver the right contributors to fit the project needs, PPI labour is essentially black-boxed and in its place sits “the PPI”, a kind of magical object seemingly conjured out of nowhere.

Such invisibility, however, may also be purposefully produced by the PPI workers themselves. One participant spoke of this at length, when detailing how they worked behind the scenes to ensure public contributors have input into research documents:

When we get a plain English summary from a researcher, we rewrite them completely. If the advisory group [see] … a really bad plain English summary, they are just going to go, “I don’t understand anything”. I might as well do the translation straight away so that they can actually review something they understand. [Researchers then] think, “Oh, [the public advisory group] are so good at writing” … and I am thinking, “Well, they don’t … write, they review, and they will say to me, ‘Maybe move this up there and that up there, and I don’t understand these’”, … They are great, don’t get me wrong, but they don’t write it. And it is the same with a lot of things. They think that [the group] are the ones that do it when it is actually the team (P7).

Here, the invisibility of the PPI worker’s labour is purposefully wrought to create goodwill and lubricate collaboration. Several participants said that they chose to engage in such purposeful invisibility because they knew that resources were not available to train researchers in plain writing and public contributors in academic writing. PPI workers, in ghost-writing accessible texts, thus effect a shortcut in the institutional labour required to generate alignment between researchers and public contributors. However, this shortcut comes at a price: in effecting it, PPI workers may collude in conjuring “the PPI” – they may themselves make their own work disappear.

“Not a 9 to 5”: PPI work as more than a job

Most participants reported that overtime working was common for themselves and their teammates, whether they were on a fractional or full-time contract. Overall, participants saw undertaking extra work as a necessary consequence of their commitment towards public contributors, a commitment which made it difficult to turn work down:

Everyone loses if you say no: the public contributors aren’t involved in a meaningful way, the project won’t be as good because it doesn’t have meaningful PPI involvement (P20).

While overwork was a common result of this commitment, some participants described such overwork as the feature that distinguished PPI work from what one commonly understands as a “job”, because, in this case, overwork was seen as freely chosen rather than externally imposed:

It is me pushing myself or wanting to get things done because I started it and I think I would get less done if I worked less and that would bother me, but I don’t think it is a pressure necessarily from [line manager] or [the institution] or anyone to be like, “No, do more” (P13).

Participants presented relationship building not only as the most time-consuming but also the most enjoyable aspect of PPI work. Community engagement was a key site for this and once again participants tended to represent this type of work as freely chosen:

I did most of the work in my free time in the end because you have to go into communities and you spend a lot longer there. […] So, all of that kind of thing I was just doing in my spare time and I didn’t really notice at the time because I really enjoyed it (P6).

Thus, time spent in relationship building was constituted as both work and not work. It did not lend itself to metricization via workplace time management and, additionally, was not perceived by participants themselves as labour (“I didn’t really notice at the time”). At the same time, out-of-hours work was rationalized as necessary for inclusivity, set up to enable collaboration with public contributors insofar as these do not have a contractual relationship with the employer:

That is not a 9–5. That is a weekends and holidays sort of job, because our job is to reduce the barriers to involvement and some of those barriers are hours – 9–5 is a barrier for some people (P17).

If working overtime allows PPI workers to reduce barriers and enable collaboration with those who are not employed by the institution, that same overtime work also serves to conceal the contractual nature of the PPI workers’ own labour, which now becomes absorbed into the moral requirements of PPI.

“Caught in the middle”: accountability without control

Participants repeatedly emphasized that their ability to contribute to research delivery was stymied by their lack of control over specific projects and over broader institutional priority setting:

… as a PPI lead we are not full member of staff, we are not responsible for choosing the research topics. We […] can only guide researchers who come to us and tell us what they are doing … we don’t have any power to define what the public involvement looks like in a research project (P6).

Tasked with creating alignments and partnerships between the publics and institutions, participants argued that they did not have the power to make them “stick” because they are not “really” part of the team. However, even as PPI workers lacked the power to cement partnerships, any failure in the partnership could be ascribed to them, and was perceived by both the funder and public contributors as a failure of the PPI worker:

Often you have to hand over responsibility and the researcher [who] can let the panel down and … I feel like I have let the panel member down because … I am the one who said, “Oh yes, this person wants to talk to you”, and I find that really challenging, getting caught in the middle like that (P21).

This pairing of accountability with lack of control became more pronounced in grant applications or reports to the funder:

It is also quite frustrating in the sense that, just because I advise something, it doesn’t necessarily mean that it gets implemented or even included in the final grant. [even so] whatever the feedback is still reflects on us, not necessarily on the people who were making the wider decisions […] As PPI leads, we are still usually the ones that get the blame (P10).

Several participants testified to this double frustration: having to witness their PPI plans being rewritten to fit the constraints (financial, pragmatic) of the funding application, they then often found themselves held accountable if the PPI plans failed to find favour with the funder. PPI workers then become the site where institutional accountability to both its public partners and to the funder gathers – it is as though, while located outside most decision-making, they nevertheless become the attractors for the institution’s missing accountability, which they experience, in the words of P21, as “being caught in the middle” or, as another participant put it, as “the worry you carry around” (P16).

“There to just get on with it”: delivering change without changing

Participants recognized that effective collaboration between research institutions and various publics requires fundamental institutional changes. Yet they also argued that while PPI workers are not themselves capable of effecting such change, there is nevertheless considerable institutional pressure to deliver on promises made in grant applications and build PPI strategies on this basis:

So, there is that tension about […] pushing this agenda and encouraging people to do more [….] rather than just accepting the status quo. But actually, the reality is that it is very, very hard to get everybody in [grant name] to change what they do and I can’t make that happen, [senior PPI staff] can’t make that happen, nobody can. The whole systemic issue … But you have got, somehow in the strategy and what you say you are going to do, that tension between aspiration and reality (P4).

The tension between aspiration and reality identified here could not be spelled out in reports for fear of reputational damage. In fact, the expectation to have delivered meaningful PPI, now routinely set up in NIHR applications, could itself militate against such change. For example, a frequently voiced concern was that PPI was being progressively under-resourced:

I feel the bar is getting higher and higher and higher and expectations are higher and we have got no extra resource (P16).

However, annual reports, the mechanism through which the doing of PPI is evidenced, made it difficult to be open about any such under-resourcing.

We will allude to [the lack of resources]. So, we will say things like, “We punch above our weight”, but I am not sure that message gets home to the NIHR very clearly. It is not like the annual report is used to say, “Hey, you’re underfunding this systematically, but here’s all the good stuff we do”, because the annual report is, by essence, a process of saying how great you are, isn’t it? (P3).

The inclusion of PPI as a “deliverable” meant that, in a competitive ecosystem, the pressure is on to report that PPI has always already been delivered. As another participant put it, “no one is going to report the bad stuff” (P17). Hence reporting, in setting up PPI as a deliverable, reinforced new zones of invisibility for PPI labour and made it harder to surface any under-resourcing of such labour. Furthermore, such reporting also played down any association between successful PPI and system transformation. Another participant described the resistance they encountered after arguing that the organization should move away from “last-minute” PPI:

I think it is really hard when […] these people are essentially paying your pay cheque, to then try to push back on certain things that I don’t think are truly PPI ….[A]s somebody who I felt my role was really to show best practice, for then [to be] seen as this difficult person for raising issues or pushing back rather than just getting things done, is really hard [….] I get the impression, at least within the [organization] … that I am not there to really point out any of the issues. I am there just to get on with it (P14).

This opposition between pointing out the issues and “getting on with it” is telling. It names a contradiction at the heart of PPI labour: here, the very act of pushing back – in this case asking for a commitment to more meaningful and ongoing PPI – can be perceived as going against the PPI worker’s responsibilities, insofar as it delays and undoes team expectations for getting things done, for delivering PPI. Here, then, we find an exemplary instance of the incommensurability between the temporal demands of research and those of meaningful PPI practice.

How do the five themes we have presented help open out how policies around public participation are put into practice – as well as the contradictions that this practice navigates – in health systems organized by the rhetorical suturing of performance management onto public benefit? We have argued that the development of a dedicated workforce represents an attempt to “repair” the tension experienced by researchers between the administrative, facilitative and emotional work of PPI and the kinds of deliverables that the institution requires them to prioritize. We argue that our findings indicate that, insofar as PPI workers’ role then becomes one of “delivering” PPI, this tension is reproduced and at times intensified within their work. This is because, as actors in the health research ecosystem, PPI staff are tethered to the very regimes of performance management that give rise to an institutional misapprehension of the actual labour associated with delivering PPI.

This misapprehension surfaces in the instruments through which the funder costs, measures and generates accountability for PPI – namely, the requirement for a costed PPI lead and the mandatory inclusion of a PPI section in applications and regular reports to the funder. The NIHR requirement for a costed PPI lead, intended to legitimize the undertaking of PPI as an integral part of a research team’s responsibilities, may instead continue to position the PPI worker as a site for the research team’s wholesale outsourcing of responsibility for PPI, since this responsibility, while in tension with other institutional priorities, cannot nevertheless be refused by the team. Furthermore, the use of titles such as lead, manager or co-ordinator not only signals an orderly distinction between junior and senior roles, which often does not apply in practice, but also reframes the extra-institutional work of PPI (the forging of relationships with, and the provision of administrative support to, public contributors) through the intra-institutional functions of performance/project management. This reframing elides an important difference between the two: public and patient partners, for the most part, do not have a formal contractual relationship with the institution and are not subject to performance management in the way that contracted researchers and healthcare professionals are. Indeed, framing the relationship between PPI workers and public contributors through the language of “management” fundamentally misrecognizes the kinds of relationalities produced in the interactions between PPI workers and public contributors and elides the externality of PPI to the “logic of deliverables” [ 36 ].

The inclusion of a detailed PPI section in grant applications and annual reports to the funder further consolidates this misapprehension by representing public involvement as if it were already enrolled within organizational normative procedures, and therefore compels those in receipt of funding to evidence such delivery through annual reports [ 37 ]. This demand puts PPI workers under increasing pressure, since their function is essentially to present PPI objectives as not only achievable but already achieved, thus bracketing out the process of organizational transformation which is a necessary prerequisite to establishing enduring partnerships with patients and the public. This bracketing out is at work in the organizational expectation to “just get on with it”, which structures the labour of delivering PPI in NIHR-funded research. Here, the demand to just get on, to do the work one is paid to do, forecloses the possibility of engaging with the structural obstacles that militate against that work being done. To the extent that both role designation and reporting expectations function to conceal the disjuncture that the establishment of public partnerships represents for regimes of performance management, they generate new invisibilities for PPI workers. These invisibilities radically constrain how such labour can be adequately undertaken, recognized and resourced.

In suggesting that much of the labour of staff in public involvement roles is institutionally invisible, and that organizational structures may obstruct or block their efforts, we concur with the arguments made by Watermeyer, Mathie and colleagues about the position of staff in public engagement and public involvement roles, respectively. However, our account diverges from theirs in our interpretation of how and why this labour is experienced as invisible and how that invisibility could be remedied. Mathie and colleagues in particular attribute this invisibility to a lack of parity and an institutional devaluation of what are perceived as “soft skills” – facilitation and relationship building in particular [ 50 ]. They therefore seek to raise PPI work to visibility by emphasizing the complexity of PPI activities and by calling for a ring-fencing of resources and a development of infrastructures capable of sustaining such work. While we concur that the invisibility of PPI labour is connected to its devaluation within research institutions, we also suggest that, in addition, this invisibility is a symptom of a radical misalignment between regimes of performance management and the establishment of sustainable public partnerships. Establishing such partnerships requires, as a number of researchers have demonstrated [ 18 , 59 , 60 ], considerable institutional transformation, yet those tasked with delivering PPI are not only not in a position to effect such transformation, they are also compelled to conceal its absence.

Recognizing and addressing the misalignment between regimes of performance management and the establishment of sustainable public partnerships becomes particularly pressing given the increasing recognition, in many countries, that public participation in health research and intervention development is an important step towards effectively identifying and addressing health inequalities [ 19 , 61 , 62 ]. Calls for widening participation, for the inclusion of under-served populations and for co-designing and co-producing health research, which have been gathering force over the last 20 years, have gained renewed urgency in the wake of the coronavirus disease 2019 (COVID-19) pandemic [ 63 , 64 , 65 , 66 , 67 ]. In the United Kingdom, Best Research for Best Health: The Next Chapter, published by the NIHR in 2021 to define the direction and priorities for NHS research for the coming decade, exemplifies this urgency. The document asserts that a radical broadening of the scope of PPI (now renamed “public partnerships”) is essential for combating health inequalities: it explicitly amplifies the ambitions of its 2006 predecessor by setting up as a key objective “close and equitable partnerships with communities and groups, including those who have previously not had a voice in research” [ 68 ]. Here, as in other comparable policy documents, the emphasis on extending partnerships to so-called under-served communities rests on the assumption that, to some degree at least, PPI has already become the norm for undertaking research. This assumption, we argue, closes down in advance any engagement with the tensions we have been discussing in this paper, and in so doing risks exacerbating them. The document does recognize that for such inclusive partnerships to be established institutions must “work differently, taking research closer to people [..] and building relationships of trust over time” – though, we would suggest, it is far from clear how ready or able institutions really are to take on what working differently might mean.

Our study engages with and emphasizes this need to “work differently” while also arguing that the demands and expectations set up through regimes of performance management and their “logic of deliverables” are not favourable to an opening of a space in which “working differently” could be explored. In health research systems organized through these regimes, “working differently” is constrained by the application of the very templates, instruments and techniques which constitute and manage “business as usual”. Any ongoing effort to transform health research systems so as better to respond to growing health inequalities, our study implies, needs to combat, both materially and procedurally, the ease with which the disjuncture between embedding public partnerships and normative ways of undertaking research comes to disappear.

Limitations

We focus on the labour of the PPI workforce and their negotiation of performance management regimes, which means that we have not discussed relationships between PPI staff and public contributors, nor presented examples of good practice. While these are important domains for study if we are to understand the labour of the PPI workforce, they lie outside the scope of this article. Furthermore, our focus on the UK health research system means that our conclusions may have limited generalizability. However, both the consolidation of NPM principles in public sector institutions and the turn to public and patient participation in the design and delivery of health research are developments shared across countries in the Global North over the last 40 years. Therefore, the tensions we discuss are likely also to manifest in health systems outside the United Kingdom, even as they may take somewhat different forms, given differences in how research and grants are costed and how roles are structured. Finally, this project has elements of “insider” research, since both authors, while working primarily as researchers, have also had experience of embedding PPI in research studies and programmes. Insider research has specific strengths, which include familiarity with the field and a sense of shared identity with participants, which may enhance trust, facilitate disclosure and generate rich data. In common with other insider research endeavours, we have sought to reflexively navigate the risks of bias and of interpretative blind spots resulting from over-familiarity with the domain under research [ 69 ] by discussing our findings and interpretations with “non-insider” colleagues while writing up this research.

Our qualitative study is one of the first to investigate how the UK PPI workforce is negotiating the current health research landscape. In doing so, we have focused on the UK’s NIHR since this institution embodied the redirection of performance management regimes towards public benefit by means of public participation. If PPI is set up as both the means of enabling this redirection and an outcome of its success, then the PPI workforce, the professional cadre evolving to support PPI, becomes, we argue, the site where the tensions of attempting this alignment are most keenly experienced.

We suggest that, while such alignment would demand a wholesale transformation of organizational norms, the regimes of performance management underpinning research ecologies may also work to foreclose such transformation, thus hollowing out the promise of patient-centred research policies and systems. Recognizing and attending to this foreclosure is urgent, especially given the current policy emphasis in many countries on broadening the scope, ambition and inclusivity of public participation as a means of increasing the reach, relevance and potential positive impact of health research.

Availability of data and materials

The data that support the findings of this study are available on request from the corresponding author.

Shippee ND, Domecq Garces JP, Prutsky Lopez GJ, Wang Z, Elraiyah TA, Nabhan M, et al. Patient and service user engagement in research: a systematic review and synthesized framework. Health Expect. 2015;18(5):1151–66.

Domecq JP, Prutsky G, Elraiyah T, Wang Z, Nabhan M, Shippee N, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14:89.

Crocker J, Hughes-Morley A, Petit-Zeman S, Rees S. Assessing the impact of patient and public involvement on recruitment and retention in clinical trials: a systematic review. In: 3rd International Clinical Trials Methodology Conference. 2015;16(S2):O91.

Staniszewska S, Herron-Marx S, Mockford C. Measuring the impact of patient and public involvement: the need for an evidence base. Int J Qual Health Care. 2008;20(6):373–4.

Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, et al. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2014;17(5):637–50.

Staniszewska S, Adebajo A, Barber R, Beresford P, Brady LM, Brett J, et al. Developing the evidence base of patient and public involvement in health and social care research: the case for measuring impact. Int J Consum Stud. 2011;35(6):628–32.

Staley K. ‘Is it worth doing?’ Measuring the impact of patient and public involvement in research. Res Involv Engagem. 2015;1(1):6.

Article   PubMed   PubMed Central   Google Scholar  

Brady L, Preston J. How do we know what works? Evaluating data on the extent and impact of young people’s involvement in English health research. Res All. 2020;4(2):194–206.

Daly J. Evidence based medicine and the search for a science of clinical care. Oakland: University of California Press; 2005.

Book   Google Scholar  

Ward PR, Thompson J, Barber R, Armitage CJ, Boote JD, Cooper CL, et al. Critical perspectives on ‘consumer involvement’ in health research epistemological dissonance and the know-do gap. J Sociol. 2010;46(1):63–82.

Rose D, Kalathil J. Power, privilege and knowledge: the untenable promise of co-production in mental “health.” Front Soc. 2019;4(57):435866.

Google Scholar  

Beresford P. PMC7317269: PPI or user Involvement: taking stock from a service user perspective in the twenty first century. Res Involv Engagem. 2020;6:36.

McKevitt C. Experience, knowledge and evidence: a comparison of research relations in health and anthropology. Evid Policy. 2013;9(1):113–30.

Boaz A, Biri D, McKevitt C. Rethinking the relationship between science and society: has there been a shift in attitudes to Patient and Public Involvement and Public Engagement in Science in the United Kingdom? Health Expect. 2016;19(3):592–601.

Green G. Power to the people: to what extent has public involvement in applied health research achieved this? Res Involv Engagem. 2016;2(1):28.

Miller FA, Patton SJ, Dobrow M, Berta W. Public involvement in health research systems: a governance framework. Health Res Policy Syst. 2018;16(1):79.

Madden M, Speed E. Beware zombies and unicorns: toward critical patient and public involvement in health research in a neoliberal context. Front Sociol. 2017;2(7):1–6.

Papoulias S, Callard F. Material and epistemic precarity: it’s time to talk about labour exploitation in mental health research. Soc Sci Med. 2022;306:115102.

Lignou S, Sheehan M, Singh I. ‘A commitment to equality, diversity and inclusion’: a conceptual framework for equality of opportunity in patient and public involvement in research. Res Ethics. 2024;20(2):288–303.

Dorey P. The legacy of Thatcherism—public sector reform. Obs Soc Br. 2015;17:33–60.

National Health Service and Community Care Act. 1990.

Ferlie E, Ashburner L, Fitzgerald L, Pettigrew A. The new public management in action. Oxford: Oxford University Press; 1996.

Lapuente V, Van de Walle S. The effects of new public management on the quality of public services. Governance. 2020;33(3):461–75.

Atkinson P, Sheard S, Walley T. ‘All the stars were aligned’? The origins of England’s National Institute for Health Research. Health Res Policy Syst. 2019;17(1):95.

Weir S, Beetham D. Political power and democratic control in Britain: the democratic audit of the United Kingdom. London: Psychology Press; 1999.

Sullivan HC, Skelcher C. Working across boundaries. 1st ed. Houndmills: Palgrave; 2002.

The new NHS: modern, dependable 1997.

Cutler T, Waine B. Managerialism reformed? New labour and public sector management. Soc Policy Adm. 2000;34(3):318–32.

Speed E. Applying soft bureaucracy to rhetorics of choice: UK NHS 1983–2007. In: Clegg SR, Harris M, Hopfl H, editors. Managing modernity: the end of bureaucracy? Oxford: Oxford University Press; 2011.

Dalingwater L. Post-new public management (NPM) and the reconfiguration of health services in England. Obs Soc Br. 2014;1(16):51–64.

Bennett C, McGivern G, Ferlie E, Dopson S, Fitzgerald L. Making wicked problems governable? The case of managed networks in health care. 1st ed. Oxford: Oxford University Press; 2013.

Hanney S, Kuruvilla S, Soper B, Mays N. Who needs what from a national health research system: lessons from reforms to the English department of Health’s R&D system. Health Res Policy Syst. 2010;13(8):11–11.

Evans TW. Best research for best health: a new national health research strategy. Clin Med. 2006;6(5):435–7.

DeNegri S, Evans D, Palm M, Staniszewka S. The history of INVOLVE—a witness seminar. 2024. https://intppinetwork.wixsite.com/ippin/post/history-of-involve . Accessed Apr 17 2024.

Evans D, Pollard KC. Theorising service user involvement from a researcher perspective. In: Staddon P, editor. Mental health service users in research United States. Bristol: Policy Press; 2013. p. 39.

Papoulias SC, Callard F. ‘A limpet on a ship’: spatio-temporal dynamics of patient and public involvement in research. Health Expect. 2021;24(3):810–8.

Komporozos-Athanasiou A, Paylor J, McKevitt C. Governing researchers through public involvement. J Soc Policy. 2022;51(2):268–83.

Paylor J, McKevitt C. The possibilities and limits of “co-producing” research. Front Sociol. 2019;4:23.

Boylan AM, Locock L, Thomson R, Staniszewska S. “About sixty per cent I want to do it”: health researchers’ attitudes to, and experiences of, patient and public involvement (PPI)—a qualitative interview study. Health Expect. 2019. https://doi.org/10.1111/hex.12883 .

DeNegri S. Going the extra mile: improving the nation’s health and wellbeing through public involvement in research. 2015.

Crowe S, Wray P, Lodemore M. NIHR public involvement leads’ meeting November 25 2016. 2017.

NIHR. Taking stock—NIHR public involvement and engagement. 2019. https://www.nihr.ac.uk/documents/taking-stock-nihr-public-involvement-and-engagement/20566 . Accessed Apr 28 2023.

NIHR. Definition and role of the designated PPI (Patient and Public Involvement) lead in a research team. 2020. https://www.nihr.ac.uk/documents/definition-and-role-of-the-designated-ppi-patient-and-public-involvement-lead-in-a-research-team/23441 . Accessed Apr 28 2023.

Li KK, Abelson J, Giacomini M, Contandriopoulos D. Conceptualizing the use of public involvement in health policy decision-making. Soc Sci Med. 2015;138:14–21.

Staley K, Barron D. Learning as an outcome of involvement in research: what are the implications for practice, reporting and evaluation? Res Involv Engagem. 2019;5(1):14.

Brady L, Miller J, McFarlane-Rose E, Noor J, Noor R, Dahlmann-Noor A. “We know that our voices are valued, and that people are actually going to listen”: co-producing an evaluation of a young people’s research advisory group. Res Involv Engagem. 2023;9(1):1–15.

Knowles S, Sharma V, Fortune S, Wadman R, Churchill R, Hetrick S. Adapting a codesign process with young people to prioritize outcomes for a systematic review of interventions to prevent self-harm and suicide. Health Expect. 2022;25(4):1393–404.

Mathie E, Wythe H, Munday D, Millac P, Rhodes G, Roberts N, et al. Reciprocal relationships and the importance of feedback in patient and public involvement: a mixed methods study. Health Expect. 2018;21(5):899–908.

Wilson P, Mathie E, Keenan J, McNeilly E, Goodman C, Howe A, et al. ReseArch with Patient and Public invOlvement: a RealisT evaluation—the RAPPORT study. Health Serv Deliv Res. 2015;3(38):1–176.

Article   CAS   Google Scholar  

Mathie E, Smeeton N, Munday D, Rhodes G, Wythe H, Jones J. The role of patient and public involvement leads in facilitating feedback: “invisible work.” Res Involv Engagem. 2020;6(1):40.

Watermeyer R, Rowe G. Public engagement professionals in a prestige economy: ghosts in the machine. Stud High Educ. 2022;47(7):1297–310.

Watermeyer R, Lewis J. Institutionalizing public engagement through research in UK universities: perceptions, predictions and paradoxes concerning the state of the art. Stud High Educ. 2018;43(9):1612–24.

Collinson JA. ‘Get yourself some nice, neat, matching box files!’ research administrators and occupational identity work. Stud High Educ. 2007;32(3):295–309.

Braun V, Clarke V. Reflecting on reflexive thematic analysis. Qual Res Sport Exerc Health. 2019;11(4):589–97.

Clarke V, Braun V. Thematic analysis: a practical guide. 2021.

Clarke V, Braun V. Successful qualitative research. London: SAGE; 2013.

Lincoln YS, Guba EG. Naturalistic inquiry. 3rd ed. Beverly Hills: Sage Publications; 1985.

Kaiser K. Protecting respondent confidentiality in qualitative research. Qual Health Res. 2009;19(11):1632–41.

Heney V, Poleykett B. The impossibility of engaged research: complicity and accountability between researchers, ‘publics’ and institutions. Sociol Health Illn. 2022;44(S1):179–94.

MacKinnon KR, Guta A, Voronka J, Pilling M, Williams CC, Strike C, et al. The political economy of peer research: mapping the possibilities and precarities of paying people for lived experience. Br J Soc Work. 2021;51(3):888–906.

Bibbins-Domingo K, Helman A, Dzau VJ. The imperative for diversity and inclusion in clinical trials and health research participation. JAMA. 2022;327(23):2283–4.

Washington V, Franklin JB, Huang ES, Mega JL, Abernethy AP. Diversity, equity, and inclusion in clinical research: a path toward precision health for everyone. Clin Pharmacol Ther. 2023;113(3):575–84.

Graham ID, McCutcheon C, Kothari A. Exploring the frontiers of research co-production: the Integrated Knowledge Translation Research Network concept papers. Health Res Policy Syst. 2019;17(1):88.

Marten R, El-Jardali F, Hafeez A, Hanefeld J, Leung GM, Ghaffar A. Co-producing the covid-19 response in Germany, Hong Kong, Lebanon, and Pakistan. BMJ. 2021;372: n243.

Smith H, Budworth L, Grindey C, Hague I, Hamer N, Kislov R, et al. Co-production practice and future research priorities in United Kingdom-funded applied health research: a scoping review. Health Res Policy Syst. 2022;20(1):36.

World Health Organization. Health inequity and the effects of COVID-19: assessing, responding to and mitigating the socioeconomic impact on health to build a better future. Copenhagen: Regional Office for Europe. World Health Organization; 2020.

Dunston R, Lee A, Boud D, Brodie P, Chiarella M. Co-production and health system reform—from re-imagining to re-making. Aust J Public Adm. 2009;68:39–52.

Department for Health and Social Care. Best research for best health: the next chapter. Bethesda: National Institute for Health Research; 2021.

Wilkinson S, Kitzinger C. Representing our own experience: issues in “insider” research. Psychol Women Q. 2013;37:251–5.

Download references

Acknowledgements

S.P. presented earlier versions of this paper at the 8th annual conference of the Centre for Public Engagement Kingston University, December 2021; at the Medical Sociology conference of the British Sociological Association, September 2022; and at the annual Health Services Research UK Conference, July 2023. They are grateful to the audiences of these presentations for their helpful comments. Both authors are also grateful to the generous participants and to the NIHR Applied Research Collaboration Public Involvement Community for their sustaining support and encouragement during this time. S.P. also wishes to thank Felicity Callard for her comments, advice and suggestions throughout this process: this paper would not have been completed without her.

Funding

S.P. is supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration (ARC) South London at King’s College Hospital NHS Foundation Trust. The views expressed are those of the author and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

Author information

Authors and Affiliations

Health Service & Population Research, King’s College London, London, United Kingdom

Stan Papoulias

Centre for Public Health and Community Care, University of Hertfordshire, Hatfield, United Kingdom

Louca-Mai Brady


Contributions

S.P. developed the original idea for this article through earlier collaborations with L.M.B. whose long-term experience as a PPI practitioner has been central to both the project and the article. L.M.B. contributed to conceptualization, wrote the first draft of the background and undertook revisions after the first draft including reconceptualization of results. S.P. contributed to conceptualization, undertook data analysis, wrote the first draft of findings and discussion and revised the first draft in its entirety in consultation with L.M.B. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Stan Papoulias .

Ethics declarations

Ethics approval and consent to participate.

The study received a favourable opinion from the Psychiatry, Nursing and Midwifery Research Ethics Panel, King’s College London (ref no.: LRS-20/21-21466).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Papoulias, S., Brady, L.M. “I am there just to get on with it”: a qualitative study on the labour of the patient and public involvement workforce. Health Res Policy Syst 22, 118 (2024). https://doi.org/10.1186/s12961-024-01197-5


Received : 17 July 2023

Accepted : 26 July 2024

Published : 02 September 2024

DOI : https://doi.org/10.1186/s12961-024-01197-5


  • Patient and public involvement
  • PPI workforce
  • New public management
  • National Institute for Health and Care Research (NIHR)

Health Research Policy and Systems

ISSN: 1478-4505



Intelligent personal assistants in self-access L2 vocabulary learning

  • Published: 02 September 2024


  • Assim S. Alrajhi (ORCID: orcid.org/0000-0002-6205-9943)

Motivated by the proliferation of artificial intelligence tools with the potential to promote self-access learning, this study used a sequential explanatory quasi-experimental mixed-methods design to investigate the efficacy of Google Assistant (GA) in facilitating second language (L2) vocabulary learning compared with online dictionaries. A cohort of EFL university students (n = 74) was assigned to two groups: a control group using online dictionaries and an experimental group using GA. Over six learning sessions, both groups learned 10% of Coxhead’s (2000) Academic Word List (AWL). With data drawn from multiple sources, including pre- and post-tests, a survey questionnaire and individual interviews, the findings reveal significant improvements in vocabulary knowledge for both groups, indicating that GA can serve as an effective vocabulary learning tool. Despite concerns about the quality of GA’s voice recognition as a potential demotivating factor, learners held positive views of GA’s efficacy. These perceptions reflect influential factors situated primarily within the cognitive and affective domains of learning. Accordingly, key affordances and limitations of GA are identified. The study proposes pedagogical implications and outlines avenues for further research on intelligent personal assistant-assisted L2 vocabulary learning.
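The pre-/post-test comparison described in the abstract can be illustrated with a minimal sketch of a gain-score analysis. The scores below are hypothetical placeholders, not the study’s data; only the shape of the comparison (per-learner gains, summarized per group) follows the design described above.

```python
from statistics import mean, stdev

# Hypothetical (pre, post) vocabulary scores per learner; the actual
# study data are not reproduced here.
dictionary_group = [(20, 35), (18, 30), (25, 41), (22, 36)]
ga_group = [(19, 38), (21, 40), (24, 44), (17, 33)]

def gain_summary(pairs):
    """Mean and sample standard deviation of per-learner pre-to-post gains."""
    gains = [post - pre for pre, post in pairs]
    return mean(gains), stdev(gains)

print(gain_summary(dictionary_group))  # summary for the dictionary group
print(gain_summary(ga_group))          # summary for the GA group
```

In the actual study, such gains would feed into an inferential test (e.g. a paired or between-groups comparison); this sketch stops at the descriptive summary.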


Data availability

All data are available upon reasonable request.

Alrajhi, A. S. (2020). Static infographics effects on the receptive knowledge of idiomatic expressions. Indonesian Journal of Applied Linguistics, 10 (2), 315–326. https://doi.org/10.17509/ijal.v10i2.28596

Alrajhi, A. S. (2023). Genre effect on Google Translate–assisted L2 writing output quality. ReCALL, 35(3), 305–320. https://doi.org/10.1017/S0958344022000143

Alrajhi, A. S. (2024). Artificial intelligence pedagogical chatbots as L2 conversational agents. Cogent Education, 11 (1), 2327789. https://doi.org/10.1080/2331186X.2024.2327789

Al Shamsi, J. H., Al-Emran, M., & Shaalan, K. (2022). Understanding key drivers affecting students’ use of artificial intelligence-based voice assistants. Education and Information Technologies, 27 , 8071–8091. https://doi.org/10.1007/s10639-022-10947-3


Aw, E. C. X., Tan, G. W. H., Cham, T. H., Raman, R., & Ooi, K. B. (2022). Alexa, what’s on my shopping list? Transforming customer experience with digital voice assistants. Technological Forecasting and Social Change, 180 , 121711. https://doi.org/10.1016/j.techfore.2022.121711

Bateson, G., & Daniels, P. (2012). Diversity in technologies. In G. Stockwell (Ed.), Computer assisted language learning: Diversity in research and practice (pp. 127–146). New York, NY: Cambridge University Press. https://doi.org/10.1017/cbo9781139060981


Belanche, D., Casaló, L. V., Schepers, J., & Flavián, C. (2021). Examining the effects of robots’ physical appearance, warmth, and competence in frontline services: The Humanness-Value-Loyalty model. Psychology & Marketing, 38 (12), 2357–2376. https://doi.org/10.1002/mar.21532

Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service Industries Journal, 41 (13–14), 860–876. https://doi.org/10.1080/02642069.2020.1727892

Berdasco, A., López, G., Diaz, I., Quesada, L., & Guerrero, L. A. (2019). User experience comparison of intelligent personal assistants: Alexa, Google Assistant, Siri and Cortana. Proceedings, 31 (1), 51–58. https://doi.org/10.3390/proceedings2019031051

Bibauw, S., Francois, T., & Desmet, P. (2019). Discussing with a computer to practice a foreign language: Research synthesis and conceptual framework of dialogue-based CALL. Computer Assisted Language Learning, 32 (8), 827–877. https://doi.org/10.1080/09588221.2018.1535508

Botero, G. G., Questier, F., & Zhu, C. (2019). Self-directed language learning in a mobile assisted out-of-class context: Do students walk the talk? Computer Assisted Language Learning, 32 (1–2), 71–97. https://doi.org/10.1080/09588221.2018.1485707

Bräuer, P., & Mazarakis, A. (2024). How to design audio-gamification for language learning with Amazon Alexa?—A long-term field experiment. International Journal of Human-Computer Interaction, 40 (9), 2343–2360. https://doi.org/10.1080/10447318.2022.2160228

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3 (2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Bronstein, M. (2020). A more helpful Google Assistant for your every day. [Online] Available: https://www.blog.google/products/assistant/ces-2020-google-assistant/ (January 7, 2020)

Chen, H.H.-J., Yang, C.T.-Y., & Lai, K.K.-W. (2023). Investigating college EFL learners’ perceptions toward the use of Google Assistant for foreign language learning. Interactive Learning Environments, 31 (3), 1335–1350. https://doi.org/10.1080/10494820.2020.1833043

Choi, S., Jang, Y., & Kim, H. (2023). Exploring factors influencing students’ intention to use intelligent personal assistants for learning.  Interactive Learning Environments , 1–14. https://doi.org/10.1080/10494820.2023.2194927

Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34 (2), 213–238. https://doi.org/10.2307/3587951

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39 (3), 124–130. https://doi.org/10.1207/s15430421tip3903_2

Daley, S., & Pennington, J. (2020). Alexa the teacher’s pet: A review of research on virtual assistants in education. In T. Bastiaens (Ed.), EdMedia + Innovate Learning 2020 (pp. 138–146). Waynesville, NC: AACE.

Daniels, P., & Iwago, K. (2017). The suitability of cloud-based speech recognition engines for language learning. The JALT CALL Journal, 13 (3), 211–221.

Dizon, G. (2023). Affordances and constraints of intelligent personal assistants for second-language learning. RELC Journal, 54 (3), 848–855. https://doi.org/10.1177/00336882211020548

Dizon, G., & Tang, D. (2020). Intelligent personal assistants for autonomous second language learning: An investigation of Alexa. The JALT CALL Journal, 16 (2), 107–120.

Dizon, G., Tang, D., & Yamamoto, Y. (2022). A case study of using Alexa for out-of-class, self- directed Japanese language learning. Computers and Education: Artificial Intelligence, 3 , 100088. https://doi.org/10.1016/j.caeai.2022.100088

Dizon, G. (2020). Evaluating intelligent personal assistants for L2 listening and speaking development. Language Learning & Technology, 24 (1), 16–26.

Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies . Oxford University Press.


Gardner, D., & Miller, L. (1999). Establishing self-access: From theory to practice . Cambridge University Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Aldine.

Godwin-Jones, R. (2019). Riding the digital wilds: Learner autonomy and informal language learning. Language Learning & Technology, 23 (1), 8–25.

Hatch, E. M., & Lazaraton, A. (1991). The research manual: Design and statistics for applied linguistics . Newbury House.

Hsu, H. L., Chen, H. H. J., & Todd, A. G. (2023). Investigating the impact of the Amazon Alexa on the development of L2 listening and speaking skills. Interactive Learning Environments, 31 (9), 5732–5745. https://doi.org/10.1080/10494820.2021.2016864

Kervenoael, R., Hasan, R., Schwob, A., & Goh, E. (2020). Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value, empathy, and information sharing into visitors’ intentions to use social robots. Tourism Management, 78 , 104042. https://doi.org/10.1016/j.tourman.2019.104042

Lardinois, F. (2018). The Google Assistant is now bilingual. TechCrunch. [Online] Available: https://techcrunch.com/2018/08/30/the-google-assistant-is-now-bilingual/ (August 30, 2020)

Levis, J. (2007). Computer technology in teaching and researching pronunciation. Annual Review of Applied Linguistics, 27 , 184–202. https://doi.org/10.1017/S0267190508070098

Liao, Y., Vitak, J., Kumar, P., Zimmer, M., & Kritikos, K. (2019). Understanding the role of privacy and trust in intelligent personal assistant adoption. In: N. G. Taylor, C. Christian-Lamb, M. H. Martin, & B. Nardi (Eds.), Information in Contemporary Society , 11420 , 102–113. https://doi.org/10.1007/978-3-030-15742-5_9

Locaria. (2020).  The Future of Voice Commerce and Localisation . Locaria. [Online] Available: https://locaria.com/the-future-of-voice-commerce-and-localisation/ (March 6, 2020).

Molinillo, S., Rejón-Guardia, F., Anaya-Sánchez, R., & Liébana-Cabanillas, F. (2023). Impact of perceived value on intention to use voice assistants: The moderating effects of personal innovativeness and experience. Psychology & Marketing, 40 (11), 2272–2290. https://doi.org/10.1002/mar.21887

Moussalli, S., & Cardoso, W. (2020). Intelligent personal assistants: Can they understand and be understood by accented L2 learners? Computer Assisted Language Learning, 33 (8), 865–890. https://doi.org/10.1080/09588221.2019.1595664

Reinders, H., & Hubbard, P. (2013). CALL and learner autonomy: Affordances and constraints. In M. Thomas, H. Reinders, & M. Warschauer (Eds.), Contemporary computer assisted language learning (pp. 359–375). Continuum Books.

Santos, J., Rodrigues, J. J., Silva, B. M., Casal, J., Saleem, K., & Denisov, V. (2016). An IoT-based mobile gateway for intelligent personal assistants on mobile health environments. Journal of Network and Computer Applications, 71 , 194–204. https://doi.org/10.1016/j.jnca.2016.03.014

Singh, S., Singh, N., Kalinić, Z., & Liébana-Cabanillas, F. J. (2021). Assessing determinants influencing continued use of live streaming services: An extended perceived value theory of streaming addiction. Expert Systems with Applications, 168, 114241. https://doi.org/10.1016/j.eswa.2020.114241

Tai, T. Y. (2024). Comparing the effects of intelligent personal assistant-human and human-human interactions on EFL learners’ willingness to communicate beyond the classroom. Computers & Education, 210 , 104965. https://doi.org/10.1016/j.compedu.2023.104965

Tai, T. Y., & Chen, H. H. J. (2023). The impact of Google Assistant on adolescent EFL learners’ willingness to communicate. Interactive Learning Environments, 31 (3), 1485–1502. https://doi.org/10.1080/10494820.2020.1841801

Tai, T. Y., & Chen, H. H. J. (2024). The impact of intelligent personal assistants on adolescent EFL learners’ listening comprehension. Computer Assisted Language Learning, 37 (3), 433–460. https://doi.org/10.1080/09588221.2022.2040536

Viberg, O., & Kukulska-Hulme, A. (2021). Fostering learners’ self-regulation and collaboration skills and strategies for mobile language learning beyond the classroom. In H. Reinders, C. Lai, & P. Sundqvist (Eds.), Routledge handbook of language teaching and learning beyond the classroom (pp. 1–15). Routledge.

Xiao, F., Zhao, P., Sha, H., Yang, D., & Warschauer, M. (2023). Conversational agents in language learning. Journal of China Computer-Assisted Language Learning . https://doi.org/10.1515/jccall-2022-0032

Yang, C. T. Y., Lai, S. L., & Chen, H. H. J. (2022). The impact of intelligent personal assistants on learners’ autonomous learning of second language listening and speaking.  Interactive Learning Environments , 1–21. https://doi.org/10.1080/10494820.2022.2141266

Yang, H., & Lee, H. (2019). Understanding user behavior of virtual personal assistant devices. Information Systems and e-Business Management, 17 (1), 65–87. https://doi.org/10.1007/s10257-018-0375-1


Yetişensoy, O., & Karaduman, H. (2024). The effect of AI-powered chatbots in social studies education. Education and Information Technologies . https://doi.org/10.1007/s10639-024-12485-6 . Advance online publication.

Zeithaml, V. A. (1988). Consumer perceptions of price, quality and value: A means-end model and synthesis of evidence. Journal of Marketing, 52 (3), 2–22. https://doi.org/10.1177/002224298805200302

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Department of English Language and Literature, College of Languages and Humanities, Qassim University, Qassim, Saudi Arabia

Assim S. Alrajhi


Corresponding author

Correspondence to Assim S. Alrajhi .

Ethics declarations

Ethics approval.

The identities of all participants in this study were kept confidential and hidden throughout the research process. Participation in this study was entirely voluntary, and participants could withdraw from the study at any time.

Competing interests

The author has no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Sample of the pre-/post-tests

Choose the correct academic word that matches the shown definition.

To leave a person, place or thing, usually permanently/to stop doing an activity before it is finished. (commodity—abandon—exploit—coincide—mediate)

Something happening through chance rather than reasoning or a plan. (denote—crucial—displacement—contemporary—arbitrary)

Invent a plan or idea / to imagine something. (conceive—undergo—coincide—commence—fluctuate)

Next to, near, or touching. (compile—inherent—thereby—whereby—adjacent)

Extremely large. (commence—enormous—intrinsic—sphere—ambiguous)

Coxhead’s (2000) AWL (Sublists 8, 9 and 10) used in this study.

Sublist (8) displacement—arbitrary—denote—offset—exploit—abandon—predominant—thereby—ambiguous—conform—contemporary—accumulate—fluctuate—commodity—prospect—inevitable—induce—crucial—exhibit—bias.

Sublist (9) bulk—commence—anticipate—norms—compatible—concurrent—integral—confine—refine—accommodate—rigid—diminish—analogy—controversy—sphere—mediate—coincide—restrain—inherent.

Sublist (10) whereby—incline—assemble—albeit—enormous—reluctance—persist—undergo—pose—notwithstanding—adjacent—forthcoming—conceive—panel—invoke—integrity—intrinsic—compile.
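As a quick arithmetic check on the “10%” figure in the study design, the three sublists reproduced above contain 20 + 19 + 18 = 57 headwords, i.e. exactly 10% of the 570 headwords in Coxhead’s AWL:

```python
# Headwords from AWL Sublists 8-10 as listed above.
sublist_8 = ("displacement arbitrary denote offset exploit abandon predominant "
             "thereby ambiguous conform contemporary accumulate fluctuate "
             "commodity prospect inevitable induce crucial exhibit bias").split()
sublist_9 = ("bulk commence anticipate norms compatible concurrent integral "
             "confine refine accommodate rigid diminish analogy controversy "
             "sphere mediate coincide restrain inherent").split()
sublist_10 = ("whereby incline assemble albeit enormous reluctance persist "
              "undergo pose notwithstanding adjacent forthcoming conceive "
              "panel invoke integrity intrinsic compile").split()

total = len(sublist_8) + len(sublist_9) + len(sublist_10)
print(total)  # 57 headwords = 10% of the 570-headword AWL
```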

See Fig. 7: Sample of vocabulary queries using GA.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Alrajhi, A.S. Intelligent personal assistants in self-access L2 vocabulary learning. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12967-7


Received : 04 April 2024

Accepted : 08 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1007/s10639-024-12967-7


  • Intelligent personal assistant
  • Google assistant
  • Artificial intelligence
  • Human–computer interaction
  • L2 vocabulary
  • EFL learner
  • Self-access learning

