Structured vs. unstructured interviews: A complete guide

Last updated: 7 March 2023

Reviewed by: Miroslav Damyanov


Interviews can help you understand the context of a subject, eyewitness accounts of an event, people's perceptions of a product, and more.

In some instances, semi-structured or unstructured interviews can be more helpful; in others, structured interviews are the right choice to obtain the information you seek.

In some cases, structured interviews can save time, making your research more efficient. Let’s dive into everything you need to know about structured interviews.


What are structured interviews?

Structured interviews are also known as standardized interviews, patterned interviews, or planned interviews. They’re a research instrument that uses a standard sequence of questions to collect information about the research subject. 

Often, you’ll use structured interviews when you need data that’s easy to categorize and quantify for a statistical analysis of responses.

Structured interviews are incredibly effective at helping researchers identify patterns and trends in response data. They’re great at minimizing the time and resources necessary for data collection and analysis.

What types of questions suit structured interviews?

Often, researchers use structured interviews for quantitative research. In these cases, they usually employ close-ended questions.

Close-ended questions have a fixed set of responses from which the interviewer can choose. Because of the limited response selection set, response data from close-ended questions is easy to aggregate and analyze.
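Because every respondent picks from the same fixed set of options, aggregating close-ended responses is straightforward. As a minimal sketch (the question and the responses below are hypothetical):

```python
from collections import Counter

# Hypothetical close-ended responses: five interviewees answering the
# dichotomous question "Do you believe the world is round?"
responses = ["yes", "no", "yes", "yes", "no"]

tally = Counter(responses)                            # counts per option
share = {k: v / len(responses) for k, v in tally.items()}  # proportions

print(tally)   # Counter({'yes': 3, 'no': 2})
print(share)   # {'yes': 0.6, 'no': 0.4}
```

With open-ended answers there is no fixed option set to count over, which is why they take longer to analyze.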

Researchers often employ multiple-choice or dichotomous close-ended questions in interviews. 

For multiple-choice questions, interviewees choose between three or more possible answers. The interviewer will often restrict the response set to four or five options: interviewees will struggle to keep more than that in mind, which can slow down and complicate the interview process.

For dichotomous questions, the interviewee may choose between two possible options. Yes or no and true or false questions are examples of dichotomous questions.

Open-ended questions can also appear in structured interviews, though less often. Researchers use them when conducting qualitative research and looking for in-depth information about the interviewee's perceptions or experiences.

These questions take longer for the interviewee to answer, and the answers take longer for the researcher to analyze. There's also a higher possibility of the researcher collecting irrelevant data. However, open-ended questions are more effective than close-ended questions in gathering in-depth information.

Sometimes, researchers use structured interviews in qualitative research. In this case, the research instrument contains open-ended questions in the same sequence. This usage is less common because it can be hard to compare feedback, especially with large sample sizes.

What types of structured interviews are there?

Researchers conduct structured interviews face-to-face, via telephone or videoconference, or through a survey instrument. 

Face-to-face interviews help researchers collect data and gather more detailed information. They can collect and analyze facial expressions, body language, tone, and inflection more easily than they might through other interview methods.

However, face-to-face interviews are the most resource-intensive to arrange. You'll likely need to assume travel and other related logistical costs for a face-to-face interview. 

These interviews also take more time and are more vulnerable to bias than some other formats. For these reasons, face-to-face interviews are best with a small sample size.

You can also conduct interviews via an audio or video call. These are less resource-intensive than face-to-face interviews and can accommodate a larger sample size.

However, it can be difficult for the interviewer to engage effectively with the interviewee within this format, which can inject bias or ambiguity into the responses. This is particularly true for audio calls, especially if the interviewer and interviewee have not met before the interview. 

A video call can help the interviewer capture some data from body language and facial expressions, but less so than in a face-to-face interview. Technical issues are another thing to consider. If you’re studying a group of people that live in an area with limited Internet connectivity, this can make a video call challenging.

Survey questionnaires mirror the essential elements of structured interviews by containing a consistent sequence of standard questions. Surveys in quantitative research usually include close-ended questions. This data collection method can be beneficial if you need feedback from a large sample size.

Surveys are resource-efficient from a data administration standpoint but are more limited in the data they can gather. Further, if a survey question is ambiguous, you can’t clear up the ambiguity before someone responds. 

By contrast, in a face-to-face or tele-interview, an interviewee may ask clarifying questions or exhibit confusion when asked an unclear question, allowing the interviewer to clarify.

What are some common examples of structured interviews?

Structured interviews are relevant in many fields. You can find structured interviews in human resources, marketing, political science, psychology, and more. 

Academic and applied researchers commonly use them to verify insights from analyzing academic literature or responses from other interview types.

However, one of the most common structured interview applications lies outside the research realm: Human resource professionals and hiring managers commonly use these interviews to hire employees.

A hiring manager can easily compare responses and whittle down the applicant pool by posing a standard set of closed-ended interview questions to multiple applicants. 

Further, standard close-ended or open-ended questions can reduce bias and add objectivity and credibility to the hiring process.

Structured interviews are common in political polling. Candidates and political parties may conduct structured interviews with relatively small voter groups to obtain feedback. They ask questions about issues, messaging, and voting intentions to craft policies and campaigns.

What do you need to conduct a structured interview?

The tools you need to conduct a structured interview vary by format. But fundamentally, you will need: 

A participant

An interviewer

A pen and pad (or other note-taking tools)

A recording device

A consent form

A list of interview questions

While some interviewees may express qualms about you recording the interview, it’s challenging to conduct quality interviews while taking detailed notes. Even if you have a note-taker in the room, note-taking may introduce bias and can’t capture body language or facial expressions. 

Depending on the nature of your study, others may wish to review your sources. If they call your conclusions into question, audio recordings are additional evidence in your favor.

To record, you should ask the interviewee to sign a consent form. Check with your employer's legal counsel or your academic institution's review board for guidance on obtaining consent legally in your state.

If you're conducting a face-to-face interview, a camcorder, digital camera, or even some smartphones are sufficient for recording.

For a tele-interview, you'll find that today's leading video conferencing software applications feature a convenient recording function for data collection.

If a survey is your method of choice, you'll need the survey and a distribution and collection method. Online survey software applications allow you to create surveys by inputting the questions and distributing your survey via text or email. 

In some cases, survey companies even offer packages in which they will call those who do not respond via email or text and conduct the survey over the phone.

How to conduct a structured interview

If you're planning a face-to-face interview, you'll need to take a few steps to do it efficiently. 

First, double-check that the structured interview format is best for your study, then prepare your questions. Make sure they are neutral, unbiased, and close-ended. Ask a friend or colleague to test your questions before the interview to ensure they are clear and straightforward.

Choose the setting for your interviews. Ideally, you'll select a location that is easy to get to. If you live in a city, consider addresses accessible via public transportation. 

The room where your interview takes place should be comfortable, without distraction, and quiet, so your recording device clearly captures your interviewee's audio.

If you're looking to interview people with specific characteristics, you'll need to recruit them. Some companies specialize in interview recruitment. You provide the attributes you need, and they identify a pool of candidates for a fee. Alternatively, you can advertise to participants on social media and other relevant avenues. 

If you're looking for college students in a specific region, look at student newspaper ads or affiliated social media pages. 

You'll also want to incentivize participation, as recruiting interview respondents without compensation is exceedingly difficult. It’s best to include a line or two about requiring written consent for participation and how you’ll use the interview audio.

When you have an interview participant, discuss the intent of your research and acquire their consent. Ensure your recording tools are working well, and begin your interview. 

Don't rely on the recordings alone: Note the most significant insights from your participant, as you could easily forget them when it's time to analyze your data.

You'll want to transcribe your audio at the data analysis stage. Some recording applications use AI to generate transcripts. Remove filler words and other sounds to generate a clear transcript for the best results. 
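That cleanup step can be automated in part. Below is a minimal sketch, assuming a plain-text transcript; the `clean_transcript` function name and the filler list are illustrative, not a specific tool's API.

```python
import re

def clean_transcript(text: str) -> str:
    """Strip common spoken fillers ("um", "uh", "er") and collapse
    any leftover whitespace, leaving a cleaner transcript."""
    cleaned = re.sub(r"\b(um+|uh+|er+)\b[,]?\s*", "", text,
                     flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(clean_transcript("Um, I think, uh, the format worked well."))
# I think, the format worked well.
```

Real speech has more varied fillers and false starts, so a pass like this is a starting point rather than a substitute for proofreading the transcript.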

A written transcript will help you analyze data and pull quotes from your audio to include in your final research paper.

What are other common types of interviews?

Typically, you'll find researchers using at least one of these other common interview types:

Semi-structured interviews

As the name suggests, semi-structured interviews include some elements of a structured interview. You’ll include preplanned questions, but you can deviate from those questions to explore the interviewee's answers in greater depth.

Typically, a researcher will conduct a semi-structured interview with preplanned questions and an interview guide. The guide will include topics and potential questions to ask. Sometimes, the guide may also include areas or questions to avoid asking.

Unstructured interviews

In an unstructured interview, the researchers approach the interview subjects without predetermined questions. Researchers often use this qualitative instrument to probe into personal experiences and testimony, typically toward the beginning of a research study. 

Often, you’ll validate the insights you gather during unstructured and semi-structured interviews with structured interviews, surveys, and similar quantitative research tools.

Focus group interviews

Focus group interviews differ from the other three types of interviews as you pose the questions to a small group. Focus groups are typically either structured or semi-structured. When researchers employ structured interview questions, they are typically confident in the areas they wish to explore. 

Semi-structured interviews are perfect for a researcher seeking to explore broad issues. However, you must be careful that unplanned questions are unambiguous and neutral. Otherwise, you could wind up with biased results.

What is a structured vs. an unstructured interview?

A structured interview consists of standard preplanned questions for data collection. These questions may be close-ended, open-ended, or a combination. 

By contrast, an unstructured interview includes unplanned questions. In these interviews, you’ll usually equip facilitators with an interview guide. This includes guidelines for asking questions and samples that can help them ask relevant questions.

What are the advantages of a structured interview?

Relative to other interview formats, a structured interview is usually more time-efficient. With a preplanned set of questions, your interview is less likely to go into tangents, especially if you use close-ended questions. 

The more structure you provide to the interview, the more likely you are to generate responses that are easy to analyze. By contrast, an unstructured interview may involve a freewheeling conversation with off-topic and irrelevant feedback that lasts a long time.

What is an example of a structured question?

A structured question is any question you ask in an interview that you’ve preplanned and standardized.

For example, if you conduct five interviews and the first question you ask each interviewee is, "Do you believe the world is round, yes or no?" you have asked them a structured question. This is also a close-ended dichotomous question.



The Interview Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Interviews involve a conversation with a purpose, but have some distinct features compared to ordinary conversation, such as being scheduled in advance, having an asymmetry in outcome goals between interviewer and interviewee, and often following a question-answer format.

Interviews are different from questionnaires as they involve social interaction. Unlike questionnaire methods, researchers need training in interviewing (which costs money).


How Do Interviews Work?

Researchers can ask different types of questions, generating different types of data. For example, closed questions provide people with a fixed set of responses, whereas open questions allow people to express what they think in their own words.

The researcher will often record interviews, and the data will be written up as a transcript (a written account of interview questions and answers) which can be analyzed later.

It should be noted that interviews may not be the best method for researching sensitive topics (e.g., truancy in schools, discrimination, etc.) as people may feel more comfortable completing a questionnaire in private.

There are different types of interviews, with a key distinction being the extent of structure. Semi-structured is most common in psychology research. Unstructured interviews have a free-flowing style, while structured interviews involve preset questions asked in a particular order.

Structured Interview

A structured interview is a quantitative research method where the interviewer asks a set of prepared closed-ended questions in the form of an interview schedule, which he or she reads out exactly as worded.

Interview schedules have a standardized format, meaning the same questions are asked of each interviewee in the same order (see Fig. 1).

Figure 1. An example of an interview schedule

The interviewer will not deviate from the interview schedule (except to clarify the meaning of the question) or probe beyond the answers received.  Replies are recorded on a questionnaire, and the order and wording of questions, and sometimes the range of alternative answers, is preset by the researcher.
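The fixed-order, preset-options format of an interview schedule can be sketched as a simple data structure. The questions, options, and helper below are hypothetical, purely to illustrate the "same questions, same order, preset answers" constraint:

```python
# A hypothetical interview schedule: fixed wording, fixed order, preset
# answer options, read out identically to every interviewee.
SCHEDULE = [
    {"question": "Do you use the product daily?",
     "options": ["yes", "no"]},
    {"question": "How satisfied are you overall?",
     "options": ["very", "somewhat", "not at all"]},
]

def run_interview(answer_for):
    """Ask every question in order; reject answers outside the preset options."""
    record = []
    for item in SCHEDULE:
        answer = answer_for(item["question"])
        if answer not in item["options"]:
            raise ValueError(f"{answer!r} is not a preset option")
        record.append(answer)
    return record

# One simulated interviewee answering each question in turn.
print(run_interview(lambda q: "yes" if "daily" in q else "very"))
# ['yes', 'very']
```

Because every record has the same shape, responses from many interviewees line up column-for-column, which is what makes statistical comparison easy.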

A structured interview is also known as a formal interview (like a job interview).

Strengths

  • Structured interviews are easy to replicate, as a fixed set of closed questions is used, which is easy to quantify – this means it is easy to test for reliability.
  • Structured interviews are fairly quick to conduct, which means that many interviews can take place within a short amount of time. This means a large sample can be obtained, resulting in findings that are representative and can be generalized to a large population.

Limitations

  • Structured interviews are not flexible. This means new questions cannot be asked impromptu (i.e., during the interview), as an interview schedule must be followed.
  • The answers from structured interviews lack detail, as only closed questions are asked, which generates quantitative data. This means a researcher won’t know why a person behaves a certain way.

Unstructured Interview

Unstructured interviews do not use any set questions. Instead, the interviewer asks open-ended questions based on a specific research topic and tries to let the interview flow like a natural conversation, modifying his or her questions to suit the respondent’s specific experiences.

Unstructured interviews are sometimes referred to as ‘discovery interviews’ and are more like a ‘guided conversation’ than a strictly structured interview. They are sometimes called informal interviews.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values. Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective points of view.

Interviewer Self-Disclosure

Interviewer self-disclosure involves the interviewer revealing personal information or opinions during the research interview. This may increase rapport but risks changing dynamics away from a focus on facilitating the interviewee’s account.

In unstructured interviews, the informal conversational style may deliberately include elements of interviewer self-disclosure, mirroring ordinary conversation dynamics.

Interviewer self-disclosure risks changing the dynamics away from facilitation of interviewee accounts. It should not be ruled out entirely but requires skillful handling informed by reflection.

  • An informal interviewing style with some interviewer self-disclosure may increase rapport and participant openness. However, it also increases the chance of the participant converging opinions with the interviewer.
  • Complete interviewer neutrality is unlikely. However, excessive informality and self-disclosure risk the interview becoming more of an ordinary conversation and producing consensus accounts.
  • Overly personal disclosures could also be seen as irrelevant and intrusive by participants. They may invite increased intimacy on uncomfortable topics.
  • The safest approach seems to be to avoid interviewer self-disclosures in most cases. Where an informal style is used, disclosures require careful judgment and substantial interviewing experience.
  • If asked for personal opinions during an interview, the interviewer could highlight the defined roles and defer that discussion until after the interview.
Strengths

  • Unstructured interviews are more flexible, as questions can be adapted and changed depending on the respondents’ answers. The interview can deviate from the interview schedule.
  • Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person’s understanding of a situation.
  • They also have increased validity because the interviewer has the opportunity to probe for a deeper understanding, ask for clarification, and allow the interviewee to steer the direction of the interview.
Limitations

  • It can be time-consuming to conduct an unstructured interview and analyze the qualitative data (using methods such as thematic analysis).
  • Employing and training interviewers is expensive, and not as cheap as collecting data via questionnaires. Certain skills may be needed by the interviewer, including the ability to establish rapport and knowing when to probe.
  • Interviews inevitably co-construct data through the researcher’s agenda-setting and question-framing. Techniques like open questions provide only limited remedies.

Focus Group Interview

A focus group interview is a qualitative approach in which a group of respondents is interviewed together, used to gain an in‐depth understanding of social issues.

This type of interview is often referred to as a focus group because the job of the interviewer (or moderator) is to bring the group to focus on the issue at hand. Initially, the goal was to reach a consensus among the group, but with the development of techniques for analyzing group qualitative data, there is less emphasis on consensus building.

The method aims to obtain data from a purposely selected group of individuals rather than from a statistically representative sample of a broader population.

The role of the interview moderator is to make sure the group interacts with each other and does not drift off-topic. Ideally, the moderator will be similar to the participants in terms of appearance, have adequate knowledge of the topic being discussed, and exercise mild, unobtrusive control over dominant talkers and shy participants.

A researcher must be highly skilled to conduct a focus group interview. For example, the moderator may need certain skills, including the ability to establish rapport and know when to probe.

Strengths

  • Group interviews generate qualitative narrative data through the use of open questions. This allows the respondents to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person’s understanding of a situation. Qualitative data also includes observational data, such as body language and facial expressions.
  • Group responses are helpful when you want to elicit perspectives on a collective experience, encourage diversity of thought, reduce researcher bias, and gather a wider range of contextualized views.
  • They also have increased validity because some participants may feel more comfortable being with others, as they are used to talking in groups in real life (i.e., it’s more natural).
  • When participants have common experiences, focus groups allow them to build on each other’s comments to provide richer contextual data representing a wider range of views than individual interviews.
  • Focus groups are a type of group interview method used in market research and consumer psychology that is cost-effective for gathering the views of consumers.
Limitations

  • The researcher must keep all interviewees’ details confidential and respect their privacy. This is difficult with a group interview: the researcher cannot guarantee that the other people in the group will keep information private.
  • Group interviews are less reliable, as they use open questions and may deviate from the interview schedule, making them difficult to repeat.
  • There are some potential pitfalls of focus groups, such as conformity, social desirability, and oppositional behavior, that can reduce the usefulness of the data collected. For example, participants may lie to impress the other group members, or conform to peer pressure and give false answers.

To avoid these pitfalls, the interviewer needs to have a good understanding of how people function in groups as well as how to lead the group in a productive discussion.

Semi-Structured Interview

Semi-structured interviews lie between structured and unstructured interviews. The interviewer prepares the same set of questions to be answered by all interviewees. Additional questions might be asked during the interview to clarify or expand on certain issues.

In semi-structured interviews, the interviewer has more freedom to digress and probe beyond the answers. The interview guide contains a list of questions and topics that need to be covered during the conversation, usually in a particular order.

Semi-structured interviews are most useful to address the ‘what’, ‘how’, and ‘why’ research questions. Both qualitative and quantitative analyses can be performed on data collected during semi-structured interviews.

Strengths

  • Semi-structured interviews allow respondents to answer more on their own terms in an informal setting, yet provide uniform information, making them ideal for qualitative analysis.
  • The flexible nature of semi-structured interviews allows ideas to be introduced and explored during the interview based on the respondents’ answers.
  • Semi-structured interviews can provide reliable and comparable qualitative data. The interviewer can probe answers, asking the interviewee to clarify or expand on what they have said.
Limitations

  • The data generated remain fundamentally shaped by the interview context itself, and analysis rarely acknowledges this endemic co-construction.
  • They are more time-consuming (to conduct, transcribe, and analyze) than structured interviews.
  • The quality of findings depends more on the individual skills of the interviewer than in structured interviews. Skill is required to probe effectively while avoiding biasing responses.

The Interviewer Effect

Face-to-face interviews raise methodological problems. These stem from the fact that interviewers are themselves role players, and their perceived status may influence the replies of the respondents.

Because an interview is a social interaction, the interviewer’s appearance or behavior may influence the respondent’s answers. This is a problem as it can bias the results of the study and make them invalid.

For example, the gender, ethnicity, body language, age, and social status of the interviewer can all create an interviewer effect. If there is a perceived status disparity between the interviewer and the interviewee, the results of interviews have to be interpreted with care. This is pertinent for sensitive topics such as health.

For example, if a researcher was investigating sexism amongst males, would a female interviewer be preferable to a male one? It is possible that with a female interviewer, male participants might lie (i.e., pretend they are not sexist) to impress the interviewer, thus creating an interviewer effect.

Flooding Interviews With the Researcher’s Agenda

The interactional nature of interviews means the researcher fundamentally shapes the discourse, rather than just neutrally collecting it. This shapes what is talked about and how participants can respond.

  • The interviewer’s assumptions, interests, and categories don’t just shape the specific interview questions asked. They also shape the framing, task instructions, recruitment, and ongoing responses/prompts.
  • This flooding of the interview interaction with the researcher’s agenda makes it very difficult to separate out what comes from the participant vs. what is aligned with the interviewer’s concerns.
  • So the participant’s talk ends up being fundamentally shaped by the interviewer rather than being a more natural reflection of the participant’s own orientations or practices.
  • This effect is hard to avoid because interviews inherently involve the researcher setting an agenda. But it does mean the talk extracted may say more about the interview process than the reality it is supposed to reflect.

Interview Design

First, you must choose whether to use a structured or unstructured interview.

Characteristics of Interviewers

Next, you must consider who will be the interviewer, and this will depend on what type of person is being interviewed. There are several variables to consider:

  • Gender and age : This can greatly affect respondents’ answers, particularly on personal issues.
  • Personal characteristics : Some people are easier to get on with than others. Also, the interviewer’s accent and appearance (e.g., clothing) can affect the rapport between the interviewer and interviewee.
  • Language : The interviewer’s language should be appropriate to the vocabulary of the group of people being studied. For example, the researcher should adapt the questions’ language to match the respondents’ social background: age, educational level, social class, ethnicity, etc.
  • Ethnicity : Interviewers may find it harder to build rapport with respondents from different ethnic groups.
  • Interviewer expertise should match research sensitivity – inexperienced students should avoid interviewing highly vulnerable groups.

Interview Location

The location of a research interview can influence the way in which the interviewer and interviewee relate and may exaggerate a power dynamic in one direction or another. It is usual to offer interviewees a choice of location as part of facilitating their comfort and encouraging participation.

However, the safety of the interviewer is an overriding consideration and, as mentioned, a minimal requirement should be that a responsible person knows where the interviewer has gone and when they are due back.

Remote Interviews

The COVID-19 pandemic necessitated remote interviewing for research continuity. However, online interview platforms provide increased flexibility even under normal conditions.

They enable access to participant groups across geographical distances without travel costs or arrangements. Online interviews can be efficiently scheduled to align with researcher and interviewee availability.

There are practical considerations in setting up remote interviews. Interviewees require access to the internet and an online platform, such as Zoom, Microsoft Teams, or Skype, through which to connect.

Certain modifications help build initial rapport in the remote format. Allowing time at the start of the interview for casual conversation while testing audio/video quality helps participants settle in. Minor delays can disrupt turn-taking flow, so alerting participants to speak slightly slower than usual minimizes accidental interruptions.

Keeping remote interviews under an hour avoids the fatigue of staring at a screen. Seeking advance ethical clearance for verbal consent at the interview start saves participant time. Adapting to the remote context shows care for interviewees and aids rich discussion.

However, it remains important to critically reflect on how removing in-person dynamics may shape the co-created data. Perhaps some nuances of trust and disclosure differ over video.

Vulnerable Groups

The interviewer must ensure that they take special care when interviewing vulnerable groups, such as children. For example, children have a limited attention span, so lengthy interviews should be avoided.

Developing an Interview Schedule

An interview schedule is a list of pre-planned, structured questions prepared in advance to guide interviewers, researchers, and investigators in collecting information or data about a specific topic or issue.
  • List the key themes or topics that must be covered to address your research questions. This will form the basic content.
  • Organize the content logically, such as chronologically following the interviewee’s experiences. Place more sensitive topics later in the interview.
  • Develop the list of content into actual questions and prompts. Carefully word each question – keep them open-ended, non-leading, and focused on examples.
  • Add prompts to remind you to cover areas of interest.
  • Pilot test the interview schedule to check it generates useful data and revise as needed.
  • Be prepared to refine the schedule throughout data collection as you learn which questions work better.
  • Practice skills like asking follow-up questions to get depth and detail. Stay flexible to depart from the schedule when needed.
  • Keep questions brief and clear. Avoid multi-part questions that risk confusing interviewees.
  • Listen actively during interviews to determine which pre-planned questions can be skipped based on information the participant has already provided.

The key is balancing preparation with the flexibility to adapt questions based on each interview interaction. With practice, you’ll gain skills to conduct productive interviews that obtain rich qualitative data.
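The schedule-building steps above can be sketched as a simple data structure. This is a hypothetical illustration (the `Question` and `InterviewSchedule` names and the sample topic are invented), showing one way to keep questions, prompts, and topic sensitivity together:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                  # open-ended, non-leading wording
    prompts: list = field(default_factory=list)   # reminders to probe areas of interest
    sensitive: bool = False    # sensitive topics belong later in the interview

@dataclass
class InterviewSchedule:
    topic: str
    questions: list = field(default_factory=list)

    def ordered(self):
        # Stable sort: sensitive questions move to the end, relative order is kept.
        return sorted(self.questions, key=lambda q: q.sensitive)

schedule = InterviewSchedule(
    topic="Family eating patterns",
    questions=[
        Question("Can you walk me through a typical family mealtime?"),
        Question("How does money affect what your family eats?", sensitive=True),
        Question("What does a weekend breakfast look like?",
                 prompts=["Who cooks?", "Who is present?"]),
    ],
)

for q in schedule.ordered():
    print(q.text)
```

Marking questions as sensitive and sorting on that flag implements the advice to place sensitive topics later while otherwise preserving the planned order.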

The Power of Silence

Strategic use of silence is a key technique to generate interviewee-led data, but it requires judgment about appropriate timing and duration to maintain mutual understanding.
  • Unlike ordinary conversation, the interviewer aims to facilitate the interviewee’s contribution without interrupting. This often means resisting the urge to speak at the end of the interviewee’s turn construction units (TCUs).
  • Leaving a silence after a TCU encourages the interviewee to provide more material without being led by the interviewer. However, this simple technique requires confidence, as silence can feel socially awkward.
  • Allowing longer silences (e.g. 24 seconds) later in interviews can work well, but early on even short silences may disrupt rapport if they cause misalignment between speakers.
  • Silence also allows interviewees time to think before answering. Rushing to re-ask or amend questions can limit responses.
  • Minimal backchannels like “mm hm” also avoid interrupting flow. Interruptions, especially to finish an interviewee’s turn, are problematic as they make the ownership of perspectives unclear.
  • If interviewers incorrectly complete turns, an upside is it can produce extended interviewee narratives correcting the record. However, silence would have been better to let interviewees shape their own accounts.

Recording & Transcription

Design choices around recording and engaging closely with transcripts influence analytic insights, as well as practical feasibility. Weighing up relevant tradeoffs is key.
  • Audio recording is standard, but video better captures contextual details, which is useful for some topics/analysis approaches. Participants may find video invasive for sensitive research.
  • Digital formats enable the sharing of anonymized clips. Additional microphones reduce audio issues.
  • Doing all transcription is time-consuming. Outsourcing can save researcher effort but needs confidentiality assurances. Always carefully check outsourced transcripts.
  • Online platform auto-captioning can facilitate rapid analysis, but accuracy limitations mean full transcripts remain ideal. Software cleans up caption file formatting.
  • Verbatim transcripts best capture nuanced meaning, but the level of detail needed depends on the analysis approach. Referring back to recordings is still advisable during analysis.
  • Transcripts versus recordings highlight different interaction elements. Transcripts make overt disagreements clearer through the wording itself. Recordings better convey tone and affiliativeness.
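On the auto-captioning point: caption files from platforms such as Zoom typically arrive in WebVTT format, and “cleaning up the formatting” largely means stripping cue numbers and timestamp lines. A minimal sketch (the `vtt_to_text` helper is invented for illustration):

```python
def vtt_to_text(vtt: str) -> str:
    """Strip the WebVTT header, cue numbers, and timestamp lines, keeping spoken text."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT":
            continue
        if line.isdigit():        # cue number
            continue
        if "-->" in line:         # timestamp line, e.g. 00:00:01.000 --> 00:00:04.000
            continue
        kept.append(line)
    return " ".join(kept)

sample = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
So, could you tell me a bit

2
00:00:04.000 --> 00:00:07.000
about your experience?"""

print(vtt_to_text(sample))
# → So, could you tell me a bit about your experience?
```

The flattened text is only a first-pass aid; as noted above, accuracy limitations mean a full, checked transcript remains ideal.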

Transcribing Interviews & Focus Groups

Here are the steps for transcribing interviews:
  • Play back audio/video files to develop an overall understanding of the interview
  • Format the transcription document:
  • Add line numbers
  • Separate interviewer questions and interviewee responses
  • Use formatting like bold, italics, etc. to highlight key passages
  • Provide sentence-level clarity in the interviewee’s responses while preserving their authentic voice and word choices
  • Break longer passages into smaller paragraphs to help with coding
  • If translating the interview to another language, use qualified translators and back-translate where possible
  • Select a notation system to indicate pauses, emphasis, laughter, interruptions, etc., and adapt it as needed for your data
  • Insert screenshots, photos, or documents discussed in the interview at the relevant point in the transcript
  • Read through multiple times, revising formatting and notations
  • Double-check the accuracy of transcription against audio/videos
  • De-identify transcript by removing identifying participant details

The goal is to produce a formatted written record of the verbal interview exchange that captures the meaning and highlights important passages ready for the coding process. Careful transcription is the vital first step in analysis.
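A few of the mechanical steps above (numbering lines, labeling speakers, removing identifying names) can be automated with a short script; the function name and the crude string-replacement de-identification below are illustrative only:

```python
def format_transcript(turns, names_to_redact):
    """turns: list of (speaker, utterance) pairs. Returns numbered, de-identified lines."""
    out = []
    for n, (speaker, text) in enumerate(turns, start=1):
        for name in names_to_redact:
            text = text.replace(name, "[name]")   # crude de-identification pass
        out.append(f"{n:03d} {speaker}: {text}")  # line number + speaker label
    return "\n".join(out)

turns = [
    ("Interviewer", "Could you describe your visit?"),
    ("Participant", "I went with Sarah, my sister."),
]
print(format_transcript(turns, ["Sarah"]))
```

Automated redaction like this is only a starting point; a human read-through is still needed to catch indirect identifiers (places, job titles, dates).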

Coding Transcripts

The goal of transcription and coding is to systematically transform interview responses into a set of codes and themes that capture key concepts, experiences and beliefs expressed by participants. Taking care with transcription and coding procedures enhances the validity of qualitative analysis .
  • Read through the transcript multiple times to become immersed in the details
  • Identify manifest/obvious codes and latent/underlying meaning codes
  • Highlight insightful participant quotes that capture key concepts (in vivo codes)
  • Create a codebook to organize and define codes with examples
  • Use an iterative cycle of inductive (data-driven) coding and deductive (theory-driven) coding
  • Refine codebook with clear definitions and examples as you code more transcripts
  • Collaborate with other coders to establish the reliability of codes
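A codebook and a first-pass coding cycle might be organized as in the sketch below. The codes, definitions, and keyword matching are hypothetical: real qualitative coding is interpretive, so keyword hits should only ever suggest candidate codes for a human coder to confirm.

```python
# Hypothetical codebook: code name -> definition plus example keywords.
codebook = {
    "cost_concern": {"definition": "References to food prices or budgets",
                     "keywords": ["price", "afford", "budget"]},
    "family_time":  {"definition": "Meals framed as shared family occasions",
                     "keywords": ["together", "family", "sunday"]},
}

def code_segment(segment: str) -> list:
    """Return codes whose keywords appear in the segment (a crude first pass)."""
    low = segment.lower()
    found = []
    for code, entry in codebook.items():
        if any(keyword in low for keyword in entry["keywords"]):
            found.append(code)
    return found

print(code_segment("We can't afford fresh fish, but Sunday dinner is together time."))
# → ['cost_concern', 'family_time']
```

Keeping definitions and examples alongside each code, as in the dictionary above, mirrors the advice to maintain a codebook that other coders can apply consistently.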

Ethical Issues

Informed consent.

The participant information sheet must give potential interviewees a good idea of what is involved in taking part in the research.

This will include the general topics covered in the interview, where the interview might take place, how long it is expected to last, how it will be recorded, the ways in which participants’ anonymity will be managed, and incentives offered.

It might be considered good practice for true informed consent in interview research to involve two distinguishable stages:

  • Consent to undertake and record the interview and
  • Consent to use the material in research after the interview has been conducted and the content known, or even after the interviewee has seen a copy of the transcript and has had a chance to remove sections, if desired.

Power and Vulnerability

  • Early feminist views that sensitivity could equalize power differences are likely naive. The interviewer and interviewee inhabit different knowledge spheres and social categories, indicating structural disparities.
  • Power fluctuates within interviews. Researchers rely on participation, yet interviewees control openness and can undermine data collection. Assumptions should be avoided.
  • Interviews on sensitive topics may feel like quasi-counseling. Interviewers must refrain from dual roles, instead supplying support service details to all participants.
  • Interviewees recruited for trauma experiences may reveal more than anticipated. While generating analytic insights, this risks leaving them feeling exposed.
  • Ultimately, power balances resist reconciliation. But reflexively analyzing operations of power serves to qualify rather than nullify situated qualitative accounts.

Some groups, like those with mental health issues, extreme views, or criminal backgrounds, risk being discredited – treated skeptically by researchers.

This creates tensions with qualitative approaches, often having an empathetic ethos seeking to center subjective perspectives. Analysis should balance openness to offered accounts with critically examining stakes and motivations behind them.


Structured Interviews: Guide to Standardized Questions

Introduction

  • Types of interviews in qualitative research
  • What are structured interviews good for?
  • The structured interview process

Qualitative researchers are used to dealing with unstructured data in social settings that are often dynamic and unpredictable. That said, some research methods provide more control over this unpredictability while still collecting insightful data.

The structured interview is one such method. Researchers can conduct a structured interview when they want to standardize the research process to give all respondents the same questions and analyze differences between answers.

In this article, we'll look at structured interviews, when they are ideal for your research, and how to conduct them.

Interviews are intentionally crafted sources of data in social science research. There are three types of interviews in research that balance research rigor and rich data collection.

To better understand structured interviews, it's important to contrast them with the other types of interviews that also serve useful purposes in research. As always, the best tool for data collection depends on your research inquiry.

Structured interviews

The structured interview format is the most rigid of the three types of interviews conceptualized in qualitative research. Imagine policy makers want to understand the perceptions of dozens or even hundreds of individuals. In this case, it may be easier to streamline the interview process by simply asking the same questions of all respondents.

The same structured interview questions are posed to each and every respondent, akin to how hiring managers ask the same questions of all applicants during the hiring process. The intention behind this approach is to ensure that the interview is the same no matter who the respondent is, leaving only the differences in responses to be analyzed.

Moreover, the standardized interview format typically involves respondents being asked the same set of questions in the same order. A uniform sequence of questions makes for easy analysis, since you can line up answers across respondents.

Unstructured interviews

An unstructured interview is the exact opposite of a structured interview, as unstructured interviews have no predetermined set of questions. Instead of a standardized interview, a researcher may opt for a study that remains open to exploring any issues or topics that a participant brings up in their interview. While this can generate unexpected insights, it can also be time-consuming and may not always yield answers that are directly related to the original research question guiding the study.

However, this doesn't make a study that employs unstructured interviews less rigorous. In fact, unstructured interviews are a great tool for inductive inquiry. One typical use for unstructured interviews is to probe not only for answers but for the salient points of a topic to begin with.

When a researcher uses an unstructured interview, they usually have a topic in mind but not a predetermined set of data points to analyze at the outset. This format allows respondents to speak at length on their perspectives and offer the researcher insights that can later form a theoretical framework for future research that could benefit from a structured interview format.

Moreover, this format provides the researcher with the greatest degree of freedom in determining questions depending on how they interact with their respondents. A respondent's body language, for example, may signal discomfort with a particularly controversial question. The interviewer can thus decide to adjust or reword their questions to create a more comfortable environment for the respondent.

Semi-structured interviews

A semi-structured interview lies in the middle ground between the structured and unstructured interview. This type of interview still relies on predetermined questions as a structured interview does. However, unlike structured interviews, a semi-structured interview also allows for follow-up questions to respondents when their answers warrant further probing. The predetermined questions thus serve as a guide for the interviewer, but the wording and ordering of questions can be adjusted, and additional questions can be asked during the course of the interview.

A researcher may conduct semi-structured interviews when they need flexibility in asking questions but can still benefit from advance preparation of key questions. In this case, much of the advice in this article about structured interviews still applies in terms of ensuring some degree of standardization when conducting research.

Identify key insights from your data with ATLAS.ti

Analyze interviews, observations, and all qualitative data with ATLAS.ti. Download your free trial here.

Consider that more free-flowing interview formats in qualitative research allow the interviewer to probe a respondent more freely for deeper, more insightful answers on the topic of inquiry. This approach is useful when the researcher needs to develop theoretical coherence surrounding a new topic or research context in which it would be difficult to predict beforehand which questions are worth asking.

In this sense, structured interviews make more sense for research inquiries with a well-defined theoretical framework that guides the data collection and data analysis process. With such a framework in mind, researchers can devise questions that are grounded in existing research so that new insights further develop that scholarship.

Advantages of structured interviews

Formal, structured interviews are ideal for keeping interviewers and interview respondents focused on the topic at hand. A conversation might take unanticipated turns without a set goal or predetermined objective in mind; a structured interview helps keep the dialogue from going down any irrelevant tangents and minimize potentially unnecessary, extended monologues.

Another key advantage of structured interviews is that they make comparisons across participants easier. Since each person is asked the same questions, the data is produced in a consistent format. Researchers can then focus on analyzing answers to a particular question, and minimal data organization work is needed to facilitate the analysis.

There are also benefits in terms of the logistics of conducting structured interviews. Interviewers concerned with time constraints will find this format beneficial to their data collection.

Moreover, ensuring that respondents are asked the same questions in the same order limits the need for training interviewers to conduct interviews in a consistent manner. Unstructured and semi-structured interviews rely on the ability to ask follow-up questions in moments when the responses provide opportunities for deeper elaboration.

Those who conduct a structured interview, on the other hand, need only read from an interview guide with a list of questions to pose to respondents. This allows the researcher more freedom to rely on assistants to conduct interviews with minimal training and resources.

Disadvantages of structured interviews

In structured interviews, there is little room for asking probing questions of respondents, particularly if the researcher believes that follow-up questions might adversely influence how the respondent answers subsequent core questions. Restricting the interview to a predetermined set of questions may mitigate this effect, but it may also prevent a sufficiently clear understanding of respondents' perspectives established from the use of follow-up questions.

Requiring the interviewer to ask questions in a fixed order can also have a consequential effect on the data collection. Because every respondent is different, the interview questions may resonate with each person in different ways. A skillful interviewer conducting unstructured or semi-structured interviews has the freedom to choose which questions to ask in order to gather the most insightful data.

Ultimately, the biggest disadvantage of structured interviews comes from their biggest advantage: using predetermined questions can be a double-edged sword, providing consistency and systematic organization but also limiting the research to the questions that were decided before conducting the interviews. This makes it crucial that researchers have a clear understanding of which questions they want to ask and why. It can also be helpful to conduct pilot tests of the interview, to test out the structured questions with a handful of people and assess if any changes to the questions need to be made.

Why not just do surveys?

You might think that a structured interview is no different from a survey with open-ended questions. After all, the questions are determined ahead of time and won't change over the course of data collection. Indeed, the two methods share many similarities.

There are, of course, benefits to either approach. Surveys permit data collection from much larger numbers of respondents than may be feasible for an interview study. Structured interviews, however, allow the interviewer some degree of flexibility, particularly when the respondent has trouble understanding the question or needs further prompting to provide a sufficient response.

Moreover, the interpersonal interaction between the interviewer and respondent offers potential for richer data collection because of the degree of rapport established through face-to-face communication. Where written questions may seem static and impersonal, an in-person interview (or at least one conducted in real time) might make the respondent more comfortable in answering questions.

Individual interviews are also more likely to generate detailed responses to questions in comparison to surveys. Interviews are also well suited for research topics that bear some personal significance for participants, providing ample space for them to express themselves.

When you conduct a structured interview, you are designing a study that is as standardized as possible to mitigate context effects and ensure the ease of data collection and analysis. As with all interviews conducted in qualitative research, there is an intentional process to planning for structured interviews, with considerations that researchers should keep in mind.

Research design

As mentioned above, research inquiries with clearly defined theoretical frameworks tend to benefit from structured interviews. Researchers can create a list of questions from such frameworks so that answers speak directly to, affirm, or challenge the existing scholarship surrounding the topic of interest.

A researcher should conduct a literature review to determine the extent of theoretical coherence in the topic they are researching. Are there aspects of a topic or phenomenon that scholars have identified that can serve as key data points around which questions can be crafted? Conversely, is it a topic or phenomenon that lacks sufficient conceptualization?

If your literature review does not allow you to create or use a robust theoretical framework for data collection, consider other types of interviews that allow you to inductively generate that framework in data analysis.

You should also make decisions about the conditions under which you conduct interviews. Some studies go as far as making sure that the interview environment is a uniform context across respondents. Are interviews in a quiet, comfortable environment? What time of day are interviews conducted?

The degree to which you ensure uniform conditions across interviews is up to you. Whatever you decide, however, creating an environment where respondents feel free to volunteer answers will facilitate rich data collection that will make data analysis more meaningful.

Structured interview questions

An interview guide is an essential tool for structured interviews. This guide is little more than a list of required questions to ask, but this list ensures consistency across the interviews in your study.

When you write questions for a structured interview, rely on your literature review to identify salient points around which you can design questions. This approach ensures that you are grounding your data collection in the established research.

When crafting your guide, think about the time constraints and the likely length of answers that your respondents may give. Structured interviews can involve five or 25 questions, but if you are limited to 30-45 minutes per respondent, you will need to consider whether you can ask the required questions and collect sufficient responses within your timeframe.
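As a rough worked example of that timing check (all numbers are illustrative):

```python
# Rough timing check: will the question guide fit the interview slot?
questions = 25                  # illustrative guide length
avg_minutes_per_answer = 2      # illustrative estimate from piloting
slot_minutes = 45               # time available per respondent

needed = questions * avg_minutes_per_answer
print(f"{needed} minutes needed for a {slot_minutes}-minute slot:",
      "fits" if needed <= slot_minutes else "trim the guide")
# → 50 minutes needed for a 45-minute slot: trim the guide
```

Here 25 questions at roughly two minutes each would overrun a 45-minute slot, so the guide would need trimming or the slot extending.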

As a result, it's important to pilot your questions with preliminary respondents or other researchers. A pilot interview allows you to test your interview protocol and make tweaks to your question guide before conducting your study in earnest.

Collecting data from structured interviews

Data collection refers to conducting the interviews, recording what you and your respondents say, and transcribing those recordings for data analysis. While this is a simple enough task, it is important to consider the equipment you use to collect data.

If the verbal utterances of your respondents are your sole concern, then an audio recorder should be sufficient for capturing your respondents' answers. Your choice of equipment can be as simple as a smartphone audio recorder application. Alternatively, you can consider professional equipment to make sure you collect as much audio detail as possible from your interviews.

Communication studies, for example, may be more concerned with the interviewer effect (e.g., studies that ask controversial questions to evoke particular responses) or context effects (i.e., the effect of the surrounding environment on respondents) in interviews. In such cases, interviewers may capture data with video recordings to analyze body language or facial expressions in response to certain interview questions. Responses caught on video can then be analyzed for patterns across respondents.

Analyzing structured interviews

Once you have transcribed your interviews, you can analyze your data. One of the more common means of analyzing qualitative data is thematic analysis, which relies on identifying commonly recurring themes throughout your research. Which codes occur most often? Are there commonalities across responses worth pointing out to your research audience?

It's a good idea to code each response by the question they address. The set order of questions in a structured interview study makes it easy to identify the answers given by each respondent. By coding each answer by the question they respond to and the themes apparent in the response, you will be able to analyze what themes and patterns occur in each set of answers.
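Because a structured interview asks the same ordered questions of everyone, tallying themes per question is mechanical once coding is done. The sketch below assumes coded responses stored as (respondent, question number, codes) tuples, a format invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical coded data: (respondent, question number, codes applied to the answer).
coded = [
    ("R1", 1, ["cost_concern"]),
    ("R2", 1, ["cost_concern", "family_time"]),
    ("R1", 2, ["family_time"]),
    ("R2", 2, ["family_time"]),
]

# Tally how often each code occurs per question across respondents.
per_question = defaultdict(Counter)
for respondent, question, codes in coded:
    per_question[question].update(codes)

for question in sorted(per_question):
    print(f"Q{question}:", per_question[question].most_common())
```

Lining up code counts by question number is exactly the analysis that the fixed question order makes possible.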

You can also analyze differences between respondents. In ATLAS.ti, you can place interview transcripts into document groups to organize and divide your data along salient categories such as gender, age group, socioeconomic status, and other identifiers you may find useful. In doing so, you will be able to restrict your data analysis to a specific group of interview respondents to see how their answers differ from other groups.

Presenting interview findings

Disseminating qualitative research is often a matter of summarizing the salient points of your data analysis so that it is easy to understand, insightful, and useful to your research audience. For research collecting data from interviews, two of the more common approaches to presenting findings are visualizations and excerpts.

Visualizations are ideal for representing the salient ideas arising from large sets of otherwise unstructured data. Meaningful illustrations such as frequency charts, word clouds, and Sankey diagrams can prove more persuasive than an extended narrative in a research paper or presentation.

Consider the word cloud in the screenshot of ATLAS.ti below. This word cloud was generated from the transcripts of a set of interviews to illustrate what concepts appear the most often in the selected data. Concepts mentioned more often appear closer to the center of the cloud, showing which keywords appear most frequently in the data. Such a visualization can provide a quick illustration to show to your research audience what topics emerged in the data analysis.
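The frequency count underlying such a word cloud can be sketched with a simple tokenizer and counter (the stop-word list here is deliberately abbreviated, and the helper is not ATLAS.ti's actual implementation):

```python
import re
from collections import Counter

# Deliberately abbreviated stop-word list for the example.
STOP = {"the", "a", "and", "i", "to", "of", "we", "on", "it", "was"}

def word_frequencies(transcript: str) -> Counter:
    """Lowercase, tokenize, and count words, dropping stop words."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(t for t in tokens if t not in STOP)

text = "We eat together on Sundays. Eating together matters to the family."
print(word_frequencies(text).most_common(3))
# 'together' tops the list with a count of 2
```

The highest-frequency tokens are the ones a word cloud renders largest or closest to the center.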

[Screenshot: word cloud generated in ATLAS.ti]

You can also effectively represent each of your themes with an example or two from the responses in your data. Data exemplars are representations that the researcher deems typical of or significant about the portion of the data under discussion. Often in research that employs interviews or observations, an author will present an exemplar to explain a theme that is significant to theory development or challenges an existing theory.

ATLAS.ti provides tools to restrict your view of the data to codes you find significant to your findings. The Code Manager view makes it easy to look not at the entire data set but the specific segments of text that have been coded with a particular code. In similar fashion, ATLAS.ti's Query Tool is ideal for defining a set of criteria based on the codes in the data to see which data segments are most relevant to your research inquiry.

Conduct interview research with ATLAS.ti

Qualitative data analysis made easy with our powerful tools. Try a free trial of ATLAS.ti.

  • Structured Interviews: Definition, Types + [Question Examples]

busayo.longe

In carrying out a systematic investigation into specific subjects and contexts, researchers often make use of structured and semi-structured interviews. These are methods of data gathering that help you to collect first-hand information with regards to the research subject, using different methods and tools. 

Structured and semi-structured interviews are appropriate for different contexts and observations. As a researcher, it is important for you to understand the right contexts for these types of interviews and how to go about collecting information using structured or semi-structured interviewing methods. 

What is a Structured Interview?

A structured interview is a type of quantitative interview that makes use of a standardized sequence of questioning in order to gather relevant information about a research subject. This type of research is mostly used in statistical investigations and follows a premeditated sequence. 

In a structured interview, the researcher creates a set of interview questions in advance and these questions are asked in the same order so that responses can easily be placed in similar categories. A structured interview is also known as a patterned interview, planned interview or a standardized interview. 

What is a Semi-Structured Interview?

A semi-structured interview is a type of qualitative interview that has a set of premeditated questions yet allows the interviewer to explore new developments in the course of the interview. In some ways, it represents the midpoint between structured and unstructured interviews. 

In a semi-structured interview, the interviewer is at liberty to deviate from the set interview questions and sequence as long as they remain within the overall scope of the interview. In addition, a semi-structured interview makes use of an interview guide: an informal grouping of topics and questions that the interviewer can ask in different ways. 

Examples & Advantages of Semi-structured Interviews

An example of a semi-structured interview could go like this:

  • Did you visit the doctor yesterday?
  • Why did you have the visit?
  • What was the outcome of the visit?

Each question is a prompt aimed at getting the respondent to reveal more information.

Advantages of a Semi-structured Interview

  • It offers a more personalized approach that allows respondents to be much more open during the interview.
  • It combines structured and unstructured interview styles, merging the advantages of both.
  • It allows two-way communication between interviewers and respondents.

Types of Structured Interview

Structured interviews can be classified into three types: face-to-face interviews, telephone interviews, and survey/questionnaire interviews.

Face-to-Face Structured Interview

A face-to-face structured interview is a type of interview where the researcher and the interviewee exchange information in person. It is a method of data collection that requires the interviewer to communicate directly with the respondent, in line with the research context and the already prepared questions.

Face-to-face structured interviews allow the interviewer to collect factual information regarding the experiences and preferences of the research respondent. It helps the researcher minimize survey dropout rates and improve the quality of data collected, which results in more objective research outcomes. 


Advantages of Face-to-face Structured Interview

  • It allows for more in-depth and detailed data collection.
  • Body language and facial expressions observed during a face-to-face structured interview can inform data analysis.
  • Visual materials can be used to support face-to-face structured interviews.
  • A face-to-face structured interview allows you to gather more accurate information from the research subjects. 

Disadvantages of Face-to-face Structured Interview

  • A face-to-face structured interview is expensive to conduct because it requires many staff; costs incurred include logistics and remuneration.
  • This type of interview is limited to a small data sample size.
  • A face-to-face structured interview is also time-consuming.
  • It can be affected by bias and subjectivity.

Tele-Interviews

A tele-interview is a type of structured interview that is conducted through a video or audio call. In this type of interview, the researcher gathers relevant information by communicating with the respondent via a video call or telephone conversation. 

Tele-interviews are usually conducted according to a standardized interview sequence, as is the norm with structured interviews. They use close-ended questions to gather the most relevant information from the interviewee, making them a method of quantitative observation.

Advantages of Tele-interviews

  • Tele-interviews are more convenient and result in higher survey response rates.
  • They are not time-consuming, as interviews can be completed relatively quickly.
  • They support a large data sample size, since information can be gathered over a large geographical area.
  • They are cost-effective.
  • They help the interviewer target specific data samples.

Disadvantages of a Tele-interview

  • It does not allow for qualitative observation of the research sample.
  • It can lead to survey response bias.
  • It is subject to network availability and other technical constraints.
  • It is difficult for the interviewer to build rapport with an interviewee by this means, especially if they are meeting for the first time.
  • It may be difficult to read the interviewee’s body language, even on a video call. Body language usually serves as a means of gathering additional information about the research subjects.

Surveys/Questionnaires  

A structured questionnaire is a common tool used in quantitative observation. It is made up of a set of standardized questions, usually close-ended, arranged in a standardized interview sequence and administered to a fixed data sample in order to collect relevant information.

In other words, a questionnaire is a method of data gathering that involves gathering information from target groups via a set of premeditated questions. You can administer a questionnaire physically or you can create and administer it online using data-gathering platforms like Formplus. 

Advantages of Survey/Questionnaire

  • It is time-efficient and allows you to gather information from large data samples.
  • Information collected via a questionnaire can easily be processed and placed in data categories.
  • A questionnaire is a flexible and convenient method of data collection.
  • It is also cost-efficient, especially when administered online.
  • Surveys and questionnaires are useful in describing the numerical characteristics of large sets of data. 

Disadvantages of Surveys/Questionnaires  

  • A high rate of survey response bias due to survey fatigue.
  • High survey drop-out rate. 
  • Surveys and questionnaires are susceptible to researcher error, especially when the researcher makes wrong assumptions about the data sample.
  • Surveys and questionnaires are rigid in nature.
  • In some cases, survey respondents are not entirely honest in their responses, which affects the accuracy of research outcomes.

Tools used in Structured Interview 

  • Audio Recorders

An audio recorder is a data-gathering tool that is used to collect information during an interview by recording the conversation between the interviewer and the interviewee. This data collection tool is typically used during face-to-face interviews in order to accurately capture questions and responses. 

The recorded information is then extracted and transcribed for data categorization and analysis. There are different types of audio recording equipment, including analog and digital audio recorders; digital audio recorders are the better choice for capturing interactions in structured interviews.

  • Digital Camera

A digital camera is another common tool used in structured tele-interviews. It is a type of camera that captures interactions as pictures stored in digital memory.

In many cases, digital cameras are combined with other tools in a structured interview in order to accurately gather information about the research sample. It is an effective method of gathering visual information. 

  • Camcorder

Just as its name implies, a camcorder is a hybrid of a camera and a recorder. It is a portable, dual-purpose tool used in structured interviews to collect static and live-motion visual data for later playback and analysis.

  • Telephone

A telephone is a communication device used to facilitate interaction between the researcher and interviewee, especially when both parties are in different geographical locations.

  • Formplus Survey/Questionnaire

Formplus is a data-gathering platform that you can use to create and administer questionnaires for online surveys. In the form builder, you can add different fields to your form in order to collect a variety of information from respondents.

Apart from allowing you to add different form fields to your questionnaires and surveys, Formplus also enables you to create smart forms with conditional logic and form lookup features. It also allows you to personalize your survey using different customization options in the form builder. 

Best Types of Questions For Structured Interview

Open-Ended Questions

An open-ended question is a type of question that does not limit the respondent to a set of answers. In other words, open-ended questions are free-form questions that give the interviewee the freedom to express their knowledge, experiences, and thoughts.

Open-ended questions are typically used for qualitative observation where attention is paid to an in-depth description of the research subjects. These types of questions are designed to elicit full and detailed responses from the research subjects, unlike close-ended questions that require brief responses. 

Examples of Open-Ended Questions

  • What do you think about the new packaging?
  • How can we improve our services?
  • Why did you choose this outfit?
  • How can we serve you better? 

Advantages of Open-Ended Questions

  • Open-ended questions are useful for qualitative observation.
  • Open-ended questions help you gain unexpected insights and in-depth information. 
  • It exposes the researcher to an infinite range of responses.
  • It helps the researcher arrive at more objective research outcomes. 

Disadvantages of Open-ended Questions 

  • Data collection using open-ended questions is time-consuming.
  • It cannot be used for quantitative research.
  • There is a great possibility of capturing large volumes of irrelevant data. 

Using Open-ended Questions for Interviews 

In interviews, open-ended questions are used to gain insight into the thoughts and experiences of the respondents. To do this, the interviewer generates a set of open-ended questions that can be asked in any sequence, and other open-ended questions may arise in follow-up inquiries.


Close-Ended Questions

A close-ended question is a type of question that restricts the respondent to a range of probable responses presented as options. It is often used in quantitative research to gather statistical data from interviewees, and there are different types of close-ended questions, including multiple-choice and Likert scale questions.

A close-ended question is primarily defined by the need to have a set of predefined responses which the interviewee chooses from. These types of questions help the researcher to categorize data in terms of numerical value and to restrict interview responses to the most valid data. 

Examples of Close-ended Questions

1. Do you enjoy using our product?

  • Yes
  • No
  • I don’t know

2. Have you ever visited London?

  • Yes
  • No

3. Did you enjoy the relationship seminar?

  • Yes, I did
  • No, I did not
  • I can’t say

4. On a scale of 1-5, rate our service delivery. (1-Poor; 5-Excellent)

5. How often do you visit home?

  • Very often
  • Somewhat often
  • I don’t visit home

Advantages of Close-ended Questions 

  • It is useful for statistical inquiries.
  • Close-ended questions are straightforward and easy to respond to.
  • Data gathered through close-ended questions are easy to analyze.
  • It reduces the chances of gathering irrelevant responses.
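The "easy to analyze" point can be made concrete: because every answer comes from a fixed response set, aggregation reduces to frequency counts and simple summary statistics. A minimal Python sketch, using entirely hypothetical responses to the example questions in this article:

```python
from collections import Counter

# Hypothetical responses to the Likert item
# "On a scale of 1-5, rate our service delivery."
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# Hypothetical responses to "Do you enjoy using our product?"
enjoy = ["Yes", "No", "Yes", "I don't know", "Yes", "Yes", "No"]

# Fixed response sets make aggregation a simple frequency count.
counts = Counter(enjoy)
mean_rating = sum(ratings) / len(ratings)

print(counts["Yes"])          # 4 "Yes" responses
print(round(mean_rating, 2))  # 3.9 average Likert rating
```

The same tallies feed directly into cross-tabulations or charts, which is why close-ended data suit statistical inquiry.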

Disadvantages of Close-Ended Questions 

  • Close-ended questions restrict respondents to predefined options, which creates a high probability of survey response bias.
  • Close-ended questions do not allow you to collect in-depth information about the experiences of the research subjects.
  • Close-ended questions cannot be used for qualitative observation. 

Using Close-ended Questions for Interviews

Close-ended questions are used in interviews for statistical inquiries. In many cases, interviews begin with a set of close-ended questions which lead to further inquiries depending on the type, that is, structured, unstructured, or semi-structured interviews. 


Multiple Choice Question

A multiple-choice question is a type of close-ended question that provides respondents with a list of possible answers. The interviewee is required to choose one or more options in response to the question, depending on the question type and the stipulated instructions.

A multiple-choice question is one of the most common types of questions used in surveys and questionnaires. It is also a valid means of quantitative inquiry because it pays attention to the numerical value of data categories. A multiple-choice question is made up of three parts: the stem, the correct answer(s), and the distractors.

Examples of Multiple Choice Questions

1. How many times do you visit home?

2. What types of shirts do you wear? (Choose as many as apply)

  • Long-sleeved Shirt
  • Short-sleeved Shirt

3. Which of the following gadgets do you use?

4. What is your highest level of education?
Advantages of Multiple Choice Question

  • A multiple-choice question is an effective method of assessment, especially in quantitative research.
  • It is time-efficient.
  • It reduces the chances of interviewer bias because of its objective approach.

Disadvantages of Multiple Choice Questions

  • Multiple-choice questions are limited to certain types of knowledge.
  • They cannot be used for problem-solving and higher-order reasoning assessments.
  • They can lead to ambiguity and misinterpretation, which causes survey response bias.
  • Survey fatigue leads to high survey drop-out rates.

Dichotomous Questions

A dichotomous question is a type of close-ended question that can only have two possible answers. It is a method of quantitative observation and it is typically used for educational research and assessments, and other research processes that involve statistical evaluation. 

It is important for researchers to limit the use of dichotomous questions to situations where there are only two possible answers. These questions are restricted to yes/no, true/false, or agree/disagree options, and they are used to gather information related to the experiences of the research subjects.

Examples of Dichotomous Questions

1. Do you enjoy using this product? (Yes/No)

2. I have always used this product for my hair. (True/False)

3. Are you lactose-intolerant? (Yes/No)

4. Have you ever witnessed an explosion? (Yes/No)

5. Have you ever visited our farm? (Yes/No)

Advantages of Dichotomous Questions

  • It is an effective method of quantitative research.
  • Surveys containing dichotomous questions are easy to administer.
  • Responses are unambiguous.
  • It allows for easy data gathering and analysis.
  • Dichotomous questions are brief and simple.

Disadvantages of Dichotomous Questions

  • A dichotomous question is limited in nature.
  • It cannot be used to gather qualitative information in research. 
  • It is not suitable for in-depth data gathering. 
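Because a dichotomous item admits exactly two substantive answers, its data reduce to a single proportion, which is what makes it convenient for statistical evaluation. A rough Python sketch with hypothetical answers; the normal-approximation interval is illustrative only, and small samples would call for an exact method:

```python
import math

# Hypothetical yes/no answers to "Are you lactose-intolerant?"
answers = ["No", "No", "Yes", "No", "Yes", "No", "No", "No"]

n = len(answers)
p = answers.count("Yes") / n  # sample proportion answering "Yes"

# Normal-approximation 95% confidence interval for the proportion
# (a sketch; use an exact binomial method for samples this small).
se = math.sqrt(p * (1 - p) / n)
low, high = p - 1.96 * se, p + 1.96 * se

print(round(p, 3))  # 0.25
```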

How to Prepare a Structured Interview

  • Choose the right setting

It’s important to provide a comfortable setting for your respondents. If you don’t, they may be subject to participant bias, which can skew the results of your interview.

  • Tell them the purpose of your interview

You need to give your participants a heads-up on why you’re conducting the interview. This is also the stage where you discuss any confidentiality clauses and obtain informed consent from your participants. Explain how their answers will be used and who will have access to them.

  • Prepare your questions

Start by asking the basics to warm up your respondents. Then, depending on your structured interview style, choose tailored questions, e.g., multiple-choice, dichotomous, open-ended, or close-ended questions. Ensure your questions are as neutral as possible and give room for your respondents to add any extra impressions or comments.

  • Verify that your tools are working

Check that your audio recorder is working and that your camera is properly placed before you kick off the interview. For phone interviews, confirm that you have enough call credit or that your internet connection is stable. If you’re using Formplus, you don’t have to worry about getting cut off, thanks to the offline form feature. This means you can still record responses even when your respondents have poor or no internet connection.

  • Make notes and record observations

Ensure that your notes are legible and clear enough to refer back to. Write down your observations: were your respondents nervous or surprised at any particular question?


How to Use Formplus For Structured Interview

Sign in to Formplus

In the Formplus builder, you can easily create a questionnaire for your structured interview by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus.

Once you do this, sign in to your account and click on “Create Form” to begin.

Edit Form Title

Click on the field provided to input your form title, for example, “Structured Interview Questionnaire”.

  • Click on the edit button to edit the form.
  • Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for survey forms in the Formplus builder including table fields and you can create a smarter questionnaire by using the conditional logic feature. 
  • Edit fields: You can modify your form fields to be hidden, required or read-only depending on your data sample and the purpose of the interview. 
  • Click on “Save”
  • Preview form. 

Customise Form

Formplus allows you to add unique features to your structured questionnaire. You can personalize your questionnaire using various customization options in the builder. Here, you can add background images, your organization’s logo, and other features. You can also change the display theme of your form. 

Share your Form Link with Respondents

Formplus allows you to share your questionnaire with interviewees using multiple form-sharing options. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages. 

You can also embed your questionnaire into your website so that form respondents can easily fill it out when they visit your webpage. Formplus enables you to send out email invitations to interviewees and to also share your questionnaire as a QR code.

Conclusion  

It is important for every researcher to understand how to conduct structured, semi-structured, and unstructured interviews. While a structured interview strictly follows an interview sequence made up of standardized questions, a semi-structured interview allows the researcher to digress from that sequence based on the information the respondent provides.

You can conduct a structured interview using an audio recorder, telephone or surveys. Formplus allows you to create and administer online surveys easily, and you can add different form fields to allow you to collect a variety of information using the form builder. 


Published: 15 September 2022

Interviews in the social sciences

  • Eleanor Knott   ORCID: orcid.org/0000-0002-9131-3939 1 ,
  • Aliya Hamid Rao   ORCID: orcid.org/0000-0003-0674-4206 1 ,
  • Kate Summers   ORCID: orcid.org/0000-0001-9964-0259 1 &
  • Chana Teeger   ORCID: orcid.org/0000-0002-5046-8280 1  

Nature Reviews Methods Primers, volume 2, Article number: 73 (2022)


In-depth interviews are a versatile form of qualitative data collection used by researchers across the social sciences. They allow individuals to explain, in their own words, how they understand and interpret the world around them. Interviews represent a deceptively familiar social encounter in which people interact by asking and answering questions. They are, however, a very particular type of conversation, guided by the researcher and used for specific ends. This dynamic introduces a range of methodological, analytical and ethical challenges, for novice researchers in particular. In this Primer, we focus on the stages and challenges of designing and conducting an interview project and analysing data from it, as well as strategies to overcome such challenges.


Introduction

In-depth interviews are a qualitative research method that follow a deceptively familiar logic of human interaction: they are conversations where people talk with each other, interact and pose and answer questions 1 . An interview is a specific type of interaction in which — usually and predominantly — a researcher asks questions about someone’s life experience, opinions, dreams, fears and hopes and the interview participant answers the questions 1 .

Interviews will often be used as a standalone method or combined with other qualitative methods, such as focus groups or ethnography, or quantitative methods, such as surveys or experiments. Although interviewing is a frequently used method, it should not be viewed as an easy default for qualitative researchers 2 . Interviews are also not suited to answering all qualitative research questions, but instead have specific strengths that should guide whether or not they are deployed in a research project. Whereas ethnography might be better suited to trying to observe what people do, interviews provide a space for extended conversations that allow the researcher insights into how people think and what they believe. Quantitative surveys also give these kinds of insights, but they use pre-determined questions and scales, privileging breadth over depth and often overlooking harder-to-reach participants.

In-depth interviews can take many different shapes and forms, often with more than one participant or researcher. For example, interviews might be highly structured (using an almost survey-like interview guide), entirely unstructured (taking a narrative and free-flowing approach) or semi-structured (using a topic guide ). Researchers might combine these approaches within a single project depending on the purpose of the interview and the characteristics of the participant. Whatever form the interview takes, researchers should be mindful of the dynamics between interviewer and participant and factor these in at all stages of the project.

In this Primer, we focus on the most common type of interview: one researcher taking a semi-structured approach to interviewing one participant using a topic guide. Focusing on how to plan research using interviews, we discuss the necessary stages of data collection. We also discuss the stages and thought-process behind analysing interview material to ensure that the richness and interpretability of interview material is maintained and communicated to readers. The Primer also tracks innovations in interview methods and discusses the developments we expect over the next 5–10 years.

We wrote this Primer as researchers from sociology, social policy and political science. We note our disciplinary background because we acknowledge that there are disciplinary differences in how interviews are approached and understood as a method.

Experimentation

Here we address research design considerations and data collection issues focusing on topic guide construction and other pragmatics of the interview. We also explore issues of ethics and reflexivity that are crucial throughout the research project.

Research design

Participant selection

Participants can be selected and recruited in various ways for in-depth interview studies. The researcher must first decide what defines the people or social groups being studied. Often, this means moving from an abstract theoretical research question to a more precise empirical one. For example, the researcher might be interested in how people talk about race in contexts of diversity. Empirical settings in which this issue could be studied could include schools, workplaces or adoption agencies. The best research designs should clearly explain why the particular setting was chosen. Often there are both intrinsic and extrinsic reasons for choosing to study a particular group of people at a specific time and place 3 . Intrinsic motivations relate to the fact that the research is focused on an important specific social phenomenon that has been understudied. Extrinsic motivations speak to the broader theoretical research questions and explain why the case at hand is a good one through which to address them empirically.

Next, the researcher needs to decide which types of people they would like to interview. This decision amounts to delineating the inclusion and exclusion criteria for the study. The criteria might be based on demographic variables, like race or gender, but they may also be context-specific, for example, years of experience in an organization. These should be decided based on the research goals. Researchers should be clear about what characteristics would make an individual a candidate for inclusion in the study (and what would exclude them).

The next step is to identify and recruit the study’s sample . Usually, many more people fit the inclusion criteria than can be interviewed. In cases where lists of potential participants are available, the researcher might want to employ stratified sampling , dividing the list by characteristics of interest before sampling.

When there are no lists, researchers will often employ purposive sampling . Many researchers consider purposive sampling the most useful mode for interview-based research since the number of interviews to be conducted is too small to aim to be statistically representative 4 . Instead, the aim is not breadth, via representativeness, but depth via rich insights about a set of participants. In addition to purposive sampling, researchers often use snowball sampling . Both purposive and snowball sampling can be combined with quota sampling . All three types of sampling aim to ensure a variety of perspectives within the confines of a research project. A goal for in-depth interview studies can be to sample for range, being mindful of recruiting a diversity of participants fitting the inclusion criteria.
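When a list of potential participants does exist, the stratified sampling described above can be sketched in a few lines. Everything in this snippet is hypothetical: the participant pool, the "band" attribute used to stratify, and the per-stratum quota.

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of the illustration

# Hypothetical list of potential participants, each tagged with a
# stratifying characteristic (here, a years-of-experience band).
pool = [
    {"name": f"P{i}", "band": band}
    for i, band in enumerate(["junior"] * 30 + ["mid"] * 20 + ["senior"] * 10)
]

def stratified_sample(people, key, per_stratum):
    """Divide the list by the characteristic of interest, then draw
    the same number of participants from each stratum."""
    strata = defaultdict(list)
    for person in people:
        strata[person[key]].append(person)
    return {s: random.sample(members, per_stratum) for s, members in strata.items()}

sample = stratified_sample(pool, "band", per_stratum=4)
print(sorted(sample))  # the strata found in the pool
```

Quota sampling follows the same shape: instead of an equal draw per stratum, recruitment continues until each stratum reaches a fixed quota.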

Study design

The total number of interviews depends on many factors, including the population studied, whether comparisons are to be made and the duration of interviews. Studies that rely on quota sampling where explicit comparisons are made between groups will require a larger number of interviews than studies focused on one group only. Studies where participants are interviewed over several hours, days or even repeatedly across years will tend to have fewer participants than those that entail a one-off engagement.

Researchers often stop interviewing when new interviews confirm findings from earlier interviews with no new or surprising insights (saturation) 4 , 5 , 6 . As a criterion for research design, saturation assumes that data collection and analysis are happening in tandem and that researchers will stop collecting new data once there is no new information emerging from the interviews. This is not always possible. Researchers rarely have time for systematic data analysis during data collection and they often need to specify their sample in funding proposals prior to data collection. As a result, researchers often draw on existing reports of saturation to estimate a sample size prior to data collection. These suggest between 12 and 20 interviews per category of participant (although researchers have reported saturation with samples that are both smaller and larger than this) 7 , 8 , 9 . The idea of saturation has been critiqued by many qualitative researchers because it assumes that meaning inheres in the data, waiting to be discovered — and confirmed — once saturation has been reached 7 . In-depth interview data are often multivalent and can give rise to different interpretations. The important consideration is, therefore, not merely how many participants are interviewed, but whether one’s research design allows for collecting rich and textured data that provide insight into participants’ understandings, accounts, perceptions and interpretations.
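The saturation heuristic described above — stop once new interviews confirm earlier ones without adding new insights — can be made operational when interviews are coded as they are collected. A simplified sketch: each interview is reduced to a set of codes, and saturation is flagged once a run of consecutive interviews introduces no new codes. The codes and the two-interview window here are purely illustrative, not a methodological recommendation.

```python
# Hypothetical codes identified in each successive interview.
codes_per_interview = [
    {"cost", "trust"},
    {"trust", "access"},
    {"cost", "access"},
    {"trust"},
    {"cost", "trust"},
]

def saturation_point(interviews, window=2):
    """Return the 1-based index of the last interview that introduced
    new codes, once `window` consecutive interviews have added none;
    return None if that point is never reached."""
    seen, streak = set(), 0
    for i, codes in enumerate(interviews, start=1):
        new = codes - seen   # codes not seen in any earlier interview
        seen |= codes
        streak = 0 if new else streak + 1
        if streak == window:
            return i - window
    return None

print(saturation_point(codes_per_interview))  # 2
```

This mirrors the critique in the text: the result depends entirely on how interviews are coded and how strict the stopping window is, so such a count supplements, rather than replaces, the researcher's judgement.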

Sometimes, researchers will conduct interviews with more than one participant at a time. Researchers should consider the benefits and shortcomings of such an approach. Joint interviews may, for example, give researchers insight into how caregivers agree or debate childrearing decisions. At the same time, they may be less adaptive to exploring aspects of caregiving that participants may not wish to disclose to each other. In other cases, there may be more than one person interviewing each participant, such as when an interpreter is used, and so it is important to consider during the research design phase how this might shape the dynamics of the interview.

Data collection

Semi-structured interviews are typically organized around a topic guide comprised of an ordered set of broad topics (usually 3–5). Each topic includes a set of questions that form the basis of the discussion between the researcher and participant (Fig.  1 ). These topics are organized around key concepts that the researcher has identified (for example, through a close study of prior research, or perhaps through piloting a small, exploratory study) 5 .

Figure 1: a | Elaborated topics the researcher wants to cover in the interview and example questions. b | An example topic arc. Using such an arc, one can think flexibly about the order of topics. Considering the main question for each topic will help to determine the best order for the topics. After conducting some interviews, the researcher can move topics around if a different order seems to make sense.

Topic guide

One common way to structure a topic guide is to start with relatively easy, open-ended questions (Table  1 ). Opening questions should be related to the research topic but broad and easy to answer, so that they help to ease the participant into conversation.

After these broad, opening questions, the topic guide may move into topics that speak more directly to the overarching research question. The interview questions will be accompanied by probes designed to elicit concrete details and examples from the participant (see Table  1 ).
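The topic guide structure described here — a handful of ordered broad topics, each carrying questions and concrete probes — maps naturally onto a simple ordered data structure, which can be handy for keeping guides versioned and consistent across a research team. All topic names and questions below are illustrative, not drawn from the source:

```python
# A minimal sketch of a semi-structured topic guide: 3-5 broad
# topics, each with open-ended questions and concrete probes.
topic_guide = [
    {
        "topic": "Background",
        "questions": ["Can you tell me a bit about your current role?"],
        "probes": ["How long have you been in it?"],
    },
    {
        "topic": "Workplace policies",
        "questions": ["How do you feel about the policy described as 'family-friendly'?"],
        "probes": ["Can you give me a recent example?"],
    },
    {
        "topic": "Closing",
        "questions": ["Is there anything we haven't covered that you'd like to add?"],
        "probes": [],
    },
]

# The ordered list preserves the topic arc, while the interviewer
# remains free to reorder topics during the conversation itself.
for section in topic_guide:
    print(section["topic"], len(section["questions"]))
```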

Abstract questions are often easier for participants to answer once they have been asked more concrete questions. In our experience, for example, questions about feelings can be difficult for some participants to answer, but when following probes concerning factual experiences these questions can become less challenging. After the main themes of the topic guide have been covered, the topic guide can move onto closing questions. At this stage, participants often repeat something they have said before, although they may sometimes introduce a new topic.

Interviews are especially well suited to gaining a deeper insight into people’s experiences. Getting these insights largely depends on the participants’ willingness to talk to the researcher. We recommend designing open-ended questions that are more likely to elicit an elaborated response and extended reflection from participants rather than questions that can be answered with yes or no.

Questions should avoid foreclosing the possibility that the participant might disagree with the premise of the question. Take for example the question: “Do you support the new family-friendly policies?” This question minimizes the possibility of the participant disagreeing with the premise of this question, which assumes that the policies are ‘family-friendly’ and asks for a yes or no answer. Instead, asking more broadly how a participant feels about the specific policy being described as ‘family-friendly’ (for example, a work-from-home policy) allows them to express agreement, disagreement or impartiality and, crucially, to explain their reasoning 10 .

For an uninterrupted interview lasting between 90 and 120 minutes, the topic guide should be one to two single-spaced pages of questions and probes. Ideally, the researcher will memorize the topic guide before embarking on the first interview. It is fine to carry a printed copy, but memorizing the guide ahead of the interviews can help the interviewer feel well prepared to guide the participant through the interview process.

Although the topic guide helps the researcher stay on track with the broad areas they want to cover, there is no need for the researcher to feel tied down by the topic guide. For instance, if a participant brings up a theme that the researcher intended to discuss later or a point the researcher had not anticipated, the researcher may well decide to follow the lead of the participant. The researcher’s role extends beyond simply stating the questions; it entails listening and responding, making split-second decisions about what line of inquiry to pursue and allowing the interview to proceed in unexpected directions.

Optimizing the interview

The ideal place for an interview will depend on the study and what is feasible for participants. Generally, a place where the participant and researcher can both feel relaxed, where the interview can be uninterrupted and where noise or other distractions are limited is ideal. But this may not always be possible and so the researcher needs to be prepared to adapt their plans within what is feasible (and desirable for participants).

Another key tool for the interview is a recording device (assuming that permission for recording has been given). Recording can be important to capture what the participant says verbatim. Additionally, it can allow the researcher to focus on determining what probes and follow-up questions they want to pursue rather than on taking notes. Sometimes, however, a participant may not allow the researcher to record, or the recording may fail. If the interview is not recorded, we suggest that the researcher take brief notes during the interview, if feasible, and then write thorough notes immediately afterwards, trying to recall the participant’s facial expressions, gestures and tone of voice. Not having a recording of an interview need not prevent the researcher from getting analytical value from it.

As soon as possible after each interview, we recommend that the researcher write a one-page interview memo comprising three key sections. The first section should identify two to three important moments from the interview. What constitutes important is up to the researcher’s discretion 9 . The researcher should note down what happened in these moments, including the participant’s facial expressions, gestures, tone of voice and maybe even the sensory details of their surroundings. This exercise is about capturing ethnographic detail from the interview. The second part of the interview memo is the analytical section with notes on how the interview fits in with previous interviews, for example, where the participant’s responses concur or diverge from other responses. The third part consists of a methodological section where the researcher notes their perception of their relationship with the participant. The interview memo allows the researcher to think critically about their positionality and practice reflexivity — key concepts for an ethical and transparent research practice in qualitative methodology 11 , 12 .

Ethics and reflexivity

All elements of an in-depth interview can raise ethical challenges and concerns. Good ethical practice in interview studies often means going beyond the ethical procedures mandated by institutions 13 . While discussions and requirements of ethics can differ across disciplines, here we focus on the most pertinent considerations for interviews across the research process for an interdisciplinary audience.

Ethical considerations prior to interview

Before conducting interviews, researchers should consider harm minimization, informed consent, anonymity and confidentiality, and reflexivity and positionality. It is important for the researcher to develop their own ethical sensitivities and sensibilities by gaining training in interview and qualitative methods, reading methodological and field-specific texts on interviews and ethics and discussing their research plans with colleagues.

Researchers should map the potential harm to consider how this can be minimized. Primarily, researchers should consider harm from the participants’ perspective (Box  1 ). But, it is also important to consider and plan for potential harm to the researcher, research assistants, gatekeepers, future researchers and members of the wider community 14 . Even the most banal of research topics can potentially pose some form of harm to the participant, researcher and others — and the level of harm is often highly context-dependent. For example, a research project on religion in society might have very different ethical considerations in a democratic versus authoritarian research context because of how openly or not such topics can be discussed and debated 15 .

The researcher should consider how they will obtain and record informed consent (for example, written or oral), based on what makes the most sense for their research project and context 16 . Some institutions might specify how informed consent should be gained. Regardless of how consent is obtained, before the interview commences the participant must be made aware of the form of consent, the intentions and procedures of the interview and the potential forms of harm and benefit to the participant or community, and must agree both to being interviewed and to how the interview will be recorded. If, in addition to interviews, the study contains an ethnographic component, it is worth reading around this topic (see, for example, Murphy and Dingwall 17 ). These practices are important to ensure the participant is contributing on a voluntary basis. It is also important to remind participants that they can withdraw their consent at any time during the interview and for a specified period after the interview (to be decided with the participant). The researcher should indicate that participants can ask for anything shared to be off the record and/or not disseminated.

In terms of anonymity and confidentiality, it is standard practice when conducting interviews to agree not to use (or even collect) participants’ names and personal details that are not pertinent to the study. Anonymizing can often be the safer option for minimizing harm to participants as it is hard to foresee all the consequences of de-anonymizing, even if participants agree. Regardless of what a researcher decides, decisions around anonymity must be agreed with participants during the process of gaining informed consent and respected following the interview.

Although not all ethical challenges can be foreseen or planned for 18 , researchers should think carefully — before the interview — about power dynamics, participant vulnerability, emotional state and interactional dynamics between interviewer and participant, even when discussing low-risk topics. Researchers may then wish to plan for potential ethical issues, for example by preparing a list of relevant organizations to which participants can be signposted. A researcher interviewing a participant about debt, for instance, might prepare in advance a list of debt advice charities, organizations and helplines that could provide further support and advice. It is important to remember that the role of an interviewer is as a researcher rather than as a social worker or counsellor because researchers may not have relevant and requisite training in these other domains.

Box 1 Mapping potential forms of harm

Social: researchers should avoid causing any relational detriment to anyone in the course of interviews, for example, by sharing information with other participants or causing interview participants to be shunned or mistreated by their community as a result of participating.

Economic: researchers should avoid causing financial detriment to anyone, for example, by expecting them to pay for transport to be interviewed or to potentially lose their job as a result of participating.

Physical: researchers should minimize the risk of anyone being exposed to violence as a result of the research both from other individuals or from authorities, including police.

Psychological: researchers should minimize the risk of causing anyone trauma (or re-traumatization) or psychological anguish as a result of the research; this includes not only the participant but importantly the researcher themselves and anyone that might read or analyse the transcripts, should they contain triggering information.

Political: researchers should minimize the risk of anyone being exposed to political detriment as a result of the research, such as retribution.

Professional/reputational: researchers should minimize the potential for reputational damage to anyone connected to the research (this includes ensuring good research practices so that any researchers involved are not harmed reputationally by being involved with the research project).

The task here is not to map exhaustively the potential forms of harm that might pertain to a particular research project (that is the researcher’s job and they should have the expertise most suited to mapping such potential harms relative to the specific project) but to demonstrate the breadth of potential forms of harm.

Ethical considerations post-interview

Researchers should consider how interview data are stored, analysed and disseminated. If participants have been offered anonymity and confidentiality, data should be stored in a way that does not compromise this. For example, researchers should consider removing names and any other unnecessary personal details from interview transcripts, password-protecting and encrypting files and using pseudonyms to label and store all interview data. It is also important to address where interview data are taken (for example, across borders in particular where interview data might be of interest to local authorities) and how this might affect the storage of interview data.
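One routine part of preparing transcripts for safe storage is replacing names and other identifying details with pseudonyms before analysis. The sketch below is a hypothetical, minimal illustration of that step (the names, pseudonyms and transcript line are all invented); real projects would also handle places, employers, dates and other indirect identifiers, and would keep any mapping table encrypted and stored separately.

```python
import re

# Hypothetical mapping of identifying strings to pseudonyms.
pseudonyms = {"Sarah Jones": "P01", "Acme Ltd": "Employer-A"}

def pseudonymize(text: str, mapping: dict) -> str:
    """Replace each known identifying string with its pseudonym."""
    for real, fake in mapping.items():
        text = re.sub(re.escape(real), fake, text)
    return text

raw = "Sarah Jones said she had worked at Acme Ltd for ten years."
clean = pseudonymize(raw, pseudonyms)
print(clean)  # → "P01 said she had worked at Employer-A for ten years."
```

Automated substitution of this kind is only a first pass: transcripts still need a manual read-through, because participants mention identifying details in forms no fixed list anticipates.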

Examining how the researcher will represent participants is a paramount ethical consideration both in the planning stages of the interview study and after it has been conducted. Dissemination strategies also need to consider questions of anonymity and representation. In small communities, even if participants are given pseudonyms, it might be obvious who is being described. Anonymizing not only the names of those participating but also the research context is therefore a standard practice 19 . With particularly sensitive data or insights about the participant, it is worth considering describing participants in a more abstract way rather than as specific individuals. These practices protect participants’ anonymity and can also affect the ability of the researcher and others to return ethically to the research context and similar contexts 20 .

Reflexivity and positionality

Reflexivity and positionality mean considering the researcher’s role and assumptions in knowledge production 13 . A key part of reflexivity is considering the power relations between the researcher and participant within the interview setting, as well as how researchers might be perceived by participants. Further, researchers need to consider how their own identities shape the kind of knowledge and assumptions they bring to the interview, including how they approach and ask questions and their analysis of interviews (Box  2 ). Reflexivity is a necessary part of developing ethical sensibility as a researcher by adapting and reflecting on how one engages with participants. Participants should not feel judged, for example, when they share information that researchers might disagree with or find objectionable. How researchers deal with uncomfortable moments or information shared by participants is at their discretion, but they should consider how they will react both ahead of time and in the moment.

Researchers can develop their reflexivity by considering how they themselves would feel being asked these interview questions or represented in this way, and then adapting their practice accordingly. There might be situations where these questions are not appropriate in that they unduly centre the researchers’ experiences and worldview. Nevertheless, these prompts can provide a useful starting point for those beginning their reflexive journey and developing an ethical sensibility.

Reflexivity and ethical sensitivities require active reflection throughout the research process. For example, researchers should take care in interview memos and their notes to consider their assumptions, potential preconceptions, worldviews and own identities prior to and after interviews (Box  2 ). Checking in with assumptions can be a way of making sure that researchers are paying close attention to their own theoretical and analytical biases and revising them in accordance with what they learn through the interviews. Researchers should return to these notes (especially when analysing interview material), to try to unpack their own effects on the research process as well as how participants positioned and engaged with them.

Box 2 Aspects to reflect on reflexively

For reflexive engagement, and understanding the power relations being co-constructed and (re)produced in interviews, it is necessary to reflect, at a minimum, on the following.

Ethnicity, race and nationality, such as how does privilege stemming from race or nationality operate between the researcher, the participant and research context (for example, a researcher from a majority community may be interviewing a member of a minority community)

Gender and sexuality, see above on ethnicity, race and nationality

Social class, and in particular the issue of middle-class bias among researchers when formulating research and interview questions

Economic security/precarity, see above on social class and thinking about the researcher’s relative privilege and the source of biases that stem from this

Educational experiences and privileges, see above

Disciplinary biases, such as how the researcher’s discipline/subfield usually approaches these questions, possibly normalizing certain assumptions that might be contested by participants and in the research context

Political and social values

Lived experiences and other dimensions of ourselves that affect and construct our identity as researchers

In this section, we discuss the next stage of an interview study, namely, analysing the interview data. Data analysis may begin while more data are being collected. Doing so allows early findings to inform the focus of further data collection, as part of an iterative process across the research project. Here, the researcher is ultimately working towards coherence between the data collected and the findings produced, so as to answer the research question(s) they have set.

The two most common methods used to analyse interview material across the social sciences are thematic analysis 21 and discourse analysis 22 . Thematic analysis is a particularly useful and accessible method for those starting out in analysis of qualitative data and interview material as a method of coding data to develop and interpret themes in the data 21 . Discourse analysis is more specialized and focuses on the role of discourse in society by paying close attention to the explicit, implicit and taken-for-granted dimensions of language and power 22 , 23 . Although thematic and discourse analysis are often discussed as separate techniques, in practice researchers might flexibly combine these approaches depending on the object of analysis. For example, those intending to use discourse analysis might first conduct thematic analysis as a way to organize and systematize the data. The object and intention of analysis might differ (for example, developing themes or interrogating language), but the questions facing the researcher (such as whether to take an inductive or deductive approach to analysis) are similar.

Preparing data

Data preparation is an important step in the data analysis process. The researcher should first determine what comprises the corpus of material and in what form it will be analysed. The former refers to whether, for example, alongside the interviews themselves, analytic memos or observational notes taken during data collection will also be directly analysed. The latter refers to decisions about how the verbal/audio interview data will be transformed into a written form suitable for data analysis. Typically, interview audio recordings are transcribed to produce a written transcript. It is important to note that the process of transcription is one of transformation. The verbal interview data are transformed into a written transcript through a series of decisions that the researcher must make. The researcher should consider how mishearing what has been said, or choosing to punctuate a sentence in a particular way, will affect the final analysis.

Box  3 shows an example transcript excerpt from an interview with a teacher conducted by Teeger as part of her study of history education in post-apartheid South Africa 24 . Seeing both the questions and the responses means that the reader can contextualize what the participant (Ms Mokoena) has said. Throughout the transcript the researcher has used square brackets, for example to indicate a pause in speech, when Ms Mokoena says “it’s [pause] it’s a difficult topic”. The transcription choice made here means that we see that Ms Mokoena has taken time to pause, perhaps to search for the right words, or perhaps because she has a slight apprehension. Square brackets are also used as an overt act of communication to the reader: when Ms Mokoena says “ja”, the English translation (“yes”) of the Afrikaans word is placed in square brackets to ensure that the reader can follow the meaning of the speech.

Decisions about what to include when transcribing will be hugely important for the direction and possibilities of analysis. Researchers should decide what they want to capture in the transcript, based on their analytic focus. From a (post)positivist perspective 25 , the researcher may be interested in the manifest content of the interview (such as what is said, not how it is said). In that case, they may choose to transcribe in an intelligent verbatim style. From a constructivist perspective 25 , researchers may choose to record more aspects of speech (including, for example, pauses, repetitions, false starts, talking over one another) so that these features can be analysed. Those working from this perspective argue that to adequately recognize the interactional nature of the interview setting and to avoid misinterpretations, features of interaction (pauses, overlaps between speakers and so on) should be preserved in transcription and therefore in the analysis 10 . Readers interested in learning more should consult Potter and Hepburn’s summary of how to present interaction through transcription of interview data 26 .

The process of analysing semi-structured interviews might be thought of as a generative rather than an extractive enterprise. Findings do not already exist within the interview data to be discovered. Rather, researchers create something new when analysing the data by applying their analytic lens or approach to the transcripts. At a high level, there are options as to what researchers might want to glean from their interview data. They might be interested in themes, whereby they identify patterns of meaning across the dataset 21 . Alternatively, they may focus on discourse(s), looking to identify how language is used to construct meanings and therefore how language reinforces or produces aspects of the social world 27 . Alternatively, they might look at the data to understand narrative or biographical elements 28 .

A further overarching decision to make is the extent to which researchers bring predetermined framings or understandings to bear on their data, or instead begin from the data themselves to generate an analysis. One way of articulating this is the extent to which researchers take a deductive approach or an inductive approach to analysis. One example of a truly inductive approach is grounded theory, whereby the aim of the analysis is to build new theory, beginning with one’s data 6 , 29 . In practice, researchers using thematic and discourse analysis often combine deductive and inductive logics and describe their process instead as iterative (referred to also as an abductive approach ) 30 , 31 . For example, researchers may decide that they will apply a given theoretical framing, or begin with an initial analytic framework, but then refine or develop these once they begin the process of analysis.

Box 3 Excerpt of interview transcript (from Teeger 24 )

Interviewer : Maybe you could just start by talking about what it’s like to teach apartheid history.

Ms Mokoena : It’s a bit challenging. You’ve got to accommodate all the kids in the class. You’ve got to be sensitive to all the racial differences. You want to emphasize the wrongs that were done in the past but you also want to, you know, not to make kids feel like it’s their fault. So you want to use the wrongs of the past to try and unite the kids …

Interviewer : So what kind of things do you do?

Ms Mokoena : Well I normally highlight the fact that people that were struggling were not just the blacks, it was all the races. And I give examples of the people … from all walks of life, all races, and highlight how they suffered as well as a result of apartheid, particularly the whites… . What I noticed, particularly my first year of teaching apartheid, I noticed that the black kids made the others feel responsible for what happened… . I had a lot of fights…. A lot of kids started hating each other because, you know, the others are white and the others were black. And they started saying, “My mother is a domestic worker because she was never allowed an opportunity to get good education.” …

Interviewer : I didn’t see any of that now when I was observing.

Ms Mokoena : … Like I was saying I think that because of the re-emphasis of the fact that, look, everybody did suffer one way or the other, they sort of got to see that it was everybody’s struggle … . They should now get to understand that that’s why we’re called a Rainbow Nation. Not everybody agreed with apartheid and not everybody suffered. Even all the blacks, not all blacks got to feel what the others felt . So ja [yes], it’s [pause] it’s a difficult topic, ja . But I think if you get the kids to understand why we’re teaching apartheid in the first place and you show the involvement of all races in all the different sides , then I think you have managed to teach it properly. So I think because of my inexperience then — that was my first year of teaching history — so I think I — maybe I over-emphasized the suffering of the blacks versus the whites [emphasis added].

Reprinted with permission from ref. 24, Sage Publications.

From data to codes

Coding data is a key building block shared across many approaches to data analysis. Coding is a way of organizing and describing data, but is also ultimately a way of transforming data to produce analytic insights. The basic practice of coding involves highlighting a segment of text (this may be a sentence, a clause or a longer excerpt) and assigning a label to it. The aim of the label is to communicate some sort of summary of what is in the highlighted piece of text. Coding is an iterative process, whereby researchers read and reread their transcripts, applying and refining their codes, until they have a coding frame (a set of codes) that is applied coherently across the dataset and that captures and communicates the key features of what is contained in the data as it relates to the researchers’ analytic focus.

What one codes for is entirely contingent on the focus of the research project and the choices the researcher makes about the approach to analysis. At first, one might apply descriptive codes, summarizing what is contained in the interviews. It is rarely desirable to stop at this point, however, because coding is a tool to move from describing the data to interpreting the data. Suppose the researcher is pursuing some version of thematic analysis. In that case, it might be that the objects of coding are aspects of reported action, emotions, opinions, norms, relationships, routines, agreement/disagreement and change over time. A discourse analysis might instead code for different types of speech acts, tropes, linguistic or rhetorical devices. Multiple types of code might be generated within the same research project. What is important is that researchers are aware of the choices they are making in terms of what they are coding for. Moreover, through the process of refinement, the aim is to produce a set of discrete codes — in which codes are conceptually distinct, as opposed to overlapping. By using the same codes across the dataset, the researcher can capture commonalities across the interviews. This process of refinement involves relabelling codes and reorganizing how and where they are applied in the dataset.
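The basic mechanics of coding, highlighting a segment and attaching one or more labels to it, can be sketched as a small data structure. The sketch below is hypothetical: the participants, excerpts and code labels are invented for illustration (loosely echoing themes like those in Rao's unemployment study), not drawn from any real coding frame.

```python
from collections import Counter

# Coding as labelled segments: each coded excerpt pairs a transcript
# segment with one or more codes from the coding frame (all hypothetical).
coded_segments = [
    {"participant": "P01", "text": "I felt it was my fault.",
     "codes": ["self-blame", "emotion"]},
    {"participant": "P02", "text": "The economy was just bad.",
     "codes": ["external attribution"]},
    {"participant": "P01", "text": "I started looking for part-time work.",
     "codes": ["labour-force re-evaluation"]},
]

# Tally how often each code appears across the dataset: a first step in
# seeing which codes capture commonalities across interviews.
code_counts = Counter(c for seg in coded_segments for c in seg["codes"])
print(code_counts.most_common())
```

Keeping segment, participant and codes together in one record makes the iterative refinement described above (relabelling codes, reorganizing where they apply) a matter of editing and re-tallying rather than starting over.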

From coding to analysis and writing

Data analysis is also an iterative process in which researchers move closer to and further away from the data. As they move away from the data, they synthesize their findings, thus honing and articulating their analytic insights. As they move closer to the data, they ground these insights in what is contained in the interviews. The link should not be broken between the data themselves and higher-order conceptual insights or claims being made. Researchers must be able to show evidence for their claims in the data. Figure  2 summarizes this iterative process and suggests the sorts of activities involved at each stage more concretely.

Figure 2

As well as going through steps 1 to 6 in order, the researcher will also go backwards and forwards between stages. Some stages will themselves be a forwards and backwards processing of coding and refining when working across different interview transcripts.

At the stage of synthesizing, there are some common quandaries. When dealing with a dataset consisting of multiple interviews, there will be salient and minority statements across different participants, or consensus or dissent on topics of interest to the researcher. A strength of qualitative interviews is that we can build in these nuances and variations across our data as opposed to aggregating them away. When exploring and reporting data, researchers should be asking how different findings are patterned and which interviews contain which codes, themes or tropes. Researchers should think about how these variations fit within the longer flow of individual interviews and what these variations tell them about the nature of their substantive research interests.

A further consideration is how to approach analysis within and across interview data. Researchers may look at one individual code, to examine the forms it takes across different participants and what they might be able to summarize about this code in the round. Alternatively, they might look at how a code or set of codes pattern across the account of one participant, to understand the code(s) in a more contextualized way. Further analysis might be done according to different sampling characteristics, where researchers group together interviews based on certain demographic characteristics and explore these together.
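Looking at a code across participants, or at a participant across codes, can be supported by a simple code-by-participant matrix. The sketch below is a hypothetical illustration with invented participants and code labels, not data from the studies cited here.

```python
from collections import defaultdict

# Hypothetical coded data: which codes were applied in which interviews.
coded = [
    ("P01", "self-blame"), ("P01", "devaluation at work"),
    ("P02", "external attribution"), ("P03", "self-blame"),
]

# Build a code -> participants mapping to see how codes pattern across cases.
matrix = defaultdict(set)
for participant, code in coded:
    matrix[code].add(participant)

for code, who in sorted(matrix.items()):
    print(f"{code}: {sorted(who)}")
```

The same mapping can be filtered by sampling characteristics (for example, grouping interviews by gender or occupation) to explore how codes pattern within and across subgroups.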

When it comes to writing up and presenting interview data, key considerations tend to rest on what is often termed transparency. When presenting the findings of an interview-based study, the reader should be able to understand and trace what the stated findings are based upon. This process typically involves describing the analytic process, how key decisions were made and presenting direct excerpts from the data. It is important to account for how the interview was set up and to consider the active part that the researcher has played in generating the data 32 . Quotes from interviews should not be thought of as merely embellishing or adding interest to a final research output. Rather, quotes serve the important function of connecting the reader directly to the underlying data. Quotes, therefore, should be chosen because they provide the reader with the most apt insight into what is being discussed. It is good practice to report not just on what participants said, but also on the questions that were asked to elicit the responses.

Researchers have increasingly used specialist qualitative data analysis software to organize and analyse their interview data, such as NVivo or ATLAS.ti. It is important to remember that such software is a tool for, rather than an approach or technique of, analysis. That said, software also creates a wide range of possibilities in terms of what can be done with the data. As researchers, we should reflect on how the range of possibilities of a given software package might be shaping our analytical choices and whether these are choices that we do indeed want to make.

Applications

This section reviews how and why in-depth interviews have been used by researchers studying gender, education and inequality, nationalism and ethnicity and the welfare state. Although interviews can be employed as a method of data collection in just about any social science topic, the applications below speak directly to the authors’ expertise and cutting-edge areas of research.

When it comes to the broad study of gender, in-depth interviews have been invaluable in shaping our understanding of how gender functions in everyday life. In a study of the US hedge fund industry (an industry dominated by white men), Tobias Neely was interested in understanding the factors that enable white men to prosper in the industry 33 . The study comprised interviews with 45 hedge fund workers and oversampled women of all races and men of colour to capture a range of experiences and beliefs. Tobias Neely found that practices of hiring, grooming and seeding are key to maintaining white men’s dominance in the industry. In terms of hiring, the interviews clarified that white men in charge typically preferred to hire people like themselves, usually from their extended networks. When women were hired, they were usually hired into less lucrative positions. In terms of grooming, Tobias Neely identifies how older and more senior men in the industry who have power and status will select one or several younger men as their protégés, to include in their own elite networks. Finally, in terms of her concept of seeding, Tobias Neely describes how older men who are hedge fund managers provide men, often their own sons (but not their daughters), with the seed money (often in the hundreds of millions of dollars) to launch a hedge fund. These interviews provided an in-depth look into the gendered and racialized mechanisms that allow white men to flourish in this industry.

Research by Rao draws on dozens of interviews with men and women who had lost their jobs, some of the participants’ spouses and follow-up interviews with about half the sample approximately 6 months after the initial interview 34 . Rao used interviews to examine how unemployment is experienced and interpreted in gendered ways. Through these interviews, she found that the very process of losing their jobs meant different things for men and women. Women often saw job loss as a personal indictment of their professional capabilities. The women interviewed often referenced how years of devaluation in the workplace coloured their interpretation of their job loss. Men were also saddened by their job loss but, by contrast, saw it as part and parcel of a weak economy rather than a personal failing. These varied interpretations were tied to men’s and women’s very different experiences in the workplace. Further, through her analysis of these interviews, Rao also showed how these gendered interpretations had implications for the kinds of jobs men and women sought after job loss. Whereas men remained tied to full-time paid work, job loss appeared to be a catalyst pushing some of the women to re-evaluate their ties to the labour force.

In a study of workers in the tech industry, Hart used interviews to explain how individuals respond to unwanted and ambiguously sexual interactions 35 . Here, the researcher used interviews to allow participants to describe how these interactions made them feel and act and the logics by which they interpreted, classified and made sense of them 35 . Through her analysis of these interviews, Hart showed that participants engaged in a process she termed “trajectory guarding”, whereby they monitored unwanted and ambiguously sexual interactions to prevent them from escalating. Yet, as Hart’s analysis deftly demonstrates, these very strategies — which protect these workers sexually — also undermined their workplace advancement.

Drawing on interviews, these studies have helped us to understand better how gendered mechanisms, gendered interpretations and gendered interactions foster gender inequality when it comes to paid work. Methodologically, these studies illuminate the power of interviews to reveal important aspects of social life.

Nationalism and ethnicity

Traditionally, nationalism has been studied from a top-down perspective, through the lens of the state or using historical methods; in other words, in-depth interviews have not been a common way of collecting data to study nationalism. The methodological turn towards everyday nationalism has encouraged more scholars to go to the field and use interviews (and ethnography) to understand nationalism from the bottom up: how people talk about, give meaning to, understand, navigate and contest their relation to nation, national identification and nationalism 36 , 37 , 38 , 39 . This turn has also addressed the gap left by those studying national and ethnic identification via quantitative methods, such as surveys.

Surveys can enumerate how individuals ascribe to categorical forms of identification 40 . However, interviews can question the usefulness of such categories and ask whether these categories are reflected, or resisted, by participants in terms of the meanings they give to identification 41 , 42 . Categories often pitch identification as a mutually exclusive choice, but identification might be more complex than such categories allow. For example, some might hybridize these categories or see themselves as moving between and across categories 43 . Hearing how people talk about themselves and their relation to nations, states and ethnicities, therefore, contributes substantially to the study of nationalism and national and ethnic forms of identification.

One particular approach to studying these topics, whether via everyday nationalism or alternatives, is that of using interviews to capture both articulations and narratives of identification, relations to nationalism and the boundaries people construct. For example, interviews can be used to gather self–other narratives by studying how individuals construct I–we–them boundaries 44 , including how participants talk about themselves, whom they include in their various ‘we’ groupings and how they construct ‘them’ groupings of others, inserting boundaries between ‘I/we’ and ‘them’. Overall, interviews hold great potential for listening to participants and understanding the nuances of identification and the construction of boundaries from their point of view.

Education and inequality

Scholars of social stratification have long noted that the school system often reproduces existing social inequalities. Carter explains that all schools have both material and sociocultural resources 45 . When children from different backgrounds attend schools with different material resources, their educational and occupational outcomes are likely to vary. Such material resources are relatively easy to measure. They are operationalized as teacher-to-student ratios, access to computers and textbooks and the physical infrastructure of classrooms and playgrounds.

Drawing on Bourdieusian theory 46 , Carter conceptualizes the sociocultural context as the norms, values and dispositions privileged within a social space 45 . Scholars have drawn on interviews with students and teachers (as well as ethnographic observations) to show how schools confer advantages on students from middle-class families, for example, by rewarding their help-seeking behaviours 47 . Focusing on race, researchers have revealed how schools can remain socioculturally white even as they enrol a racially diverse student population. In such contexts, for example, teachers often misrecognize the aesthetic choices made by students of colour, wrongly inferring that these students’ tastes in clothing and music reflect negative orientations to schooling 48 , 49 , 50 . These assessments can result in disparate forms of discipline and may ultimately shape educators’ assessments of students’ academic potential 51 .

Further, teachers and administrators tend to view the appropriate relationship between home and school in ways that resonate with white middle-class parents 52 . These parents are then able to advocate effectively for their children in ways that non-white parents cannot 53 . In-depth interviews are particularly good at tapping into these understandings, revealing the mechanisms that confer privilege on certain groups of students and thereby reproduce inequality.

In addition, interviews can shed light on the unequal experiences that young people have within educational institutions, as the views of dominant groups are affirmed while those from disadvantaged backgrounds are delegitimized. For example, Teeger’s interviews with South African high schoolers showed how — because racially charged incidents are often framed as jokes in the broader school culture — Black students often feel compelled to ignore and keep silent about the racism they experience 54 . Interviews revealed that Black students who objected to these supposed jokes were coded by other students as serious or angry. In trying to avoid such labels, these students found themselves unable to challenge the racism they experienced. Interviews give us insight into these dynamics and help us see how young people understand and interpret the messages transmitted in schools — including those that speak to issues of inequality in their local school contexts as well as in society more broadly 24 , 55 .

The welfare state

In-depth interviews have also proved to be an important method for studying various aspects of the welfare state. By welfare state, we mean the social institutions relating to the economic and social wellbeing of a state’s citizens. Notably, interviews have been useful for examining how policy design features are experienced and play out on the ground. Interviews have often been paired with large-scale surveys to produce mixed-methods study designs, thereby achieving both breadth and depth of insight.

In-depth interviews provide the opportunity to look behind policy assumptions, or how policies are designed from the top down, to examine how these play out in the lives of those affected by the policies and whose experiences might otherwise be obscured or ignored. For example, the Welfare Conditionality project used interviews to critique the assumption that conditionality (such as the withdrawal of social security benefits if recipients do not meet certain criteria) improves employment outcomes, showing instead that conditionality harmed mental health and living standards and had many other negative consequences 56 . Meanwhile, combining datasets from two small-scale interview studies with recipients allowed Summers and Young to critique the assumptions of simplicity that underpinned the design of Universal Credit in 2020, for example, showing that the apparently simple monthly payment design instead burdened recipients with additional money management decisions and responsibilities 57 .

Similarly, the Welfare at a (Social) Distance project used a mixed-methods approach in a large-scale study that combined national surveys with case studies and in-depth interviews to investigate the experience of claiming social security benefits during the COVID-19 pandemic. The interviews allowed researchers to understand in detail any issues experienced by recipients of benefits, such as delays in the process of claiming, managing on a very tight budget and navigating the stigma of claiming 58 .

These applications demonstrate the multi-faceted topics and questions for which interviews are a relevant data collection method. They highlight not only the relevance of interviews but also their key added value, which might be missed by other methods (surveys, in particular). Interviews can expose and question what is taken for granted and directly engage with communities and participants that might otherwise be ignored, obscured or marginalized.

Reproducibility and data deposition

There is a robust, ongoing debate about reproducibility in qualitative research, including interview studies. In some research paradigms, reproducibility can be a way of interrogating the rigour and robustness of research claims, by seeing whether these hold up when the research process is repeated. Some scholars have suggested that although reproducibility may be challenging, researchers can facilitate it by naming the place where the research was conducted, naming participants, sharing interview and fieldwork transcripts (anonymized and de-identified in cases where researchers are not naming people or places) and employing fact-checkers for accuracy 11 , 59 , 60 .

In addition to the ethical concerns of whether de-anonymization is ever feasible or desirable, it is also important to address whether the replicability of interview studies is meaningful. For example, the flexibility of interviews allows for the unexpected and the unforeseen to be incorporated into the scope of the research 61 . However, this flexibility means that we cannot expect reproducibility in the conventional sense, given that different researchers will elicit different types of data from participants. Sharing interview transcripts with other researchers, for instance, downplays the contextual nature of an interview.

Drawing on Bauer and Gaskell, we propose several measures to enhance rigour in qualitative research: transparency, grounding interpretations and aiming for theoretical transferability and significance 62 .

First, researchers should be transparent when describing their methodological choices. Transparency means documenting who was interviewed, where and when (without requiring de-anonymization, for example, by documenting participants’ characteristics), as well as the questions they were asked. It means carefully considering who was left out of the interviews and what that could mean for the researcher’s findings. It also means carefully considering who the researcher is and how their identity shaped the research process (integrating and articulating reflexivity into whatever is written up).

Second, researchers should ground their interpretations in the data. Grounding means presenting the evidence upon which the interpretation relies. Quotes and extracts should be extensive enough to allow the reader to evaluate whether the researcher’s interpretations are grounded in the data. At each step, researchers should carefully compare their own explanations and interpretations with alternative explanations. Doing so systematically and frequently allows researchers to become more confident in their claims. Here, researchers should justify the link between data and analysis by using quotes to justify and demonstrate the analytical point, while making sure the analytical point offers an interpretation of quotes (Box  4 ).

An important step in considering alternative explanations is to seek out disconfirming evidence 4 , 63 . This involves looking for instances where participants deviate from what the majority are saying and thus bring into question the theory (or explanation) that the researcher is developing. Careful analysis of such examples can often demonstrate the salience and meaning of what appears to be the norm (see Table  2 for examples) 54 . Considering alternative explanations and paying attention to disconfirming evidence allows the researcher to refine their own theories in respect of the data.

Finally, researchers should aim for theoretical transferability and significance in their discussions of findings. One way to think about this is to imagine a reader who is not interested in the empirical study itself. Articulating theoretical transferability and significance usually takes the form of broadening out from the specific findings to consider explicitly how the research has refined or altered prior theoretical approaches. It also means considering under what conditions, aside from those of the study, the researcher expects their theoretical revision to hold and why. Importantly, it includes thinking about the limitations of one’s own approach and where the theoretical implications of the study might not hold.

Box 4 An example of grounding interpretations in data (from Rao 34 )

In an article explaining how unemployed men frame their job loss as a pervasive experience, Rao writes the following: “Unemployed men in this study understood unemployment to be an expected aspect of paid work in the contemporary United States. Robert, a white unemployed communications professional, compared the economic landscape after the Great Recession with the tragic events of September 11, 2001:

Part of your post-9/11 world was knowing people that died as a result of terrorism. The same thing is true with the [Great] Recession, right? … After the Recession you know somebody who was unemployed … People that really should be working.

The pervasiveness of unemployment rendered it normal, as Robert indicates.”

Here, the link between the quote presented and the analytical point Rao is making is clear: the analytical point is grounded in a quote and an interpretation of the quote is offered 34 .

Limitations and optimizations

When deciding which research method to use, the key question is whether the method provides a good fit for the research questions posed. In other words, researchers should consider whether interviews will allow them to successfully access the social phenomena necessary to answer their question(s) and whether the interviews will do so more effectively than other methods. Table  3 summarizes the major strengths and limitations of interviews. However, the accompanying text below is organized around some key issues, where relative strengths and weaknesses are presented alongside each other, the aim being that readers should think about how these can be balanced and optimized in relation to their own research.

Breadth versus depth of insight

Achieving an overall breadth of insight, in a statistically representative sense, is not something that is possible or indeed desirable when conducting in-depth interviews. Instead, the strength of conducting interviews lies in their ability to generate various sorts of depth of insight. The experiences or views of participants that can be accessed by conducting interviews help us to understand participants’ subjective realities. The challenge, therefore, is for researchers to be clear about why depth of insight is the focus and what they aim to glean from these types of insight.

Naturalistic or artificial interviews

Interviews make use of a form of interaction with which people are familiar 64 . By replicating a naturalistic form of interaction as a tool to gather social science data, researchers can capitalize on people’s familiarity and expectations of what happens in a conversation. This familiarity can also be a challenge, as people come to the interview with preconceived ideas about what this conversation might be for or about. People may draw on experiences of other similar conversations when taking part in a research interview (for example, job interviews, therapy sessions, confessional conversations, chats with friends). Researchers should be aware of such potential overlaps and think through their implications both in how the aims and purposes of the research interview are communicated to participants and in how interview data are interpreted.

Further, some argue that a limitation of interviews is that they are an artificial form of data collection. By taking people out of their daily lives and asking them to stand back and pass comment, we are creating a distance that makes it difficult to use such data to say something meaningful about people’s actions, experiences and views. Other approaches, such as ethnography, might be more suitable for tapping into what people actually do, as opposed to what they say they do 65 .

Dynamism and replicability

Interviews following a semi-structured format offer flexibility both to the researcher and the participant. As the conversation develops, the interlocutors can explore the topics raised in much more detail, if desired, or pass over ones that are not relevant. This flexibility allows for the unexpected and the unforeseen to be incorporated into the scope of the research.

However, this flexibility brings a related challenge of replicability. Interviews cannot be reproduced because they are contingent upon the interaction between the researcher and the participant in that given moment. In some research paradigms, replicability can be a way of interrogating the robustness of research claims, by seeing whether they hold when the research is repeated. This is not a useful framework to bring to in-depth interviews; instead, quality markers (such as transparency) tend to be employed as criteria of rigour.

Accessing the private and personal

Interviews have been recognized for their strength in accessing private, personal issues, which participants may feel more comfortable talking about in a one-to-one conversation. Furthermore, interviews are likely to take a more personable form with their extended questions and answers, perhaps making a participant feel more at ease when discussing sensitive topics in such a context. There is a similar, but separate, argument made about accessing what are sometimes referred to as vulnerable groups, who may be difficult to make contact with using other research methods.

There is an associated challenge of anonymity. Certain types of in-depth interview make it particularly challenging to protect the identities of participants, such as interviewing within a small community or interviewing multiple members of the same household. Ensuring anonymity in such contexts is even more important, and more difficult, when the topic of research is of a sensitive nature or participants are vulnerable.

Outlook

Increasingly, researchers are collaborating in large-scale interview-based studies and integrating interviews into broader mixed-methods designs. At the same time, interviews can be seen as an old-fashioned (and perhaps outdated) mode of data collection. We review these debates and discussions and point to innovations in interview-based studies. These include the shift from face-to-face interviews to the use of online platforms, as well as integrating and adapting interviews towards more inclusive methodologies.

Collaborating and mixing

Qualitative researchers have long worked alone 66 . Increasingly, however, researchers are collaborating with others for reasons such as efficiency, institutional incentives (for example, funding for collaborative research) and a desire to pool expertise (for example, studying similar phenomena in different contexts 67 or via different methods). Collaboration can occur across disciplines and methods, cases and contexts and between industry/business, practitioners and researchers. In many settings and contexts, collaboration has become an imperative 68 .

Cheek notes how collaboration provides both advantages and disadvantages 68 . For example, collaboration can be advantageous, saving time and building on the divergent knowledge, skills and resources of different researchers. Scholars with different theoretical or case-based knowledge (or contacts) can work together to build research that is comparative and/or more than the sum of its parts. But such endeavours also carry with them practical and political challenges in terms of how resources might actually be pooled, shared or accounted for. When undertaking such projects, as Morse notes, it is worth thinking about the nature of the collaboration and being explicit about such a choice, its advantages and its disadvantages 66 .

A further tension, but also a motivation for collaboration, stems from integrating interviews as a method in a mixed-methods project, whether with other qualitative researchers (to combine with, for example, focus groups, document analysis or ethnography) or with quantitative researchers (to combine with, for example, surveys, social media analysis or big data analysis). Cheek and Morse both note the pitfalls of collaboration with quantitative researchers: that quality of research may be sacrificed, qualitative interpretations watered down or not taken seriously, or tensions experienced over the pace and different assumptions that come with different methods and approaches of research 66 , 68 .

At the same time, there can be real benefits of such mixed-methods collaboration, such as reaching different and more diverse audiences or testing assumptions and theories between research components in the same project (for example, testing insights from prior quantitative research via interviews, or vice versa), as long as the skillsets of collaborators are seen as equally beneficial to the project. Cheek provides a set of questions that, as a starting point, can be useful for guiding collaboration, whether mixed methods or otherwise. First, Cheek advises asking all collaborators about their assumptions and understandings concerning collaboration. Second, Cheek recommends discussing what each perspective highlights and focuses on (and conversely ignores or sidelines) 68 .

A different way to engage with the idea of collaboration and mixed methods research is by fostering greater collaboration between researchers in the Global South and Global North, thus reversing trends of researchers from the Global North extracting knowledge from the Global South 69 . Such forms of collaboration also align with interview innovations, discussed below, that seek to transform traditional interview approaches into more participatory and inclusive forms (as part of participatory methodologies).

Digital innovations and challenges

The ongoing COVID-19 pandemic has centred the question of technology within interview-based fieldwork. Although conducting synchronous oral interviews online — for example, via Zoom, Skype or other such platforms — has been a method used by a small constituency of researchers for many years, it became (and remains) a necessity for many researchers wanting to continue or start interview-based projects while COVID-19 prevents face-to-face data collection.

In the past, online interviews were often framed as an inferior form of data collection for not providing the kinds of (often necessary) insights and forms of immersion face-to-face interviews allow 70 , 71 . Online interviews do tend to be more decontextualized than interviews conducted face-to-face 72 . For example, it is harder to recognize, engage with and respond to non-verbal cues 71 . At the same time, they broaden participation to those who might not have been able to access or travel to sites where interviews would have been conducted otherwise, for example people with disabilities. Online interviews also offer more flexibility in terms of scheduling and time requirements. For example, they provide more flexibility around precarious employment or caring responsibilities without having to travel and be away from home. In addition, online interviews might also reduce discomfort between researchers and participants, compared with face-to-face interviews, enabling more discussion of sensitive material 71 . They can also provide participants with more control, enabling them to turn on and off the microphone and video as they choose, for example, to provide more time to reflect and disconnect if they so wish 72 .

That said, online interviews can also introduce new biases based on access to technology 72 . For example, in the Global South, there are often urban/rural and gender gaps between who has access to mobile phones and who does not, meaning that some population groups might be overlooked unless researchers sample mindfully 71 . There are also important ethical considerations when deciding between online and face-to-face interviews. Online interviews might seem to imply lower ethical risks than face-to-face interviews (for example, they lower the chances of identification of participants or researchers), but they also present greater barriers to building trust between researchers and participants 72 . Interacting only online with participants might not provide the information needed to assess risk, for example, participants’ access to a private space to speak 71 . Just because online interviews might be more likely to be conducted in private spaces does not mean that private spaces are safe, for example, for victims of domestic violence. Finally, online interviews prompt further questions about decolonizing research and engaging with participants if research is conducted from afar 72 , such as how to include participants meaningfully and challenge dominant assumptions while doing so remotely.

A further digital innovation, modulating how researchers conduct interviews and the kinds of data collected and analysed, stems from the use and integration of (new) technology, such as WhatsApp text or voice notes to conduct synchronous or asynchronous oral or written interviews 73 . Such methods can provide more privacy, comfort and control to participants and make recruitment easier, allowing participants to share what they want when they want to, using technology that already forms a part of their daily lives, especially for young people 74 , 75 . Such technology is also emerging in other qualitative methods, such as focus groups, with similar arguments around greater inclusivity versus traditional offline modes. Here, the digital challenge might be higher for researchers than for participants if they are less used to such technology 75 . And while there might be concerns about the richness, depth and quality of written messages as a form of interview data, Gibson reports that the reams of transcripts that resulted from a study using written messaging were dense with meaning for analysis 75 .

As with online and face-to-face interviews, it is important also to consider the ethical questions and challenges of using such technology, from gaining consent to ensuring participant safety and attending to their distress without the cues, like crying, that might be more obvious in a face-to-face setting 75 , 76 . Attention to the platform used for such interviews is also important and researchers should be attuned to the local and national context. For example, in China, many platforms are neither legal nor available 76 . There, more popular platforms — like WeChat — can be highly monitored by the government, posing potential risks to participants depending on the topic of the interview. Ultimately, researchers should consider trade-offs between online and offline interview modalities, being attentive to the social context and power dynamics involved.

The next 5–10 years

Continuing to integrate (ethically) this technology will be among the major persisting developments in interview-based research, whether to offer more flexibility to researchers or participants, or to diversify who can participate and on what terms.

Pushing the idea of inclusion even further is the potential for integrating interview-based studies within participatory methods, which are also innovating via integrating technology. There is no hard and fast line between researchers using in-depth interviews and participatory methods; many who employ participatory methods will use interviews at the beginning, middle or end phases of a research project to capture insights, perspectives and reflections from participants 77 , 78 . Participatory methods emphasize the need to resist existing power and knowledge structures. They broaden who has the right and ability to contribute to academic knowledge by including and incorporating participants not only as subjects of data collection, but as crucial voices in research design and data analysis 77 . Participatory methods also seek to facilitate local change and to produce research materials, whether for academic or non-academic audiences, including films and documentaries, in collaboration with participants.

In responding to the challenges of COVID-19, capturing the fraught situation wrought by the pandemic and the momentum to integrate technology, participatory researchers have sought to continue data collection from afar. For example, Marzi has adapted an existing project to co-produce participatory videos, via participants’ smartphones in Medellín, Colombia, alongside regular check-in conversations/meetings/interviews with participants 79 . Integrating participatory methods into interview studies offers a route by which researchers can respond to the challenge of diversifying knowledge, challenging assumptions and power hierarchies and creating more inclusive and collaborative partnerships between participants and researchers in the Global North and South.

Brinkmann, S. & Kvale, S. Doing Interviews Vol. 2 (Sage, 2018). This book offers a good general introduction to the practice and design of interview-based studies.

Silverman, D. A Very Short, Fairly Interesting And Reasonably Cheap Book About Qualitative Research (Sage, 2017).

Yin, R. K. Case Study Research And Applications: Design And Methods (Sage, 2018).

Small, M. L. ‘How many cases do I need?’ On science and the logic of case selection in field-based research. Ethnography 10 , 5–38 (2009). This article convincingly demonstrates how the logic of qualitative research differs from quantitative research and its goal of representativeness.

Gerson, K. & Damaske, S. The Science and Art of Interviewing (Oxford Univ. Press, 2020).

Glaser, B. G. & Strauss, A. L. The Discovery Of Grounded Theory: Strategies For Qualitative Research (Aldine, 1967).

Braun, V. & Clarke, V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual. Res. Sport Exerc. Health 13 , 201–216 (2021).

Guest, G., Bunce, A. & Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 18 , 59–82 (2006).

Vasileiou, K., Barnett, J., Thorpe, S. & Young, T. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med. Res. Methodol. 18 , 148 (2018).

Silverman, D. How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17 , 144–158 (2017).

Jerolmack, C. & Murphy, A. The ethical dilemmas and social scientific tradeoffs of masking in ethnography. Sociol. Methods Res. 48 , 801–827 (2019).

Reyes, V. Ethnographic toolkit: strategic positionality and researchers’ visible and invisible tools in field research. Ethnography 21 , 220–240 (2020).

Guillemin, M. & Gillam, L. Ethics, reflexivity and “ethically important moments” in research. Qual. Inq. 10 , 261–280 (2004).

Summers, K. For the greater good? Ethical reflections on interviewing the ‘rich’ and ‘poor’ in qualitative research. Int. J. Soc. Res. Methodol. 23 , 593–602 (2020). This article argues that, in qualitative interview research, a clearer distinction needs to be drawn between ethical commitments to individual research participants and the group(s) to which they belong, a distinction that is often elided in existing ethics guidelines.

Yusupova, G. Exploring sensitive topics in an authoritarian context: an insider perspective. Soc. Sci. Q. 100 , 1459–1478 (2019).

Hemming, J. in Surviving Field Research: Working In Violent And Difficult Situations 21–37 (Routledge, 2009).

Murphy, E. & Dingwall, R. Informed consent, anticipatory regulation and ethnographic practice. Soc. Sci. Med. 65 , 2223–2234 (2007).

Kostovicova, D. & Knott, E. Harm, change and unpredictability: the ethics of interviews in conflict research. Qual. Res. 22 , 56–73 (2022). This article highlights how interviews need to be considered as ethically unpredictable moments where engaging with change among participants can itself be ethical.

Andersson, R. Illegality, Inc.: Clandestine Migration And The Business Of Bordering Europe (Univ. California Press, 2014).

Ellis, R. What do we mean by a “hard-to-reach” population? Legitimacy versus precarity as barriers to access. Sociol. Methods Res. https://doi.org/10.1177/0049124121995536 (2021).

Braun, V. & Clarke, V. Thematic Analysis: A Practical Guide (Sage, 2022).

Alejandro, A. & Knott, E. How to pay attention to the words we use: the reflexive review as a method for linguistic reflexivity. Int. Stud. Rev. https://doi.org/10.1093/isr/viac025 (2022).

Alejandro, A., Laurence, M. & Maertens, L. in International Organisations and Research Methods: An Introduction (eds Badache, F., Kimber, L. R. & Maertens, L.) (Michigan Univ. Press, in the press).

Teeger, C. “Both sides of the story”: history education in post-apartheid South Africa. Am. Sociol. Rev. 80, 1175–1200 (2015).

Crotty, M. The Foundations Of Social Research: Meaning And Perspective In The Research Process (Routledge, 2020).

Potter, J. & Hepburn, A. Qualitative interviews in psychology: problems and possibilities. Qual. Res. Psychol. 2 , 281–307 (2005).

Taylor, S. What is Discourse Analysis? (Bloomsbury Publishing, 2013).

Riessman, C. K. Narrative Analysis (Sage, 1993).

Corbin, J. M. & Strauss, A. Grounded theory research: Procedures, canons and evaluative criteria. Qual. Sociol. 13 , 3–21 (1990).

Timmermans, S. & Tavory, I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol. Theory 30 , 167–186 (2012).

Fereday, J. & Muir-Cochrane, E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int. J. Qual. Meth. 5 , 80–92 (2006).

Potter, J. & Hepburn, A. Eight challenges for interview researchers. Handb. Interview Res. 2 , 541–570 (2012).

Tobias Neely, M. Fit to be king: how patrimonialism on Wall Street leads to inequality. Socioecon. Rev. 16 , 365–385 (2018).

Rao, A. H. Gendered interpretations of job loss and subsequent professional pathways. Gend. Soc. 35 , 884–909 (2021). This article used interview data from unemployed men and women to illuminate how job loss becomes a pivotal moment shaping men’s and women’s orientation to paid work, especially in terms of curtailing women’s participation in paid work.

Hart, C. G. Trajectory guarding: managing unwanted, ambiguously sexual interactions at work. Am. Sociol. Rev. 86 , 256–278 (2021).

Goode, J. P. & Stroup, D. R. Everyday nationalism: constructivism for the masses. Soc. Sci. Q. 96 , 717–739 (2015).

Antonsich, M. The ‘everyday’ of banal nationalism — ordinary people’s views on Italy and Italian. Polit. Geogr. 54 , 32–42 (2016).

Fox, J. E. & Miller-Idriss, C. Everyday nationhood. Ethnicities 8 , 536–563 (2008).

Yusupova, G. Cultural nationalism and everyday resistance in an illiberal nationalising state: ethnic minority nationalism in Russia. Nations National. 24 , 624–647 (2018).

Kiely, R., Bechhofer, F. & McCrone, D. Birth, blood and belonging: identity claims in post-devolution Scotland. Sociol. Rev. 53 , 150–171 (2005).

Brubaker, R. & Cooper, F. Beyond ‘identity’. Theory Soc. 29 , 1–47 (2000).

Brubaker, R. Ethnicity Without Groups (Harvard Univ. Press, 2004).

Knott, E. Kin Majorities: Identity And Citizenship In Crimea And Moldova From The Bottom-Up (McGill Univ. Press, 2022).

Bucher, B. & Jasper, U. Revisiting ‘identity’ in international relations: from identity as substance to identifications in action. Eur. J. Int. Relat. 23 , 391–415 (2016).

Carter, P. L. Stubborn Roots: Race, Culture And Inequality In US And South African Schools (Oxford Univ. Press, 2012).

Bourdieu, P. in Cultural Theory: An Anthology Vol. 1, 81–93 (eds Szeman, I. & Kaposy, T.) (Wiley-Blackwell, 2011).

Calarco, J. M. Negotiating Opportunities: How The Middle Class Secures Advantages In School (Oxford Univ. Press, 2018).

Carter, P. L. Keepin’ It Real: School Success Beyond Black And White (Oxford Univ. Press, 2005).

Carter, P. L. ‘Black’ cultural capital, status positioning and schooling conflicts for low-income African American youth. Soc. Probl. 50 , 136–155 (2003).

Warikoo, N. K. Balancing Acts: Youth Culture in the Global City (Univ. California Press, 2011).

Morris, E. W. “Tuck in that shirt!” Race, class, gender and discipline in an urban school. Sociol. Perspect. 48 , 25–48 (2005).

Lareau, A. Social class differences in family–school relationships: the importance of cultural capital. Sociol. Educ. 60 , 73–85 (1987).

Warikoo, N. Addressing emotional health while protecting status: Asian American and white parents in suburban America. Am. J. Sociol. 126 , 545–576 (2020).

Teeger, C. Ruptures in the rainbow nation: how desegregated South African schools deal with interpersonal and structural racism. Sociol. Educ. 88, 226–243 (2015). This article leverages ‘deviant’ cases in an interview study with South African high schoolers to understand why the majority of participants were reluctant to code racially charged incidents at school as racist.

Ispa-Landa, S. & Conwell, J. “Once you go to a white school, you kind of adapt”: black adolescents and the racial classification of schools. Sociol. Educ. 88, 1–19 (2015).

Dwyer, P. J. Punitive and ineffective: benefit sanctions within social security. J. Soc. Secur. Law 25 , 142–157 (2018).

Summers, K. & Young, D. Universal simplicity? The alleged simplicity of Universal Credit from administrative and claimant perspectives. J. Poverty Soc. Justice 28 , 169–186 (2020).

Summers, K. et al. Claimants’ Experiences Of The Social Security System During The First Wave Of COVID-19 . https://www.distantwelfare.co.uk/winter-report (2021).

Desmond, M. Evicted: Poverty And Profit In The American City (Crown Books, 2016).

Reyes, V. Three models of transparency in ethnographic research: naming places, naming people and sharing data. Ethnography 19 , 204–226 (2018).

Robson, C. & McCartan, K. Real World Research (Wiley, 2016).

Bauer, M. W. & Gaskell, G. Qualitative Researching With Text, Image And Sound: A Practical Handbook (SAGE, 2000).

Lareau, A. Listening To People: A Practical Guide To Interviewing, Participant Observation, Data Analysis And Writing It All Up (Univ. Chicago Press, 2021).

Lincoln, Y. S. & Guba, E. G. Naturalistic Inquiry (Sage, 1985).

Jerolmack, C. & Khan, S. Talk is cheap. Sociol. Methods Res. 43 , 178–209 (2014).

Morse, J. M. Styles of collaboration in qualitative inquiry. Qual. Health Res. 18 , 3–4 (2008).

Lamont, M. et al. Getting Respect: Responding To Stigma And Discrimination In The United States, Brazil And Israel (Princeton Univ. Press, 2016).

Cheek, J. Researching collaboratively: implications for qualitative research and researchers. Qual. Health Res. 18 , 1599–1603 (2008).

Botha, L. Mixing methods as a process towards indigenous methodologies. Int. J. Soc. Res. Methodol. 14 , 313–325 (2011).

Howlett, M. Looking at the ‘field’ through a zoom lens: methodological reflections on conducting online research during a global pandemic. Qual. Res. https://doi.org/10.1177/1468794120985691 (2021).

Reñosa, M. D. C. et al. Selfie consents, remote rapport and Zoom debriefings: collecting qualitative data amid a pandemic in four resource-constrained settings. BMJ Glob. Health 6 , e004193 (2021).

Mwambari, D., Purdeková, A. & Bisoka, A. N. Covid-19 and research in conflict-affected contexts: distanced methods and the digitalisation of suffering. Qual. Res. https://doi.org/10.1177/1468794121999014 (2021).

Colom, A. Using WhatsApp for focus group discussions: ecological validity, inclusion and deliberation. Qual. Res. https://doi.org/10.1177/1468794120986074 (2021).

Kaufmann, K. & Peil, C. The mobile instant messaging interview (MIMI): using WhatsApp to enhance self-reporting and explore media usage in situ. Mob. Media Commun. 8 , 229–246 (2020).

Gibson, K. Bridging the digital divide: reflections on using WhatsApp instant messenger interviews in youth research. Qual. Res. Psychol. 19 , 611–631 (2020).

Lawrence, L. Conducting cross-cultural qualitative interviews with mainland Chinese participants during COVID: lessons from the field. Qual. Res. https://doi.org/10.1177/1468794120974157 (2020).

Ponzoni, E. Windows of understanding: broadening access to knowledge production through participatory action research. Qual. Res. 16 , 557–574 (2016).

Kong, T. S. Gay and grey: participatory action research in Hong Kong. Qual. Res. 18 , 257–272 (2018).

Marzi, S. Participatory video from a distance: co-producing knowledge during the COVID-19 pandemic using smartphones. Qual. Res. https://doi.org/10.1177/14687941211038171 (2021).

Kvale, S. & Brinkmann, S. InterViews: Learning The Craft Of Qualitative Research Interviewing (Sage, 2008).

Rao, A. H. The ideal job-seeker norm: unemployment and marital privileges in the professional middle-class. J. Marriage Fam. 83 , 1038–1057 (2021).

Rivera, L. A. Ivies, extracurriculars and exclusion: elite employers’ use of educational credentials. Res. Soc. Stratif. Mobil. 29 , 71–90 (2011).

Acknowledgements

The authors are grateful to the MY421 team and students for prompting how best to frame and communicate issues pertinent to in-depth interview studies.

Author information

Authors and Affiliations

Department of Methodology, London School of Economics, London, UK

Eleanor Knott, Aliya Hamid Rao, Kate Summers & Chana Teeger

Contributions

The authors contributed equally to all aspects of the article.

Corresponding author

Correspondence to Eleanor Knott .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Methods Primers thanks Jonathan Potter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary

Topic guide: A pre-written interview outline for a semi-structured interview that provides both a topic structure and the ability to adapt flexibly to the content and context of the interview and the interaction between the interviewer and participant. Others may refer to the topic guide as an interview protocol.

Sample: Here we refer to the participants that take part in the study as the sample. Other researchers may refer to the participants as a participant group or dataset.

Stratified sampling: Dividing a population into smaller groups based on particular characteristics, for example, age or gender, and then sampling randomly within each group.

Purposive sampling: A sampling method where the guiding logic when deciding who to recruit is to achieve the most relevant participants for the research topic, in terms of being rich in information or insights.

Snowball sampling: Researchers ask participants to introduce the researcher to others who meet the study’s inclusion criteria.

Quota sampling: Similar to stratified sampling, but participants are not necessarily randomly selected. Instead, the researcher determines how many people from each category of participants should be recruited. Recruitment can happen via snowball or purposive sampling.

Thematic analysis: A method for developing, analysing and interpreting patterns across data by coding in order to develop themes.

Discourse analysis: An approach that interrogates the explicit, implicit and taken-for-granted dimensions of language as well as the contexts in which it is articulated to unpack its purposes and effects.

Denaturalized transcription: A form of transcription that simplifies what has been said by removing certain verbal and non-verbal details that add no further meaning, such as ‘ums and ahs’ and false starts.

Deductive approach: The analytic framework, theoretical approach and, often, hypotheses are developed prior to examining the data and then applied to the dataset.

Inductive approach: The analytic framework and theoretical approach are developed from analysing the data.

Abductive approach: An approach that combines deductive and inductive components to work recursively by going back and forth between data and existing theoretical frameworks (also described as an iterative approach). This approach is increasingly recognized not only as more realistic but also as a more desirable third alternative to the traditional inductive versus deductive binary choice.

Cultural capital: A theoretical apparatus that emphasizes the role of cultural processes and capital in (intergenerational) social reproduction.
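Several of the sampling strategies defined above are mechanical enough to sketch in code. The following is a minimal illustration of stratified sampling (random draws within each group), using a hypothetical participant roster; the field names and per-group target are invented for illustration.

```python
import random

def stratified_sample(population, key, per_group, seed=0):
    """Draw participants at random within each stratum.

    `population` is a list of dicts; `key` names the characteristic
    (e.g., an age band) used to divide people into groups.
    """
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_group, len(members))))
    return sample

# Hypothetical roster: twelve people across three age bands.
roster = [{"id": i, "age_band": band}
          for i, band in enumerate(["18-30", "31-50", "51+"] * 4)]

# Two random draws per band.
picked = stratified_sample(roster, key="age_band", per_group=2)
```

Quota sampling would fix the same per-group counts, but recruitment within each group could proceed by snowball or purposive means rather than random selection.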

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article

Knott, E., Rao, A. H., Summers, K. et al. Interviews in the social sciences. Nat. Rev. Methods Primers 2, 73 (2022). https://doi.org/10.1038/s43586-022-00150-6

Accepted: 14 July 2022

Published: 15 September 2022

DOI: https://doi.org/10.1038/s43586-022-00150-6

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative



Structured Interviews
Structured interviews ask the same questions of all participants. This means that the interviewer sticks to the same wording and sequence for each individual they interview, even asking predetermined follow-up questions. The questions in a structured interview should still be open-ended, even if they are predetermined. This allows participants to still freely articulate their answers based on personal experiences and beliefs. Structured interviews make use of an “interview guide” in which all the questions are written out in advance. You can learn how to create one in the “Creating an Interview Guide” section.

The primary advantage of structured interviews is that they allow the interview analysis process to move much faster. Predetermined questions yield data that is easily comparable across participants. Structured interviews also reduce bias when several interviewers are involved, since each researcher asks exactly the same questions, worded in the same way.

However, because of their rigid nature, structured interviews might not give you the entire picture. Predetermined questions can prevent the interviewer from fully exploring a new topic as it comes up. Keep in mind that although the questions are all the same, the participants are not: different participants may interpret the same question differently, which can produce inconsistent data.

For Example …

Elite universities are launching points for a wide variety of meaningful careers. Yet, year after year at the most selective universities, nearly half of graduating seniors head to a surprisingly narrow band of professional options. To understand why graduates “funnel” into the same consulting, finance, and tech fields, Amy Binder and colleagues interviewed more than 50 students and recent alumni from Harvard and Stanford Universities. Structured interviews made sense here because they ensured the researchers asked each participant the same questions about family background, choosing a college, academic major, careers, and help from their university in thinking about careers. This spectrum of questions gave Professor Binder coverage of students at all points in the job search process and helped identify how students developed their career aspirations (Binder et al., 2015).

The pros and cons of structured interviews are laid out in the chart below:

| Pros | Cons |
| --- | --- |
| Easier to conduct if you have limited time and resources | Little flexibility |
| Can use a larger sample | May leave out important personal components |
| Reduce bias when working with multiple interviewers | Interview is guided by the researcher, not the participant |
| Easily comparable data | Participants may interpret questions differently than each other |
| Interview and analysis processes are quicker | |
| Interviewer can be novice | |


Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” ( Lune and Berg 2018 ) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” ( Kvale 2007 ). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

Figure 11.1. A continuum of interviewing questions: unstructured conversations, semistructured interview, structured interview, survey questions.

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide (semistructured interview). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts. For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.
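When preparing such a guide, it can help to keep the question areas and their prompts in one structured object, so the order and follow-ups are easy to see at a glance. A minimal sketch: the first question and its prompts come from the example above, while the second question is a hypothetical placeholder.

```python
# A semistructured interview guide as a simple data structure: each
# entry holds one question area plus optional follow-up prompts.
interview_guide = [
    {
        "question": "What was it like growing up in Eastern Oregon?",
        "prompts": [
            "Did you live in a rural area?",
            "What kind of high school did you attend?",
        ],
    },
    {
        # Hypothetical second question area, for illustration only.
        "question": "How did you decide where to live as an adult?",
        "prompts": [],
    },
]

def format_guide(guide):
    """Render the guide as printable text, with prompts indented."""
    lines = []
    for i, item in enumerate(guide, start=1):
        lines.append(f"{i}. {item['question']}")
        lines.extend(f"   - {p}" for p in item["prompts"])
    return "\n".join(lines)

print(format_guide(interview_guide))
```

Keeping the guide in one place like this makes it easy to print a clean copy for each interview and to revise prompts between sessions.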

Standardized questionnaire with open responses (structured interview). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.
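The quantitative turn described above can be made concrete with a toy tally. The response data below is invented purely for illustration; it shows how closed responses get reduced to group-level proportions rather than treated as conversation.

```python
from collections import Counter

# Invented closed-response data: each respondent picked one of the
# five predetermined options for the class-and-career question.
responses = {
    "working_class": ["alienation", "debt", "alienation", "social networks",
                      "alienation", "debt", "alienation", "family doesn't understand"],
    "middle_class": ["social networks", "debt", "alienation", "type of grad program",
                     "social networks", "debt", "alienation", "social networks"],
}

def option_share(answers, option):
    """Proportion of a group's respondents who chose a given option."""
    return Counter(answers)[option] / len(answers)

wc = option_share(responses["working_class"], "alienation")  # 4 of 8
mc = option_share(responses["middle_class"], "alienation")   # 2 of 8
# With these invented numbers, working-class respondents mention
# alienation twice as often -- the kind of group-level pattern a
# quantitative analysis would report.
```

Note that nothing in this computation can surprise you the way an open-ended answer can: every response was constrained to the predetermined menu before analysis began.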

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them, not feel like they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion : As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Type of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions , in contrast, seek to get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone “How do you feel about X” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective register. This might be a specific follow-up to an experience and behavior question —for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion , Erin Cech ( 2021 ) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton ( 2010 ) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question if a total stranger asked it? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of it.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” ( Patton 2002:374 ). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, so that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if you are at the end of the allotted time. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
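For researchers who would rather script this bookkeeping than maintain it by hand in Excel, the same scheme is easy to reproduce in code. Here is a minimal sketch, assuming hypothetical field names and file name (nothing here is prescribed by any particular IRB); the point is simply that the spreadsheet rows carry only the interview number and demographics, never the respondent's name:

```python
import csv

def interview_id(n, prefix="INT"):
    """Zero-padded identifier, e.g. INT#001.

    Three digits allows up to 999 interviews without resetting
    the numbering system (see footnote on numbering, above).
    """
    return f"{prefix}#{n:03d}"

# Demographic data keyed by interview number only. The respondent's
# name appears nowhere in this file; the only link between name and
# number is the signed consent form, stored separately under lock.
records = [
    {"id": interview_id(1), "race": "White", "gender": "male", "party": "Republican"},
    {"id": interview_id(2), "race": "White", "gender": "male", "party": "Democrat"},
]

with open("interview_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "race", "gender", "party"])
    writer.writeheader()
    writer.writerows(records)
```

The same `prefix` argument accommodates other data types (e.g., `interview_id(1, prefix="FG")` for a first focus group), mirroring the numbering convention described in the footnotes.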

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This does not mean that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this point, you will need to read your transcripts as you go. More about this in chapters 18 and 19.

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People , she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover ( Lareau 2021:93–103 ).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (Non Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other: (Please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Gender queer/ Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized.  Note the sequence of questions.  That second question often elicits an entire life history , answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class.  As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology and we are interested in hearing from those who so identify.  Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you have volunteered to be part of this study. What does it mean to you to be first-gen or working class?  Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? ( prompts: what did your parent(s) do for a living?  What kind of high school did you attend?)
  • Has this identity been salient to your experience? (how? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews . London: SAGE. An easy-to-follow guide to conducting and analyzing interviews, written by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some considerations in analyzing interviews.” Qualitative Research 1(3):303–323. Argues for the importance of “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews; see also semi-structured interview, structured interview, and unstructured interview.

A document listing key questions and question areas for use during an interview. It is used most often for semi-structured interviews. A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview. Most IRBs require the inclusion of the interview guide in applications for review. See also interview and semi-structured interview.

A data-collection method that relies on casual, conversational, and informal interviewing. Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously. This is a common data-collection technique among ethnographers. Compare to the semi-structured or in-depth interview.

A form of interview that follows a standard guide of questions, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview. The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences. It is sometimes referred to as an “in-depth” interview. See also interview and interview guide.

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and in Grounded Theory.

Follow-up questions used in a semi-structured interview to elicit further elaboration. Suggested prompts can be included in the interview guide, to be deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects. The questions are the kind that elicit short answers, and the data is more “informative” than probing. This format is often used in mixed-methods studies, accompanying a survey instrument. Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead. See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Research-Methodology

Interviews can be defined as a qualitative research technique which involves “conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program or situation.” [1]

There are three different formats of interviews: structured, semi-structured and unstructured.

Structured interviews consist of a series of pre-determined questions that all interviewees answer in the same order. Data analysis tends to be more straightforward because the researcher can compare and contrast the different answers given to the same questions.

Unstructured interviews are usually the least reliable from a research viewpoint, because no questions are prepared prior to the interview and data collection is conducted in an informal manner. Unstructured interviews can be associated with a high level of bias, and comparing the answers given by different respondents tends to be difficult due to differences in the formulation of questions.

Semi-structured interviews combine components of both structured and unstructured interviews. In semi-structured interviews, the interviewer prepares the same set of questions to be answered by all interviewees. At the same time, additional questions might be asked during the interview to clarify and/or further expand on certain issues.

Advantages of interviews include the possibility of collecting detailed information about research questions. Moreover, in this type of primary data collection the researcher has direct control over the flow of the process and has a chance to clarify certain issues during it if needed. Disadvantages, on the other hand, include greater time requirements and difficulties associated with arranging a suitable time with prospective sample group members to conduct interviews.

When conducting interviews you should keep an open mind and refrain from displaying disagreement in any form when viewpoints expressed by interviewees contradict your own ideas. Moreover, the timing and environment of interviews need to be arranged effectively. Specifically, interviews need to be conducted in a relaxed environment, free of any form of pressure on interviewees whatsoever.

Respected scholars warn that “in conducting an interview the interviewer should attempt to create a friendly, non-threatening atmosphere. Much as one does with a cover letter, the interviewer should give a brief, casual introduction to the study; stress the importance of the person’s participation; and assure anonymity, or at least confidentiality, when possible.” [2]

There is a risk of interviewee bias during the primary data collection process and this would seriously compromise the validity of the project findings. Some interviewer bias can be avoided by ensuring that the interviewer does not overreact to responses of the interviewee. Other steps that can be taken to help avoid or reduce interviewer bias include having the interviewer dress inconspicuously and appropriately for the environment and holding the interview in a private setting.  [3]

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: a step-by-step assistance, offers practical assistance to complete a dissertation with minimum or no stress. The e-book covers all stages of writing a dissertation, starting from the selection of the research area to submitting the completed version of the work within the deadline. John Dudovskiy

Interviews

[1] Boyce, C. & Neale, P. (2006) “Conducting in-depth Interviews: A Guide for Designing and Conducting In-Depth Interviews”, Pathfinder International Tool Series

[2] Connaway, L.S. & Powell, R.P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO

[3] Connaway, L.S. & Powell, R.P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO


The Complete Guide to Structured Interviews

Want to hire top talent and reliably pick the right person for the job? Elevate your hiring process with structured interviews.  


Interviews are one of the most common ways that companies evaluate job candidates. They’re also one of the most powerful, if done correctly.

For many employers, the interview is that key stage of the recruitment process that solidifies their decision about who to hire.

All this to say: job interviews are a high-stakes game. 

An effective, well-planned, and objective interview process can lead to better decision-making, along with all the benefits that come with hiring great people. 

An ineffective, poorly planned, and biased interview process can do the opposite, leading to new hires that are not a good fit and ultimately leading to higher employee turnover and weak performance.  

But how can you tell the difference between an ineffective interview process and an effective one? Fortunately, one of the simplest and most data-backed ways to increase the effectiveness of your interviews is to conduct structured interviews.

Extensive research has demonstrated that structured interviews are two times more predictive of job performance than unstructured interviews. If you could improve your hiring decision-making by 2x, would you?


In this guide, we will discuss everything you need to know to build highly effective structured interviews with confidence.  

Section 1: What are Structured Interviews?

Section 2: What is the Difference Between a Structured Interview and a Traditional Interview?

Section 3: The Benefits of Structured Interviews

Section 4: Common Myths about Structured Interviews

Section 5: How to Implement Structured Interviews

Section 6: Sample Response Scoring Rubric for a Sales Manager Position

Section 7: How to Train Your Staff on Structured Interviews

Section 8: Best Practices for Conducting a Structured Job Interview

Section 9: Tools for Structured Interviews

Section 10: Actions You Can Take Today

What are Structured Interviews?

In a structured interview, each candidate is asked the same list of job-relevant questions, in the same order, by the same interviewers, and each candidate’s responses are scored with the same pre-determined rating scale. In other words, structure is built into the way the interview is both conducted and scored.

At its core, a structured interview contains two key features:  

  • A common set of standardized interview questions measuring job-related qualities
  • A standardized scoring system against which all candidates are evaluated


It is this very structure that makes structured interviews so much more effective than unstructured ones. The structure makes it easier to compare candidates fairly based on their job-related skills, leading to a more targeted, less biased, and ultimately more predictive interview. 

After all, a structured interview is the single strongest predictor of a new hire’s future job performance.

What is the Difference Between a Structured Interview and a Traditional Interview?

The main difference between structured and traditional interviews is that a traditional interview is more conversational and lacks specific direction, while a structured interview uses a pre-determined set of job-relevant questions and provides a rating scale to evaluate candidate answers.  

Traditional interviews are also known as unstructured interviews.

Unstructured interviews are more conversational, where the goal is to get to know the person and ask questions about the role as you go. 

The questions asked may be tailored to the specific candidate based on their background or experience, and because this style of interview is so relaxed, no two interviews for the same job opening are directly comparable.

Let’s take a side-by-side look at these two interview styles:  

Unstructured vs. Structured Interviews

Unstructured Interviews

  • Interviews are more like free-flowing conversations.
  • Only a few questions (or even none) are chosen in advance of each interview, and candidates may not all be given the same questions.
  • The job relevancy of each question is not evaluated.
  • Candidates aren’t always interviewed by the same evaluators.
  • Interviewers aren’t typically given a rubric to score candidate answers.
  • Evaluation can happen hours or days after the interview has been conducted.

Structured Interviews

  • Interview questions are determined in advance.  
  • Questions are asked in the same order to each candidate.
  • All questions an interviewer asks are relevant to the role.
  • The same interviewers review all candidates.
  • Applicant responses are evaluated with the same rating scale.
  • Evaluation of a candidate happens in real-time.
  • Unstructured: A free-flowing conversation between the interviewers and the candidate. Structured: The interviewer directs the discussion.
  • Unstructured: The interviewer may engage in rapport-building conversation at the start of the interview to “break the ice.” Structured: “Small talk” and candidate questions are put on hold until the interviewers’ assessments are complete.
  • Unstructured: Interviewers have the freedom to ask the questions they prefer to ask. Structured: Pre-defined questions are prepared based on a job analysis.
  • Unstructured: Interviewers can vary the questions from candidate to candidate. Structured: Every candidate is asked the same questions, in the same order.
  • Unstructured: The interviewer may be looking for a good culture fit. Structured: The interviewer is looking for the competencies necessary to succeed in the role.
  • Unstructured: Interview questions may cover strengths, weaknesses, work experience, hobbies, and interests. Structured: Interviewers ask behavioral or situational questions that require the candidate to draw upon their experiences to convey competencies.
  • Unstructured: Interviewers may vary for each candidate. Structured: The interviewers are consistent across candidates.
  • Unstructured: What constitutes a good response may be up to each interviewer’s opinion and impression. Structured: Interviewers score each response against an anchored rating scale; interview guides help each interviewer score responses objectively against pre-determined criteria.

The Drawbacks of Unstructured Interviews

Just about every company conducts interviews, but many use unstructured or semi-structured formats. This represents a missed opportunity to maximize one of the most valuable touchpoints in the hiring process.

In a recent survey, we found that only 24% of companies were conducting fully structured interviews, which include standardized questions and defined rating scales. 57% were conducting semi-structured interviews, while 17% were conducting conversational, unstructured interviews.

There are a number of drawbacks to not using a structured interviewing process.  

First, because the list of questions may vary from candidate to candidate, there tends to be a lot of inconsistency in the information that evaluators gather from each interview. 

Plus, traditional interviews often lack specific “grading guides” to evaluate candidate responses, so different interviewers may grade the same response with wildly different expectations. 

Additionally, since questions aren’t pre-screened to ensure their relevancy to the role, a lot of informational “noise” gets added into the mix, which can cloud an evaluator’s perception of a candidate’s interview performance.

There are two main consequences that result from these unstructured interviews:

  • These practices can introduce bias into your interview process – something that should be minimized as much as possible during hiring.  
  • Candidates aren’t given the same interview experience, which causes an inconsistency in evaluation, meaning that you could be overlooking the stronger applicant and making poorer hiring decisions.  

In the end, though traditional interviews have remained the more popular interview style over the decades, they are not the most effective way to interview.

What is a semi-structured interview?  

A semi-structured interview is an interview style where there is some sense of consistent structure, but it lacks the reliability and effectiveness of a fully structured interview.  

They’re also the most common type of interview that organizations conduct these days, according to recent survey data.

Semi-structured interviews are a step above traditional conversational interviews when it comes to predicting job success, but they’re still no replacement for a true structured interview.

And since the degree of structure can vary so widely, “semi-structured” interviews are often no more predictive of job success than completely unstructured interviews.

If you want your interview process to consistently and successfully find the best person in your applicant pool for the job, fully structured interviews are the best way to achieve it.    

SECTION 3

The Benefits of Structured Interviews

Structured interviews take what has often been considered a more subjective part of the hiring process and shape it into one of the most powerful tools in your hiring toolkit. 

There are quite a few reasons why structured interviews are twice as predictive of job success as their unstructured cousin.

Structured interviews provide consistent evaluation.

Because every candidate is asked the same questions, in the same order, by the same interviewers, and graded on the same rating scale, it’s easy to directly compare candidates and make a confident decision about who will thrive at your organization.

Structured interviews help reduce bias in the hiring process.

A structured interview is a fair interview. 

By cutting back on the small talk until after the assessment part of the interview is completed, interviewers are less likely to be swayed by things that are irrelevant to the job, like a candidate’s hobbies, where they grew up, or other less predictive details.

Featured Resource: Reduce Bias with Structured Interviews  

Structured interviews are highly predictive of job performance.  

Since the primary goal of a job interview is to identify which candidates will succeed in the role, good interviews must be both predictive and reliable. 

Decades of research has established that structured interviews are the single strongest predictor of future job success.

Structured interviews ultimately lead to better hires.

Structured interviews allow evaluators to stay laser-focused during an interview.

This style of interview focuses on finding the individual who most strongly matches the competencies required to succeed after they are hired, not just during the interview itself.

Structured interviews are more efficient.

Once a structured interview process is in place, structured interviews actually make the entire interview process faster and more efficient. 

When job-related interview questions are designed in advance, and hiring managers are provided with those questions along with standardized rating scales, the entire process moves more smoothly. 

Individual hiring managers and interviewers don’t have to spend time coming up with their own questions, and they don’t have to spend too much time thinking about how to evaluate each candidate.  

Structured interviews improve the legal defensibility of your hiring process.

Sticking to a set process for interviewing can provide added legal protection. 

Employers are responsible for maintaining a fair hiring process, and using structured interviews provides both strong evidence of equitable practices and makes your hiring process more legally defensible if a candidate feels they were unfairly rejected.  

Structured interviews can help you achieve your DEI goals.  

Structured interviews are so effective at reducing bias that adding them increases the likelihood that candidates from marginalized or minority backgrounds are able to move forward in your hiring process. 

For example, by reducing the amount of pre-interview chitchat where an interviewer may ask what school a candidate attended (and form an opinion based on their answer), a candidate’s pedigree is less likely to be a strong determiner of the hiring outcome.

Structured interviews ensure that you’re focusing the interview on what really matters – how well the candidate can handle the job, regardless of their background.

Structured interviewing can supercharge your hiring process with predictive power, making it more effective at identifying top talent that will succeed on the job. 

An overhaul of your current interviewing process now can result in significant positive impacts to your business in the long run.  

Featured Resource: 5 Biggest Benefits of Structured Interviews

SECTION 4

Common Myths about Structured Interviews

If structured interviews are so effective, why aren’t more organizations conducting them? Here are a few prevailing myths and misconceptions about structured interviews that we think are worth busting:  

MYTH 1: Candidates dislike structured interviews.

Many interviewers are resistant to adopting structured interviews because they assume that candidates won’t like them, or that they will somehow damage the candidate experience.

This is a common misconception, based on the misunderstanding that a structured interview is going to be inherently more bland or too restrictive compared to a traditional conversational interview.  

Busted: Research consistently shows that candidates actually prefer structured interviews over conversational, unstructured interviews.

A recent study from Lighthouse Research & Advisory found that 7 in 10 candidates prefer interviews where questions don’t vary from candidate to candidate.


Most job candidates want to be evaluated fairly, on the same footing as everyone else. Candidates understand that by being asked the same questions, they are given equal opportunity to present themselves as the best person for the job. 

Structured interviews do just that.

Plus, most candidates won't even notice that an employer is conducting a structured interview. All they will see is an interviewer who is prepared, professional, and focused on job-related questions.

MYTH 2: Structured interviews are too inflexible for the interviewer to get the information they want.  

Many interviewers feel that being required to adhere to a tight structure will limit their ability to ask questions as they would naturally arise. 

There’s a common misconception that adding structure makes an interview feel robotic to conduct.

Busted: Interviewing is still a human-centered practice, and we believe it should stay that way. You can formulate questions that highlight your employee experience and work culture, plus you still have the opportunity to inject more personality when you open the floor for candidates to ask their questions.  

Structured interviews don’t bar interviewers and candidates from asking more questions. Questions can (and should!) be asked after the evaluation portion of the interview is completed.

Candidates are free to ask any questions they have about the role or life at your organization, and you’re free to get to know them more – without introducing potentially harmful biases that could impact your hiring decision.

MYTH 3: Structured interviews lack warmth and make it hard to connect on a personal level with candidates.  

Job interviews have often been considered a way to get to know the person behind the resume, and many interviewers have seen them as a tool to determine culture fit or if they will get along with a new hire.  

Busted: There’s plenty of opportunity to connect with candidates on a human level during a structured interview.  

Even though the questions are all picked for their ability to determine whether a candidate has the job-relevant competencies to succeed on the job, an interview is still your chance to impress the candidate. 

You can share information about what it’s like to work at your organization, what the day-to-day of the job is like, and build rapport with candidates.  

If you’re still worried that candidates may be put off by structured interviewing techniques, consider notifying them ahead of time so they aren’t left worried about their performance in the interview. 

This upfront communication is a great way to keep the candidate engaged while also being direct about what to expect from the interview process.

MYTH 4: Structured interviews require more research and planning than traditional interviews.

Many interviewers think conducting a structured interview requires so much pre-planning and preparation that they aren’t worth the time and effort to implement.

Busted: While more effort is necessary to get started with structured interviews, that level of effort isn’t required prior to every interview conducted. 

Once the question set has been decided, you’ll be sticking with it until there’s a reason to make adjustments (like a new responsibility added to the role you need to ask candidates about).  

Structured interviewing allows you to whip out the same reliably predictive question set, making your interview process more efficient in the long term.  

With a traditional interview, you’d still spend time and energy on questions: you’ll need to either brainstorm new ones every time you hire or just wing it during the interview. All that effort won’t produce the same consistent outcomes as structured interviews.

Once your question set and rating scale has been established, you’re ready to rock and roll! There’s no time wasted trying to find the “right” questions to ask (they’ve already been picked), and responses are evaluated in real-time, so reviewing candidates has never been faster (or more accurate!).

SECTION 5

How to Implement Structured Interviews  

Incorporating structured interviews into your organization’s hiring process isn’t (usually) a decision you can make alone. While the effort is typically led by a leader in HR or Talent Acquisition, many other stakeholders and managers will need to be on board with the change for there to be widespread benefits.

Because there is upfront effort in adopting structured interviews, you will likely need cross-functional buy-in across the talent acquisition team, department heads, and hiring managers to successfully launch structured interviewing at your organization.

If you’re not sure how to get others on board, make a pros and cons list of how your organization currently conducts interviews.

And start asking yourself questions: 

What is your company’s hiring success rate? 

How are retention numbers? 

What would the return on investment be for improving your hiring process? 

Does your organization have DEI goals for the coming year? 

What about improved legal defensibility? 

These are all strong angles you can leverage when making the case for moving away from traditional interviews. Set out the key metrics you want to see movement on by implementing structured interviewing.

Different roles in your organization each play a different part in making structured interviewing a reality. Let’s break down some of the additional effort required (and how it pays off in the end). 

And remember – the structure created to interview for a role can (and should!) be used whenever that role is being filled. 

If you’re interested in a proof-of-concept for structured interviewing, start with the role you either hire most frequently for or have the highest turnover in.  

Here's an example of how to set up a structured interview process for a single job: 

Structured Interview Process:

  • Ensure all stakeholders agree on what a strong candidate looks like for the role.
  • Conduct a thorough job analysis and develop relevant questions.
  • Create scoring rubrics and rating scales.
  • Reduce bias by having multiple interviewers.
  • Establish who the interviewers will be.
  • Pair structured interviews with other forms of objective candidate assessment.
  • Audit the process regularly.

See the steps in more detail below:

STEP 1: Ensure all stakeholders agree on what a strong candidate looks like for the role.

Everyone needs to be aligned on what qualities a good candidate possesses. This step is critical because it is the crux of the job analysis and the questions that get designed.

Additionally, we’d advise keeping the number of required qualities limited – no more than 5 or 6 – to keep the interview from focusing on less necessary components of the role, or making the interview process take too long.

STEP 2: Conduct a thorough job analysis and develop relevant questions.

In order to effectively design your structured interview, you need a clear understanding of the job and the specific knowledge, abilities, and skills necessary to succeed.  

Start by defining the specific requirements and objectives of the role, and then align with all members of the talent acquisition team on what criteria candidates must meet and how. From here, you can determine the key competencies you want to design your interview to measure.

Once you fully understand the job requirements and what success looks like in the role, you can craft job-relevant questions that get to the heart of a candidate’s abilities. Combine situational and behavioral questions to capture a multi-dimensional view of a person’s competency.  

STEP 3:  Create scoring rubrics and rating scales.

Once you have your set of questions, you need to create a standardized scoring rubric so all interviewers can evaluate each candidate’s responses effectively.  

Use a 5-point anchored rating scale and provide detailed descriptors for each point on the rating scale to help evaluators know what a good and poor answer looks like. This helps reduce bias by making sure that all evaluators don’t over- or undervalue a particular part of a candidate’s answer.  


This rubric also facilitates real-time evaluation, which reduces both bias (it’s harder to be objective about something as more time passes) and the amount of time interviewers need to spend evaluating candidates. 

Interviewers should rate each candidate's response with the anchored rating scale before moving on to the next question.

Encourage each interviewer to lock in their individual evaluations before meeting to discuss a candidate. This helps reduce the chance of groupthink, where people’s opinions are swayed by others in the group, which reduces the objectivity of the process.

STEP 4: Reduce bias by having multiple interviewers.

More interviewers means more data! Scores can be combined and averaged for a quick comparison of which candidate had the strongest interview performance.

Using multiple interviewers makes it less likely that the unintentional biases of one person inform the hiring decision. However, there are two important practices you should adhere to when using more than one interviewer:

  • Have the same set of interviewers evaluate each candidate for the role.  
  • Make sure to limit the amount of conversation interviewers have about a candidate prior to submitting their evaluations.

STEP 5:  Establish who the interviewers will be.  

Part of a structured process involves having the same set of interviewers evaluate the same candidates, whenever possible. 

This way, when you combine the scores together, the results will be based on the same set of evaluators, leading to a more accurate result and a quick comparison of which candidate had the strongest interview performance.

Using multiple interviewers makes it less likely that the biases of one person will influence the hiring decision.  
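Combining a panel's scores is then simple arithmetic. A minimal sketch, assuming the same three interviewers rated every candidate; the names and ratings below are purely illustrative:

```python
from statistics import mean

# Each candidate's 1-5 ratings from the same panel of three interviewers,
# already averaged across questions (values are made up for illustration).
panel_scores = {
    "Candidate A": [4, 5, 4],
    "Candidate B": [3, 4, 3],
}

# Average each candidate's ratings, then rank for a quick comparison.
averages = {name: mean(scores) for name, scores in panel_scores.items()}
best = max(averages, key=averages.get)
print(best)  # Candidate A
```

Because the evaluator set is held constant, the averages are directly comparable; if different panels rated different candidates, an easy or harsh rater could skew the ranking.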

STEP 6:  Pair structured interviews with other forms of objective candidate assessment.

Structured interviews might be the strongest predictor of future job success, but other objective methods of assessment – like personality and cognitive aptitude tests – will give you additional relevant data to use to find the best candidates in your applicant pool.

STEP 7:  Audit the process regularly.

Measure the success of your interview process by reviewing key metrics regularly. Are you moving the needle on metrics like onboarding time, retention, and engagement? Have you gathered candidate feedback on the interview process? 

Even the most well-built machines require regular fine-tuning to run at their best.

Strive for continuous improvement of your process. Done is better than perfect, but perfection is never fully achieved. Keep refining your interview process to better identify top performers and reduce bias.  

SECTION 6

Sample Response Scoring Rubric for a Sales Manager Position

How do I create a scoring rubric for a structured interview?

  • Specify a core competency of the role you’re hiring for.
  • Develop questions that explore a candidate’s capacity to meet that core requirement.
  • Determine what a good, thorough answer for each question addresses – this will set the high end of your anchored rating scale.
  • Identify what components of an answer separate an excellent from an insufficient response. Be specific! This will fill out the rest of your rating scale for a specific competency.  
  • Repeat this process for each of your core competencies to build out your scoring rubric for the interview.  
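In code, the rubric these steps produce can be represented as a simple mapping from each competency to its questions and scale anchors. The anchor wording below is an illustrative assumption, not a prescribed format:

```python
# One entry per core competency; each carries its questions and a
# 5-point anchored rating scale (1 = insufficient, 5 = excellent).
# Anchor descriptions are illustrative examples only.
rubric = {
    "Directing & Coordinating": {
        "questions": [
            "Tell me how you have connected your employer's goals "
            "with the individual goals of members of your team.",
        ],
        "anchors": {
            5: "Specific example; links team goals to business goals; cites outcomes.",
            3: "General approach described, but no concrete example.",
            1: "Cannot describe aligning team and company goals.",
        },
    },
}

# Sanity check: every competency has questions and anchors across the scale.
for competency, spec in rubric.items():
    assert spec["questions"] and {1, 3, 5} <= set(spec["anchors"])
    print(competency, "-", len(spec["questions"]), "question(s)")
```

Repeating the pattern for each core competency yields the full scoring rubric, and interviewers record one anchored score per question against it.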

Let’s look at an example:

If you’re hiring a sales manager, what qualities are most important to success in the role? The key competencies might include things like personnel management, business acumen, organization and planning, and selling.  

When creating your scoring rubric, you’ll use these essential competencies to create a framework for grading the responses to specific questions. 

For example, you can ask one behavioral and one situational question on a candidate’s ability to direct and coordinate with other employees.

Behavioral Question: Tell me how you have connected your employer’s goals with the individual goals of members of your team.

Situational Question: How would you adapt your managerial style if a member of your team was struggling to meet expectations?  

Then, after the candidate has responded, rate their answer using the 5-point anchored scale for the Directing & Coordinating competency. 

Sample scoring rubric for a sales manager role

By the end of your interview, you will have a clear understanding of how strongly a candidate matches the skills required for the role, and you can use their average scores to quickly compare top candidates and hire those most likely to succeed.  

Featured Resource: How to Design Structured Interview Questions

SECTION 7

How to Train Your Staff on Structured Interviews

The benefits of structured interviews won over the higher-ups – now it’s time to get everyone else involved in interviewing comfortable with the new process.  

Change management is important here: let everyone know what the expectations will be going forward and how they will need to prepare. 

Give hiring managers (along with any other evaluators) the opportunity to ask questions about structured interviewing – you can even send them this guide!

Share the value of structured interviewing with your hiring managers.

Instead of just announcing that change will be happening, share why your organization has decided to make the change. The benefits of conducting structured interviews far outweigh the investment required to set them up.  

Interviewing is a lot of work – most hiring managers have to juggle interviews along with the day-to-day duties of their job. 

Share how structured interviews will not only streamline the interview process, but also speed up evaluation and help them find their team’s next star performer.  

Create question banks for frequently hired roles first.

Reduce the workload required by starting with the most common role you hire for. This will allow your teams to hit the ground running with structured interviews and start seeing the impact they have.  

Conduct a mock structured interview.

Give people a chance to practice! Encourage interviewers to pair off and run through the interview for the role they’re scheduled to conduct. 

It allows each evaluator to get used to the flow of the questions, learn how to evaluate responses in real time, and get comfortable with the new interview format. 

SECTION 8

Best Practices for Conducting a Structured Job Interview

For any interviewers who are new to structured interviews, here are some best practices to maximize the effectiveness of your interviews.

Structured Interview Best Practices:

  • Review your interview materials ahead of the interview.
  • Take candidate experience into account.
  • Follow the established, standardized questions.
  • Use the detailed rating scale for each question’s response.
  • Evaluate candidate responses in real time.
  • Allow candidates time to ask their own questions.
  • Submit your scores before discussing interviews with coworkers.
  • Understand your biases.

1. Review your interview materials ahead of the interview.

Before you conduct an interview, it’s a good idea to look over the questions you’ll be asking and check out the rating scale.

2. Take candidate experience into account.

Inform candidates ahead of time about your interview process so they know what to expect. Open and upfront communication is at the core of a positive candidate experience.  

Remember: Candidates prefer structured interviews! This style of interviewing grants every candidate an equal opportunity to prove themselves, and candidates recognize that.  

3. Follow the established, standardized questions.

Resist the temptation to throw a few “fun” questions into the mix. While these might lighten the mood, they ultimately create noise that can prevent you from finding the candidate who is best for the job.

4. Use the detailed rating scale for each question’s response.

You will be provided with a detailed scoring guide to ensure that every evaluator is giving the same weight to each question. This guide also makes evaluation a quick, objective, and easy process. 

After each response, give the candidate a quick rating before moving on.  

5. Evaluate candidate responses in real time.

Make use of those rating scales! It’s important that evaluators rate a candidate’s response to one question before moving on to the next.

Provide ratings to HR as soon as possible after the interview is over. The candidate’s performance will be fresh in your mind and you’re likely to be most objective about their performance immediately after.

6. Allow candidates time to ask their own questions.

Once the standardized question set has finished and responses have been scored, open the floor to the candidate to ask questions. 

This is a great opportunity for you to highlight how great it is to work for your organization.  

7. Submit your scores before discussing interviews with coworkers.

Beat the bandwagon! By submitting your completed evaluation prior to conferring with other interviewers, you reduce the risk of their opinions influencing your own (or vice versa).  

8. Understand your biases.

Bias is a symptom of the human condition – every person has their own beliefs and experiences that shape the way they see the world and other people. 

Do your best to stay objective during interviews – the rating scale provided can help keep your bias in check.  

Want to brush up on the types of bias that can impact hiring? Review the 21 most common hiring biases before your next interview.  

FEATURED RESOURCE:  Structured Interviewing: Quick Start Guide

Want to learn more about how to conduct a structured job interview? Check out our quick start guide here. 

SECTION 9

Tools for Structured Interviews

Setting up a structured interview process may sound like a lot of work, but it doesn’t have to be! A wide range of software tools make it easy to both set up and conduct structured interviews.  

Interview Management Tools for Structured Interviews

In this section, we’ll provide an example of how interview tools can assist with structured interviews using Criteria’s Live Interviewing Tool.  

At Criteria, we know that structured interviews are one of the single best predictors of job performance, and yet many companies are still resistant to adopting a structured process. 

To learn more about why, we surveyed hiring professionals and found that some of the biggest hurdles preventing people from conducting fully structured interviews include things like:

  • Developing the process
  • Defining the interview questions
  • Defining the evaluation rubrics
  • Getting hiring managers to comply, and
  • Collecting and analyzing the results from multiple evaluators.

Bar chart showing the biggest hurdles when it comes to conducting fully structured interviews

Based on this data, Criteria’s team of engineers and I/O psychologists built an interviewing tool designed to tackle some of the biggest hurdles that are getting in the way of companies conducting structured interviews.  

The tool, called Live Interviewing , is a flexible interview assistant that you can use alongside any live interview, whether that interview is in-person, over video, or over the phone. It relies on technology to make the entire structured interviewing process easier for your whole team. 

For example, it makes it easy to:

  • Adopt: the tool is device-agnostic and can be used for any live interview.
  • Develop questions: you can choose questions from a pre-built library of interview questions or create your own to focus on your company’s job-related competencies.
  • Develop evaluation rubrics: quickly set up scoring rubrics that all your interviewers can use.
  • Train hiring managers: hiring managers get access to interview guides, where they can see all the questions and access the scoring rubrics that they can fill out in real time.
  • Collect and analyze the results: the platform combines all the results into one system of record, where you can view a final, objective score for each candidate and make more confident final decisions.

This is just one example of a tool you can use to better enable a structured process. But it’s a useful reminder that technology can provide the guiderails needed to ensure that your team conducts the most effective, predictive interviews.  

Interested in learning more about Live Interviewing?

Get in touch to request a demo.

REQUEST A DEMO

Tools for Asynchronous, Pre-recorded Structured Interviews

Asynchronous, or pre-recorded, video interviewing platforms allow you to pre-record questions for candidates to answer on their own time. 

They’re a perfect tool for structured interviews because all you need to do is upload the job-relevant questions selected during the job analysis and you’re good to go!  

Many asynchronous tools even allow for real-time rating of recorded candidate responses to each question, making evaluation a breeze.  

An asynchronous interview tool allows you to standardize your virtual interview process with ease. While tools like Zoom or Teams are convenient for conducting virtual interviews, they aren’t designed to facilitate a great interview experience.  

Using a dedicated video interview tool will unlock features that both standardize your interview process (a critical component of any structured interview) and even reduce the risk of bias influencing your hiring decisions.

This type of virtual interviewing tool automatically locks in the structure of the interview: every candidate is given the same prompts to respond to in the same order, for the same amount of time. 

When evaluators go to review a candidate’s interview, the scoring rubric is baked into the UI, making evaluation a breeze.

For example, Criteria’s asynchronous Video Interviewing tool enables you to:  

  • Choose from a pre-filmed library of standardized questions
  • Easily set up standardized scoring rubrics for each question
  • Create immersive, scenario-based assessments and situational judgment tests
  • Provide real-life job previews and scenarios  
  • Collect responses in one system of record
  • Collaborate and share feedback with other interviewers  
  • Reduce bias and increase diversity with features that disguise names and voices

Asynchronous video interviewing tools are convenient for candidates and evaluators alike, allowing both sides of the table to participate in the interview at a time and place that works best for them. 

The convenience of asynchronous video interviewing is unparalleled for evaluators. Several candidates can be reviewed and evaluated in a fraction of the time it takes to conduct a single interview.

Interested in learning more about Video Interviewing?

Applicant Tracking Systems  

You can use your ATS to keep track of interview performance and then directly compare candidates’ interview scores with one another. 

These systems will help you manage the data your new interview process produces and make sure it stays visible to your internal decision makers.  

SECTION 10

Actions You Can Take Today

Even if you can’t yet make the switch to fully structured interviews at scale, what incremental changes can you make this quarter to have a positive impact on your interview process? 

If you’re interested in structured interviewing but still not sure where to begin, that’s okay!  

Here are some simple ways you can add structure to your interview process and start seeing the benefits for your organization.

Review your current interview process.

Where is your interview process right now? And where do you want it to be?  

Simple changes, like sending all evaluators an email with a set of suggested questions in advance of the interview, can help make a difference.  

Start small.

It doesn’t have to be all-or-nothing.  

Start with the role you interview for the most, or just encourage hiring managers to use the same questions with each candidate. You can use this single role as a test case for the widespread adoption of structured interviewing.  

Remember that candidates care most about fairness.

Don’t worry that structured interviews will harm the candidate experience. 

Candidates prefer them over traditional interviews because they give every candidate the same opportunity to demonstrate their strengths.  

Once structured interviews have been implemented, they actually speed up your candidate evaluation process. 

Structured interviews can help you make more accurate hiring decisions more quickly, getting your top talent out of the job market (and competitors’ hiring funnels) faster.  

Lean on technology.

The right tool makes it easy to get up and running with structured interviewing. It takes the guesswork out of the process and enables you to quickly adjust your hiring procedure.  

Choose from a robust collection of interview questions to build a job-relevant and effective interview. Use pre-recorded interview questions to ensure every candidate is given an identical opportunity to prove themselves. 

Submit evaluations in real time to avoid biases. Dedicated structured interviewing tech allows you to make the switch with ease.  

Want to learn more about Criteria’s interviewing tools? Contact us today to learn more about the affordable and effective tools at your disposal.

Final Thoughts

Structured interviews are one of the simplest and most powerful ways that organizations can start making better hiring decisions. 

They reduce hiring bias, improve candidate experience, and improve hiring outcomes, which in turn leads to better company performance, higher employee retention, and all the incredible benefits that come along with that.  

While there is effort involved in making the jump to fully structured interviews, adopting them as a part of your hiring process will result in a fairer, more predictive, and more efficient hiring process that allows you to identify top talent.    

BACK TO TOP ^

Qualitative Interviewing

  • Reference work entry
  • First Online: 13 January 2019

Sally Nathan, Christy Newman & Kari Lancaster

Qualitative interviewing is a foundational method in qualitative research and is widely used in health research and the social sciences. Both qualitative semi-structured and in-depth unstructured interviews use verbal communication, mostly in face-to-face interactions, to collect data about the attitudes, beliefs, and experiences of participants. Interviews are an accessible, often affordable, and effective method to understand the socially situated world of research participants. The approach is typically informed by an interpretive framework where the data collected is not viewed as evidence of the truth or reality of a situation or experience but rather a context-bound subjective insight from the participants. The researcher needs to be open to new insights and to privilege the participant’s experience in data collection. The data from qualitative interviews is not generalizable, but its exploratory nature permits the collection of rich data which can answer questions about which little is already known. This chapter introduces the reader to qualitative interviewing, the range of traditions within which interviewing is utilized as a method, and highlights the advantages and some of the challenges and misconceptions in its application. The chapter also provides practical guidance on planning and conducting interview studies. Three case examples are presented to highlight the benefits and risks in the use of interviewing with different participants, providing situated insights as well as advice about how to go about learning to interview if you are a novice.



Author information

Authors and affiliations.

School of Public Health and Community Medicine, Faculty of Medicine, UNSW, Sydney, NSW, Australia

Sally Nathan

Centre for Social Research in Health, Faculty of Arts and Social Sciences, UNSW, Sydney, NSW, Australia

Christy Newman & Kari Lancaster

Corresponding author

Correspondence to Sally Nathan .

Editor information

Editors and affiliations.

School of Science and Health, Western Sydney University, Penrith, NSW, Australia

Pranee Liamputtong

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry.

Nathan, S., Newman, C., Lancaster, K. (2019). Qualitative Interviewing. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_77

DOI : https://doi.org/10.1007/978-981-10-5251-4_77

Published : 13 January 2019

Publisher Name : Springer, Singapore

Print ISBN : 978-981-10-5250-7

Online ISBN : 978-981-10-5251-4


Research Methods: Structured interview techniques

Nancy Hughes


Interviews are a great method for gathering first-hand accounts of people’s experiences, perceptions and attitudes. They can vary in structure from a guided discussion based on a framework of topics through to a fixed list of scripted questions. The flexibility they offer can lure researchers into a false sense of security, with interviews often being seen as the easy option. However, carrying out an effective interview takes planning and skill. 

One of the most commonly used techniques is the semi-structured interview, where the researcher plans out a range of questions and follow-up prompts to guide participants through a series of topics. However, there are alternatives which provide more structure and support the researcher's focus on gathering specific insights. 

This blog gives an overview of three of these alternative techniques for conducting an interview.

Critical Incident Technique

This technique was the focus of the first in our Research Methods series. To summarise, the Critical Incident Technique (CIT) is a great technique for getting participants to explore their experiences of different situations or events. It involves asking the participant to identify and describe up to three positive and three negative first-hand experiences.

Vignettes

Vignettes are essentially stories about individuals and situations, which participants are asked to consider and reflect upon. Usually they are hypothetical and enable researchers to explore what a participant might do, feel or think in a given situation. Depending on the topics being explored, the vignette itself can be presented as plain text, comic strips, storyboards or videos. Being hypothetical in nature, vignettes allow participants to respond in very open and unreserved ways, detailing their genuine thoughts and opinions.

Vignettes offer a great way to explore more sensitive or challenging situations.

One drawback of vignettes is that, because they don't refer to real events, participants sometimes feel unable to make a decision or offer an explanation and can resort to unhelpful ‘it depends’ responses. It is also important to recognise that what participants say they would do and what they would actually do if faced with a situation can be very different. 

Advantages:

  • Allows you to explore situations that participants don't have direct experience of
  • Carefully targeting the content in the vignette helps participants focus on the specific aspects of the situation you are interested in

Disadvantages:

  • Participants can struggle to accurately respond to hypothetical situations
  • Poorly written vignettes can be leading and fail to elicit meaningful responses
  • Lots of effort required upfront to research, design and create the vignette itself

Repertory Grid Technique

The technique, also referred to as ‘triading’, is based on George Kelly’s personal construct theory, which suggests people make sense of the world through their own subjective classifications or ‘personal constructs’. Essentially, it works by either the researcher or participant selecting five to ten products or services from a particular domain with which the participant is familiar. These may be written down or perhaps offered as picture cards. The researcher then asks the participant to select three at a time (a ‘triad’) and explain in what ways two are similar and yet different from the third, repeating the process with a different three entities until the researcher feels all new insights have been expressed. 

This technique is versatile as it allows researchers to elicit participants’ inner motivations and beliefs about a diversity of discrete entities (e.g. people, objects and events), whilst minimising researcher bias or influence.

Repertory Grid is particularly effective for comparing related products and services such as a range of mobile phones or online fashion brands.

Participants often become very engaged in this technique as they are discussing topics they know, yet the process often reveals insights even they weren’t aware of. Researchers can also gain deeper insights by prompting the participants throughout the process to give some more detail, explanation or clarification on the descriptions they give. 
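As a rough sketch of the mechanics, the triads in this technique are simply three-element combinations of the chosen entities. The entity names below are placeholders:

```python
# Sketch of generating triads for the Repertory Grid Technique: each triad
# asks the participant how two entities are alike yet different from the third.
from itertools import combinations

# Five to ten familiar entities, chosen by the researcher or participant
# (placeholder names for illustration).
entities = ["Phone A", "Phone B", "Phone C", "Phone D", "Phone E"]

triads = list(combinations(entities, 3))
print(len(triads))  # C(5, 3) = 10 possible triads

# In practice, you continue presenting triads only until no new constructs emerge.
for triad in triads[:3]:
    print(f"Which two of {triad} are similar, and how does the third differ?")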

Advantages:

  • Gives a solid structure to interviews that can make them easier for less experienced researchers to run
  • Good at uncovering biases and opinions that participants didn't necessarily know they have
  • Some flexibility to delve deeper into interesting topics that come up as the interview progresses

Disadvantages:

  • Only useful for exploring specific types of research question, where you are focused on differences and similarities
  • Needs participants to be familiar with a wide variety of 'entities' so that you can explore lots of different 'triads' and get at underlying opinions and experiences

Find out more:

Barter, C. and Renold, E. (1999) The use of vignettes in qualitative research

The Repertory Grid: Eliciting User Experience Comparisons in the Customer’s Voice

Fransella, F., Bell, R. and Bannister, D. (2004)   A manual for repertory grid technique

The Critical Incident Technique in UX

Library Support for Qualitative Research

Background Resources for Interview Research


Textbooks, Guidebooks, and Handbooks  

  • The Ethnographic Interview by James P. Spradley  “Spradley wrote this book for the professional and student who have never done ethnographic fieldwork (p. 231) and for the professional ethnographer who is interested in adapting the author’s procedures (p. iv). Part 1 outlines in 3 chapters Spradley’s version of ethnographic research, and it provides the background for Part 2 which consists of 12 guided steps (chapters) ranging from locating and interviewing an informant to writing an ethnography. Most of the examples come from the author’s own fieldwork among U.S. subcultures . . . Steps 6 and 8 explain lucidly how to construct a domain and a taxonomic analysis” (excerpted from book review by James D. Sexton, 1980).  
  • Fundamentals of Qualitative Research by Johnny Saldana (Series edited by Patricia Leavy)  Provides a soup-to-nuts overview of the qualitative data collection process, including interviewing, participant observation, and other methods.  
  • InterViews by Steinar Kvale  Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of the process. After examining the role of the interview in the research process, Steinar Kvale considers some of the key philosophical issues relating to interviewing: the interview as conversation, hermeneutics, phenomenology, concerns about ethics as well as validity, and postmodernism. Having established this framework, the author then analyzes the seven stages of the interview process - from designing a study to writing it up.  
  • Practical Evaluation by Michael Quinn Patton  Surveys different interviewing strategies, from (a) informal/conversational, to (b) interview guide approach, to (c) standardized and open-ended, to (d) closed/quantitative. Also discusses strategies for wording questions that are open-ended, clear, sensitive, and neutral, while supporting the speaker. Provides suggestions for probing and maintaining control of the interview process, as well as suggestions for recording and transcription.
  • The SAGE Handbook of Interview Research by Amir B. Marvasti (Editor); James A. Holstein (Editor); Jaber F. Gubrium (Editor); Karyn D. McKinney (Editor)  The new edition of this landmark volume emphasizes the dynamic, interactional, and reflexive dimensions of the research interview. Contributors highlight the myriad dimensions of complexity that are emerging as researchers increasingly frame the interview as a communicative opportunity as much as a data-gathering format. The book begins with the history and conceptual transformations of the interview, which is followed by chapters that discuss the main components of interview practice. Taken together, the contributions to The SAGE Handbook of Interview Research: The Complexity of the Craft encourage readers simultaneously to learn the frameworks and technologies of interviewing and to reflect on the epistemological foundations of the interview craft.  
  • The SAGE Handbook of Online Research Methods by Nigel G. Fielding, Raymond M. Lee and Grant Blank (Editors) Bringing together the leading names in both qualitative and quantitative online research, this new edition is organized into nine sections: 1. Online Research Methods 2. Designing Online Research 3. Online Data Capture and Data Collection 4. The Online Survey 5. Digital Quantitative Analysis 6. Digital Text Analysis 7. Virtual Ethnography 8. Online Secondary Analysis: Resources and Methods 9. The Future of Online Social Research.
  • Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research.
  • Companion website to Bloomberg and Volpe's  Completing Your Qualitative Dissertation: A Road Map from Beginning to End,  4th ed. Provides helpful templates and appendices featured in the book, as well as links to other useful dissertation resources.
  • International Congress of Qualitative Inquiry Annual conference hosted by the International Center for Qualitative Inquiry at the University of Illinois at Urbana-Champaign, which aims to facilitate the development of qualitative research methods across a wide variety of academic disciplines, among other initiatives.
  • METHODSPACE An online home of the research methods community, where practicing researchers share how to make research easier.
  • SAGE Research Methods Researchers can explore methods concepts to help them design research projects, understand particular methods or identify a new method, conduct their research, and write up their findings. A "methods map" facilitates finding content on methods.

Except where otherwise noted, this work is subject to a Creative Commons Attribution 4.0 International License, which allows anyone to share and adapt our material as long as proper attribution is given. For details and exceptions, see the Harvard Library Copyright Policy. ©2021 Presidents and Fellows of Harvard College.

Neurol Res Pract
How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data source as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig.  3 ). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
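To make the coding step concrete, here is a minimal sketch in Python of what "coding makes raw data sortable" means in practice: each transcript segment is tagged with one or more short descriptors, and all segments carrying a given code can then be extracted across data sources. The segment texts, sources, and codes are invented for illustration; real projects would use dedicated software such as NVivo, MAXQDA or ATLAS.ti.

```python
from collections import defaultdict

# Coded segments: (data source, raw text, set of codes applied by the researcher).
# All content here is hypothetical example data.
segments = [
    ("staff interview 1", "The tele-neurology consult took twenty minutes.",
     {"tele-neurology", "delay"}),
    ("ER observation 2", "The nurse waited for the consult line to connect.",
     {"tele-neurology"}),
    ("patient interview 3", "I felt well informed by the doctor.",
     {"communication"}),
]

def extract(code, coded_segments):
    """Return all segments tagged with `code`, grouped by data source."""
    by_source = defaultdict(list)
    for source, text, codes in coded_segments:
        if code in codes:
            by_source[source].append(text)
    return dict(by_source)

# Pull together everything said about the tele-neurology consultation,
# regardless of whether it came from an interview or an observation.
tele = extract("tele-neurology", segments)
```

The point of the sketch is only the sortability: once segments carry codes, grouping, summarising and categorising them (the synthesis step described above) becomes a mechanical retrieval operation rather than a manual search through transcripts.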

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
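The iterative collect-analyse-collect loop with a saturation stopping rule can be sketched in a few lines of Python. This is a deliberately simplified model, assuming each batch of interviews yields a set of codes and treating "no new codes in a batch" as the (hypothetical) saturation criterion; in real research, judging saturation is a substantive team decision, not a set comparison.

```python
def collect_until_saturation(batches):
    """Process interview batches in order; stop once a batch adds no new codes.

    `batches` is an iterable of sets of codes, one set per round of data
    collection. Returns the accumulated codes and the number of batches used.
    """
    seen = set()
    used = 0
    for batch in batches:
        used += 1
        new = set(batch) - seen
        if not new:
            break  # saturation: no relevant new information emerged
        seen |= new
    return seen, used

# Hypothetical example: codes emerging from successive interview rounds.
batches = [
    {"transport", "cost"},   # interviews 1-5
    {"transport", "trust"},  # interviews 6-10: one new code appears
    {"cost", "trust"},       # interviews 11-15: nothing new -> stop
    {"language"},            # never collected; sampling ended at saturation
]
codes, batches_used = collect_until_saturation(batches)
```

The last batch in the example illustrates the trade-off the stopping rule makes: data collection ends when the team stops seeing new information, which is not a guarantee that no further variant exists in the field.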

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example is the pilot interview, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length is for an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set of transcripts, or all of them, can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This ensures that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants and collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
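For readers who do choose to report such a score, the most common choice for two coders is Cohen’s kappa, which corrects raw agreement for chance agreement. A minimal, self-contained sketch (illustrative only; as noted above, the score by itself says little about the quality of the analysis):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same segments.

    Undefined when expected chance agreement equals 1 (both coders
    always use one identical code).
    """
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of segments where both coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For instance, with codes `["trust", "cost", "trust", "access"]` versus `["trust", "cost", "access", "access"]`, observed agreement is 3/4 and chance agreement 5/16, giving kappa of 7/11 ≈ 0.64.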

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Research problems suited to qualitative approaches:

• Assessing complex multi-component interventions or systems (of change)

• What works for whom when, how and why?

• Focussing on intervention improvement

Data collection methods:

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

Data analysis:

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

Mixed methods designs:

• Combinations of quantitative and/or qualitative methods, e.g. quali and quanti in parallel, quanti followed by quali, or quali followed by quanti

Markers of good-quality qualitative research:

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

Criteria that should not be applied to qualitative research:

• Protocol adherence

• Sample size

• Randomisation

• Interrater reliability, variability and other “objectivity checks”

• Not being quantitative research

Acknowledgements

Abbreviations

EVT – Endovascular treatment
RCT – Randomised Controlled Trial
SOP – Standard Operating Procedure
SRQR – Standards for Reporting Qualitative Research

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding: No external funding.

Availability of data and materials

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Choosing Between a Structured or Conversational Interview

  • Marlo Lyons


Both approaches have pros and cons — and can yield different insights about a candidate.

It’s critical to avoid the financial burden of making a wrong hire. Two approaches to conducting interviews — structured and conversational — can yield different insights about a candidate. While structured interviews make it easier to compare candidate responses and help ensure each interviewer covers distinct areas without redundancy, they may fall short in uncovering the candidate’s communication style and adaptability to change in a real-world setting. Conversational interviews offer a unique opportunity to get to know a candidate better by engaging them in a discussion about a real problem your organization is facing or has faced, but they can also present greater opportunities for bias to creep in. Here’s what each interview method can reveal about a candidate and when you might want to use them.

Interviewing candidates involves more than assessing their hard and soft skills — it’s crucial to choose the right method to gain a comprehensive understanding of their potential long-term fit for the team and company. During my time in human resources, I frequently encountered new hires who possessed extensive experience and expertise but struggled to adapt, which ultimately benefited no one. This mismatch often stemmed from a lack of alignment between the candidate’s values and the company’s environment and core principles, as well as the hiring manager’s lack of understanding about a candidate’s long-term career aspirations and motivations.


  • Marlo Lyons is a career, executive, and team coach, as well as the award-winning author of Wanted – A New Career: The Definitive Playbook for Transitioning to a New Career or Finding Your Dream Job . You can reach her at marlolyonscoaching.com .




Unstructured Interview | Definition, Guide & Examples

Published on January 27, 2022 by Tegan George . Revised on June 22, 2023.

An unstructured interview is a data collection method that relies on asking participants questions to collect data on a topic. Also known as non-directive interviewing , unstructured interviews do not have a set pattern and questions are not arranged in advance.

In research, unstructured interviews are usually qualitative in nature, and can be very helpful for social science or humanities research focusing on personal experiences.

An unstructured interview can be a particularly useful exploratory research tool. Known for being very informal and flexible, unstructured interviews can yield captivating responses from your participants.

Other common types of interviews include:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Focus group interviews : The questions are presented to a group instead of one individual.


An unstructured interview is the most flexible type of interview, with room for spontaneity. In contrast to a structured interview , the questions and the order in which they are presented are not set. Instead, the interview proceeds based on the participant’s previous answers.

Unstructured interviews are open-ended. This lack of structure can help you gather detailed information on your topic, while still allowing you to observe patterns in the analysis stage.

It can be challenging to know what type of interview best fits your subject matter. Unstructured interviews can be very challenging to conduct, and may not always be the best fit for your research question . Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses .
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, thoughts, or lived experiences.

Even more so than in structured or semi-structured interviews, it is critical that you remain organized and develop a system for keeping track of participant responses. Since the questions are not set beforehand, the data collection and analysis becomes more complex.

Differences between different types of interviews

Make sure to choose the type of interview that suits your research best. This table shows the most important differences between the four types.

  • Fixed questions: structured – yes; semi-structured – some; unstructured – no; focus group – varies
  • Fixed order of questions: structured – yes; semi-structured – no; unstructured – no; focus group – varies
  • Fixed number of questions: structured – yes; semi-structured – no; unstructured – no; focus group – varies
  • Option to ask additional questions: structured – no; semi-structured – yes; unstructured – yes; focus group – yes

Unstructured interviews have a few advantages compared to other types of interviews.

  • Very flexible
  • Respondents are more at ease
  • Reduced risk of bias
  • More detail and nuance

Unstructured interviews also have a few downsides compared to other data collection methods.

  • Low generalizability and reliability
  • Risk of leading questions
  • Very time-consuming
  • Risk of low internal validity

It can be challenging to ask unstructured interview questions that get you the information you seek without biasing your responses. You will have to rely on the flow of the conversation and the cues you pick up from your participants.

Here are a few tips:

  • Since you won’t be designing set questions ahead of time, it’s important to feel sufficiently comfortable with your topic that you can come up with questions spontaneously.
  • Write yourself a guide with notes about your topic and what you’re seeking to investigate or gain from your interviews, so you have notes to refer back to.
  • Try to ask questions that encourage your participant to answer at length. Avoid closed-ended questions that can be answered with a simple “yes” or “no.”
  • Relatedly, focus on “how” questions rather than “why” questions to help put your participants at ease and avoid any feelings of defensiveness or anxiety.
  • Consider beginning the interview with an icebreaker or a “freebie” question, to start on a relaxed and comfortable note before delving into the more sensitive topics.

Here are a few possibilities for how your conversation could proceed:

Conversation A: 

  • Interviewer: Do you go to the gym? How often?
  • Participant: I go to the gym 5 times per week.
  • Interviewer: What feelings does going to the gym bring out in you?
  • Participant: I don’t feel like myself unless I go to the gym.

Since the participant hinted that going to the gym is important for their mental health, proceed with questions in that vein, such as:

  • You say you “don’t feel like yourself.” Can you elaborate?
  • If you have to skip a gym day, how does that make you feel?
  • Is there anything else that makes you feel the way going to the gym does?

Conversation B:

  • Interviewer: Do you go to the gym? How often?
  • Participant: No, I hate the gym.

Since the participant seems to have strong feelings against the gym, you can probe deeper.

  • What makes you feel this way about the gym?
  • What do you like to do instead?
  • Do your feelings about the gym reflect on your feelings about exercise in general?

Once you’ve determined that an unstructured interview is the right fit for your research topic , you can proceed with the following steps.

Step 1: Set your goals and foundations

As you conceptualize your research question, consider starting with some guiding questions, such as:

  • What are you trying to learn or achieve from an unstructured interview specifically?
  • Why is an unstructured interview the best fit for your research, as opposed to a different type of interview or another research method ?
  • What is the guiding force behind your research? What topic will serve as the foundation for your unscripted and follow-up questions?

While you do not need to plan your questions ahead of time for an unstructured interview, this does not mean that no advanced planning is needed. Unstructured interviews actually require extensive planning ahead to ensure that the interview stage will be fruitful.

You should also feel confident in your command of the topic:

  • Perhaps you have been studying it for quite some time, or you have previously conducted another type of research on a similar topic.
  • Maybe you are seeking a bit more detail or nuance to confirm or challenge past results, or you are interested in delving deeper into a particular question that arose from past research.

Once you are feeling really solid about your research question, you can start brainstorming categories of questions you may ask. You can start with one broad, overarching question and brainstorm what paths the conversation could take.

Step 2: Assemble your participants

There are a few sampling methods you can use to recruit your interview participants, such as:

  • Voluntary response sampling : For example, posting flyers in the dining hall and seeing who answers.
  • Stratified sampling of a particular age, race, or gender identity that is relevant to your research.
  • Convenience sampling of other students at your university, colleagues or friends.

Step 3: Decide on your setting

You should decide ahead of time whether your interview will be conducted in-person, over the phone, or via video conferencing.

In-person, phone, or video interviews each have their own advantages and disadvantages.

  • In general, live interviews can lead to nervousness or interviewer effects, where the respondent feels pressured to respond in a manner they perceive will please you.
  • Videoconferencing specifically can feel awkward or stilted, which could affect your results. However, your participant may be more comfortable in their own home.
  • Not being face-to-face with respondents, such as in a phone interview, could lead to more honest answers. However, there could be environmental conditions or distractions on the participant side that could affect their responses.
Regardless of the setting, remember to obtain informed consent from your participants ahead of time, which may include:

  • Consent to video- or audio-recording
  • Signature of a confidentiality agreement
  • Signature of an agreement to anonymize or pseudonymize data.

Step 4: Conduct your interviews

As you conduct your interviews, pay special attention to any environmental conditions that could bias your responses. This includes noises, temperature, and setting, but also your body language. Be careful to moderate your tone of voice and any responses to avoid interviewer effects.

Remember that one of the biggest challenges with unstructured interviews is to keep your questions neutral and unbiased. Strive for open-ended phrasing, and allow your participants to set the pace, asking follow-up questions that flow naturally from their last answer.

After you’re finished conducting your interviews, you move into the analysis phase. Don’t forget to assign each participant a pseudonym (such as a number or letter) to be sure you stay organized.
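A simple way to keep pseudonym assignment organized is a single lookup table, built once and reused for every transcript. A minimal sketch (the `P01`, `P02`, … naming scheme is just one possible convention, and the naive substring replacement shown here would need care with nicknames, place names and partial matches in real data):

```python
def assign_pseudonyms(participants):
    """Build a stable name -> pseudonym mapping (P01, P02, ...)."""
    return {name: f"P{i:02d}" for i, name in enumerate(participants, start=1)}

def pseudonymize(transcript, mapping):
    """Replace every participant name in a transcript with its pseudonym."""
    for name, code in mapping.items():
        transcript = transcript.replace(name, code)
    return transcript
```

Keeping the mapping in one place ensures the same person always receives the same code across transcripts, notes, and your final write-up.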

First, transcribe your recorded interviews. You can then conduct content or thematic analysis to create your categories, seeking patterns that stand out to you among your responses and testing your hypotheses .

Transcribing interviews

The transcription process can be quite lengthy for unstructured interviews due to their more detailed nature. One decision that can save you quite a bit of time before you get started is whether you will be conducting verbatim transcription or intelligent verbatim transcription.

  • If you consider that laughter, hesitations, or filler words like “umm” affect your analysis and research conclusions, you should conduct verbatim transcription and include them.
  • If not, intelligent verbatim transcription allows you to exclude fillers and fix any grammar issues in your transcription. Intelligent verbatim transcription can save you some time in this step.
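The mechanical part of intelligent verbatim transcription, stripping filler words, can be partly automated. A rough sketch with a deliberately small, hand-picked filler pattern (the pattern is our own assumption; you would extend it for your own data and always re-read the result, since automated cleanup can delete hesitations that carry meaning):

```python
import re

# Hypothetical, deliberately small filler list; extend for your own data.
FILLER_PATTERN = re.compile(r"\b(?:um+|uh+|erm+)\b,?\s*", re.IGNORECASE)

def intelligent_verbatim(text):
    """Strip common filler words, then collapse leftover double spaces."""
    cleaned = FILLER_PATTERN.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

For example, `intelligent_verbatim("So, um, I go, uh, five times a week.")` yields `"So, I go, five times a week."`.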

Transcribing has the added benefit of being a great opportunity for cleansing your data . While you listen, you can take notes of questions or inconsistencies that come up.

Note that in some cases, your supervisor may ask you to add the finished transcriptions in the appendix of your research paper .

Coding unstructured interviews

After you’re finished transcribing, you can begin your thematic or content analysis . Here, you separate words, patterns, or recurring responses that stand out to you into labels or categories for later analysis. This process is called “coding.”

Due to the open-ended nature of unstructured interviews, you will most likely proceed with thematic analysis, rather than content analysis. In thematic analysis, you draw preliminary conclusions about your participants through identifying common topics, ideas, or patterns in their responses.

  • After you have familiarized yourself sufficiently with your responses, you can separate them into different codes or labels.
  • However, codes can be a bit too specific or niche for robust analysis. You can proceed by grouping similar codes into broader themes.
  • After identifying your themes, be sure to double-check your responses to ensure that the themes you chose appropriately represent your data.
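Once codes are grouped into themes, tallying how often each theme occurs across your responses is straightforward. A sketch using an illustrative code-to-theme mapping (the code and theme names below are invented for the town example later in this guide, not taken from any real dataset):

```python
from collections import Counter

# Invented example mapping from specific codes to broader themes.
CODE_TO_THEME = {
    "property taxes": "cost of living",
    "rent increases": "cost of living",
    "hiring freeze": "job market",
    "student housing": "university expansion",
}

def theme_counts(coded_segments):
    """Tally the broader theme behind each coded response segment."""
    return Counter(CODE_TO_THEME.get(code, "uncategorized")
                   for code in coded_segments)
```

Segments whose codes are not yet mapped fall into an "uncategorized" bucket, which is a useful prompt to revisit your coding list.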

Analyzing unstructured interviews

Once you’re confident with your preliminary thoughts, you can take either an inductive or a deductive approach in your analysis.

  • An inductive approach is more open-ended, allowing your data to then determine your themes.
  • A deductive approach is the opposite, and involves investigating whether your data confirm preconceived themes or ideas.

Thematic analysis is quite subjective, which can lead to issues with the reliability of your results. The unstructured nature of this type of interview leads to greater dependence on your judgment and interpretations. Be extra vigilant about remaining objective here.

After your data analysis, you’re ready to combine your findings into a research paper .

  • Your methodology section describes your data collection process (in this case, describing your unstructured interview process) and explains how you justified and conceptualized your analysis.
  • Your discussion and results sections usually describe each of your coded categories, and give you the opportunity to showcase your argument.
  • You can then conclude with your main takeaways and avenues for further study.
  • Since unstructured interviews are predominantly exploratory in nature, you can add suggestions for future research in the discussion section .

Example of interview methodology for a research paper

Let’s say you are a history student particularly interested in the history of the town around your campus. The town has a long history dating back to the early 1600s, but town census data shows that many long-term residents have been moving away in recent years.

You identify a few potential reasons for this shift:

  • People are moving away because there are better opportunities in the closest big city
  • The university has been aggressively purchasing real estate to build more student housing
  • The university has long been the main source of jobs for the town, and education budget cuts have led to a hiring freeze
  • The cost of living in the area has skyrocketed in recent years, and long-time residents can no longer afford their property taxes

Anecdotally, you hypothesize that the increased cost of living is the predominant factor in driving away long-time residents. However, you cannot rule out the possibility of the other options, specifically the lack of job options coupled with the university’s expansionist aims.

You feel very comfortable with this topic and oral histories in general. Since it is exploratory in nature but has the potential to become sensitive or emotional, you decide to conduct unstructured interviews with long-term residents of your town. Multi-generational residents are of particular interest.

To find the right mix of participants, you post in the Facebook group for town residents, as well as in the town’s NextDoor forum. You also post flyers in local coffee shops and even some mailboxes.

Once you’ve assembled your participants, it’s time to proceed with your interviews. Consider starting out with an icebreaker, such as:

  • What is your favorite thing about this town?
  • Tell me about a memory of the town that you have that’s particularly special.

You can then proceed with the interview, asking follow-up questions relevant to your participants’ responses, probing their family history, ties to the community, or any stories they have to share, whether funny, touching, or sentimental.

Establishing rapport with your participants helps you delve into the reasoning behind the choice to stay or leave, and competing thoughts and feelings they may have as the interview goes on. Remember to try to structure it like a conversation, to put them more at ease with the emotional topics.

Be especially careful to avoid leading questions. Compare:

  • Has increased cost of living led to you considering leaving the area? → The phrasing implies that you, the interviewer, think this is the case. This could bias your respondents, incentivizing them to answer affirmatively as well.
  • Are there any factors that would lead to you considering leaving the area? → This phrasing ensures the participant is giving their own opinion, and may even yield some surprising responses that enrich your analysis.

After conducting your interviews and transcribing your data, you can then conduct thematic analysis, coding responses into different categories. Since you began your research with several plausible explanations rather than a single theory to confirm, you would let the data determine your themes and use the inductive approach.

After identifying the relevant themes from your data, you can draw inferences and conclusions. Your results section usually addresses each theme or pattern you found, describing each in turn, as well as how often you came across them in your analysis.

Perhaps one reason in particular really jumped out from responses, or maybe it was more of a mixed bag. Explain why you think this could be the case, and feel free to include lots of (properly anonymized) examples from the data to better illustrate your points.



The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.


Semistructured interviewing in primary care research: a balance of relationship and rigour


Lisa M Vaughn

https://doi.org/ 10.1136/fmch-2018-000057

Semistructured in-depth interviews are commonly used in qualitative research and are the most frequent qualitative data source in health services research. This method typically consists of a dialogue between researcher and participant, guided by a flexible interview protocol and supplemented by follow-up questions, probes and comments. The method allows the researcher to collect open-ended data, to explore participant thoughts, feelings and beliefs about a particular topic and to delve deeply into personal and sometimes sensitive issues. The purpose of this article was to identify and describe the essential skills to designing and conducting semistructured interviews in family medicine and primary care research settings. We reviewed the literature on semistructured interviewing to identify key skills and components for using this method in family medicine and primary care research settings. Overall, semistructured interviewing requires both a relational focus and practice in the skills of facilitation. Skills include: (1) determining the purpose and scope of the study; (2) identifying participants; (3) considering ethical issues; (4) planning logistical aspects; (5) developing the interview guide; (6) establishing trust and rapport; (7) conducting the interview; (8) memoing and reflection; (9) analysing the data; (10) demonstrating the trustworthiness of the research; and (11) presenting findings in a paper or report. Semistructured interviews provide an effective and feasible research method for family physicians to conduct in primary care research settings. Researchers using semistructured interviews for data collection should take on a relational focus and consider the skills of interviewing to ensure quality. Semistructured interviewing can be a powerful tool for family physicians, primary care providers and other health services researchers to use to understand the thoughts, beliefs and experiences of individuals. 
Despite the utility, semistructured interviews can be intimidating and challenging for researchers not familiar with qualitative approaches. In order to elucidate this method, we provide practical guidance for researchers, including novice researchers and those with few resources, to use semistructured interviewing as a data collection strategy. We provide recommendations for the essential steps to follow in order to best implement semistructured interviews in family medicine and primary care research settings.

  • Introduction

Semistructured interviews can be used by family medicine researchers in clinical settings or academic settings even with few resources. In contrast to large-scale epidemiological studies, or even surveys, a family medicine researcher can conduct a highly meaningful interview project with as few as 8–12 participants. For example, Chang and her colleagues, all family physicians, conducted semistructured interviews with 10 providers to understand their perspectives on weight gain in pregnant patients. 1 The interviewers asked questions about providers’ overall perceptions on weight gain, their clinical approach to weight gain during pregnancy and challenges when managing weight gain among pregnant patients. Additional examples conducted by or with family physicians or in primary care settings are summarised in table 1. 1–6

From our perspective as seasoned qualitative researchers, conducting effective semistructured interviews requires: (1) a relational focus, including active engagement and curiosity, and (2) practice in the skills of interviewing. First, a relational focus emphasises the unique relationship between interviewer and interviewee. To obtain quality data, interviews should not be conducted with a transactional question-answer approach but rather should be unfolding, iterative interactions between the interviewer and interviewee. Second, interview skills can be learnt. Some of us will naturally be more comfortable and skilful at conducting interviews but all aspects of interviews are learnable and through practice and feedback will improve. Throughout this article, we highlight strategies to balance relationship and rigour when conducting semistructured interviews in primary care and the healthcare setting.

Qualitative research interviews are ‘attempts to understand the world from the subjects’ point of view, to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations’ (p 1). 7 Qualitative research interviews unfold as an interviewer asks questions of the interviewee in order to gather subjective information about a particular topic or experience. Though the definitions and purposes of qualitative research interviews vary slightly in the literature, there is common emphasis on the experiences of interviewees and the ways in which the interviewee perceives the world (see table 2 for summary of definitions from seminal texts).

The most common type of interview used in qualitative research and the healthcare context is the semistructured interview. 8 Figure 1 highlights the key features of this data collection method, which is guided by a list of topics or questions with follow-up questions, probes and comments. Typically, the sequencing and wording of the questions are modified by the interviewer to best fit the interviewee and interview context. Semistructured interviews can be conducted in multiple ways (ie, face to face, telephone, text/email, individual, group, brief, in-depth), each of which has advantages and disadvantages. We will focus on the most common form of semistructured interviews within qualitative research—individual, face-to-face, in-depth interviews.

Figure 1. Key characteristics of semistructured interviews.

Purpose of semistructured interviews

The overall purpose of using semistructured interviews for data collection is to gather information from key informants who have personal experiences, attitudes, perceptions and beliefs related to the topic of interest. Researchers can use semistructured interviews to collect new, exploratory data related to a research topic, triangulate other data sources or validate findings through member checking (respondent feedback about research results). 9 If using a mixed methods approach, semistructured interviews can also be used in a qualitative phase to explore new concepts to generate hypotheses or explain results from a quantitative phase that tests hypotheses. Semistructured interviews are an effective method for data collection when the researcher wants: (1) to collect qualitative, open-ended data; (2) to explore participant thoughts, feelings and beliefs about a particular topic; and (3) to delve deeply into personal and sometimes sensitive issues.

Designing and conducting semistructured interviews

In the following section, we provide recommendations for the steps required to carefully design and conduct semistructured interviews with emphasis on applications in family medicine and primary care research (see table 3 ).

Table 3. Steps for designing and conducting semistructured interviews

Step 1: determining the purpose and scope of the study.

The purpose of the study is the primary objective of your project and may be based on an anecdotal experience, a review of the literature or a previous research finding. The purpose is developed in response to an identified gap or problem that needs to be addressed.

Research questions are the driving force of a study because they are associated with every other aspect of the design. They should be succinct and clearly indicate that you are using a qualitative approach. Qualitative research questions typically start with ‘What’, ‘How’ or ‘Why’ and focus on the exploration of a single concept based on participant perspectives. 10

Step 2: identifying participants

After deciding on the purpose of the study and research question(s), the next step is to determine who will provide the best information to answer the research question. Good interviewees are those who are available, willing to be interviewed and have lived experiences and knowledge about the topic of interest. 11 12 Working with gatekeepers or informants to get access to potential participants can be extremely helpful as they are trusted sources that control access to the target sample.

Sampling strategies are influenced by the research question and the purpose of the study. Unlike quantitative studies, statistical representativeness is not the goal of qualitative research. There is no calculation of statistical power and the goal is not a large sample size. Instead, qualitative approaches seek an in-depth and detailed understanding and typically use purposeful sampling. See the study of Hatch for a summary of various types of purposeful sampling that can be used for interview studies. 12

‘How many participants are needed?’ The most common answer is, ‘it depends’—it depends on the purpose of the study, what kind of study is planned and what questions the study is trying to answer. 12–14 One common standard in qualitative sample sizes is reaching thematic saturation, which refers to the point at which no new thematic information is gathered from participants. Malterud and colleagues discuss the concept of information power , or a qualitative equivalent to statistical power, to determine how many interviews should be collected in a study. They suggest that the size of a sample should depend on the aim, homogeneity of the sample, theory, interview quality and analytic strategy. 14
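As a toy illustration of the saturation idea described above, the stopping rule can be sketched in code. The codes and the two-interview window are invented for illustration, not taken from the article; in practice, judging saturation is an interpretive decision, not a mechanical count:

```python
# Hypothetical sketch: tracking thematic saturation across coded interviews.
# Each interview is represented by the set of codes assigned to it; saturation
# is flagged once `window` consecutive interviews contribute no new codes.

def saturation_point(coded_interviews, window=2):
    """Return the 1-based index of the interview after which `window`
    consecutive interviews added no new codes, or None if never reached."""
    seen = set()
    run = 0
    for i, codes in enumerate(coded_interviews, start=1):
        new = set(codes) - seen
        seen |= set(codes)
        run = 0 if new else run + 1
        if run == window:
            return i - window
    return None

# Invented example data: codes assigned to five successive interviews
interviews = [
    {"access", "cost"},
    {"cost", "stigma"},
    {"access", "trust"},
    {"trust", "cost"},      # no new codes
    {"stigma", "access"},   # no new codes -> saturated after interview 3
]
print(saturation_point(interviews))  # -> 3
```

A rule like this is only a bookkeeping aid; the information-power criteria (aim, sample homogeneity, theory, interview quality, analytic strategy) still govern the final sample-size judgement.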

Step 3: considering ethical issues

An ethical attitude should be present from the very beginning of the research project even before you decide who to interview. 15 This ethical attitude should incorporate respect, sensitivity and tact towards participants throughout the research process. Because semistructured interviewing often requires the participant to reveal sensitive and personal information directly to the interviewer, it is important to consider the power imbalance between the researcher and the participant. In healthcare settings, the interviewer or researcher may be a part of the patient’s healthcare team or have contact with the healthcare team. The researchers should assure the interviewee that their participation and answers will not influence the care they receive or their relationship with their providers. Other issues to consider include: reducing the risk of harm; protecting the interviewee’s information; adequately informing interviewees about the study purpose and format; and reducing the risk of exploitation. 10

Step 4: planning logistical aspects

Careful planning, particularly around the technical aspects of interviews, can be the difference between a great interview and a poor one. During the preparation phase, the researcher will need to plan and make decisions about the best ways to contact potential interviewees, obtain informed consent, arrange interview times and locations convenient for both participant and researcher, and test recording equipment. Although many experienced researchers have found themselves conducting interviews in less than ideal locations, the interview location should avoid (or at least minimise) interruptions and be appropriate for the interview (quiet, private and able to get a clear recording). 16 For some research projects, the participants’ homes may make sense as the best interview location. 16

Initial contacts can be made through telephone or email and followed up with more details so the individual can make an informed decision about whether they wish to be interviewed. Potential participants should know what to expect in terms of length of time, purpose of the study, why they have been selected and who will be there. In addition, participants should be informed that they can refuse to answer questions or can withdraw from the study at any time, including during the interview itself.

Audio recording the interview is recommended so that the interviewer can concentrate on the interview and build rapport rather than being distracted with extensive note taking 16 (see table 4 for audio-recording tips). Participants should be informed that audio recording is used for data collection and that they can refuse to be audio recorded should they prefer.

Most researchers will want to have interviews transcribed verbatim from the audio recording. This allows you to refer to the exact words of participants during the analysis. Although it is possible to conduct analyses from the audio recordings themselves or from notes, it is not ideal. However, transcription can be extremely time consuming and, if not done yourself, can be costly.

In the planning phase of research, you will want to consider whether qualitative research software (eg, NVivo, ATLAS.ti, MAXQDA, Dedoose, and so on) will be used to assist with organising, managing and analysis. While these tools are helpful in the management of qualitative data, it is important to consider your research budget, the cost of the software and the learning curve associated with using a new system.

Step 5: developing the interview guide

Semistructured interviews include a short list of ‘guiding’ questions that are supplemented by follow-up and probing questions that are dependent on the interviewee’s responses. 8 17 All questions should be open ended, neutral, clear and avoid leading language. In addition, questions should use familiar language and avoid jargon.

Most interviews will start with an easy, context-setting question before moving to more difficult or in-depth questions. 17 Table 5 gives details of the types of guiding questions including ‘grand tour’ questions, 18 core questions and planned and unplanned follow-up questions.

To illustrate, online supplementary appendix A presents a sample interview guide from our study of weight gain during pregnancy among young women. We start with the prompt, ‘Tell me about how your pregnancy has been so far’ to initiate conversation about their thoughts and feelings during pregnancy. The subsequent questions will elicit responses to help answer our research question about young women’s perspectives related to weight gain during pregnancy.

After developing the guiding questions, it is important to pilot test the interview. Having a good sense of the guide helps you to pace the interview (and not run out of time), use a conversational tone and make necessary adjustments to the questions.

Like all qualitative research, interviewing is iterative in nature—data collection and analysis occur simultaneously, which may result in changes to the guiding questions as the study progresses. Questions that are not effective may be replaced with other questions and additional probes can be added to explore new topics that are introduced by participants in previous interviews. 10

Step 6: establishing trust and rapport

Interviews are a special form of relationship, where the interviewer and interviewee converse about important and often personal topics. The interviewer must build rapport quickly by listening attentively and respectfully to the information shared by the interviewee. 19 As the interview progresses, the interviewer must continue to demonstrate respect, encourage the interviewee to share their perspectives and acknowledge the sensitive nature of the conversation. 20

To establish rapport, it is important to be authentic and open to the interviewee’s point of view. It is possible that the participants you recruit for your study will have preconceived notions about research, which may include mistrust. As a result, it is important to describe why you are conducting the research and how their participation is meaningful. In an interview relationship, the interviewee is the expert and should be treated as such—you are relying on the interviewee to enhance your understanding and add to your research. Small behaviours that can enhance rapport include: dressing professionally but not overly formal; avoiding jargon or slang; and using a normal conversational tone. Because interviewees will be discussing their experience, having some awareness of contextual or cultural factors that may influence their perspectives may be helpful as background knowledge.

Step 7: conducting the interview

Location and set-up.

The interview should have already been scheduled at a convenient time and location for the interviewee. The location should be private, ideally with a closed door, rather than a public place. It is helpful if there is a room where you can speak privately without interruption, and where it is quiet enough to hear and audio record the interview. Within the interview space, Josselson 15 suggests an arrangement with a comfortable distance between the interviewer and interviewee with a low table in between for the recorder and any materials (consent forms, questionnaires, water, and so on).

Beginning the interview

Many interviewers start with chatting to break the ice and attempt to establish commonalities, rapport and trust. Most interviews will need to begin with a brief explanation of the research study, consent/assent procedures, rationale for talking to that particular interviewee and description of the interview format and agenda. 11 It can also be helpful if the interviewer shares a little about who they are and why they are interested in the topic. The recording equipment should have already been tested thoroughly but interviewers may want to double-check that the audio equipment is working and remind participants about the reason for recording.

Interviewer stance

During the interview, the interviewer should adopt a friendly and non-judgemental attitude. You will want to maintain a warm and conversational tone, rather than a rote, question-answer approach. It is important to recognise the potential power differential as a researcher. Conveying a sense of being in the interview together and that you as the interviewer are a person just like the interviewee can help ease any discomfort. 15

Active listening

During a face-to-face interview, there is an opportunity to observe social and non-verbal cues of the interviewee. These cues may come in the form of voice, body language, gestures and intonation, and can supplement the interviewee’s verbal response and can give clues to the interviewer about the process of the interview. 21 Listening is the key to successful interviewing. 22 Listening should be ‘attentive, empathic, nonjudgmental, listening in order to invite, and engender talk’ 15 (p 66). Silence, nods, smiles and utterances can also encourage further elaboration from the interviewee.

Continuing the interview

As the interview progresses, the interviewer can repeat the words used by the interviewee and use planned and unplanned follow-up questions that invite further clarification, exploration or elaboration. As DiCicco-Bloom and Crabtree 10 explain: ‘Throughout the interview, the goal of the interviewer is to encourage the interviewee to share as much information as possible, unselfconsciously and in his or her own words’ (p 317). Some interviewees are more forthcoming and will offer many details of their experiences without much probing required. Others will require prompting and follow-up to elicit sufficient detail.

As a result, follow-up questions are equally important to the core questions in a semistructured interview. Prompts encourage people to continue talking and they can elicit more details needed to understand the topic. Examples of verbal probes are repeating the participant’s words, summarising the main idea or expressing interest with verbal agreement. 8 11 See table 6 for probing techniques and example probes we have used in our own interviewing.

Step 8: memoing and reflection

After an interview, it is essential for the interviewer to begin to reflect on both the process and the content of the interview. During the actual interview, it can be difficult to take notes or begin reflecting. Even if you think you will remember a particular moment, you likely will not be able to recall each moment with sufficient detail. Therefore, interviewers should always record memos —notes about what you are learning from the data. 23 24 There are different approaches to recording memos: you can reflect on several specific ideas, or create a running list of thoughts. Memos are also useful for improving the quality of subsequent interviews.

Step 9: analysing the data

The data analysis strategy should also be developed during planning stages because analysis occurs concurrently with data collection. 25 The researcher will take notes, modify the data collection procedures and write reflective memos throughout the data collection process. This begins the process of data analysis.

The data analysis strategy used in your study will depend on your research question and qualitative design—see the study of Creswell for an overview of major qualitative approaches. 26 The general process for analysing and interpreting most interviews involves reviewing the data (in the form of transcripts, audio recordings or detailed notes), applying descriptive codes to the data and condensing and categorising codes to look for patterns. 24 27 These patterns can exist within a single interview or across multiple interviews depending on the research question and design. Qualitative computer software programs can be used to help organise and manage interview data.
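The code-and-condense step described above can be sketched as a toy example. The transcript segments, descriptive codes and theme mapping below are all invented for illustration; real qualitative analysis is interpretive, and code frequencies are at most a rough organising aid, not findings in themselves:

```python
# Hypothetical sketch of one mechanical step in interview analysis: tallying
# descriptive codes across transcript segments and condensing them into
# analyst-defined candidate themes. All data here are invented.
from collections import Counter

# (participant ID, codes applied to one transcript segment)
segments = [
    ("P1", ["weight worry", "provider advice"]),
    ("P1", ["family pressure"]),
    ("P2", ["weight worry", "family pressure"]),
    ("P3", ["provider advice", "weight worry"]),
]

# How often does each descriptive code appear across all segments?
code_counts = Counter(code for _, codes in segments for code in codes)

# Condensing: an analyst-defined mapping from codes to broader themes
themes = {
    "body image": ["weight worry"],
    "social influence": ["family pressure", "provider advice"],
}
theme_counts = {t: sum(code_counts[c] for c in cs) for t, cs in themes.items()}
print(theme_counts)  # {'body image': 3, 'social influence': 4}
```

Dedicated qualitative software packages (NVivo, ATLAS.ti, MAXQDA, Dedoose) perform this kind of organisation at scale, alongside the interpretive work the article describes.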

Step 10: demonstrating the trustworthiness of the research

Similar to validity and reliability, qualitative research can be assessed on trustworthiness. 9 28 There are several criteria used to establish trustworthiness: credibility (whether the findings accurately and fairly represent the data), transferability (whether the findings can be applied to other settings and contexts), confirmability (whether the findings are biased by the researcher) and dependability (whether the findings are consistent and sustainable over time).

Step 11: presenting findings in a paper or report

When presenting the results of interview analysis, researchers will often report themes or narratives that describe the broad range of experiences evidenced in the data. This involves providing an in-depth description of participant perspectives and being sure to include multiple perspectives. 12 In interview research, the participant words are your data. Presenting findings in a report requires the integration of quotes into a more traditional written format.

  • Conclusions

Though semistructured interviews are often an effective way to collect open-ended data, there are some disadvantages as well. One common problem with interviewing is that not all interviewees make great participants. 12 29 Some individuals are hard to engage in conversation or may be reluctant to share about sensitive or personal topics. Difficulty interviewing some participants can affect experienced and novice interviewers alike. Some common problems include not doing a good job of probing or asking follow-up questions, failure to actively listen, not having a well-developed interview guide with open-ended questions and asking questions in an insensitive way. Outside of pitfalls during the actual interview, other problems with semistructured interviewing include underestimating the resources required to recruit participants, conduct the interviews, transcribe and analyse the data.

Despite their limitations, semistructured interviews can be a productive way to collect open-ended data from participants. In our research, we have interviewed children and adolescents about their stress experiences and coping behaviours, young women about their thoughts and behaviours during pregnancy, practitioners about the care they provide to patients and countless other key informants about health-related topics. Because the intent is to understand participant experiences, the possible research topics are endless.

Due to the close relationships family physicians have with their patients, the unique settings in which they work, and their advocacy work, semistructured interviews are an attractive approach for family medicine researchers, even when working in a setting with limited research resources. When seeking to balance both the relational focus of interviewing and the necessary rigour of research, we recommend: prioritising listening over talking; using clear language and avoiding jargon; and deeply engaging in the interview process by actively listening, expressing empathy, demonstrating openness to the participant’s worldview and thanking the participant for helping you to understand their experience.

  • Further Reading

Edwards R, Holland J. What is Qualitative Interviewing? A&C Black, 2013.

Josselson R. Interviewing for Qualitative Inquiry: A Relational Approach. Guilford Press, 2013.

Kvale S. InterViews: An Introduction to Qualitative Research Interviewing. London: SAGE, 1996.

Pope C, Mays N, eds. Qualitative Research in Health Care, 2006.



Open Access

Study Protocol

The effectiveness of group interpersonal synchrony in young autistic adults’ work environment: A mixed methods RCT study protocol

Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Resources, Software, Supervision, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation University of Haifa, Faculty of Social Welfare & Health Sciences, School of Creative Arts Therapies, Haifa, Israel


Roles Conceptualization, Methodology, Supervision, Writing – review & editing

  • Tamar Dvir, 
  • Tal-Chen Rabinowitch, 
  • Cochavit Elefant

PLOS

  • Published: July 31, 2024
  • https://doi.org/10.1371/journal.pone.0307956

Introduction

Few autistic adults are able to integrate successfully into the world of work given their difficulties adapting to the social and stressful aspects of work environments. Interpersonal synchrony, when two or more individuals share body movements or sensations, is a powerful force that consolidates human groups while promoting the ability to self-regulate and cooperate with others. The abilities to self-regulate and cooperate are crucial for maintaining a calm and productive work environment. This study protocol outlines research that aims to assess the effects of group interpersonal synchrony on prosociality and work-related stress of young autistic adults in their work environment.

Methods and analysis

This mixed-methods randomized controlled trial will investigate two movement-based group synchronous and non-synchronous intervention conditions. The sample will be composed of young adults enrolled in an innovative Israeli program designed to integrate cognitively-abled 18- to 25-year-old autistic adults into the Israeli army work force. The movement-based intervention sessions will take place in groups of 10–14 participants, once a week for 10 weeks. Questionnaires, behavioral collaborative tasks and semi-structured interviews will be conducted. Quantitative data will be collected for each participant at three points of time: before and after the intervention period, and 17 weeks after the end of the intervention. Qualitative data will be collected after the intervention period in interviews with the participants.

Little is known about interventions that promote successful integration into social and stressful work environments. The findings are likely to shed new light on the use of group interpersonal synchrony in autistic individuals at work.

Trial registration

NCT05846308.

Citation: Dvir T, Rabinowitch T-C, Elefant C (2024) The effectiveness of group interpersonal synchrony in young autistic adults’ work environment: A mixed methods RCT study protocol. PLoS ONE 19(7): e0307956. https://doi.org/10.1371/journal.pone.0307956

Editor: Stergios Makris, Edge Hill University, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND

Received: May 9, 2023; Accepted: July 12, 2024; Published: July 31, 2024

Copyright: © 2024 Dvir et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Deidentified research data will be made publicly available when the study is completed and published.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Individuals diagnosed with Autism Spectrum Disorder (ASD) have difficulties related to social interaction and communication, as well as repetitive and stereotyped interests [ 1 ]. According to the DSM-5 [ 1 ], cognitively-abled autistic people are characterized by a high intellectual level and the lowest support requirements. Few cognitively-abled autistic adults successfully integrate into the world of work, since they often manifest atypical behavior in social situations, find it hard to initiate social interactions, have limited interest in forming social relationships, find it challenging to cooperate with co-workers and often fail to regulate work-related stress [ 2 – 4 ]. The ability to work, according to the International Classification of Functioning, Disability and Health (World Health Organization, 2001), is considered a significant factor affecting individuals’ health, quality of life and wellbeing. This accounts for the growing interest in developing tailored interventions to promote the long-term integration of challenged populations into social and stressful work environments [ 5 , 6 ].

A new wave of research in autism suggests that in addition to difficulties with social interactions and stressful environments, ASD is also characterized by fundamental differences in sensorimotor functioning including the delayed development of motor skills [ 7 ], abnormal gait [ 8 ], difficulty with balance [ 9 ], difficulties in coordinating movements that involve both sides of the body, and in controlling force and direction when throwing a ball [ 10 ]. These altered modes of perceiving and moving in the world (alone, as well as with other people) may help explain some of the key fundamental social impairments in autism and asynchronous social behavior in particular [ 11 – 15 ]. Asynchronous social behavior in autism (e.g., reduced eye contact, absence of well-timed co-regulation, no coherent engagement of mutual attention, no anticipation and no emotional build up in shared interactions) was first identified in a micro-analytic study of videos of an 11-month-old infant who was diagnosed with ASD during her second year of life [ 16 ]. Stronger evidence for the association between atypical interpersonal synchrony and autism was recently described [ 13 ] in a study which also suggested that there may be a disruption in intrapersonal mechanisms such as atypical motor timing, and in interpersonal mechanisms such as atypical inter-individual coupling.

The aim of this trial is to assess the effects of group interpersonal synchrony on the prosociality and work-related stress of young autistic adults in their work environment. The trial is designed as a randomized controlled trial with two parallel groups (experimental group and treated control group), primary and secondary endpoints and a follow-up of intervention effectiveness after 17 weeks as described in Fig 1 . Randomization will be performed as block randomization with a 1:1 allocation. In the following sections we review research on interpersonal synchrony in autism and on the associations between interpersonal synchrony, prosociality and work-related stress.
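The block randomization with 1:1 allocation mentioned above can be sketched as follows. This is a minimal illustration, not the trial's actual procedure: the arm labels and the block size of 4 are assumptions, since the protocol text here does not specify them.

```python
# Hypothetical sketch of block randomization with 1:1 allocation.
# Block size 4 is an assumed value for illustration only.
import random

def block_randomize(n_participants, block_size=4, seed=None):
    """Assign participants to two arms so that every block of
    `block_size` consecutive assignments contains equal numbers of each,
    keeping the overall allocation close to 1:1 at all times."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        # Build one balanced block, then shuffle its order
        block = ["synchronous", "non-synchronous"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

arms = block_randomize(12, seed=1)
print(arms.count("synchronous"))  # -> 6 (balanced 1:1 over 12 participants)
```

Blocking guards against long runs of one arm, which matters in a small trial where recruitment happens in waves; fixing the seed is only for reproducibility of the sketch, not something a real trial's concealed allocation would expose.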

Fig 1. EHQ, Edinburgh Handedness Questionnaire; NTB, Need to Belong Scale; IOS, Inclusion of Other in Self Scale; FCI, Friendship Closeness Inventory; GSB, General Sense of Belonging Scale; IS, Irritation Scale; CT, Collection Task; PGG, Public Goods Game.

https://doi.org/10.1371/journal.pone.0307956.g001

Interpersonal synchrony in autism

Interpersonal synchrony is said to occur when two or more individuals are engaged in body movements or sensations (such as gaze, affect, voice and touch) at the same time (precise synchrony) or as a responsive behavior (lagged synchrony) [ 17 , 18 ]. This form of interpersonal relationship has at times been called “togetherness”, especially when there is no leader or follower in this form of non-verbal behavioral sharing [ 19 ]. Body movements may be connected spatially (spatial synchrony), temporally (rhythmic synchrony) or qualitatively (effort synchrony) [ 20 , 21 ]. Interpersonal synchrony can emerge intentionally as a result of an explicit shared goal, such as running at the same pace, or spontaneously as part of a social interaction without the participants being aware of it, such as when dancing, singing or playing an instrument [ 22 ]. Studies over the last ten years have shown that autistic children and adults demonstrate poor ability to synchronize their body with a partner during rhythmic interaction activities and games [ 11 , 12 , 23 – 26 ]. For example, an innovative study focusing on motion synchronization during an open-ended joint improvisation paradigm dubbed the Mirror Game found that cognitively-abled autistic adults exhibited much less interpersonal synchronization than neurotypical individuals [ 23 ]. Autistic adolescents also presented less interpersonal synchrony, and fewer intentional and spontaneous motor interactions, on a pendulum coordination paradigm [ 12 ].

The effects of interpersonal synchrony on prosociality

Prosociality refers to behaviors that are intended to benefit others, such as helping, cooperating, sharing and donating [ 27 ]. Recently, there has been increased interest in studying the causal influences of interpersonal synchrony on prosociality in general [ 28 , 29 ] and specifically in work environments [ 30 , 31 ]. A meta-analysis of 60 studies that compared an interpersonal synchrony condition to at least one control condition found a medium-sized effect of interpersonal synchrony on the prosociality of neurotypical adults with regard to both attitudes and behavior towards individuals or groups involved in the same intervention [ 28 ]. Another meta-analysis of 42 studies suggested that synchrony in larger groups increases prosociality [ 29 ]. Other studies that have assessed the effects of interventions based on mirroring movement in autistic adults suggest that interpersonal synchrony may be an effective intervention tool for enhancing social skills [ 32 ] and specifically emotional inference [ 33 ], which is a fundamental skill underlying prosociality. Manders et al. [ 34 ] argued that the structured mirroring task, which encourages interpersonal synchrony such as in the case of the Mirror Game, may not be sufficient to effectively support social engagement in autistic individuals, since they tend to simply follow the instructions to synchronize their movements. These findings emphasize the need to include synchronized sensory-motor interactions in tailored interventions to foster prosociality, while proposing both structured intentional synchrony and less structured spontaneous synchrony [ 35 ]. To the best of our knowledge, no study has examined the effectiveness of group interpersonal synchrony on the prosociality of autistic individuals in the context of a social work environment. 
Since working teams’ cognitive performance declines in stressful environments [ 36 ], in the following section we review research on the associations between interpersonal synchrony and work-related stress.

The effects of interpersonal synchrony on work-related stress

Work-related stress is defined as individuals’ responses when presented with work demands and pressures that are misaligned with their knowledge or abilities and which challenge their ability to cope [ 37 ]. A recent longitudinal field experiment in a neurotypical work environment found that a 9-week synchronous movement intervention reduced work-related stress and sick days immediately following the intervention, but not after three months [ 30 ]. A qualitative evaluation of workplace choir singing in Health Service staff reported greater enjoyment at work and an improvement in work engagement [ 31 ]. One possible explanation is that participating in synchronous activities may release endogenous opioids such as endorphins, which are involved in the regulation of emotional pain [ 38 ]. While these studies support the widespread belief that togetherness and harmony have only beneficial effects on personal wellbeing, a study by Galbusera et al. [ 39 ] suggested that interpersonal synchrony, when implemented as a dyadic movement task designed to encourage spontaneous social interaction, could decrease people’s ability to self-regulate due to their extra reliance on the dyadic interaction. This may imply that an effective intervention to support autistic individuals in stressful work environments should include co-regulation with others in a group setting rather than in an intimate dyadic framework that might be less efficient in terms of stress regulation. To the best of our knowledge, no studies have examined the effectiveness of interpersonal synchrony on the work-related stress of autistic individuals in the context of a group.

The effects of interpersonal synchrony on social closeness and a sense of belonging

According to McNeill’s [ 40 ] Muscular Bonding hypothesis, group interpersonal synchrony, which involves coordinated rhythmic movement, is a powerful force uniting human groups and enhancing social closeness and a sense of belonging. Social closeness is defined as the way individuals engage in friendships, interact and relate to significant others [ 41 ]. Social closeness in a workplace setting was enhanced using an intervention in which participants shared synchronous movement [ 30 ]. Sense of belonging is the experience of personal involvement in a system or environment, which makes individuals feel that they are an integral part of it [ 42 ]. Interpersonal synchrony has been found to enhance people’s sense of belonging in an experimental setting using a finger tapping task [ 43 ]. Furthermore, these feelings of social closeness and belongingness among individuals are known to promote prosocial behavior [ 44 ] as well as the ability to cope with various stressors [ 45 ].

Moderating factor underlying the effects of interpersonal synchrony

The need to belong is characterized by the drive to be noticed, valued and accepted by others [ 46 ]. It is a fundamental need that exerts a strong influence in virtually every domain of social behavior starting early in development [ 47 ]. While all individuals differ in the strength of their desire to belong to social groups, the social motivation hypothesis argues that autistic individuals’ need to belong is diminished because they find social stimuli less rewarding as compared to neurotypicals [ 4 ]. For example, Brezis et al. [ 23 ] found that autistic adults who played the Mirror Game with an expert improvisor were less likely to want to continue playing, but their ratings of how they felt during the game did not differ. Hence, group interpersonal synchrony may be more effective when the need to belong is higher, since the intrinsic motivation is likely to have a positive effect on the sense of belonging to the group. The suggested moderating model is depicted in Fig 2 .

https://doi.org/10.1371/journal.pone.0307956.g002

The aim of this study is to assess the effects of group interpersonal synchrony on the prosociality and work-related stress of young autistic adults in their work environment. The primary objective is to determine whether a synchronized group intervention will have an immediate and/or long-term positive effect on participants’ prosociality and work-related stress. The secondary objectives are to determine: (1) Whether a synchronized group intervention will have an immediate and/or long-term positive effect on participants’ social closeness and sense of belonging, (2) Whether this effect will be influenced by participants’ need to belong as reported before the intervention, (3) How participants perceive the intervention as affecting their prosociality and work-related stress, and (4) In what ways the participants’ perception will contribute to a better understanding of the intervention effect.

Materials and methods

A convergent mixed methods design will be applied, where quantitative and qualitative data are collected and analyzed in parallel. The use of mixed methods may provide a better understanding of the study objectives. Specifically, a convergent design will be used [ 48 ] to gain a more in-depth understanding of the quantitative results by incorporating the participants’ qualitative perspectives. The two sets of results will be compared to obtain a more complete understanding of the study objectives by validating one set of findings with the other and ensuring that the participants’ perspectives are consistent and not dependent on the research method.

Participants

This study will be conducted in collaboration with the Roim Rachok Program (RRP), an innovative Israeli civilian program designed to integrate cognitively-abled young autistic adults first into the Israeli army work force, and later into the labor market [ 6 ]. As part of the program, before joining the army, the trainees take part in a 13-week professional course delivered by an interdisciplinary team that includes army officers and instructors, an occupational therapist, a speech therapist and an art therapist. The course content is composed of two main fields: (a) army-related professional instruction (e.g., software programming, data analysis and image interpretation), and (b) integration into the army working environment. Trainees who successfully complete the course are inducted into the army as civilian volunteers for a four-month trial period. The trial period is designed to allow trainees to experience their future working environment before committing to long-term army service. Trainees who successfully complete the trial period can sign up for one more year of service with an annual option of extension, for up to three years. The RRP participants are integrated into the army corps with other soldiers who do not have developmental disabilities.

The sample for this study will be comprised of young adults enrolled in the RRP who fulfill the following criteria. Inclusion criteria: (a) aged 18 to 25 when accepted to the program, since these age limits are defined as part of the admission requirements for the RRP; (b) diagnosis of autism spectrum disorder: participants must have an official diagnosis of an autism spectrum disorder as assessed by a child psychiatrist or clinical psychologist according to the DSM-V [ 1 ] before acceptance to the RRP. Exclusion criteria: trainees with severe sensory impairments such as blindness or deafness and/or severe physical disability.

Ethical considerations

This study has been approved by the ethics committee for human experiments of the Faculty of Social Welfare and Health Sciences at the University of Haifa (Approval # 017/21). Participation in the study will be voluntary without preconditions. All participants will receive a letter explaining the study and how it will be conducted, and will sign an informed consent form to participate in the study. Participants who have a custodian will be required to present written custodian approval as well. No risks or discomfort are expected as a result of participation in this study. However, if any physical or emotional discomfort is experienced, the participants will receive appropriate support from the RRP professional staff. The data will be coded to protect the participants’ confidentiality. All data, including the questionnaires, the behavioral task outcomes and audio recordings, will be stored in a secure cloud and will not be available to anyone other than the research team. The audio recordings will be deleted immediately after being transcribed.

Sample size

A meta-analysis on the prosocial consequences of interpersonal synchrony among neurotypical adults found a medium-sized effect of interpersonal synchrony on prosociality with regard to both attitudes and behavior towards individuals or groups involved in the same intervention [ 28 ]. Therefore, a medium-sized effect of f = 0.25 may be expected [ 49 ]. An a priori power analysis using the G*Power computer program [ 50 ] indicated that a total sample size of 40 participants would be needed to detect medium-sized effects defined as f = 0.25, with 80% power and alpha at .05, using a repeated-measures, within-between interaction ANOVA with four groups and three measurements. We will recruit at least N = 60 participants (30 in each intervention group) to allow for possible dropouts.
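As a rough cross-check on this calculation, the required sample size for a plain two-group post-test comparison at the equivalent effect size (for two groups, Cohen's d = 2f = 0.5) can be computed with the standard normal-approximation formula. This is a simplified stdlib-only sketch, not the repeated-measures calculation performed in G*Power:

```python
from math import ceil
from statistics import NormalDist


def n_per_group_two_sample(d, alpha=0.05, power=0.80):
    """n per group for a two-sided two-sample z-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2, rounded up."""
    z = NormalDist()
    return ceil(2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2)


# Cohen's f = 0.25 corresponds to d = 2 * f = 0.5 for two groups;
# a plain independent-groups post-test would need about 63 per group.
```

The gap between this figure and the 40-participant total from the repeated-measures calculation illustrates the efficiency gained from the design: three correlated measurements per person sharply reduce the error variance of the within-between interaction test.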

Participant recruitment

A total of 60 participants will be recruited at the beginning of a program cycle from approximately four consecutive program cycles, each of which is composed of 20 to 30 trainees. A new program cycle starts every four months. The first author will be responsible for the study approval procedure at the beginning of the course while providing information about the study to all trainees and inviting them to participate. The trainees’ decision to participate or not in the study will not affect their participation in the course in any way. The trainees who agree to take part in the study will sign an informed consent form that clarifies that participation in the study is on a voluntary basis and that anonymity and confidentiality are guaranteed as well as the right to withdraw at any time without this affecting their participation in the program. Participants who have a custodian will be required to present the custodian’s approval as well. This form will also include information about the data collection procedures, including data from the RRP personal file.

Study design

This study will implement a randomized controlled trial (RCT) design to investigate a synchronous or non-synchronous movement-based group intervention, so that each participant will only engage in one intervention condition, alongside a phenomenological study with the participants of both groups. The RCT will compare the experimental group (synchronous movement) with a treated control group (non-synchronous movement) and will follow the recommendations of the Consolidated Standards of Reporting Trials (CONSORT) [ 51 ] and the recommendations of the Standards for Reporting Qualitative Research [ 52 ]. In addition, the SPIRIT guidelines [ 53 ] have been followed. Fig 1 outlines when each study component occurs. This design does not include a nontreated control group, to meet the program’s requirement that all trainees participate in all course activities, including the movement-based group intervention. The study was registered with ClinicalTrials.gov (registration number: NCT05846308) after the enrolment of participants started, due to the uncertainty about the clinical character of the target population and intervention. The authors confirm that all ongoing and related trials for this intervention are registered.

The condition groups will be matched according to the following control variables: (a) gender (male/female), since there is a diagnostic gender bias with respect to ASD [ 54 ]; (b) army-related profession (image interpretation/GIS/data analytics/software programming/electro-optics/data annotation); (c) handedness (right-handed/left-handed), since left-handedness is more prevalent in the ASD population than in the general population [ 55 ]; and (d) need-to-belong level (NTBL) (high/low). To control for handedness and NTBL during the randomization process, the following questionnaires will be administered to the participants: (a) the Hebrew adaptation of the Edinburgh Handedness questionnaire [ 56 ], the most common method for measuring handedness in autistic individuals [ 57 ]; and (b) the Hebrew adaptation of the Need to Belong (NTB) Scale [ 59 ], which comprises ten items rated on a 5-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (5). This scale has acceptable interitem reliability (Cronbach’s alpha generally exceeds .80) and has been positively correlated with extraversion, agreeableness and neuroticism, and with having an identity that is defined in terms of social attributes [ 58 ]. NTBL (high/low) will be calculated using a cut-off on the NTB measure, such that participants whose NTB scores fall in the top 50% will be assigned a high NTBL and the rest will be assigned a low NTBL. After inclusion in the study and assessment of the control variables, participants will be assigned by the first author to one of the intervention conditions on an individual basis according to a computer-generated randomization procedure. The allocation ratio of the intended numbers of participants in the comparison groups will be 1:1, so that the number of participants assigned to the synchronous intervention will be similar to the number assigned to the non-synchronous intervention.
To this end, we will first create subgroups containing participants with the same gender, army-related profession, handedness and NTBL. For each subgroup, alternate assignments will be made in random order. The order of subgroup assignments will also be determined randomly. For the first subgroup, we will use a simple coin-flip randomization procedure to determine which intervention type to assign first. An overview of the study design is shown in Fig 3 .
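One possible reading of this stratified assignment procedure can be sketched as follows; the field names, the `ntbl` helper's tie-breaking rule and the decision to carry the alternation across subgroups are illustrative assumptions, not part of the protocol:

```python
import random
from collections import defaultdict


def randomize(participants, seed=None):
    """Stratified 1:1 assignment: group participants by the four control
    variables, process strata in random order, shuffle within each stratum,
    and alternate conditions starting from a coin-flipped condition."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        key = (p["gender"], p["profession"], p["handedness"], p["ntbl"])
        strata[key].append(p["id"])
    keys = list(strata)
    rng.shuffle(keys)            # random order of subgroup assignment
    conditions = ("synchronous", "non-synchronous")
    idx = rng.choice((0, 1))     # coin flip for the first assignment
    assignment = {}
    for key in keys:
        ids = strata[key]
        rng.shuffle(ids)         # alternate assignments in random order
        for pid in ids:
            assignment[pid] = conditions[idx % 2]
            idx += 1
    return assignment


def ntbl(ntb_scores):
    """Median split of NTB scores into high/low NTBL (ties go to 'high')."""
    cut = sorted(ntb_scores.values())[len(ntb_scores) // 2]
    return {pid: ("high" if s >= cut else "low") for pid, s in ntb_scores.items()}


# Demo: eight hypothetical trainees spread over four strata.
demo = [{"id": i, "gender": "M" if i % 2 else "F", "profession": "GIS",
         "handedness": "R", "ntbl": "high" if i < 4 else "low"} for i in range(8)]
groups = randomize(demo, seed=7)
```

Because the alternation runs across subgroups, the two condition counts can never differ by more than one, which keeps the overall allocation at (or very near) the intended 1:1 ratio.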

https://doi.org/10.1371/journal.pone.0307956.g003

The interventions will be conducted by the first author, who is a professional dance movement therapist, as part of the RRP course, once a week for the duration of the 10 weeks of each program cycle. A graduate of the RRP will take part as a co-instructor for modelling and assistance. Each intervention group will have between 10 and 14 participants. Since autistic participants may feel anxious without a predictable structure, a structured physical training protocol will be used for each condition. The protocols were designed by the first author with the help of a professional physical trainer and after consulting with dance movement therapists who have research experience in the area of synchrony interventions in ASD. Each protocol is composed of 10 physical training sessions, each lasting 60 minutes. The protocols differ in terms of synchronous vs. non-synchronous activity. They do not differ in terms of physical exercise type or duration, in order to control for the effect of exercise type and duration on the dependent variables. Some exercises include the use of a training mattress, an elastic band or a sponge ball. To allow for some flexibility in terms of spontaneous synchrony, each session includes a social engagement segment, either as a warm-up or as a social game. A Tabata Timer [ 59 ] will be used to break the workout into clearly defined intervals during the main training segment. A metronome beat (every second) will be used to help the participants do the exercises at the right pace.
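For illustration, the interval structure such a timer produces can be sketched as below; the classic Tabata pattern (eight rounds of 20 s work and 10 s rest) is an assumption here, since the protocol does not specify the interval lengths:

```python
def tabata_schedule(rounds=8, work_s=20, rest_s=10):
    """Return a list of (phase, seconds) intervals for the main training
    segment, alternating timed work and rest periods."""
    schedule = []
    for _ in range(rounds):
        schedule.append(("work", work_s))
        schedule.append(("rest", rest_s))
    return schedule


# The classic cycle: 8 x (20 s work + 10 s rest) = 240 s in total.
```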

Synchronous condition

The instructors and the participants will form a circle facing each other while doing the physical exercises as shown in Fig 4 . To facilitate interpersonal synchrony, the participants will be instructed to do the same physical exercises (spatial synchrony) together at the same pace (rhythmic synchrony). The participants will not be instructed to use the same amount of effort, to accommodate a range of strengths and intensities.

https://doi.org/10.1371/journal.pone.0307956.g004

Non-synchronous condition

The participants will do the same physical exercises as the participants in the synchronous group, but in the form of circuit training with seven stations. The circuit training will require the participants to do a different physical exercise at a different pace at each station. A detailed description of the exercises will be provided for each station. The instructors will demonstrate all the exercises before the beginning of training. The participants will be instructed to do the exercise for a set period of time, the same duration used in the synchronous group. The circuit training stations will be arranged in a circle, as shown in Fig 5 , but will be set up so that the participants do not face each other while exercising, to prevent spontaneous synchronization. Depending on group size, the participants will work either as dyads or as single trainees at each station. In the dyad form, one participant will do the exercise while the partner rests, and vice versa. In the single form, each participant will exercise as though a partner were present. The participants will be instructed to change partners each week so that each participant will have the opportunity to experience both the single and dyadic forms, with different partners.

https://doi.org/10.1371/journal.pone.0307956.g005

Intervention protocol

Each session will consist of four parts as listed in Table 1 .

https://doi.org/10.1371/journal.pone.0307956.t001

Quantitative data collection: Background and control variables

Background and control data will be collected for each participant from the RRP personal file which contains demographic and medical information obtained by the program staff using dedicated questionnaires. The questionnaires are handed out routinely to all RRP trainees during the program selection process, a few months before admission to the program. These cover information such as demographic data (e.g. age, gender, country of birth, residential area, number of siblings and birth order), medical data (e.g. diagnosis, age at diagnosis, comorbidities, use of psychiatric medication and HMO disability bracket) and physiological data (e.g. body weight and height).

Quantitative data collection: Outcome variables

Quantitative data on the following outcome variables (as listed in Table 2 ) will be collected for each participant using validated questionnaires and behavioral tasks at three time points: before the first intervention session, after the last intervention session (pre-post) and 17 weeks after the end of the intervention period (follow-up). The data collection procedure will be administered by the first author in a group session composed of the same participants as the intervention group, in a classroom at RRP facilities. This procedure will be composed of two parts: (a) a group behavioral task, which will be recorded using one video camera; and (b) questionnaire administration via online software (Qualtrics).

https://doi.org/10.1371/journal.pone.0307956.t002

Social closeness.

Social closeness will be measured using the Inclusion of Other in Self Scale (IOS) [ 60 , 61 ]. This scale is made up of seven Venn diagram-like pictures in which one circle represents the participant and the other circle represents the entire intervention group. The diagrams, ranging from no overlap to near-complete overlap, measure close relationships between self and the group by capturing aspects of both feeling close and behaving close. The scale has high reliability for the friendship subgroup sample (Cronbach’s alpha .92) and a high test-retest correlation (r = .86, n = 31). Significant correlations have been found between the IOS and closeness measures that are primarily verbal and multi-item [ 60 ].

Friendship closeness.

Friendship closeness will be measured using the Hebrew adaptation of the Friendship Closeness Inventory (FCI) [ 41 ]. The FCI is composed of 49 items that measure closeness in same-sex friendships and is divided into three distinguishable yet related subscales: Emotional Closeness (EC), Behavioral Closeness (BC), and Cognitive Closeness (CC). For the purposes of the present study, the first item will be adjusted to refer only to participants in the intervention group: “Do you have friends among the participants in the physical training group whom you consider to be ’close friends’?”. The scale has high reliability for both the total score (Cronbach’s alpha .94) and the subscale scores (Cronbach’s alpha .91 for EC, .93 for BC and .87 for CC) [ 41 ].

Sense of belonging.

Sense of belonging will be measured using the Hebrew adaptation of the General Sense of Belonging Scale [ 62 ]. This scale is composed of 12 items that measure sense of belonging (achieved belongingness), rated on a 7-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (7). Six items measure a sense of acceptance and inclusion; for example: I have a sense of belonging . The other six items measure a sense of rejection and exclusion; for example: I feel like an outsider . For the purposes of the present study, the scale will be adjusted so that the words “other people” or “others” are replaced by “participants in the physical training group”; for example: “When I am with participants in the physical training group, I feel included”. The scale has high reliability (coefficient alpha = .94; average inter-item correlation = .57; M = 66.3, SD = 13.5), and strong patterns of validity estimates have been established [ 62 ]. The original version of the General Sense of Belonging Scale [ 62 ] will also be used to validate the adapted one.
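Scoring such a scale requires reverse-coding the six rejection/exclusion items so that a higher total always indicates a stronger sense of belonging (on a 7-point scale, a response r becomes 8 - r). A minimal sketch, in which the positions of the reversed items are assumed for illustration:

```python
# Hypothetical 0-based positions of the six rejection/exclusion items.
REVERSED_ITEMS = {1, 3, 5, 7, 9, 11}


def gsbs_score(responses):
    """Total score for 12 items rated 1-7; rejection/exclusion items are
    reverse-coded (r -> 8 - r) so that a higher total always reflects a
    stronger sense of belonging."""
    if len(responses) != 12 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected 12 responses on a 1-7 scale")
    return sum((8 - r) if i in REVERSED_ITEMS else r
               for i, r in enumerate(responses))
```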

Work-related stress.

Work-related stress will be measured using the Hebrew adaptation of the Irritation Scale [ 63 ]. This scale comprises eight items, three of which assess cognitive irritation and five of which assess emotional irritation. Items are rated on a 7-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (7). For the purposes of the present study, the scale will be slightly adjusted in that the words “at work” will be replaced by “in the course”; for example: “Even at home I often think of my problems in the course”. The scale has high reliability (Cronbach’s alpha .89). For the German version, a large number of studies report correlations between this scale and stressors at work, further impairments such as psychosomatic complaints or depression, and missing resources such as social support [ 63 ].

Prosociality.

Prosociality will be measured using the following behavioral group cooperation tasks, both of which resemble the classic N-person cooperation dilemma as a “collective action problem of a common good” as described by [ 64 ].

1 . Collection task . Participants will have to work together to pick up 100 small washers (flat plastic discs, 4 cm in diameter), as described in [ 65 ]. The participants will be asked to remain quietly in their seats for 5 minutes while the washers are randomly scattered in a rectangular structure on one side of the classroom. Then the participants will be asked to pick up one washer at a time and put it in a small basket located on the other side of the classroom in the shortest time, until all the washers have been collected by the group. Only then will they be able to return to their seats and remain quietly seated for another 5 minutes. Participants will be asked to abstain from running, to avoid collisions and maintain group safety. In this task, the act of gathering washers embodies the concept of the common good. Participants must collect all the washers and place them in a small basket before returning to their seats. However, the temptation to free ride, relying on others’ efforts, illustrates the collective action problem. While it benefits both individuals and the group to gather the washers, the impact of any single person’s contribution is minimal and difficult to discern. This dynamic creates an incentive to conserve energy by abstaining from action and allowing others to undertake the task. Participants must work together toward a common goal (collecting all the washers in the shortest time); therefore, there is likely no incentive to compete. As in previous social loafing paradigms [ 66 ], cooperation on this task will be operationalized by participants’ effort to overcome the incentive to conserve energy, represented by their step rate (number of steps per second). The number of steps will be measured using a wearable fitness tracker (Fitbit Inspire 2) that will be attached to the participants’ wrists using a special band. Fitbit fitness trackers are commonly used in clinical trials [ 67 ].
Each tracker will be customized with participants’ personal height and weight to ensure step rate accuracy. The collection procedure will be recorded using one video camera to control task execution. Since cooperation on this task will depend on the participants’ physical fitness and might be influenced by group pressure, we will also use a non-physical confidential task to measure pure voluntary cooperation.
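The step-rate measure itself reduces to dividing the steps accumulated during the task by the task duration. A sketch, assuming cumulative step counts are read off the tracker at the start and end of the task:

```python
def step_rate(steps_at_start, steps_at_end, duration_s):
    """Steps per second over the collection task, computed from the
    tracker's cumulative step count at task start and task end."""
    if duration_s <= 0:
        raise ValueError("task duration must be positive")
    return (steps_at_end - steps_at_start) / duration_s
```

For example, 360 steps accumulated over a 4-minute task yields a rate of 1.5 steps per second.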

2 . Public goods game . Participants will be told they will be given 30 NIS and that they can donate some or all of it to a group investment, as described in [ 68 ]. The money in the group investment will then be doubled and divided equally among all members of the group. The money earned in this game will be compensation for participating in the study. In contrast to the collection task, participants’ contribution will be confidential to avoid group pressure. Cooperation on this task will be operationalized by the amount of money each participant decides to donate to the group investment.
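The payoff structure of this game can be written out explicitly: with endowment e and multiplier m, each participant keeps e - d_i and receives an equal share of the multiplied pot, so payoff_i = e - d_i + m * sum(d) / n. A minimal sketch:

```python
def public_goods_payoffs(donations, endowment=30, multiplier=2):
    """Each participant keeps (endowment - donation) and receives an equal
    share of the multiplied pot: payoff_i = e - d_i + m * sum(d) / n."""
    n = len(donations)
    share = multiplier * sum(donations) / n
    return [endowment - d + share for d in donations]
```

With three players donating 30, 15 and 0 NIS, the payoffs are 30, 45 and 60: the free rider earns the most individually, which is exactly the tension between individual and collective interest the task is designed to capture.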

Qualitative data collection

Qualitative data will be collected from the participants until data saturation is achieved, using semi-structured interviews. After each intervention cycle, all participants will be asked to volunteer to be interviewed. The interviewees will be chosen from the pool of volunteers based on the following criteria: (a) at least one participant from each intervention group; (b) at least one participant from each army-related profession; (c) high verbal self-expression: the participant must have been appraised by army officers as being able to verbally express thoughts, emotions and views. The interviews will be carried out one to two weeks after the end of the intervention cycle. The interview guide for the semi-structured interviews will be composed of 11 questions ranging from the most general (e.g., “How did you experience the course in general?”) to the most specific (e.g., “How did the group physical training sessions affect your experience during the course: (a) with respect to your sense of belonging; (b) with respect to friendship closeness; (c) with respect to your sense of involvement and cooperation; (d) with respect to your work-related stress?”). In order to merge the qualitative and quantitative data, the qualitative questions in the interview guide will be congruent with the quantitative objectives of this study. To motivate the participants to discuss their perspective and express their emotions, two of the 11 questions will be administered using a set of projective cards. This set of cards contains 13 words and 17 pictures reflecting associations with the word “group”. On the first question, the participants will be asked to choose the word that best describes their relationship with other members of the intervention group (e.g., “coherence”, “loneliness” and “opportunity”) and to explain why they chose this specific word.
On the second question the participants will be asked to choose the picture that best depicts their experience during the group intervention process and to explain why they chose this specific picture. The use of semi‐ambiguous words and pictures is less challenging for individuals on the autistic spectrum than the presentation of more ambiguous stimuli. The interviews will be conducted by the first author and will be audio recorded and fully transcribed before analysis.

Data analyses

To address the five study objectives (see the Introduction above), the following data analyses will be conducted:

Quantitative data analysis.

The data obtained for the study variables will be coded, processed and analyzed at the end of the collection process. IBM SPSS Statistics Version 27 will be used for the statistical analysis. To control the family-wise error rate when performing multiple hypothesis tests, the Bonferroni step-down (Holm) procedure will be applied. Participants must attend at least 4 sessions to be included in the analyses. To describe the sample, descriptive statistics will be calculated, followed by univariate and bivariate statistical analyses. The first step will be to assess whether the data follow a normal distribution for each of the variables, which will guide the choice of parametric or non-parametric tests. To answer the primary study objective and the first secondary study objective, the effects of the intervention over time (at the end of the intervention and after 17 weeks) will be examined using a two-way mixed-model ANOVA. To explore the second secondary study objective, the influence of the need to belong on the effect of the intervention over time (at the end of the intervention and after 17 weeks) will be examined using a three-way mixed-model ANOVA.
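The Holm procedure sorts the m p-values in ascending order and compares the k-th smallest (k = 0, 1, ...) against alpha / (m - k), stopping at the first non-rejection; it controls the family-wise error rate while being uniformly more powerful than the plain Bonferroni correction. A minimal sketch:

```python
def holm(pvals, alpha=0.05):
    """Bonferroni step-down (Holm): test p-values from smallest to largest
    against alpha/(m - k); once one fails, all larger p-values also fail.
    Returns a rejection flag for each hypothesis in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject
```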

Qualitative data analysis.

To investigate the participants’ experiences as related to the third secondary study objective, the semi-structured interviews will be analyzed using a thematic analysis approach [ 69 , 70 ] in which the data are systematically parsed to identify patterns of meanings (themes). The analysis will be performed using MAXQDA 2022.

Mixed methods analysis.

For data integration related to the fourth secondary study objective, a merged data analysis will be conducted based on side-by-side comparison of the quantitative findings with the interview data [ 48 ].

Study status

This study is currently ongoing. A feasibility assessment [ 71 ] of the intervention protocol was conducted to fine-tune the protocol by documenting the experiences of the participants with regard to the goals of each intervention condition (see Procedure above). Participants’ allocation to intervention conditions during the feasibility assessment was not randomized or controlled, and no data were collected. Full RCT implementation during three program cycles (starting in August 2021) has been completed successfully. The RCT implementation of the fourth program cycle is in process.

Serving in the army in Israel is mandatory at the age of 18 and represents the first step towards independent life outside of the home. Until recently, cognitively-abled autistic adults were automatically exempt from military service. The Roim Rachok civilian program aims to develop professional skills and tools to enable the successful integration of this population into an army work environment [ 6 ]. Recently, a new army program was developed to enable a full mandatory service for cognitively-abled autistic adults on a larger scale, and is currently being pilot tested. Clearly more needs to be known about interventions that promote successful integration in social and stressful work environments.

This study constitutes an innovative effort to empirically examine the use of group interpersonal synchrony as an effective intervention for young autistic adults who would like to integrate successfully into the Israeli army work force. It can advance the state of the art in interpersonal synchrony as an intervention for autistic individuals specifically, and for individuals who have difficulty adapting to social and stressful work environments in general. The findings are also expected to contribute to the evidence base on intervention options for young autistic adults. The results are likely to provide information on the relevance of synchronized as compared to non-synchronized interventions for individual wellbeing by incorporating the participants’ experiences. The findings can also help specify treatment guidelines for this population and enrich the future training and education of dance movement therapists, music therapists and other health care professionals working in the field of ASD.

The strength of this study lies in its ability to conduct an RCT in a field setting where the interventions are embedded in a plausible context: a program that aims to improve participants’ work-related soft skills. The trainees’ motivation to participate in this study is expected to be high, since both the intervention and data collection processes are embedded within the program schedule with no need for extra time-consuming activities (except for the individual interviews). These strengths are, however, linked to potential limitations in the generalizability of the quantitative data, owing to the moderate sample size and the heterogeneity of the population. We expect that merging the quantitative results with in-depth qualitative outcomes will facilitate the transferability of the findings to other contexts.

Supporting information

S1 Checklist. SPIRIT 2013 checklist: Recommended items to address in a clinical trial protocol and related documents*.

https://doi.org/10.1371/journal.pone.0307956.s001

https://doi.org/10.1371/journal.pone.0307956.s002

Acknowledgments

We thank the following individuals for their voluntary collaboration and support of this study: Tal Vardi (RRP Manager), Dr. Efrat Selanikyo (RRP Professional manager), Eyal Ron (RRP Course manager), Ori Dvir (computer science student) for helping develop the fitness tracker data retrieval software, Tomer Wasserman (certified physical trainer) for helping in the design of the intervention protocol, and Elad Shtayinmatz (RRP graduate) for helping administer the intervention protocol.

  • Open access
  • Published: 31 July 2024

“They don't have the luxury of time”: interviews exploring the determinants of public health research activity that contextualise embedded researcher roles in local government

  • Rachael C. Edwards   ORCID: orcid.org/0000-0003-4717-7615 1 ,
  • Dylan Kneale 1 ,
  • Claire Stansfield 1 &
  • Sarah Lester 1  

Health Research Policy and Systems, volume 22, Article number: 88 (2024)


Background

Embedded researchers are a novel intervention to improve the translation of research evidence into policy and practice settings, including public health. These roles are being implemented with increasing popularity, but they often lack clear evaluative frameworks. Understanding initial levels of research activity, including associated barriers and opportunities, is essential to developing theories of change and thus shaping the roles and defining expectations. We aimed to identify the principal determinants of research activity in public health that contextualise embedded researcher roles, including attributes of the embedded researcher themselves.

Methods

We undertook seventeen semi-structured interviews with embedded researchers in diverse public health settings in English local government. Interviews were analysed using thematic analysis.

Results

We identified thirteen interlinked determinants of research activity within local government public health settings. Research and interpersonal skills, as well as pre-existing connections and knowledge within local government, were highly valued individual attributes for embedded researchers. Resource deficiencies (funding, time, and infrastructure) were primary barriers to research activity, whereas a strong local appetite for evidence informed decision making presented a valuable opportunity. However, there were inconsistencies across public health teams relating to perceptions of what constituted “research” and the resources that would be required.

Conclusions

Our results suggest that successful embedded researchers will have equally strong research and communication skills and should be offered mentorship and clear career progression pathways. Perceptions of research within local government are closely linked to resource deficiencies and senior endorsement. Embedded researchers could benefit from taking the time to develop locally contextualised knowledge of this research culture. Theories of change for embedded researchers should conceptualise the interconnections across individual, interpersonal, and organisational barriers and opportunities underlying local government research activity. Further research is needed to identify methods for exploring the influence of embedded researchers as well as to unpack the stages of research activity within local government and the associated behaviours.

Peer Review reports

In recent decades, researchers have been increasingly interested in the mechanisms underlying the translation of research into practice [ 1 , 2 ]. This concern was born largely from the recognition of an enduring research-implementation gap whereby academic research is not translated into decision-making within policy and practice settings [ 3 ]. In public health, bridging this gap is critical to ensuring that depleted public funds are efficiently allocated to address rising health inequalities. However, the processes through which research diffuses into public health decisions are highly complex, non-linear, and constrained by a variety of barriers [ 4 ].

The literature has identified a wide range of factors that inhibit research activity within public health settings [ 5 , 6 , 7 ]. For example, through a systematic scoping review of the literature on public health decision making processes, Kneale, Rojas-García [ 4 ] identified several barriers to research evidence use including a lack of access to and applicability of academic research. Similarly, the National Institute for Health and Care Research (NIHR) recently funded studies across several local authorities in England to identify barriers and enablers of research activity [ 8 ]. These studies identified a variety of constraints both internal and external to local authorities including capacity limitations, misalignment of research timelines between local government and academia, and a lack of consensus on what constitutes research. To address such obstacles and improve cultures of evidence use in policy and practice, embedded researchers are increasingly being adopted within public health and a variety of other settings as a novel intervention at the research-policy interface [ 9 , 10 , 11 ].

Drawing from existing definitions [e.g. 12 , 13 ], we have derived a set of principles for defining an embedded researcher [ 14 ]. These principles identify attributes shared by embedded researchers across diverse settings, while also embracing the variety of forms the roles can take. Broadly, embedded researchers hold roles that are co-affiliated with a research setting and a policy or practice setting to enable research activity and use. As such, they act as change agents, and their involvement is more than an occasional collaborative relationship: they engage continually with a host organisation that has joint influence over their aims and activities. In the context of embedded researchers, research activity should be conceptualised in the broadest sense to reflect the wide variety of activities they undertake (e.g., co-production, capacity building, supporting research use) [ 15 ].

Embedded researchers are well placed to address barriers to research activity in public health as they can, among other things, improve the local relevance of research and enhance local buy-in through becoming immersed and building trust within a host organisation [ 11 , 16 ]. In recognition of this value, researchers are increasingly being embedded as change agents within public health practice [ 14 ]. A growing number of examples have since emerged demonstrating how embedded researchers can activate incremental change in research cultures through, for example, “growing networks, becoming a local expert and champion, and enhancing evidence fluency (the skills needed to source and interpret evidence) or curiosity about evidence and research” [ 17 , 18 pg. 3, 19 ].

Despite their identified potential in public health, embedded researchers are still relatively novel in these settings, often quite exploratory and lacking clear objectives and monitoring frameworks [ 10 , 20 ]. Understanding initial levels of research activity, including associated barriers and opportunities, is essential to developing theories of change for embedded researcher interventions and thus shaping the roles and defining expectations. However, given the numerous determinants of research activity in public health, investigating the local research context to inform embedded researcher interventions has the potential to be a highly onerous process. As such, the capacity constraints common in public health settings [ 6 ] present a significant barrier to the efficient design of embedded researcher roles, including aims and expectations. Identifying the determinants of research activity which are likely to be most relevant to embedded researchers could streamline this initial investigation.

A growing body of work has explored barriers and opportunities underlying embedded researcher roles [ 9 , 11 , 21 ]. For example, Coates and Mickan [ 1 ] surveyed over 100 ‘embedded researchers’ in Australian healthcare organisations to identify challenges and opportunities. They found, for example, that research was not sufficiently valued within healthcare organisations, but that access to research colleagues and mentors was a primary enabler. Some of this work has focused on procedural barriers and opportunities for embedded researchers such as those relating to attributes of the roles and strategies for becoming embedded within a host team [ 9 , 22 , 23 ]. This body of research has significantly advanced our understanding of embedded researcher interventions, and many of the inhibiting and enabling factors align with the determinants of research activity within public health more generally. However, much of this work has emerged from clinical healthcare settings, and to our knowledge, there has been no comprehensive summary of the determinants of research activity in public health settings in the context of embedded researchers.

Through interviews with a diverse cohort of embedded researchers in English local government, we aim to identify the principal determinants of research activity in public health that contextualise embedded researcher roles. Among other benefits, investigating these attributes and the extent to which they present a barrier or opportunity to research activity at the inception of embedded researcher posts will assist with defining expectations, priorities, and theories of change for these interventions. We explore these determinants across three distinct, but connected layers of influence:

Individual: Attributes of the embedded researcher

Interpersonal: Attributes of the embedded researcher’s local government colleagues

Organisational: Attributes of the local government system

Organisational and interpersonal determinants of research activity have typically been grouped under “contextual factors” within the literature and, as such, distinct analysis of these layers of influence provides a unique perspective. We include individual attributes relating to the embedded researcher themselves such as their own knowledge, skills, and experience, as these factors intersect with attributes of the local government and should similarly be considered when defining objectives embedded researchers can be expected to achieve and identifying the types of support they will require in this process.

Case study: clinical research network research practitioner posts

For this research, we interviewed embedded researchers who were a part of a programme of work funded by the NIHR through its Clinical Research Network (CRN). Referred to as Public Health Local Authority Research Practitioners (PHLARPs), these embedded researchers were based across twenty-three English local authority (LA) public health settings who had bid into the CRN to receive funding for the posts (some PHLARPs were embedded in multiple LAs, while in other cases LAs supported multiple PHLARPs as part of job share arrangements). In the UK, LA public health teams set the direction of local policy and direct the delivery of local public health activity in cooperation with other LA departments and allied bodies such as Health and Wellbeing Boards [ 4 ].

The PHLARP roles were a part of a novel and exploratory set of interventions aimed at facilitating and enhancing public health cultures of research engagement and activity within local government. We undertook research to investigate various aspects of this programme. The present paper reports on one component of this broader programme of research [ 17 ].

The purpose of the PHLARP roles was to enable LA public health teams to build their research activity. The PHLARPs worked towards this overarching aim primarily through capacity building activity such as linking LA colleagues to research opportunities, building networks, supporting research projects, and facilitating training, although they also in some cases co-produced research with the LA [ 23 ]. In March 2020, the first two PHLARPs started in post and most of the remaining cohort followed in spring 2021. Originally, these positions were advertised as one-year contracts with a salary ranging between approximately £28,000–43,000. However, partway through this initial year, most posts were extended.

We conceptualise PHLARPs as embedded researchers as the structure and aims of the posts align with our set of principles defining embedded researcher roles [ 14 ]. In short, the roles were affiliated within a host LA team, while still maintaining links with an academic institution, such as a university or the CRN, and involved the broad aim of facilitating research activity. Despite being linked through overarching aims and objectives, the PHLARP roles were operationalised flexibly based on the local context including the local needs and priorities. This flexibility was an important aspect of the programme given the diversity of LAs in which PHLARPs were based. In practice, LA staff contributed to, and often led on the formulation of job descriptions, usually alongside an academic partner. The CRN-PHLARP programme thus presents a unique opportunity to explore contextual factors across embedded researchers with similar overarching aims but embedded within diverse LA settings.

Recruitment and semi-structured interview protocol

The CRN provided our research team with contact details for most PHLARPs that were funded through their programme. We pilot tested our interview schedule with one of these PHLARPs in November 2021. This initial interview was included within our final sample as the schedule did not change significantly. We then contacted the remaining PHLARPs about their potential involvement and carried out interviews in spring 2022. All interviews were conducted online through either Zoom or Teams. Prior to starting each interview, we (i) informed participants of the anonymity of their responses, (ii) attained informed consent for their participation, and (iii) requested their consent to the use of an audio recorder. This research was approved by a University College London research ethics committee (REC1540).

The first section of the interview consisted of gathering basic details about the PHLARPs’ roles including start date, contract length, weekly time allocation, and any shared responsibilities (e.g., if the post was a job share). We also collected details on their LA and academic affiliations. We then asked a range of open-ended, semi-structured questions revolving around (i) the research culture within the LA and how this had changed over the course of the PHLARP’s time in post, (ii) factors the PHLARPs perceived to have enabled or hindered their activities and influence on research culture, and (iii) skills the PHLARPs had relied upon or needed to develop during the post. Throughout the interviews, we asked PHLARPs to provide illustrative examples to provide further context to their responses.

Overview of participants

We conducted a total of seventeen interviews with PHLARPs which represents approximately seven-in-ten of all those originally recruited to the posts. All interviews were audio recorded, lasting an average of 49 min (range: 34–69 min), and manually transcribed.

PHLARP posts were highly diverse in relation to the structure of their roles and affiliations with academia and local government. At the time of our interviews, PHLARPs had been in their roles between six months and one and a half years. Of those interviewed, approximately half were full time for at least some of the duration of their post ( n  = 9). The remaining PHLARPs were part time, five of whom split their role as part of a job share. Just over half ( n  = 10) were primarily affiliated within a single layer of local government (e.g., London borough, city council) and the remainder had a remit to work across several local administrative units (e.g., a county council or different LAs). While most PHLARPs were affiliated primarily with a public health team or other department at a similar level within the LA, a few held strong connections with more senior bodies involved with overarching strategic decision making.

All PHLARPs held some level of dual affiliation across a LA and either a university or local CRN (one of our defining principles for embedded researchers), but the relative level of affiliation across these organisations varied substantially: a quarter ( n  = 4) were more closely affiliated with a university than the local government, half ( n  = 8) held weak affiliations with an academic organisation beyond the CRN, and just over a quarter ( n  = 5) held an equal level of dual affiliation across academia and local government. PHLARPs with greater levels of research experience generally held the strongest levels of affiliation with universities whereas those who were early in their career with respect to research tended to exhibit stronger levels of embeddedness within local government.

Data analysis

We analysed our transcripts through an inductive thematic analysis approach using NVivo qualitative analysis software (released in March 2020) [ 24 ], applying the guidelines of Braun and Clarke [ 25 ] for thematic analysis. While reviewing the transcripts for accuracy, we compiled an initial list of codes relating to the determinants of research activity within the LA. Through two additional rounds of data review, we added to and modified this initial list and merged codes into final themes and subthemes. Finally, these codes were grouped into hierarchical, interrelated levels of influence within the LA: Individual (attributes of the embedded researcher), Interpersonal (attributes of the embedded researcher’s LA colleagues), and Organisational (attributes of the local government system).

For each theme (i.e., determinant of research activity), we reviewed the associated text to explore the context in which it was discussed, as well as whether it was framed as an opportunity, a barrier, or both across PHLARPs. The primary analysis was performed by the lead author, with the second author double coding a randomly selected twenty-five percent sample of the transcripts. Any disagreements in the coding were discussed. Once coding was finalised, we calculated thematic frequencies.

Most PHLARPs in our sample were each connected with a distinct LA, but two individuals held their posts as a job share within a single LA. Nevertheless, we considered these PHLARPs to be distinct units of analysis for the purposes of calculating frequencies, as they brought unique sets of experience to their roles and reflected individual perspectives.
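The double-coding and thematic-frequency steps described above were carried out in NVivo, not in code; purely to illustrate the arithmetic involved, a minimal sketch might look like the following (all participant IDs, theme labels, and helper names such as `percent_agreement` are hypothetical, and percent agreement stands in for however coding disagreements were actually reconciled):

```python
from collections import Counter

# Hypothetical coding output: participant ID -> set of themes assigned.
coding_lead = {
    "P01": {"research_skills", "communication", "capacity_constraints"},
    "P02": {"communication", "funding_deficit"},
    "P03": {"research_skills", "capacity_constraints"},
    "P04": {"versatility", "capacity_constraints"},
}

# Thematic frequency: number of participants whose transcript was coded
# with each theme (the kind of "n = ..." count reported in the Results).
freq = Counter(theme for themes in coding_lead.values() for theme in themes)
print(freq.most_common())

# A second coder double codes a subset of transcripts (~25% in the study);
# disagreements are then discussed.
coding_second = {"P01": {"research_skills", "communication"}}

def percent_agreement(a: set, b: set) -> float:
    """Share of the combined theme assignments on which both coders agree
    (Jaccard overlap of the two coders' theme sets for one transcript)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

for pid, themes in coding_second.items():
    print(pid, round(percent_agreement(coding_lead[pid], themes), 2))
```

Raw percent agreement is only one option for such a check; chance-corrected statistics such as Cohen’s kappa are often preferred when two coders apply a fixed codebook.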

PHLARPs identified thirteen primary, interlinked factors that they perceived to underlie levels of research activity within LA public health settings (Fig.  1 ). These factors are likely to be highly relevant to embedded researcher roles and could thus inform embedded researcher activity and theories of change. For some factors, their framing as either a barrier or opportunity varied substantially across LAs. For most, however, responses were relatively consistent with regards to whether a factor presented a hindrance or an enabler of research activity. Although we have placed these determinants of research activity within multiple layers of influence on embedded researcher roles, our results also highlight many interconnections across factors and layers. These relationships suggest that effectively addressing barriers to LA research activity requires a holistic approach which embraces the inseparability of individual, interpersonal, and organisational determinants.

Fig. 1. Nested, interconnected determinants of research activity in public health settings that contextualise embedded researcher roles

Individual: attributes of the embedded researcher

At the level of an individual embedded researcher, PHLARPs described five primary attributes that affected their ability to influence LA research activity. Most of these factors related to knowledge and skillsets that were valued in the role, but PHLARPs also described aspects of their career stage and trajectory that should be considered in the design of embedded researcher posts.

First, possessing research skills and experience was regularly described as a primary enabling factor for PHLARPs ( n  = 15). Indeed, most participants held research degrees and possessed a strong level of research experience. PHLARPs emphasised the value of research skillsets to their roles, including experience in writing research bids, formulating research questions, and data collection and analysis methods. For example, a participant stated that, “ I’ve used my skills and knowledge around behaviour change, my background from my PhD, to try and make sure [our activity is] theory based ”. Skills in co-production and patient and public involvement were described as particularly valuable by several participants, with many working to further develop these skills while in post. Those with less research experience spoke about the steep learning curve they underwent in their roles, and many described the value of the mentorship and support they received through academic supervision and/or working with colleagues with complementary skillsets.

Also emphasised by most PHLARPs was the value of communication and interpersonal skills ( n  = 15). Participants described how much of their role revolved around networking, collaboration, public engagement, and promoting research opportunities. As part of these activities, participants described the need to influence colleagues to, for example, take up such opportunities and (particularly at the leadership level) support research activity. Understanding the local context, including capacity limitations and perceptions about research, and applying this knowledge was emphasised as key to effective communication. For example, a PHLARP described how,

“ I think communication has probably been the most important [skill]. […] A lot of the gains that have been made in terms of research culture in the team have been about understanding people's perceptions about research and challenging them and facilitating better conversations about research and making people feel comfortable to express ignorance or lack of understanding so that we can help. And making people feel comfortable ”.

Many PHLARPs described how much of their time was spent translating information between diverse stakeholders, such as academic institutions and the LA, who often differ significantly in their language and priorities. Research and communication skills were often jointly applied, such as when engaging in co-production or public engagement.

Several PHLARPs spoke about the value of pre-existing knowledge of, and connections within, the LA ( n  = 11), which varied substantially across participants. While some were already embedded within the LA team prior to starting in the post, others had no prior experience working with LAs, such as one PHLARP who said, “At first, I didn't understand a single word of what anyone spoke. [LAs are] like an alien place completely ”. Knowledge of the systems and language enabled PHLARPs to navigate the LA and its many layers and complex decision-making processes. Furthermore, understanding the various roles within the LA enabled PHLARPs to efficiently direct and make research enquiries. Those with strong pre-existing connections discussed how these links sped up the processes of establishing their role and building trust, thereby accelerating their influence. For example, a PHLARP described the value of their connections in saying, “ I’m well networked into colleagues right across the Council. I know how to engage people, and what to do if I’m unable to engage people, how to influence on that to move things forward. That's been really useful ”.

Versatility was also emphasised by over half of the PHLARPs as an asset in their role ( n  = 10). These PHLARPs described the significant day-to-day variation within their role and often identified an enjoyment of this diversity. “ No two days are the same ” and “ I get bored easily ” were common sentiments. A PHLARP described this in saying how, “ it was just wonderful to be able to have something that would always keep me on my toes. Really diverse, incredibly enjoyable. Constantly learning different things ”. Related skills were also identified under this theme including autonomy and adaptability.

Nine PHLARPs voiced how the career stage and trajectory of embedded researchers are important elements to consider in the design of their posts. In particular, early career researchers require enhanced support, mentorship, and career development opportunities (e.g., publication opportunities). Many PHLARPs who entered their posts at a later career stage held permanent academic posts to return to when their LA roles came to an end. This was not the case for most early career PHLARPs. Partly because their livelihoods relied upon contract extensions, some of these early career PHLARPs expressed a heightened sense of urgency to prove themselves in the role and demonstrate influence. These early career PHLARPs viewed the exploratory nature of the roles and the lack of clear objectives with greater apprehension than their more established colleagues. The short-term nature of PHLARP contracts also presented a barrier to the establishment of trust with LA colleagues.

Interpersonal: attributes of the embedded researcher’s local government colleagues

PHLARPs described four primary determinants of research activity at an interpersonal level relating to their LA team and colleagues (e.g., skills, resources, knowledge, experience, perceptions). Most prominent among these themes was the capacity (time) constraints of LA staff which limited their engagement with both PHLARPs and research activity more broadly ( n  = 16). These time constraints were exacerbated by the Covid-19 pandemic and its ongoing effects across the health system and beyond. For example, one PHLARP lamented that, “ it has been a tough gig doing [the role] during the pandemic because of the fact there's so many competing pressures on local authority public health teams ”. Because of these pressures, it took longer for the PHLARPs to forge relationships within the LA.

LA colleagues often displayed an initial lack of interest in research activity which was perceived not as a lack of enthusiasm for evidence use, but rather as a symptom of feeling overwhelmed with other responsibilities (e.g., service delivery). Several PHLARPs described how these capacity constraints were reflected in the job descriptions of LA colleagues: “ Their job descriptions do not have space for research, and since it is not in their job role, any research that they are trying to undertake, it's an add-on. And that seems to be the biggest barrier, that they don't have the resources and they don't have the time to do it ”. PHLARPs spoke about how these time constraints inhibited their colleagues from taking up research opportunities such as applying for research funding as there was often no one available to lead on a bid: “ The amount of time that it takes to set up a research project is what we need, at least six months […] But they don't have the luxury of time ”. This theme is strongly linked to a deficiency of LA research funding (an organisational barrier—see below).

An equivalent number of participants described how positive perceptions of evidence informed decision making can enhance uptake of research opportunities among colleagues ( n  = 16). Encouragingly, participants widely identified a strong appetite for evidence informed decision making within the LA and a widespread understanding of the value of research evidence to public health decisions. For example, a PHLARP indicated that their “ contacts in the public health team were very embracing of research and what it can do in a practical sense and then what value it can add to decisions being made by local authorities and councils ”. However, PHLARPs also described how some of their colleagues viewed research involvement as something that was necessarily time consuming and costly. As such, PHLARPs needed to communicate and frame research opportunities as adaptations to existing workloads rather than add-ons. Additionally, perceptions of what was meant by “research” were not always shared. For example, a PHLARP described how “ The feedback was that [“research”] was too specific and academic a word and it wasn't very applicable in the local authority setting ”. Variation in understandings of research again highlights the value of strong communication skills to PHLARP roles.

Third, many PHLARPs discussed how research activity was influenced by existing levels of research knowledge and skills within the LA ( n  = 13). These research skills were described as varying widely both across and within LAs. For example, some PHLARPs described a strong level of research experience among their colleagues such as one participant who said, “ capacity is there in the sense that they are capable people. Most of them in public health, they have had their masters, so they've done a research thesis. Many have PhDs. Many supervise masters ”. Others, however, perceived that these skills were lacking within the LA, or existed only in pockets or on certain teams.

Finally, almost a quarter of PHLARPs identified trust in academia as an influencer of research activity within the LA, all of whom perceived that such trust was lacking ( n  = 4). These PHLARPs recounted examples from LA colleagues of negative prior experiences collaborating with academics such as cases where research did not benefit nor was shared with the LA. For example, a PHLARP described how, “ the mutual value isn't always clear. So, it might be a really good piece of research, but in practice, what does it actually mean for the Council in terms of the resources they have to put in and the benefit for them? I think sometimes true collaboration can be something that's a bit missing ”.

Organisational: attributes of the local government system

Across the wider organisational context, PHLARPs described four primary factors that they perceived to influence research activity. Funding (e.g., infrastructure, staffing) was particularly critical (n = 13). Unfortunately, almost all those who spoke about this factor identified a significant research funding deficit. For example, a PHLARP described how, “I think the City Council is low capacity because it's been underfunded, and it’s not been able to benefit from some previous calls for work in this area. I’d say it's behind”. This lack of funding filters down to exacerbate the capacity constraints identified at the interpersonal level and reflects the current trend of austerity, evidenced by a PHLARP who said, “the general financial climate within local authorities has meant that we’ve had a number of significant restrictions over the last year or two within the Council which are responding to local government finances”. A few PHLARPs raised concerns about growing research funding inequities across LAs and the difficulties they experienced when competing for research funding with capacity-rich LAs that had more established cultures of research.

Second, many PHLARPs identified how research activity was influenced by the extent of alignment between the priorities and expectations of academia and the LA ( n  = 14). Most PHLARPs perceived that a stronger level of alignment was needed. A PHLARP described this in saying, “ everyone has their own agenda. The County Council is pushing a report or audit, or something to do with public health. Whereas the university just want to get publications out there […] There’s that misalignment in a sense and that makes it really hard to draw everyone together and get everyone working on the same page ”. Beyond misalignment of research priorities, PHLARPs also described differences between the research expectations of LAs and academic institutions such as timelines and ethics procedures. Given the fast pace and responsive nature of LAs, one PHLARP described how, “ The robust methodical planning and the timing of things [in academia] doesn't always align ”.

Several PHLARPs described how strong senior leadership and endorsement of research can benefit research activity through a variety of pathways (n = 11). “You need that consistent leadership at the very top”, emphasised one PHLARP. PHLARPs identified how a lack of senior leadership can lead to concerns that time spent on research activity will not be valued. Perceptions of and appetite for research among senior colleagues would also filter down through the LA, as described by a PHLARP who said, “the culture within the Council is really top down. So, if the managers think doing research is expensive and time consuming, ultimately the people the manager is managing will think the same”.

Finally, PHLARPs emphasised the value of research infrastructure for enabling research activity ( n  = 10). This infrastructure was almost always described as lacking within LAs. Examples of such infrastructure included ethics forms and procedures, access to the literature, research enabling software (e.g., for referencing and analysis), and research strategies. Additionally, a few PHLARPs spoke about a deficiency of information on research activity (including relevant contacts) for those wanting to collaborate with the LA on research opportunities.

We identified thirteen primary determinants of research activity in LA public health settings across three interrelated layers of influence. Investigating which factors inhibit and facilitate research activity in a local setting presents a critical step in the design of embedded researcher posts as this information serves to contextualise the roles and informs the design of evidence-based theories of change. The factors identified in this research present a useful starting point for such investigation as they are likely to be highly relevant to embedded researcher posts in LA public health settings. As an additional aid for the development of these posts, we have summarised our findings into a list of recommended, interrelated considerations for the design of embedded researcher posts and their underlying theories of change (Table  1 ).

Our results suggest that successful candidates for embedded researcher posts should have equally strong research and communication skills. These two skillsets have been highlighted by other work on embedded researchers, which suggests that both are necessary for success in these roles [26, 27]. The value of interpersonal skills aligns with the critical importance of establishing trust and the capacity building and co-production activity embedded researchers will likely undertake, including networking, influencing, and building linkages across institutions. Knowledge of local government and existing connections within this system were also identified as valuable to the roles. As such, LAs should plan for a longer initial scoping phase if the embedded researcher lacks prior experience working within a LA context. It takes time for embedded researchers to become immersed within a host organisation and for trust to be established, particularly within a complex organisation such as a LA, and a growing body of literature provides guidance to assist embedded researchers during this phase of their role [9, 22, 23].

The career stage and trajectory of an embedded researcher is another key factor for LAs to consider in the design and recruitment for the posts. The contract length, for example, will be of particular concern for early career researchers, many of whom will not have permanent posts or established track records in academia. Short contracts are also likely to affect LA staff investment in the embedded researcher’s work. A growing body of research suggests that a lack of career progression pathways and insufficient recognition for capacity building activities and achievements present significant challenges for embedded researchers [ 28 , 29 ]. LAs and academic institutions must thus collaborate to ensure that career progression pathways, mentorship, and professional development opportunities are provided and that achievements beyond traditional academic outputs are recognised and valued. For example, this could include accessing funding opportunities such as the NIHR’s Pre-Doctoral and Doctoral Local Authority Fellowships which support staff in developing the necessary skillsets and create links with academia.

Many of the barriers to research activity identified at the interpersonal and organisational levels align with those previously described within the embedded researcher literature in clinical settings. For example, capacity constraints [22, 30], a lack of sufficient research infrastructure [1], and a misalignment of academic and local government priorities [3, 16] have all been identified in healthcare contexts. Some of these barriers have also been identified in the context of embedded researchers in public health settings [31] and within public health literature more broadly [6, 8]. Our research adds to this work, suggesting that these barriers are likely to be highly relevant to embedded researcher roles in public health. Furthermore, through separating interpersonal from organisational determinants, this paper teases apart layers of influence on embedded researcher interventions while also highlighting their interconnectedness. While it is useful to identify individual determinants, our results suggest that consideration for this interdependence is equally important when designing theories of change for embedded researchers.

Our findings also provide detail on how local perceptions of research-based evidence can enable and hinder embedded researchers in fostering a research active LA. Embedded researchers widely perceived there to be a strong appetite for evidence informed decision making across LA staff, a result that is not often discussed within the literature [ 31 ]. However, there were inconsistencies in what was perceived to constitute “research”, with many feeling that engagement with research would necessarily be highly resource intensive. Given the severe capacity constraints faced by public health teams, such perceptions presented an initial barrier to embedded researcher activity. A mismatch between academic timelines and LA activity perpetuated such assumptions. Therefore, we suggest that to constructively engage with LAs, embedded researchers and academics more broadly must recognise and work within existing capacity constraints through adopting and communicating a flexible and pragmatic conceptualisation of research. At the same time, we also suggest that research should be integrated within LA job descriptions and advocate for the value that protected time for research could add to LA public health outcomes. Given these complexities surrounding perceptions of research within policy and practice settings, we would caution authors to avoid an oversimplified narrative depicting research evidence as being undervalued in these contexts.

Developing theories of change for embedded researcher interventions

Developing a strong, evidence informed theory of change presents a critical step in operationalising embedded researcher roles as this theory links embedded researcher activities and projects to the research barriers and behaviours they aim to influence. Identifying baseline levels of research activity and associated barriers prior to an embedded researcher being in post could provide an initial level of structure and direction for the role, clarity that is likely to be particularly valued by early career researchers. However, there are also benefits to be gained from embedded researcher involvement in this process. For example, undertaking a needs assessment provides embedded researchers the opportunity to build trust with colleagues, a key stage in becoming embedded within a host organisation [ 23 ]. As such, we suggest that theories of change for embedded researchers would benefit from progressive development, initially co-created by the home and host organisations, and continually adapted as the embedded researcher becomes familiar with the local context.

This research focused on identifying determinants of research activity (barriers and opportunities). As described above, existing levels of research activity itself should also be investigated to inform the aims and theories of change for embedded researcher posts. To avoid ambiguity, “research activity” should be clearly defined in the context of the specific embedded researcher intervention. While certain dimensions of research activity have been investigated within local government (e.g., use of research evidence in decision making, co-production) [ 7 , 32 ], little work has explored the behaviours that constitute research activity more broadly or developed associated theories or typologies (but see [ 33 ]). For example, research activity within a LA could include the type of evidence LA staff base decisions on, how often research (including evaluation) is conducted, the rigour of research that is undertaken, and how often the LA collaborates with research institutions. More work is needed in this area to aid LAs in identifying desired behaviours and stages they can expect as they develop their research maturity.

As part of our wider programme of work on embedded researchers, the many interlinked barriers and opportunities identified in this research have informed the development of a logic model conceptualising the stages of embedded researcher interventions and surrounding contextual factors (see Appendix 2 in [ 17 ]). In addition to the present research, several other projects have fed into this model including a systematic review, documentary analysis, a diary study, a survey, and further qualitative research involving the PHLARPs and other programme stakeholders. This model links embedded researchers to change in individual (e.g., attitudes) and organisational (e.g., funding, infrastructure) determinants of research activity, as well as to longer term change in this activity itself. Through displaying change across multiple determinants within a single stage, the model reflects their interconnectedness. For example, it indicates how strengthening the research infrastructure will likely need to take place alongside action to enhance research curiosity and enthusiasm. Drawing on this model could present a valuable starting point in the development of theories of change for individual embedded researcher posts.

This paper identified many interrelated determinants of public health research activity at the interpersonal and organisational levels that are relevant to embedded researcher interventions. We also highlighted the skillsets necessary for embedded researchers to have an influence over these determinants. When interpreting our results, it is important to recognise that the PHLARPs investigated in this study reflect a specific way of implementing embedded researcher interventions, albeit one that allows for high levels of flexibility. For example, capacity building was the primary activity for most of our participants, with less emphasis placed on research production. This prioritisation differs from some definitions and schemes which frame embedded researcher roles predominantly around co-production (e.g., [13]). Furthermore, most of our embedded researchers did not work within senior strategic decision-making contexts in the LA. It would be useful to investigate potential variation in the determinants of research activity across different levels of local government.

Further research is needed to explore methods of assessing the influence of embedded researchers, as well as to clarify the behaviours that constitute research activity within local government. Variation in PHLARP responses suggests that while some LAs have relatively strong research infrastructure and support, others do not. Theories of change for embedded researchers should be contextualised within this existing research culture. This variation also presents an important consideration for funders if they are to avoid perpetuating research inequities. Indeed, LAs with strong research cultures can capitalise on funding successes by building their research capacity and other resources that they can allocate to additional funding opportunities. Funding organisations could thus consider more targeted funding streams to support LAs at different stages of research maturity.

The present research has focused on the barriers and facilitators of research activity within LAs. It is also necessary to acknowledge that academic institutions have an equally important role to play in facilitating the translation of research into practice. Indeed, the underutilisation of research evidence is as much a reflection of the ‘supply’ side as the ‘demand’ side. We have highlighted a few of these barriers, such as the length of academic timelines, which do not match the pace and need for research within LAs. A lack of salience has also been identified as a prominent barrier to the use of academic research within local government [34]. As such, academics must actively work alongside LAs to address barriers to evidence use. Co-production presents a key mechanism for addressing these barriers, and such collaborations can be enabled through embedded research activity.

Availability of data and materials

Due to the qualitative nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data is not available.

Abbreviations

CRN: Clinical Research Network

LA: Local authority

NIHR: National Institute for Health and Care Research

PHLARP: Public Health Local Authority Research Practitioner

References

1. Coates D, Mickan S. Challenges and enablers of the embedded researcher model. J Health Organ Manag. 2020;34(7):743–64.

2. Rapport F, Clay-Williams R, Churruca K, Shih P, Hogden A, Braithwaite J. The struggle of translating science into action: foundational concepts of implementation science. J Eval Clin Pract. 2018;24(1):117–26.

3. Churruca K, Ludlow K, Taylor N, Long JC, Best S, Braithwaite J. The time has come: embedded implementation research for health care improvement. J Eval Clin Pract. 2019;25(3):373–80.

4. Kneale D, Rojas-García A, Raine R, Thomas J. The use of evidence in English local public health decision-making: a systematic scoping review. Implement Sci. 2017;12(1):53.

5. van der Graaf P, Forrest LF, Adams J, Shucksmith J, White M. How do public health professionals view and engage with research? A qualitative interview study and stakeholder workshop engaging public health professionals and researchers. BMC Public Health. 2017;17(1):892.

6. Homer C, Woodall J, Freeman C, South J, Cooke J, Holliday J, et al. Changing the culture: a qualitative study exploring research capacity in local government. BMC Public Health. 2022;22(1):1341.

7. Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS ONE. 2011;6(7):e21704.

8. NIHR. Local authority research systems call. National Institute for Health and Care Research; 2022.

9. Cheetham M, Wiseman A, Khazaeli B, Gibson E, Gray P, van der Graaf P, Rushmer R. Embedded research: a promising way to create evidence-informed impact in public health? J Public Health. 2018;40(suppl_1):i64–70.

10. Ward V, Tooman T, Reid B, Davies H, O’Brien B, Mear L, Marshall M. A framework to support the design and cultivation of embedded research initiatives. Evid Policy. 2021;17(4):755–69.

11. Marshall M, Pagel C, French C, Utley M, Allwood D, Fulop N, et al. Moving improvement research closer to practice: the researcher-in-residence model. BMJ Qual Saf. 2014;23(10):801–5.

12. McGinity R, Salokangas M. Introduction: ‘embedded research’ as an approach into academia for emerging researchers. Manag Educ. 2014;28(1):3–5.

13. Vindrola-Padros C, Pape T, Utley M, Fulop NJ. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf. 2017;26(1):70–80.

14. Kneale D, Stansfield C, Goldman R, Lester S, Edwards RC, Thomas J. The implementation of embedded researchers in policy, public services, and commercial settings: a systematic evidence and gap map. Implement Sci Commun. 2024;5(1):41.

15. Coates D, Mickan S. The embedded researcher model in Australian healthcare settings: comparison by degree of “embeddedness.” Transl Res. 2020;218:29–42.

16. Vindrola-Padros C, Eyre L, Baxter H, Cramer H, George B, Wye L, et al. Addressing the challenges of knowledge co-production in quality improvement: learning from the implementation of the researcher-in-residence model. BMJ Qual Saf. 2019;28(1):67.

17. Kneale D, Edwards R, Stansfield C, Lester S, Goldman R, Thomas J. What are embedded researchers and what influence do they have in public health settings? London: University College London; 2023.

18. Langeveld K, Stronks K, Harting J. Use of a knowledge broker to establish healthy public policies in a city district: a developmental evaluation. BMC Public Health. 2016;16:271.

19. Varallyay NI, Bennett SC, Kennedy C, Ghaffar A, Peters DH. How does embedded implementation research work? Examining core features through qualitative case studies in Latin America and the Caribbean. Health Policy Plan. 2020;35(Supplement_2):ii98–111.

20. Mickan S, Coates D. Embedded researchers’ purpose and practice: current perspectives from Australia. Int J Health Plann Manage. 2022;37(1):133–42.

21. Potts AJ, Nobles J, Shearn K, Danks K, Frith G. Embedded researchers as part of a whole systems approach to physical activity: reflections and recommendations. Systems. 2022;10(3):69.

22. Reen G, Page B, Oikonomou E. Working as an embedded researcher in a healthcare setting: a practical guide for current or prospective embedded researchers. J Eval Clin Pract. 2022;28(1):93–8.

23. Edwards RC, Kneale D, Stansfield C, Lester S. What are the mechanisms driving the early stages of embedded researcher interventions? A qualitative process evaluation in English local government. Soc Sci Med. 2024;340:116407.

24. QSR International Pty Ltd. NVivo (released in March 2020). 2020.

25. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

26. Ward V, Tooman T, Reid B, Davies H, Marshall M. Embedding researchers into organisations: a study of the features of embedded research initiatives. Evid Policy. 2021;17(4):593–614.

27. Currey J, Considine J, Khaw D. Clinical nurse research consultant: a clinical and academic role to advance practice and the discipline of nursing. J Adv Nurs. 2011;67(10):2275–83.

28. Trusson D, Rowley E, Bramley L. A mixed-methods study of challenges and benefits of clinical academic careers for nurses, midwives and allied health professionals. BMJ Open. 2019;9(10):e030595.

29. Chew S, Armstrong N, Martin G. Institutionalising knowledge brokering as a sustainable knowledge translation solution in healthcare: how can it work in practice? Evid Policy. 2013;9(3):335–51.

30. Williams J, Craig TJ, Robson D. Barriers and facilitators of clinician and researcher collaborations: a qualitative study. BMC Health Serv Res. 2020;20(1):1126.

31. Cheetham M, Redgate S, van der Graaf P, Humble C, Hunter D, Adamson A. ‘What I really want is academics who want to partner and who care about the outcome’: findings from a mixed-methods study of evidence use in local government in England. Evid Policy. 2022;19:1–21.

32. van der Graaf P, Cheetham M, Redgate S, Humble C, Adamson A. Co-production in local government: process, codification and capacity building of new knowledge in collective reflection spaces. Workshop findings from a UK mixed methods study. Health Res Policy Syst. 2021;19(1):12.

33. Fynn JF, Jones J, Jones A. A systems approach to the exploration of research activity and relationships within a local authority. Health Res Policy Syst. 2021;19(1):137.

34. Kneale D, Rojas-García A, Thomas J. Obstacles and opportunities to using research evidence in local public health decision-making in England. Health Res Policy Syst. 2019;17(1):61.

Acknowledgements

The authors would like to thank all participants for donating their time to this research. They would also like to extend their gratitude to the members of the project’s Advisory Group who provided valuable feedback at several stages of the project’s development. The manuscript benefitted greatly from the input of the Editor and two referees. Thanks to James Thomas, Katy Sutcliffe, and Amanda Sowden who oversee the work of the Policy Research Programme.

Funding

This research was commissioned by the National Institute for Health and Care Research (NIHR) Policy Research Programme (PRP) for the Department of Health and Social Care (DHSC). It was funded through a supplement to the NIHR PRP contract with the EPPI Centre at University College London (reviews facility to support national policy development and implementation, NIHR200701). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR, or the DHSC.

Author information

Authors and Affiliations

Evidence for Policy and Practice Information Centre, UCL Social Research Institute, Institute of Education, University College London, Gower Street, London, WC1E 6BT, United Kingdom

Rachael C. Edwards, Dylan Kneale, Claire Stansfield & Sarah Lester

Contributions

RCE conceptualised the research, collected, analysed and interpreted the data, and wrote the original manuscript draft. DK managed the overarching research project, conceptualised the present research, collected, analysed and interpreted the data, and provided feedback on drafts of the manuscript. CS conceptualised the research, collected the data, and provided feedback on drafts of the manuscript. SL conceptualised the research and provided feedback on drafts of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rachael C. Edwards .

Ethics declarations

Ethics approval and consent to participate

This research was approved by a University College London research ethics committee (REC1540).

Consent for publication

Not applicable.

Competing interests

This study represents independent research commissioned through the National Institute for Health and Care Research (NIHR) Policy Research Programme (PRP) for the Department of Health and Social Care (DHSC). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the DHSC. NIHR also funded the embedded researcher programme that was the focus of the research, but had no substantial input in the study design, data collection, analysis, interpretation of data, in the writing of the article, or in the decision to submit the article for publication.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article.

Edwards, R.C., Kneale, D., Stansfield, C. et al. “They don't have the luxury of time”: interviews exploring the determinants of public health research activity that contextualise embedded researcher roles in local government. Health Res Policy Sys 22, 88 (2024). https://doi.org/10.1186/s12961-024-01162-2

Received : 15 August 2023

Accepted : 07 June 2024

Published : 31 July 2024

DOI : https://doi.org/10.1186/s12961-024-01162-2

Keywords

  • Embedded researcher
  • Evidence use
  • Local government
  • Public health
  • Research activity

structured interview research methodology

COMMENTS

  1. Structured Interview

    Revised on June 22, 2023. A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews. In research, structured interviews are often quantitative in nature. They can also be used in qualitative research if the questions are open-ended, but ...

  2. Structured Interviews: Definitive Guide with Examples

    Yes or no and true or false questions are examples of dichotomous questions. Open-ended questions are common in structured interviews. However, researchers use them when conducting qualitative research and looking for in-depth information about the interviewee's perceptions or experiences. These questions take longer for the interviewee to ...

  3. Types of Interviews in Research

    An interview is a qualitative research method that relies on asking questions in order to collect data. Interviews involve two or more people, one of whom is the interviewer asking the questions. ... Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order ...

  4. (PDF) How to Conduct an Effective Interview; A Guide to Interview

    Vancouver, Canada. Abstract. Interviews are one of the most promising ways of collecting qualitative data throug h establishment of a. communication between r esearcher and the interviewee. Re ...

  5. PDF Structured Methods: Interviews, Questionnaires and Observation

    An offer of a copy of the final research report can help in some cases. Ensure that the questionnaire can be returned with the minimum of trouble and expense (e.g. by including a reply paid envelope). Keep the questionnaire short and easy to answer. Ensure that you send it to people for whom it is relevant.

  6. Interview Method In Psychology Research

    A structured interview is a quantitative research method where the interviewer a set of prepared closed-ended questions in the form of an interview schedule, which he/she reads out exactly as worded. Interviews schedules have a standardized format, meaning the same questions are asked to each interviewee in the same order (see Fig. 1).

  7. How to Conduct Structured Interviews
