Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

What is research design.

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as helps ensure that your approach is suitable  given your research aims, objectives and questions , as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of potentially making misaligned choices in terms of your methodology – especially your sampling , data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Free Webinar: Research Methodology 101

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions , and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which will look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…

Need a helping hand?

example research design

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables . With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling others (dependent variables). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

example research design

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variation s, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design , multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design , a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well generally will typically earn better marks than a highly-complex methodology executed poorly.

example research design

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive , correlational , experimental and quasi-experimenta l designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services .

example research design

Psst… there’s more (for free)

This post is part of our dissertation mini-course, which covers everything you need to get started with your dissertation, thesis or research project. 

You Might Also Like:

Survey Design 101: The Basics

Is there any blog article explaining more on Case study research design? Is there a Case study write-up template? Thank you.

Solly Khan

Thanks this was quite valuable to clarify such an important concept.

hetty

Thanks for this simplified explanations. it is quite very helpful.

Belz

This was really helpful. thanks

Imur

Thank you for your explanation. I think case study research design and the use of secondary data in researches needs to be talked about more in your videos and articles because there a lot of case studies research design tailored projects out there.

Please is there any template for a case study research design whose data type is a secondary data on your repository?

Sam Msongole

This post is very clear, comprehensive and has been very helpful to me. It has cleared the confusion I had in regard to research design and methodology.

Robyn Pritchard

This post is helpful, easy to understand, and deconstructs what a research design is. Thanks

kelebogile

how to cite this page

Peter

Thank you very much for the post. It is wonderful and has cleared many worries in my mind regarding research designs. I really appreciate .

Submit a Comment Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

  • Print Friendly

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology

Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

Step 1: consider your aims and approach, step 2: choose a type of research design, step 3: identify your population and sampling method, step 4: choose your data collection methods, step 5: plan your data collection procedures, step 6: decide on your data analysis strategies, frequently asked questions.

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Prevent plagiarism, run a free check.

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 1 April 2024, from https://www.scribbr.co.uk/research-methods/research-design/

Is this article helpful?

Shona McCombes

Shona McCombes

  • How it works

How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets at August 14th, 2021 , Revised On October 3, 2023

A research design is a structure that combines different components of research. It involves the use of different data collection and data analysis techniques logically to answer the  research questions .

It would be best to make some decisions about addressing the research questions adequately before starting the research process, which is achieved with the help of the research design.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis  methodologies
  • Variables (Independent, dependent, and confounding)
  • The location and timescale for conducting the data
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria to compile the data to evaluate results and conclude.

Your project’s validity depends on the data collection and  interpretation techniques.  A strong research design reflects a strong  dissertation , scientific paper, or research proposal .

Steps of research design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: “how to create a research design.”

The research design depends on the researcher’s priorities and choices because every research has different priorities. For a complex research study involving multiple methods, you may choose to have more than one research design.

Multimethodology or multimethod research includes using more than one data collection method or research in a research study or set of related studies.

If one research design is weak in one area, then another research design can cover that weakness. For instance, a  dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves experimental investigation and laboratory experience, but it does not accurately investigate the real world.
  • Quantitative research is good for the  statistical part of the project, but it may not provide an in-depth understanding of the  topic .
  • Also, correlational research will not provide experimental results because it is a technique that assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, It is equally important that the researcher think practically before deciding on its structure. Here are some questions that you should think of;

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge about the  different statistical analysis and data collection techniques to address the research questions  or test the  hypothesis ?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type you Need for Research

Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions:

Primary Data Vs. Secondary Data

Qualitative vs. quantitative data.

Also, see; Research methods, design, and analysis .

Need help with a thesis chapter?

  • Hire an expert from ResearchProspect today!
  • Statistical analysis, research methodology, discussion of the results or conclusion – our experts can help you no matter how complex the requirements are.

analysis image

Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the  research problem . Research methods involve procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your  dissertation’s structure .

The following table shows the characteristics of the most popularly employed research methods.

Research Methods

Step 4: Procedure of Data Analysis

Use of the  correct data and statistical analysis technique is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step for the research design. It can be split into two main categories;

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing the numerical data with the help of different applications such as; SPSS, STATA, Excel, origin lab, etc.

This data analysis strategy tests different variables such as spectrum, frequencies, averages, and more. The research question and the hypothesis must be established to identify the variables for testing.

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and the researcher’s subjective opinions. This means that the researcher’s primary focus will be interpreting patterns, tendencies, and accounts and understanding the implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, you should ask yourself whether you need to explain respondents’ experiences and insights or do you also need to evaluate their responses with reference to a certain social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with the supervisor, who would evaluate the feasibility and capacity of the results  and  conclusion .

Read our guidelines to write a research proposal  if you have already formulated your research design. The research proposal is written in the future tense because you are writing your proposal before conducting research.

The  research methodology  or research design, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, strategy of investigation conceived to answer the research question and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

Above mentioned five steps are the answer to how to write a research design. So, follow these steps to  formulate the perfect research design for your dissertation .

ResearchProspect writers have years of experience creating research designs that align with the dissertation’s aim and objectives. If you are struggling with your dissertation methodology chapter, you might want to look at our dissertation part-writing service.

Our dissertation writers can also help you with the full dissertation paper . No matter how urgent or complex your need may be, ResearchProspect can help. We also offer PhD level research paper writing services.

Frequently Asked Questions

What is research design.

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.

You May Also Like

Here we explore what is research problem in dissertation with research problem examples to help you understand how and when to write a research problem.

Not sure how to approach a company for your primary research study? Don’t worry. Here we have some tips for you to successfully gather primary study.

To help students organise their dissertation proposal paper correctly, we have put together detailed guidelines on how to structure a dissertation proposal.

USEFUL LINKS

LEARNING RESOURCES

DMCA.com Protection Status

COMPANY DETAILS

Research-Prospect-Writing-Service

  • How It Works
  • USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

  • Types of Research Designs
  • Purpose of Guide
  • Design Flaws to Avoid
  • Independent and Dependent Variables
  • Glossary of Research Terms
  • Reading Research Effectively
  • Narrowing a Topic Idea
  • Broadening a Topic Idea
  • Extending the Timeliness of a Topic Idea
  • Academic Writing Style
  • Choosing a Title
  • Making an Outline
  • Paragraph Development
  • Research Process Video Series
  • Executive Summary
  • The C.A.R.S. Model
  • Background Information
  • The Research Problem/Question
  • Theoretical Framework
  • Citation Tracking
  • Content Alert Services
  • Evaluating Sources
  • Primary Sources
  • Secondary Sources
  • Tiertiary Sources
  • Scholarly vs. Popular Publications
  • Qualitative Methods
  • Quantitative Methods
  • Insiderness
  • Using Non-Textual Elements
  • Limitations of the Study
  • Common Grammar Mistakes
  • Writing Concisely
  • Avoiding Plagiarism
  • Footnotes or Endnotes?
  • Further Readings
  • Generative AI and Writing
  • USC Libraries Tutorials and Other Guides
  • Bibliography

Introduction

Before beginning your paper, you need to decide how you plan to design the study .

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible . In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following :

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper . You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE : Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you ?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you ?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criteria for selecting a case is because it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable [see the sketch following this list].
  • Causality research designs help researchers understand why the world works the way it does by testing for causal links between variables and systematically eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • For a causal claim, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable changes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
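To make the nonspuriousness condition concrete, here is a minimal, hypothetical sketch in Python (not from the original guide): a simulated confounder Z drives both X and Y, producing a strong raw correlation that largely vanishes once Z is controlled for. All variable names and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder Z (e.g., socioeconomic status) drives both X and Y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)   # "independent" variable
y = 0.8 * z + rng.normal(size=n)   # "dependent" variable

# The raw association looks strong...
print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")

# ...but largely disappears once Z is controlled for
# (residualize X and Y on Z, then correlate the residuals).
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
print(f"corr(X, Y | Z) = {np.corrcoef(x_res, y_res)[0, 1]:.2f}")
```

The point is not the particular numbers but the logic: an empirical association alone does not establish causation until plausible third variables have been ruled out.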

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity relevant to the research problem being investigated. Using a quantitative framework, a cohort study notes statistical occurrence within this specialized subgroup rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined, so the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof [a minimal calculation is sketched after this list].
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its internal validity is weaker than that of designs in which the researcher randomly assigns participants.
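As a rough illustration of the rate-based data mentioned in the list above, the following hypothetical Python sketch computes an incidence rate from person-time at risk. The follow-up durations and outcomes are invented purely for illustration.

```python
# Hypothetical open-cohort records: person-time contributed by each
# participant and whether the outcome of interest occurred.
participants = [
    {"years_followed": 4.0, "developed_outcome": True},
    {"years_followed": 2.5, "developed_outcome": False},
    {"years_followed": 6.0, "developed_outcome": True},
    {"years_followed": 1.5, "developed_outcome": False},
]

new_cases = sum(p["developed_outcome"] for p in participants)
person_years = sum(p["years_followed"] for p in participants)

# Incidence rate = new cases / total person-time at risk.
rate = new_cases / person_years
print(f"{rate:.3f} cases per person-year "
      f"({rate * 1000:.0f} per 1,000 person-years)")
```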

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike some observational designs, are not geographically bound.
  • Can estimate the prevalence of an outcome of interest because the sample is usually drawn from the whole population [see the sketch following this list].
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
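The prevalence estimate mentioned in the list above is simple arithmetic; a minimal sketch with invented survey counts might look like this:

```python
# Hypothetical cross-sectional survey taken at a single point in time.
surveyed = 1_200      # respondents sampled from the population
with_outcome = 180    # respondents exhibiting the outcome of interest

# Point prevalence = existing cases / number surveyed, at one moment.
prevalence = with_outcome / surveyed
print(f"Estimated prevalence: {prevalence:.1%}")  # -> 15.0%
```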

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the observer or Heisenberg effect, whereby measurements of certain systems cannot be made without affecting them].
  • Descriptive research is often used as a pre-cursor to more quantitative research designs with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to provide a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
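As a minimal sketch of the classic two-group logic described above (and not a full experimental protocol), the following hypothetical Python example randomly assigns participants, simulates post-test scores, and compares group means. The pool size, effect size, and score distributions are all invented for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical participant pool.
participants = list(range(40))
random.shuffle(participants)               # randomization
treatment = set(participants[:20])         # manipulation applied here
control = set(participants[20:])           # no intervention

# Simulated post-test scores (assume the treatment adds ~5 points on average).
scores = {p: random.gauss(55 if p in treatment else 50, 8)
          for p in participants}

treat_mean = statistics.mean(scores[p] for p in treatment)
ctrl_mean = statistics.mean(scores[p] for p in control)
print(f"Estimated treatment effect: {treat_mean - ctrl_mean:.1f} points")
```

Because assignment is random, a systematic difference between the group means can be attributed to the manipulated variable rather than to pre-existing differences between the groups.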

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods. Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences. 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or the design is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings. They provide insight but not definitive conclusions.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while they inhabit their natural environment, as opposed to using survey instruments or other impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that document what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.
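As an illustration of how repeated measures on the same sample support the analysis of change (something a single snapshot cannot do), here is a minimal, hypothetical Python sketch of a two-wave panel; the respondents and scores are invented.

```python
# Hypothetical panel data: the same respondents measured at two waves.
wave_1 = {"anna": 3.2, "ben": 4.1, "carla": 2.8}
wave_2 = {"anna": 3.9, "ben": 4.0, "carla": 3.5}

# Because the sample is the same at both waves, change can be computed
# per respondent, which a cross-sectional design cannot do.
changes = {name: wave_2[name] - wave_1[name] for name in wave_1}
mean_change = sum(changes.values()) / len(changes)

print(changes)                       # per-respondent change between waves
print(f"Mean change: {mean_change:+.2f}")
```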

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the researcher's ability to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and upon the availability of information in each study to properly analyze its findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results [a minimal pooling sketch follows the list below]. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations of the criteria used for selecting and coding studies can lead to difficult-to-interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
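As a rough illustration of how pooling increases precision, the following hypothetical Python sketch applies fixed-effect [inverse-variance] weighting to three invented study results. A real meta-analysis involves much more, including heterogeneity assessment, but the core arithmetic looks like this:

```python
import math

# Hypothetical per-study effect estimates and standard errors.
studies = [
    {"effect": 0.30, "se": 0.12},
    {"effect": 0.18, "se": 0.09},
    {"effect": 0.45, "se": 0.20},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / se^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} "
      f"(95% CI: {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f})")
```

Note that the pooled standard error is smaller than any individual study's standard error, which is precisely the gain in precision that motivates the design.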

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis. Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

A mixed-methods design combines quantitative and qualitative approaches within a single study, either concurrently or sequentially, so that the strengths of each approach can offset the limitations of the other.

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • Because behavior is observed in natural settings, results can be applied to real-life situations [high ecological validity].
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of the data can be low because observing behaviors over and over again is a time-consuming task, and observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what does knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

In a sequential design, data are collected and analyzed in stages: each sample is studied before the next one is drawn, so earlier results can inform later sampling and analysis.

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only way to approach representativeness is to use a sample large enough to represent a substantial portion of the entire population; in that case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provide reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures that the broadest possible range of research on a topic can be analyzed and interpreted.
  • Results of the synthesis can be generalized and the findings extrapolated to the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone rigorous peer review. Examples include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.


Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

The design specifies the broad type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (e.g., a particular experimental design or a descriptive case study).

A research design addresses three main elements:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as simple random sampling, stratified random sampling, or convenience sampling (a minimal sketch follows this list).
  • Choose your data collection methods: Decide on the methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.
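As a minimal illustration of the sampling step, the following hypothetical Python sketch draws a proportionate stratified random sample; the strata, population sizes, and sample size are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical sampling frame grouped into strata (e.g., year of study).
frame = {
    "first_year":  [f"fy_{i}" for i in range(600)],
    "second_year": [f"sy_{i}" for i in range(300)],
    "third_year":  [f"ty_{i}" for i in range(100)],
}

total = sum(len(members) for members in frame.values())
sample_size = 50

# Proportionate stratified random sampling: each stratum contributes
# in proportion to its share of the population.
sample = []
for stratum, members in frame.items():
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(f"Drew {len(sample)} respondents, e.g., {sample[:3]}")
```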

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually minimizes bias in the data and increases trust in the accuracy of the data collected. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing research
  • The method applied for analyzing collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be free from bias and neutral. Gather opinions on the final scores and conclusions from multiple individuals, and consider the degree to which they agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so a good design should balance all of these characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research explores questions that cannot be answered with numbers alone, focusing on the meanings, motivations, and experiences behind a phenomenon. Researchers rely on qualitative research methods to conclude “why” a particular phenomenon occurs and “what” respondents have to say about it.

Quantitative research

Quantitative research is for cases where statistical conclusions and actionable insights are essential. Numbers provide a better perspective for making critical business decisions. Quantitative research methods are necessary for the growth of any organization, because insights drawn from complex numerical data and analysis prove highly effective when making decisions about the business's future.

Qualitative Research vs Quantitative Research

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research is more focused on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive composition, a researcher is solely interested in describing the situation or case under study. It is a theory-based design created by gathering, analyzing, and presenting collected data. This allows a researcher to provide insights into the what and how of a research problem, and it helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research instead.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design where one observes the impact caused by the independent variable on the dependent variable. For example, one monitors the influence of an independent variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variables are manipulated to monitor the change they produce in the dependent variable. Social sciences often use this design to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react in order to understand social psychology better.

3. Correlational research: Correlational research is a non-experimental research technique that helps researchers establish a relationship between two closely connected variables. No variables are manipulated; statistical analysis techniques are used to calculate the strength of the relationship between them. This type of research requires paired measurements of two variables from the same group of subjects.

A correlation coefficient measures the correlation between two variables; its value ranges between -1 and +1. A coefficient near +1 indicates a positive relationship between the variables, while a coefficient near -1 indicates a negative relationship (see the sketch following this list).

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits of having a well-designed research plan. Including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: To ensure the validity and reliability of results, research design help to minimize the risk of bias and helps to control extraneous variables.
  • Improved data collection: Research design helps to ensure that the proper data is collected and data is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed research helps ensure the results are clean and influential within the research team and external stakeholders.
  • Efficient use of resources: reducing the risk of waste and maximizing the impact of the research, research design helps to ensure that resources are used efficiently.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are practical.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.

Our robust suite of research tools provides you with all you need to derive research results. Our online survey platform includes custom point-and-click logic and advanced question types. Uncover the insights that matter the most.

FREE TRIAL         LEARN MORE

MORE LIKE THIS

customer experience automation

Customer Experience Automation: Benefits and Best Tools

Apr 1, 2024

market segmentation tools

7 Best Market Segmentation Tools in 2024

in-app feedback tools

In-App Feedback Tools: How to Collect, Uses & 14 Best Tools

Mar 29, 2024

Customer Journey Analytics Software

11 Best Customer Journey Analytics Software in 2024

Other categories.

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Uncategorized
  • Video Learning Series
  • What’s Coming Up
  • Workforce Intelligence

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • Guide to Experimental Design | Overview, Steps, & Examples

Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans . Revised on June 21, 2023.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design create a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any  extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead. This minimizes several types of research bias, particularly sampling bias , survivorship bias , and attrition bias as time passes.

Table of contents

Step 1: define your variables, step 2: write your hypothesis, step 3: design your experimental treatments, step 4: assign your subjects to treatment groups, step 5: measure your dependent variable, other interesting articles, frequently asked questions about experiments.

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

example research design

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved April 1, 2024, from https://www.scribbr.com/methodology/experimental-design/

Is this article helpful?

Rebecca Bevans

Rebecca Bevans

Other students also liked, random assignment in experiments | introduction & examples, quasi-experimental design | definition, types & examples, how to write a lab report, unlimited academic ai-proofreading.

✔ Document error-free in 5minutes ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

Logo for Open Educational Resources

Chapter 2. Research Design

Getting started.

When I teach undergraduates qualitative research methods, the final product of the course is a “research proposal” that incorporates all they have learned and enlists the knowledge they have learned about qualitative research methods in an original design that addresses a particular research question. I highly recommend you think about designing your own research study as you progress through this textbook. Even if you don’t have a study in mind yet, it can be a helpful exercise as you progress through the course. But how to start? How can one design a research study before they even know what research looks like? This chapter will serve as a brief overview of the research design process to orient you to what will be coming in later chapters. Think of it as a “skeleton” of what you will read in more detail in later chapters. Ideally, you will read this chapter both now (in sequence) and later during your reading of the remainder of the text. Do not worry if you have questions the first time you read this chapter. Many things will become clearer as the text advances and as you gain a deeper understanding of all the components of good qualitative research. This is just a preliminary map to get you on the right road.

Null

Research Design Steps

Before you even get started, you will need to have a broad topic of interest in mind. [1] . In my experience, students can confuse this broad topic with the actual research question, so it is important to clearly distinguish the two. And the place to start is the broad topic. It might be, as was the case with me, working-class college students. But what about working-class college students? What’s it like to be one? Why are there so few compared to others? How do colleges assist (or fail to assist) them? What interested me was something I could barely articulate at first and went something like this: “Why was it so difficult and lonely to be me?” And by extension, “Did others share this experience?”

Once you have a general topic, reflect on why this is important to you. Sometimes we connect with a topic and we don’t really know why. Even if you are not willing to share the real underlying reason you are interested in a topic, it is important that you know the deeper reasons that motivate you. Otherwise, it is quite possible that at some point during the research, you will find yourself turned around facing the wrong direction. I have seen it happen many times. The reason is that the research question is not the same thing as the general topic of interest, and if you don’t know the reasons for your interest, you are likely to design a study answering a research question that is beside the point—to you, at least. And this means you will be much less motivated to carry your research to completion.

Researcher Note

Why do you employ qualitative research methods in your area of study? What are the advantages of qualitative research methods for studying mentorship?

Qualitative research methods are a huge opportunity to increase access, equity, inclusion, and social justice. Qualitative research allows us to engage and examine the uniquenesses/nuances within minoritized and dominant identities and our experiences with these identities. Qualitative research allows us to explore a specific topic, and through that exploration, we can link history to experiences and look for patterns or offer up a unique phenomenon. There’s such beauty in being able to tell a particular story, and qualitative research is a great mode for that! For our work, we examined the relationships we typically use the term mentorship for but didn’t feel that was quite the right word. Qualitative research allowed us to pick apart what we did and how we engaged in our relationships, which then allowed us to more accurately describe what was unique about our mentorship relationships, which we ultimately named liberationships ( McAloney and Long 2021) . Qualitative research gave us the means to explore, process, and name our experiences; what a powerful tool!

How do you come up with ideas for what to study (and how to study it)? Where did you get the idea for studying mentorship?

Coming up with ideas for research, for me, is kind of like Googling a question I have, not finding enough information, and then deciding to dig a little deeper to get the answer. The idea to study mentorship actually came up in conversation with my mentorship triad. We were talking in one of our meetings about our relationship—kind of meta, huh? We discussed how we felt that mentorship was not quite the right term for the relationships we had built. One of us asked what was different about our relationships and mentorship. This all happened when I was taking an ethnography course. During the next session of class, we were discussing auto- and duoethnography, and it hit me—let’s explore our version of mentorship, which we later went on to name liberationships ( McAloney and Long 2021 ). The idea and questions came out of being curious and wanting to find an answer. As I continue to research, I see opportunities in questions I have about my work or during conversations that, in our search for answers, end up exposing gaps in the literature. If I can’t find the answer already out there, I can study it.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

When you have a better idea of why you are interested in what it is that interests you, you may be surprised to learn that the obvious approaches to the topic are not the only ones. For example, let’s say you think you are interested in preserving coastal wildlife. And as a social scientist, you are interested in policies and practices that affect the long-term viability of coastal wildlife, especially around fishing communities. It would be natural then to consider designing a research study around fishing communities and how they manage their ecosystems. But when you really think about it, you realize that what interests you the most is how people whose livelihoods depend on a particular resource act in ways that deplete that resource. Or, even deeper, you contemplate the puzzle, “How do people justify actions that damage their surroundings?” Now, there are many ways to design a study that gets at that broader question, and not all of them are about fishing communities, although that is certainly one way to go. Maybe you could design an interview-based study that includes and compares loggers, fishers, and desert golfers (those who golf in arid lands that require a great deal of wasteful irrigation). Or design a case study around one particular example where resources were completely used up by a community. Without knowing what it is you are really interested in, what motivates your interest in a surface phenomenon, you are unlikely to come up with the appropriate research design.

These first stages of research design are often the most difficult, but have patience . Taking the time to consider why you are going to go through a lot of trouble to get answers will prevent a lot of wasted energy in the future.

There are distinct reasons for pursuing particular research questions, and it is helpful to distinguish between them.  First, you may be personally motivated.  This is probably the most important and the most often overlooked.   What is it about the social world that sparks your curiosity? What bothers you? What answers do you need in order to keep living? For me, I knew I needed to get a handle on what higher education was for before I kept going at it. I needed to understand why I felt so different from my peers and whether this whole “higher education” thing was “for the likes of me” before I could complete my degree. That is the personal motivation question. Your personal motivation might also be political in nature, in that you want to change the world in a particular way. It’s all right to acknowledge this. In fact, it is better to acknowledge it than to hide it.

There are also academic and professional motivations for a particular study.  If you are an absolute beginner, these may be difficult to find. We’ll talk more about this when we discuss reviewing the literature. Simply put, you are probably not the only person in the world to have thought about this question or issue and those related to it. So how does your interest area fit into what others have studied? Perhaps there is a good study out there of fishing communities, but no one has quite asked the “justification” question. You are motivated to address this to “fill the gap” in our collective knowledge. And maybe you are really not at all sure of what interests you, but you do know that [insert your topic] interests a lot of people, so you would like to work in this area too. You want to be involved in the academic conversation. That is a professional motivation and a very important one to articulate.

Practical and strategic motivations are a third kind. Perhaps you want to encourage people to take better care of the natural resources around them. If this is also part of your motivation, you will want to design your research project in a way that might have an impact on how people behave in the future. There are many ways to do this, one of which is using qualitative research methods rather than quantitative research methods, as the findings of qualitative research are often easier to communicate to a broader audience than the results of quantitative research. You might even be able to engage the community you are studying in the collecting and analyzing of data, something taboo in quantitative research but actively embraced and encouraged by qualitative researchers. But there are other practical reasons, such as getting “done” with your research in a certain amount of time or having access (or no access) to certain information. There is nothing wrong with considering constraints and opportunities when designing your study. Or maybe one of the practical or strategic goals is about learning competence in this area so that you can demonstrate the ability to conduct interviews and focus groups with future employers. Keeping that in mind will help shape your study and prevent you from getting sidetracked using a technique that you are less invested in learning about.

STOP HERE for a moment

I recommend you write a paragraph (at least) explaining your aims and goals. Include a sentence about each of the following: personal/political goals, practical or professional/academic goals, and practical/strategic goals. Think through how all of the goals are related and can be achieved by this particular research study . If they can’t, have a rethink. Perhaps this is not the best way to go about it.

You will also want to be clear about the purpose of your study. “Wait, didn’t we just do this?” you might ask. No! Your goals are not the same as the purpose of the study, although they are related. You can think about purpose lying on a continuum from “ theory ” to “action” (figure 2.1). Sometimes you are doing research to discover new knowledge about the world, while other times you are doing a study because you want to measure an impact or make a difference in the world.

Purpose types: Basic Research, Applied Research, Summative Evaluation, Formative Evaluation, Action Research

Basic research involves research that is done for the sake of “pure” knowledge—that is, knowledge that, at least at this moment in time, may not have any apparent use or application. Often, and this is very important, knowledge of this kind is later found to be extremely helpful in solving problems. So one way of thinking about basic research is that it is knowledge for which no use is yet known but will probably one day prove to be extremely useful. If you are doing basic research, you do not need to argue its usefulness, as the whole point is that we just don’t know yet what this might be.

Researchers engaged in basic research want to understand how the world operates. They are interested in investigating a phenomenon to get at the nature of reality with regard to that phenomenon. The basic researcher’s purpose is to understand and explain ( Patton 2002:215 ).

Basic research is interested in generating and testing hypotheses about how the world works. Grounded Theory is one approach to qualitative research methods that exemplifies basic research (see chapter 4). Most academic journal articles publish basic research findings. If you are working in academia (e.g., writing your dissertation), the default expectation is that you are conducting basic research.

Applied research in the social sciences is research that addresses human and social problems. Unlike basic research, the researcher has expectations that the research will help contribute to resolving a problem, if only by identifying its contours, history, or context. From my experience, most students have this as their baseline assumption about research. Why do a study if not to make things better? But this is a common mistake. Students and their committee members are often working with default assumptions here—the former thinking about applied research as their purpose, the latter thinking about basic research: “The purpose of applied research is to contribute knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment. While in basic research the source of questions is the tradition within a scholarly discipline, in applied research the source of questions is in the problems and concerns experienced by people and by policymakers” ( Patton 2002:217 ).

Applied research is less geared toward theory in two ways. First, its questions do not derive from previous literature. For this reason, applied research studies have much more limited literature reviews than those found in basic research (although they make up for this by having much more “background” about the problem). Second, it does not generate theory in the same way as basic research does. The findings of an applied research project may not be generalizable beyond the boundaries of this particular problem or context. The findings are more limited. They are useful now but may be less useful later. This is why basic research remains the default “gold standard” of academic research.

Evaluation research is research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. We already know the problems, and someone has already come up with solutions. There might be a program, say, for first-generation college students on your campus. Does this program work? Are first-generation students who participate in the program more likely to graduate than those who do not? These are the types of questions addressed by evaluation research. There are two types of research within this broader frame; however, one more action-oriented than the next. In summative evaluation , an overall judgment about the effectiveness of a program or policy is made. Should we continue our first-gen program? Is it a good model for other campuses? Because the purpose of such summative evaluation is to measure success and to determine whether this success is scalable (capable of being generalized beyond the specific case), quantitative data is more often used than qualitative data. In our example, we might have “outcomes” data for thousands of students, and we might run various tests to determine if the better outcomes of those in the program are statistically significant so that we can generalize the findings and recommend similar programs elsewhere. Qualitative data in the form of focus groups or interviews can then be used for illustrative purposes, providing more depth to the quantitative analyses. In contrast, formative evaluation attempts to improve a program or policy (to help “form” or shape its effectiveness). Formative evaluations rely more heavily on qualitative data—case studies, interviews, focus groups. The findings are meant not to generalize beyond the particular but to improve this program. If you are a student seeking to improve your qualitative research skills and you do not care about generating basic research, formative evaluation studies might be an attractive option for you to pursue, as there are always local programs that need evaluation and suggestions for improvement. Again, be very clear about your purpose when talking through your research proposal with your committee.

Action research takes a further step beyond evaluation, even formative evaluation, to being part of the solution itself. This is about as far from basic research as one could get and definitely falls beyond the scope of “science,” as conventionally defined. The distinction between action and research is blurry, the research methods are often in constant flux, and the only “findings” are specific to the problem or case at hand and often are findings about the process of intervention itself. Rather than evaluate a program as a whole, action research often seeks to change and improve some particular aspect that may not be working—maybe there is not enough diversity in an organization or maybe women’s voices are muted during meetings and the organization wonders why and would like to change this. In a further step, participatory action research , those women would become part of the research team, attempting to amplify their voices in the organization through participation in the action research. As action research employs methods that involve people in the process, focus groups are quite common.

If you are working on a thesis or dissertation, chances are your committee will expect you to be contributing to fundamental knowledge and theory ( basic research ). If your interests lie more toward the action end of the continuum, however, it is helpful to talk to your committee about this before you get started. Knowing your purpose in advance will help avoid misunderstandings during the later stages of the research process!

The Research Question

Once you have written your paragraph and clarified your purpose and truly know that this study is the best study for you to be doing right now , you are ready to write and refine your actual research question. Know that research questions are often moving targets in qualitative research, that they can be refined up to the very end of data collection and analysis. But you do have to have a working research question at all stages. This is your “anchor” when you get lost in the data. What are you addressing? What are you looking at and why? Your research question guides you through the thicket. It is common to have a whole host of questions about a phenomenon or case, both at the outset and throughout the study, but you should be able to pare it down to no more than two or three sentences when asked. These sentences should both clarify the intent of the research and explain why this is an important question to answer. More on refining your research question can be found in chapter 4.

Chances are, you will have already done some prior reading before coming up with your interest and your questions, but you may not have conducted a systematic literature review. This is the next crucial stage to be completed before venturing further. You don’t want to start collecting data and then realize that someone has already beaten you to the punch. A review of the literature that is already out there will let you know (1) if others have already done the study you are envisioning; (2) if others have done similar studies, which can help you out; and (3) what ideas or concepts are out there that can help you frame your study and make sense of your findings. More on literature reviews can be found in chapter 9.

In addition to reviewing the literature for similar studies to what you are proposing, it can be extremely helpful to find a study that inspires you. This may have absolutely nothing to do with the topic you are interested in but is written so beautifully or organized so interestingly or otherwise speaks to you in such a way that you want to post it somewhere to remind you of what you want to be doing. You might not understand this in the early stages—why would you find a study that has nothing to do with the one you are doing helpful? But trust me, when you are deep into analysis and writing, having an inspirational model in view can help you push through. If you are motivated to do something that might change the world, you probably have read something somewhere that inspired you. Go back to that original inspiration and read it carefully and see how they managed to convey the passion that you so appreciate.

At this stage, you are still just getting started. There are a lot of things to do before setting forth to collect data! You’ll want to consider and choose a research tradition and a set of data-collection techniques that both help you answer your research question and match all your aims and goals. For example, if you really want to help migrant workers speak for themselves, you might draw on feminist theory and participatory action research models. Chapters 3 and 4 will provide you with more information on epistemologies and approaches.

Next, you have to clarify your “units of analysis.” What is the level at which you are focusing your study? Often, the unit in qualitative research methods is individual people, or “human subjects.” But your units of analysis could just as well be organizations (colleges, hospitals) or programs or even whole nations. Think about what it is you want to be saying at the end of your study—are the insights you are hoping to make about people or about organizations or about something else entirely? A unit of analysis can even be a historical period! Every unit of analysis will call for a different kind of data collection and analysis and will produce different kinds of “findings” at the conclusion of your study. [2]

Regardless of what unit of analysis you select, you will probably have to consider the “human subjects” involved in your research. [3] Who are they? What interactions will you have with them—that is, what kind of data will you be collecting? Before answering these questions, define your population of interest and your research setting. Use your research question to help guide you.

Let’s use an example from a real study. In Geographies of Campus Inequality , Benson and Lee ( 2020 ) list three related research questions: “(1) What are the different ways that first-generation students organize their social, extracurricular, and academic activities at selective and highly selective colleges? (2) how do first-generation students sort themselves and get sorted into these different types of campus lives; and (3) how do these different patterns of campus engagement prepare first-generation students for their post-college lives?” (3).

Note that we are jumping into this a bit late, after Benson and Lee have described previous studies (the literature review) and what is known about first-generation college students and what is not known. They want to know about differences within this group, and they are interested in ones attending certain kinds of colleges because those colleges will be sites where academic and extracurricular pressures compete. That is the context for their three related research questions. What is the population of interest here? First-generation college students . What is the research setting? Selective and highly selective colleges . But a host of questions remain. Which students in the real world, which colleges? What about gender, race, and other identity markers? Will the students be asked questions? Are the students still in college, or will they be asked about what college was like for them? Will they be observed? Will they be shadowed? Will they be surveyed? Will they be asked to keep diaries of their time in college? How many students? How many colleges? For how long will they be observed?

Recommendation

Take a moment and write down suggestions for Benson and Lee before continuing on to what they actually did.

Have you written down your own suggestions? Good. Now let’s compare those with what they actually did. Benson and Lee drew on two sources of data: in-depth interviews with sixty-four first-generation students and survey data from a preexisting national survey of students at twenty-eight selective colleges. Let’s ignore the survey for our purposes here and focus on those interviews. The interviews were conducted between 2014 and 2016 at a single selective college, “Hilltop” (a pseudonym ). They employed a “purposive” sampling strategy to ensure an equal number of male-identifying and female-identifying students as well as equal numbers of White, Black, and Latinx students. Each student was interviewed once. Hilltop is a selective liberal arts college in the northeast that enrolls about three thousand students.

How did your suggestions match up to those actually used by the researchers in this study? It is possible your suggestions were too ambitious? Beginning qualitative researchers can often make that mistake. You want a research design that is both effective (it matches your question and goals) and doable. You will never be able to collect data from your entire population of interest (unless your research question is really so narrow to be relevant to very few people!), so you will need to come up with a good sample. Define the criteria for this sample, as Benson and Lee did when deciding to interview an equal number of students by gender and race categories. Define the criteria for your sample setting too. Hilltop is typical for selective colleges. That was a research choice made by Benson and Lee. For more on sampling and sampling choices, see chapter 5.

Benson and Lee chose to employ interviews. If you also would like to include interviews, you have to think about what will be asked in them. Most interview-based research involves an interview guide, a set of questions or question areas that will be asked of each participant. The research question helps you create a relevant interview guide. You want to ask questions whose answers will provide insight into your research question. Again, your research question is the anchor you will continually come back to as you plan for and conduct your study. It may be that once you begin interviewing, you find that people are telling you something totally unexpected, and this makes you rethink your research question. That is fine. Then you have a new anchor. But you always have an anchor. More on interviewing can be found in chapter 11.

Let’s imagine Benson and Lee also observed college students as they went about doing the things college students do, both in the classroom and in the clubs and social activities in which they participate. They would have needed a plan for this. Would they sit in on classes? Which ones and how many? Would they attend club meetings and sports events? Which ones and how many? Would they participate themselves? How would they record their observations? More on observation techniques can be found in both chapters 13 and 14.

At this point, the design is almost complete. You know why you are doing this study, you have a clear research question to guide you, you have identified your population of interest and research setting, and you have a reasonable sample of each. You also have put together a plan for data collection, which might include drafting an interview guide or making plans for observations. And so you know exactly what you will be doing for the next several months (or years!). To put the project into action, there are a few more things necessary before actually going into the field.

First, you will need to make sure you have any necessary supplies, including recording technology. These days, many researchers use their phones to record interviews. Second, you will need to draft a few documents for your participants. These include informed consent forms and recruiting materials, such as posters or email texts, that explain what this study is in clear language. Third, you will draft a research protocol to submit to your institutional review board (IRB) ; this research protocol will include the interview guide (if you are using one), the consent form template, and all examples of recruiting material. Depending on your institution and the details of your study design, it may take weeks or even, in some unfortunate cases, months before you secure IRB approval. Make sure you plan on this time in your project timeline. While you wait, you can continue to review the literature and possibly begin drafting a section on the literature review for your eventual presentation/publication. More on IRB procedures can be found in chapter 8 and more general ethical considerations in chapter 7.

Once you have approval, you can begin!

Research Design Checklist

Before data collection begins, do the following:

  • Write a paragraph explaining your aims and goals (personal/political, practical/strategic, professional/academic).
  • Define your research question; write two to three sentences that clarify the intent of the research and why this is an important question to answer.
  • Review the literature for similar studies that address your research question or similar research questions; think laterally about some literature that might be helpful or illuminating but is not exactly about the same topic.
  • Find a written study that inspires you—it may or may not be on the research question you have chosen.
  • Consider and choose a research tradition and set of data-collection techniques that (1) help answer your research question and (2) match your aims and goals.
  • Define your population of interest and your research setting.
  • Define the criteria for your sample (How many? Why these? How will you find them, gain access, and acquire consent?).
  • If you are conducting interviews, draft an interview guide.
  •  If you are making observations, create a plan for observations (sites, times, recording, access).
  • Acquire any necessary technology (recording devices/software).
  • Draft consent forms that clearly identify the research focus and selection process.
  • Create recruiting materials (posters, email, texts).
  • Apply for IRB approval (proposal plus consent form plus recruiting materials).
  • Block out time for collecting data.
  • At the end of the chapter, you will find a " Research Design Checklist " that summarizes the main recommendations made here ↵
  • For example, if your focus is society and culture , you might collect data through observation or a case study. If your focus is individual lived experience , you are probably going to be interviewing some people. And if your focus is language and communication , you will probably be analyzing text (written or visual). ( Marshall and Rossman 2016:16 ). ↵
  • You may not have any "live" human subjects. There are qualitative research methods that do not require interactions with live human beings - see chapter 16 , "Archival and Historical Sources." But for the most part, you are probably reading this textbook because you are interested in doing research with people. The rest of the chapter will assume this is the case. ↵

One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography. 

A methodological tradition of inquiry and research design that focuses on an individual case (e.g., setting, institution, or sometimes an individual) in order to explore its complexity, history, and interactive parts.  As an approach, it is particularly useful for obtaining a deep appreciation of an issue, event, or phenomenon of interest in its particular context.

The controlling force in research; can be understood as lying on a continuum from basic research (knowledge production) to action research (effecting change).

In its most basic sense, a theory is a story we tell about how the world works that can be tested with empirical evidence.  In qualitative research, we use the term in a variety of ways, many of which are different from how they are used by quantitative researchers.  Although some qualitative research can be described as “testing theory,” it is more common to “build theory” from the data using inductive reasoning , as done in Grounded Theory .  There are so-called “grand theories” that seek to integrate a whole series of findings and stories into an overarching paradigm about how the world works, and much smaller theories or concepts about particular processes and relationships.  Theory can even be used to explain particular methodological perspectives or approaches, as in Institutional Ethnography , which is both a way of doing research and a theory about how the world works.

Research that is interested in generating and testing hypotheses about how the world works.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter.  This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.  Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives." ( Denzin and Lincoln 2005:2 ). Contrast with quantitative research .

Research that contributes knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment.

Research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems.  There are two kinds: summative and formative .

Research in which an overall judgment about the effectiveness of a program or policy is made, often for the purpose of generalizing to other cases or programs.  Generally uses qualitative research as a supplement to primary quantitative data analyses.  Contrast formative evaluation research .

Research designed to improve a program or policy (to help “form” or shape its effectiveness); relies heavily on qualitative research methods.  Contrast summative evaluation research

Research carried out at a particular organizational or community site with the intention of affecting change; often involves research subjects as participants of the study.  See also participatory action research .

Research in which both researchers and participants work together to understand a problematic situation and change it for the better.

The level of the focus of analysis (e.g., individual people, organizations, programs, neighborhoods).

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A fictional name assigned to give anonymity to a person, group, or place.  Pseudonyms are important ways of protecting the identity of research participants while still providing a “human element” in the presentation of qualitative data.  There are ethical considerations to be made in selecting pseudonyms; some researchers allow research participants to choose their own.

A requirement for research involving human participants; the documentation of informed consent.  In some cases, oral consent or assent may be sufficient, but the default standard is a single-page easy-to-understand form that both the researcher and the participant sign and date.   Under federal guidelines, all researchers "shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative.  No informed consent, whether oral or written, may include any exculpatory language through which the subject or the representative is made to waive or appear to waive any of the subject's rights or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence" (21 CFR 50.20).  Your IRB office will be able to provide a template for use in your study .

An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

The Four Types of Research Design — Everything You Need to Know

Jenny Romanchuk

Updated: December 11, 2023

Published: January 18, 2023

When you conduct research, you need to have a clear idea of what you want to achieve and how to accomplish it. A good research design enables you to collect accurate and reliable data to draw valid conclusions.

research design used to test different beauty products

In this blog post, we'll outline the key features of the four common types of research design with real-life examples from UnderArmor, Carmex, and more. Then, you can easily choose the right approach for your project.

Table of Contents

What is research design?

The four types of research design, research design examples.

Research design is the process of planning and executing a study to answer specific questions. This process allows you to test hypotheses in the business or scientific fields.

Research design involves choosing the right methodology, selecting the most appropriate data collection methods, and devising a plan (or framework) for analyzing the data. In short, a good research design helps us to structure our research.

Marketers use different types of research design when conducting research .

There are four common types of research design — descriptive, correlational, experimental, and diagnostic designs. Let’s take a look at each in more detail.

Researchers use different designs to accomplish different research objectives. Here, we'll discuss how to choose the right type, the benefits of each, and use cases.

Research can also be classified as quantitative or qualitative at a higher level. Some experiments exhibit both qualitative and quantitative characteristics.

example research design

Free Market Research Kit

5 Research and Planning Templates + a Free Guide on How to Use Them in Your Market Research

  • SWOT Analysis Template
  • Survey Template
  • Focus Group Template

You're all set!

Click this link to access this resource at any time.

Experimental

An experimental design is used when the researcher wants to examine how variables interact with each other. The researcher manipulates one variable (the independent variable) and observes the effect on another variable (the dependent variable).

In other words, the researcher wants to test a causal relationship between two or more variables.

In marketing, an example of experimental research would be comparing the effects of a television commercial versus an online advertisement conducted in a controlled environment (e.g. a lab). The objective of the research is to test which advertisement gets more attention among people of different age groups, gender, etc.

Another example is a study of the effect of music on productivity. A researcher assigns participants to one of two groups — those who listen to music while working and those who don't — and measure their productivity.

The main benefit of an experimental design is that it allows the researcher to draw causal relationships between variables.

One limitation: This research requires a great deal of control over the environment and participants, making it difficult to replicate in the real world. In addition, it’s quite costly.

Best for: Testing a cause-and-effect relationship (i.e., the effect of an independent variable on a dependent variable).

Correlational

A correlational design examines the relationship between two or more variables without intervening in the process.

Correlational design allows the analyst to observe natural relationships between variables. This results in data being more reflective of real-world situations.

For example, marketers can use correlational design to examine the relationship between brand loyalty and customer satisfaction. In particular, the researcher would look for patterns or trends in the data to see if there is a relationship between these two variables.

Similarly, you can study the relationship between physical activity and mental health. The analyst here would ask participants to complete surveys about their physical activity levels and mental health status. Data would show how the two variables are related.
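As a rough sketch of how such survey data might be analyzed, the snippet below computes a Pearson correlation; the activity hours and well-being scores are invented for illustration.

```python
# Correlational analysis sketch: no variables are manipulated, we only
# measure the association between two observed variables. Data are made up.
import numpy as np
from scipy import stats

activity_hours = np.array([2, 5, 1, 4, 6, 3, 7, 2, 5, 4])         # per week
wellbeing_score = np.array([55, 70, 50, 66, 75, 60, 80, 52, 68, 63])

r, p_value = stats.pearsonr(activity_hours, wellbeing_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```

Keep in mind that even a strong correlation here would not show that exercise causes better mental health.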

Best for: Understanding the extent to which two or more variables are associated with each other in the real world.

Descriptive

Descriptive research refers to a systematic process of observing and describing what a subject does without influencing them.

Methods include surveys, interviews, case studies, and observations. Descriptive research aims to gather an in-depth understanding of a phenomenon and answers when/what/where.

SaaS companies use descriptive design to understand how customers interact with specific features. Findings can be used to spot patterns and roadblocks.

For instance, product managers can use screen recordings by Hotjar to observe in-app user behavior. This way, the team can precisely understand what is happening at a certain stage of the user journey and act accordingly.

Brand24, a social listening tool, tripled its sign-up conversion rate from 2.56% to 7.42%, thanks to locating friction points in the sign-up form through screen recordings.


Carma Laboratories worked with research company MMR to measure customers’ reactions to the lip-care company’s packaging and product. The goal was to find the cause of low sales for a recently launched line extension in Europe.

The team moderated a live, online focus group. Participants were shown product samples, while natural language processing (NLP) identified key themes in customer feedback.

This helped uncover key reasons for poor performance and guided changes in packaging.


Research Design: Definition, Types, Characteristics & Study Examples


A research design is the blueprint for any study. It's the plan that outlines how the research will be carried out. A study design usually includes the methods of data collection, the type of data to be gathered, and how it will be analyzed. Research designs help ensure the study is reliable, valid, and can answer the research question.

Behind every groundbreaking discovery and innovation lies a well-designed study. Whether you're investigating a new technology or exploring a social phenomenon, a solid research design is key to achieving reliable results. But what exactly does it mean, and how do you create an effective one? Stay with our paper writers and find out:

  • Detailed definition
  • Types of research study designs
  • How to write a research design
  • Useful examples.

Whether you're a seasoned researcher or just getting started, understanding the core principles will help you conduct better studies and make more meaningful contributions.

What Is a Research Design: Definition

Research design is an overall study plan outlining a specific approach to investigating a research question. It covers particular methods and strategies for collecting, measuring and analyzing data. Students are required to build a study design either as an individual task or as a separate chapter in a research paper, thesis or dissertation.

Before designing a research project, you need to consider a series of aspects of your future study:

  • Research aims: What research objectives do you want to accomplish with your study? What approach will you take to get there? Will you use a quantitative, qualitative, or mixed methods approach?
  • Type of data: Will you gather new data (primary research) or rely on existing data (secondary research) to answer your research question?
  • Sampling methods: How will you pick participants? What criteria will you use to ensure your sample is representative of the population?
  • Data collection methods: What tools or instruments will you use to gather data (e.g., a survey, interview, or observation)?
  • Measurement: What metrics will you use to capture and quantify data?
  • Data analysis: What statistical or qualitative techniques will you use to make sense of your findings?

By using a well-designed research plan, you can make sure your findings are solid and can be generalized to a larger group.

Research design example

You are going to investigate the effectiveness of a mindfulness-based intervention for reducing stress and anxiety among college students. You decide to organize an experiment to explore the impact. Participants should be randomly assigned to either an intervention group or a control group. You need to conduct pre- and post-intervention assessments using self-report measures of stress and anxiety.
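A minimal analysis sketch for this hypothetical design might compare pre-to-post change scores between the two randomly assigned groups. The stress values below are simulated, and a real study would use a validated stress scale.

```python
# Pre/post intervention sketch with a randomly assigned control group.
# All stress scores are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre_intervention = rng.normal(30, 5, 40)                     # baseline stress
post_intervention = pre_intervention - rng.normal(6, 3, 40)  # assumed improvement
pre_control = rng.normal(30, 5, 40)
post_control = pre_control - rng.normal(1, 3, 40)

# Compare change scores between the two groups
change_intervention = pre_intervention - post_intervention
change_control = pre_control - post_control

t_stat, p_value = stats.ttest_ind(change_intervention, change_control)
print(f"mean reduction: intervention {change_intervention.mean():.1f}, "
      f"control {change_control.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```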

What Makes a Good Study Design? 

To design a research study that works, you need to carefully think things through. Make sure your strategy is tailored to your research topic and watch out for potential biases. Your procedures should be flexible enough to accommodate changes that may arise during the course of research. 

A good research design should be:

  • Clear and methodologically sound
  • Feasible and realistic
  • Knowledge-driven.

By following these guidelines, you'll set yourself up for success and be able to produce reliable results.

Research Study Design Structure

A structured research design provides a clear and organized plan for carrying out a study. It helps researchers to stay on track and ensure that the study stays within the bounds of acceptable time, resources, and funding.

A typical design includes 5 main components:

  • Research question(s): Central research topic(s) or issue(s).
  • Sampling strategy: Method for selecting participants or subjects.
  • Data collection techniques: Tools or instruments for retrieving data.
  • Data analysis approaches: Techniques for interpreting and scrutinizing assembled data.
  • Ethical considerations: Principles for protecting human subjects (e.g., obtaining written consent, ensuring confidentiality).

Research Design Essential Characteristics

Creating a research design lays the foundation for your entire study, and the cost of making a mistake is high. This is not something scholars can afford, especially when financial resources or a considerable amount of time is invested. Choose the wrong strategy, and you risk undermining your whole study and wasting resources.

To avoid any unpleasant surprises, make sure your study conforms to the key characteristics. Here are some core features of research designs:

  • Reliability: The stability of your measures or instruments over time. A reliable research design is one that can be reproduced in the same way and deliver consistent outcomes. It should also produce accurate representations of actual conditions and guarantee data quality.
  • Validity: For a study to be valid, it must measure what it claims to measure. This means that methodological approaches should be carefully considered and aligned with the main research question(s).
  • Generalizability: Generalizability means that your insights can be applied beyond the scope of the study. When making inferences, researchers must take into account factors such as sample size, sampling technique, and context.
  • Neutrality: A study design should be free from personal or cognitive biases to ensure an impartial investigation of the research topic. Avoid framing that favors any particular group or outcome.

Key Concepts in Research Design

Now let’s discuss the fundamental principles that underpin study designs in research. This will help you develop a strong framework and make sure all the puzzle pieces fit together.


Types of Approaches to Research Design

Study frameworks fall into 2 major categories depending on how you collect and handle data. The 2 main types of study designs in research are qualitative and quantitative research. Both approaches have their unique strengths and weaknesses, and can be chosen based on the nature of the information you are dealing with.

Quantitative Research  

Quantitative study is focused on establishing empirical relationships between variables and collecting numerical data. It involves using statistics, surveys, and experiments to measure the effects of certain phenomena. This research design type looks at hard evidence and provides measurements that can be analyzed using statistical techniques. 

Qualitative Research 

Qualitative approach is used to examine the behavior, attitudes, and perceptions of individuals in a given environment. This type of study design relies on unstructured data retrieved through interviews, open-ended questions and observational methods. 


Types of Research Designs & Examples

Choosing a research design may be tough, especially for first-timers. A great way to get started is to pick the design that best fits your objectives. There are 4 different types of research designs you can opt for to carry out your investigation:

  • Experimental
  • Correlational
  • Descriptive
  • Diagnostic/explanatory.

Below we will go through each type and offer you examples of study designs to assist you with selection.

1. Experimental

In experimental research design , scientists manipulate one or more independent variables and control other factors in order to observe their effect on a dependent variable. This type of research design is used for experiments where the goal is to determine a causal relationship. 

Its core characteristics include:

  • Randomization
  • Manipulation
  • Replication.
Example: A pharmaceutical company wants to test a new drug to investigate its effectiveness in treating a specific medical condition. Researchers would randomly assign participants to either a control group (receiving a placebo) or an experimental group (receiving the new drug). They would rigorously control other variables (e.g., age, medical history) so that any difference in outcomes can be attributed to the drug.

2. Correlational

Correlational study is used to examine existing relationships between variables. In this type of design, you don't manipulate any variables. Here, researchers just focus on observing and measuring the naturally occurring relationship.

Correlational studies encompass such features: 

  • Data collection from natural settings
  • No intervention by the researcher
  • Observation over time.
Example: A research team wants to examine the relationship between academic performance and extracurricular activities. They would observe students' performance in courses and measure how much time they spend engaging in extracurricular activities.

3. Descriptive 

Descriptive research design is all about describing a particular population or phenomenon without any intervention. This study design is especially helpful when we're not sure about something and want to understand it better.

Descriptive studies are characterized by such features:

  • Random and convenience sampling
  • Observation
  • No intervention.
Example: A psychologist wants to understand how parents' behavior affects their child's self-concept. They would observe the interaction between children and their parents in a natural setting. The gathered information will help them get an overview of the situation and recognize patterns.

4. Diagnostic

Diagnostic or explanatory research is used to determine the cause of an existing problem or a chronic symptom. Unlike other types of design, here scientists try to understand why something is happening. 

Among essential hallmarks of explanatory studies are: 

  • Testing hypotheses and theories
  • Examining existing data
  • Comparative analysis.
Example: A public health specialist wants to identify the cause of an outbreak of water-borne disease in a certain area. They would inspect water samples and records and compare them with similar outbreaks in other areas. This will help to uncover the reasons behind the outbreak.

How to Design a Research Study: Step-by-Step Process

When designing your research, don't just jump into it. It's important to take the time and do things right in order to attain accurate findings. Follow these simple steps on how to design a study to get the most out of your project.

1. Determine Your Aims 

The first step in the research design process is figuring out what you want to achieve. This involves identifying your research question, goals and specific objectives you want to accomplish. Think about whether you want to explore a specific issue or develop a new theory. Setting your aims from the get-go will help you stay focused and ensure that your study is driven by purpose.

Once  you are clear with your goals, you need to decide on the main approach. Will you use qualitative or quantitative methods? Or perhaps a mixture of both?

2. Select a Type of Research Design

Choosing a suitable design requires considering multiple factors, such as your research question, data collection methods, and resources. There are various research design types, each with its own advantages and limitations. Think about the kind of data that would be most useful to address your questions. Ultimately, a well-devised strategy should help you gather accurate data to achieve your objectives.

3. Define Your Population and Sampling Methods

To design a research project, it is essential to establish your target population and parameters for selecting participants. First, identify a cohort of individuals who share common characteristics and possess relevant experiences. 

For instance, if you are researching the impact of social media on mental health, your population could be young adults aged 18-25 who use social media frequently.

With your population in mind, you can now choose an optimal sampling method. Sampling is basically the process of narrowing down your target group to only those individuals who will participate in your study. At this point, you need to decide on whether you want to randomly choose the participants (probability sampling) or set out any selection criteria (non-probability sampling). 

Example: To examine the influence of social media on mental well-being, we will divide the whole population into smaller subgroups using stratified random sampling. Then, we will randomly pick participants from each subcategory to make sure that findings also hold for the broader group of young adults.
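The sketch below shows one way such stratified sampling could be coded. The sampling frame, age bands, and per-stratum quota are all assumptions made for the example.

```python
# Stratified random sampling sketch: group a hypothetical sampling frame
# into age-band strata, then draw randomly within each stratum.
import random

random.seed(0)
# Hypothetical frame of 1000 young adults: (participant_id, age_band)
frame = [(i, random.choice(["18-20", "21-23", "24-25"])) for i in range(1000)]

def stratified_sample(frame, per_stratum=30, seed=42):
    rng = random.Random(seed)
    strata = {}
    for pid, band in frame:
        strata.setdefault(band, []).append(pid)
    # Draw the same number of participants from every stratum
    return {band: rng.sample(ids, per_stratum) for band, ids in strata.items()}

for band, ids in stratified_sample(frame).items():
    print(band, "->", len(ids), "participants sampled")
```

Equal quotas per stratum are used here for simplicity; proportional allocation, where each stratum's quota mirrors its share of the population, is another common choice.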

4. Decide on Your Data Collection Methods

When devising your study, it is also important to consider how you will retrieve data.  Depending on the type of design you are using, you may deploy diverse methods. Below you can see various data collection techniques suited for different research designs. 

[Table: Data collection methods in various studies]

Additionally, if you plan on integrating existing data sources like medical records or publicly available datasets, you want to mention this as well. 

5. Arrange Your Data Collection Process

Your data collection process should also be meticulously thought out. This stage involves scheduling interviews, arranging questionnaires and preparing all the necessary tools for collecting information from participants. Detail how long your study will take and what procedures will be followed for recording and analyzing the data. 

State which variables will be studied and what measures or scales will be used when assessing each variable.

Measures and scales 

Measures and scales are tools used to quantify variables in research. A measure is any method used to collect data on a variable, while a scale is a set of items or questions used to measure a particular construct or concept. Different types of scales include nominal, ordinal, interval, and ratio, each of which has distinct properties.

Operationalization 

When working with abstract information that needs to be quantified, researchers often operationalize the variable by defining it in concrete terms that can be measured or observed. This allows the abstract concept to be studied systematically and rigorously. 

Operationalization in study design example

If studying the concept of happiness, researchers might operationalize it by using a scale that measures positive affect or life satisfaction. This allows us to quantify happiness and inspect its relationship with other variables, such as income or social support.
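A toy illustration of this operationalization step: treat "happiness" as the mean of five 1-7 life-satisfaction items and then relate the composite score to income. The item responses and incomes below are invented.

```python
# Operationalization sketch: a composite "happiness" score from five
# Likert-type items, then its correlation with income. Data are fabricated.
import numpy as np
from scipy import stats

items = np.array([      # rows = respondents, columns = 1-7 scale items
    [6, 5, 6, 7, 5],
    [3, 4, 2, 3, 3],
    [5, 5, 6, 5, 4],
    [2, 3, 2, 2, 3],
    [7, 6, 7, 6, 7],
])
happiness = items.mean(axis=1)              # composite score per respondent
income = np.array([52, 31, 47, 28, 60])     # annual income, $1000s

r, p_value = stats.pearsonr(happiness, income)
print("happiness scores:", happiness)
print(f"r(happiness, income) = {r:.2f}, p = {p_value:.3f}")
```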

Remember that research design should be flexible enough to adjust for any unforeseen developments. Even with rigorous preparation, you may still face unexpected challenges during your project. That’s why you need to work out contingency plans when designing research.

6. Choose Data Analysis Techniques

It’s impossible to design research without stating how you are going to analyze your data. To select a proper method, take into account the type of data you are dealing with and how many variables you need to analyze.

Qualitative data may require thematic analysis or content analysis.

Quantitative data, on the other hand, could be processed with more sophisticated statistical analysis approaches such as regression analysis, factor analysis or descriptive statistics.
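For instance, a simple least-squares regression, one of the techniques named above, can be run in a few lines; the study-hours and exam-score data here are made up for illustration.

```python
# Simple linear regression sketch using numpy's least-squares polyfit.
# The data points are fabricated.
import numpy as np

study_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
exam_score = np.array([52, 55, 61, 64, 70, 72, 78, 83])

# Fit exam_score = slope * study_hours + intercept
slope, intercept = np.polyfit(study_hours, exam_score, deg=1)
predicted = slope * study_hours + intercept

# R^2: share of variance in the outcome explained by the fit
ss_res = ((exam_score - predicted) ** 2).sum()
ss_tot = ((exam_score - exam_score.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot

print(f"score = {slope:.1f} * hours + {intercept:.1f} (R^2 = {r_squared:.2f})")
```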

Finally, don’t forget about ethical considerations. Opt for those methods that minimize harm to participants and protect their rights.

Research Design Checklist

Having a checklist in front of you will help you design your research flawlessly.

  • I clearly defined my research question and its significance.
  • I considered crucial factors such as the nature of my study, the type of required data and available resources to choose a suitable design.
  • My sample size is sufficient to provide statistically significant results.
  • My data collection methods are reliable and valid.
  • My analysis methods are appropriate for the type of data I will be gathering.
  • My research design protects the rights and privacy of my participants.
  • I created a realistic timeline for research, including deadlines for data collection, analysis, and write-up.
  • I considered funding sources and potential limitations.

Bottom Line on Research Design & Study Types

Designing a research project involves making countless decisions that can affect the quality of your work. By planning out each step and selecting the best methods for data collection and analysis, you can ensure that your project is conducted professionally.

We hope this article has helped you to better understand the research design process. If you have any questions or comments, ping us in the comments section below.


FAQ About Research Study Designs

1. What is a study design?

Study design, also called research design, is the overall plan for a project, including its purpose, methodology, data collection and analysis techniques. A good design ensures that your project is conducted in an organized and ethical manner. It also provides clear guidelines for replicating or extending the study in the future.

2. What is the purpose of a research design?

The purpose of a research design is to provide a structure and framework for your project. By outlining your methodology, data collection techniques, and analysis methods in advance, you can ensure that your project will be conducted effectively.

3. What is the importance of research designs?

Research designs are critical to the success of any research project for several reasons. Specifically, study designs grant:

  • Clear direction for all stages of a study
  • Validity and reliability of findings
  • Roadmap for replication or further extension
  • Accurate results by controlling for potential bias
  • Comparison between studies by providing consistent guidelines.

By following an established plan, researchers can be sure that their projects are organized, ethical, and reliable.

4. What are the 4 types of study designs?

There are generally 4 types of study designs commonly used in research:

  • Experimental studies: investigate cause-and-effect relationships by manipulating the independent variable.
  • Correlational studies: examine relationships between 2 or more variables without manipulating them.
  • Descriptive studies: describe the characteristics of a population or phenomenon without making any inferences about cause and effect.
  • Explanatory studies: intended to explain causal relationships.


Joe Eckel is an expert on dissertation writing. He makes sure that each student gets valuable insights on composing A-grade academic writing.


For more advanced studies, you can even combine several types. Mixed-methods research may come in handy when exploring complex phenomena that cannot be adequately captured by one method alone.

Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments whose results illustrate and test the laws and theorems of science. These experiments are built on the strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables is held constant and used as a baseline to measure differences in the second set. Experimental research is a prime example of a quantitative research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research to lead to easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

Researchers conduct a pre-experimental design when a group, or several groups, are kept under observation after factors of cause and effect have been applied. The pre-experimental design helps researchers understand whether further investigation of the groups under observation is warranted.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random assignment of participants to groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either not feasible or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The method is not tied to a particular subject area; anyone can implement it for research purposes.
  • The results are specific.
  • After the results are analyzed, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to develop more in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data can be used as a foundation for building new research ideas in further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Usually, researchers miss out on checking whether their hypothesis is logical enough to be tested. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis can undermine the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations. You should anticipate and incorporate those limitations into your conclusion, as well as the basic research design. Include a statement in your manuscript about any perceived limitations, and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet least talked about topic is ethics. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every project: it involves a lot of resources, time, and money, and it is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because it yields some of the most conclusive results in the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it ensures unbiased results. It also helps isolate the cause-effect relationship within the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi-experimental research design.

The differences between an experimental and a quasi-experimental design are: (1) the assignment of the control group in quasi-experimental research is non-random, unlike in a true experimental design, where it is randomly assigned; (2) experimental research always has a control group, whereas a control group may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.



Research Method


Descriptive Research Design – Types, Methods and Examples


Definition:

Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

Descriptive research design does not attempt to establish cause-and-effect relationships between variables or make predictions about future outcomes. Instead, it focuses on providing a detailed and accurate representation of the data collected, which can be useful for generating hypotheses, exploring trends, and identifying patterns in the data.

Types of Descriptive Research Design

Types of Descriptive Research Design are as follows:

Cross-sectional Study

This involves collecting data at a single point in time from a sample or population to describe their characteristics or behaviors. For example, a researcher may conduct a cross-sectional study to investigate the prevalence of certain health conditions among a population, or to describe the attitudes and beliefs of a particular group.

Longitudinal Study

This involves collecting data over an extended period of time, often through repeated observations or surveys of the same group or population. Longitudinal studies can be used to track changes in attitudes, behaviors, or outcomes over time, or to investigate the effects of interventions or treatments.

Case Study

This involves an in-depth examination of a single individual, group, or situation to gain a detailed understanding of its characteristics or dynamics. Case studies are often used in psychology, sociology, and business to explore complex phenomena or to generate hypotheses for further research.

Survey Research

This involves collecting data from a sample or population through standardized questionnaires or interviews. Surveys can be used to describe attitudes, opinions, behaviors, or demographic characteristics of a group, and can be conducted in person, by phone, or online.

Observational Research

This involves observing and documenting the behavior or interactions of individuals or groups in a natural or controlled setting. Observational studies can be used to describe social, cultural, or environmental phenomena, or to investigate the effects of interventions or treatments.

Correlational Research

This involves examining the relationships between two or more variables to describe their patterns or associations. Correlational studies can be used to identify potential causal relationships or to explore the strength and direction of relationships between variables.

Data Analysis Methods

Descriptive research design data analysis methods depend on the type of data collected and the research question being addressed. Here are some common methods of data analysis for descriptive research:

Descriptive Statistics

This method involves analyzing data to summarize and describe the key features of a sample or population. Descriptive statistics can include measures of central tendency (e.g., mean, median, mode) and measures of variability (e.g., range, standard deviation).
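For example, Python's standard library can compute these summaries directly; the ages below are a made-up sample used only to show the calculations.

```python
# Descriptive statistics for a small, invented sample of respondent ages.
import statistics

ages = [19, 22, 22, 24, 25, 27, 30, 31, 35, 41]

print("mean   =", statistics.mean(ages))
print("median =", statistics.median(ages))
print("mode   =", statistics.mode(ages))
print("range  =", max(ages) - min(ages))
print("stdev  =", round(statistics.stdev(ages), 2))
```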

Cross-tabulation

This method involves analyzing data by creating a table that shows the frequency of two or more variables together. Cross-tabulation can help identify patterns or relationships between variables.
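A quick sketch of cross-tabulation with pandas, using made-up survey responses:

```python
# Cross-tabulation sketch: joint frequency of two categorical variables.
import pandas as pd

responses = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "M", "F", "M"],
    "preference": ["A", "B", "A", "B", "B", "A", "A", "B"],
})

# Rows = gender, columns = product preference, cells = counts
print(pd.crosstab(responses["gender"], responses["preference"]))
```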

Content Analysis

This method involves analyzing qualitative data (e.g., text, images, audio) to identify themes, patterns, or trends. Content analysis can be used to describe the characteristics of a sample or population, or to identify factors that influence attitudes or behaviors.

Qualitative Coding

This method involves analyzing qualitative data by assigning codes to segments of data based on their meaning or content. Qualitative coding can be used to identify common themes, patterns, or categories within the data.

Visualization

This method involves creating graphs or charts to represent data visually. Visualization can help identify patterns or relationships between variables and make it easier to communicate findings to others.

Comparative Analysis

This method involves comparing data across different groups or time periods to identify similarities and differences. Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups within a population.

Applications of Descriptive Research Design

Descriptive research design has numerous applications in various fields. Some of the common applications of descriptive research design are:

  • Market research: Descriptive research design is widely used in market research to understand consumer preferences, behavior, and attitudes. This helps companies to develop new products and services, improve marketing strategies, and increase customer satisfaction.
  • Health research: Descriptive research design is used in health research to describe the prevalence and distribution of a disease or health condition in a population. This helps healthcare providers to develop prevention and treatment strategies.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs. This helps educators to improve teaching methods and develop effective educational programs.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs. This helps researchers to understand social behavior and develop effective policies.
  • Public opinion research: Descriptive research design is used in public opinion research to understand the opinions and attitudes of the general public on various issues. This helps policymakers to develop effective policies that are aligned with public opinion.
  • Environmental research: Descriptive research design is used in environmental research to describe the environmental conditions of a particular region or ecosystem. This helps policymakers and environmentalists to develop effective conservation and preservation strategies.

Descriptive Research Design Examples

Here are some real-time examples of descriptive research designs:

  • A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction. The survey data is analyzed using descriptive statistics and cross-tabulation to describe the characteristics of their customer base.
  • A medical researcher wants to describe the prevalence and risk factors of a particular disease in a population. They conduct a cross-sectional study in which they collect data from a sample of individuals using a standardized questionnaire. The data is analyzed using descriptive statistics and cross-tabulation to identify patterns in the prevalence and risk factors of the disease.
  • An education researcher wants to describe the learning outcomes of students in a particular school district. They collect test scores from a representative sample of students in the district and use descriptive statistics to calculate the mean, median, and standard deviation of the scores. They also create visualizations such as histograms and box plots to show the distribution of scores.
  • A marketing team wants to understand the attitudes and behaviors of consumers towards a new product. They conduct a series of focus groups and use qualitative coding to identify common themes and patterns in the data. They also create visualizations such as word clouds to show the most frequently mentioned topics.
  • An environmental scientist wants to describe the biodiversity of a particular ecosystem. They conduct an observational study in which they collect data on the species and abundance of plants and animals in the ecosystem. The data is analyzed using descriptive statistics to describe the diversity and richness of the ecosystem.

How to Conduct Descriptive Research Design

To conduct a descriptive research design, you can follow these general steps:

  • Define your research question: Clearly define the research question or problem that you want to address. Your research question should be specific and focused to guide your data collection and analysis.
  • Choose your research method: Select the most appropriate research method for your research question. As discussed earlier, common research methods for descriptive research include surveys, case studies, observational studies, cross-sectional studies, and longitudinal studies.
  • Design your study: Plan the details of your study, including the sampling strategy, data collection methods, and data analysis plan. Determine the sample size and sampling method, decide on the data collection tools (such as questionnaires, interviews, or observations), and outline your data analysis plan.
  • Collect data: Collect data from your sample or population using the data collection tools you have chosen. Ensure that you follow ethical guidelines for research and obtain informed consent from participants.
  • Analyze data: Use appropriate statistical or qualitative analysis methods to analyze your data. As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis.
  • Interpret results: Interpret your findings in light of your research question and objectives. Identify patterns, trends, and relationships in the data, and describe the characteristics of your sample or population.
  • Draw conclusions and report results: Draw conclusions based on your analysis and interpretation of the data. Report your results in a clear and concise manner, using appropriate tables, graphs, or figures to present your findings. Ensure that your report follows accepted research standards and guidelines.

When to Use Descriptive Research Design

Descriptive research design is used in situations where the researcher wants to describe a population or phenomenon in detail. It is used to gather information about the current status or condition of a group or phenomenon without making any causal inferences. Descriptive research design is useful in the following situations:

  • Exploratory research: Descriptive research design is often used in exploratory research to gain an initial understanding of a phenomenon or population.
  • Identifying trends: Descriptive research design can be used to identify trends or patterns in a population, such as changes in consumer behavior or attitudes over time.
  • Market research: Descriptive research design is commonly used in market research to understand consumer preferences, behavior, and attitudes.
  • Health research: Descriptive research design is useful in health research to describe the prevalence and distribution of a disease or health condition in a population.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs.

Purpose of Descriptive Research Design

The main purpose of descriptive research design is to describe and measure the characteristics of a population or phenomenon in a systematic and objective manner. It involves collecting data that describe the current status or condition of the population or phenomenon of interest, without manipulating or altering any variables.

The purpose of descriptive research design can be summarized as follows:

  • To provide an accurate description of a population or phenomenon: Descriptive research design aims to provide a comprehensive and accurate description of a population or phenomenon of interest. This can help researchers to develop a better understanding of the characteristics of the population or phenomenon.
  • To identify trends and patterns: Descriptive research design can help researchers to identify trends and patterns in the data, such as changes in behavior or attitudes over time. This can be useful for making predictions and developing strategies.
  • To generate hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • To establish a baseline: Descriptive research design can establish a baseline or starting point for future research. This can be useful for comparing data from different time periods or populations.

Characteristics of Descriptive Research Design

Descriptive research design has several key characteristics that distinguish it from other research designs. Some of the main characteristics of descriptive research design are:

  • Objective : Descriptive research design is objective in nature, which means that it focuses on collecting factual and accurate data without any personal bias. The researcher aims to report the data objectively without any personal interpretation.
  • Non-experimental: Descriptive research design is non-experimental, which means that the researcher does not manipulate any variables. The researcher simply observes and records the behavior or characteristics of the population or phenomenon of interest.
  • Quantitative : Descriptive research design is quantitative in nature, which means that it involves collecting numerical data that can be analyzed using statistical techniques. This helps to provide a more precise and accurate description of the population or phenomenon.
  • Cross-sectional: Descriptive research design is often cross-sectional, which means that the data is collected at a single point in time. This can be useful for understanding the current state of the population or phenomenon, but it may not provide information about changes over time.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Systematic and structured: Descriptive research design involves a systematic and structured approach to data collection, which helps to ensure that the data is accurate and reliable. This involves using standardized procedures for data collection, such as surveys, questionnaires, or observation checklists.

Advantages of Descriptive Research Design

Descriptive research design has several advantages that make it a popular choice for researchers. Some of the main advantages of descriptive research design are:

  • Provides an accurate description: Descriptive research design is focused on accurately describing the characteristics of a population or phenomenon. This can help researchers to develop a better understanding of the subject of interest.
  • Easy to conduct: Descriptive research design is relatively easy to conduct and requires minimal resources compared to other research designs. It can be conducted quickly and efficiently, and data can be collected through surveys, questionnaires, or observations.
  • Useful for generating hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • Large sample size : Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Can be used to monitor changes : Descriptive research design can be used to monitor changes over time in a population or phenomenon. This can be useful for identifying trends and patterns, and for making predictions about future behavior or attitudes.
  • Can be used in a variety of fields : Descriptive research design can be used in a variety of fields, including social sciences, healthcare, business, and education.

Limitation of Descriptive Research Design

Descriptive research design also has some limitations that researchers should consider before using this design. Some of the main limitations of descriptive research design are:

  • Cannot establish cause and effect: Descriptive research design cannot establish cause and effect relationships between variables. It only provides a description of the characteristics of the population or phenomenon of interest.
  • Limited generalizability: The results of a descriptive study may not be generalizable to other populations or situations. This is because descriptive research design often involves a specific sample or situation, which may not be representative of the broader population.
  • Potential for bias: Descriptive research design can be subject to bias, particularly if the researcher is not objective in their data collection or interpretation. This can lead to inaccurate or incomplete descriptions of the population or phenomenon of interest.
  • Limited depth: Descriptive research design may provide a superficial description of the population or phenomenon of interest. It does not delve into the underlying causes or mechanisms behind the observed behavior or characteristics.
  • Limited utility for theory development: Descriptive research design may not be useful for developing theories about the relationship between variables. It only provides a description of the variables themselves.
  • Relies on self-report data: Descriptive research design often relies on self-report data, such as surveys or questionnaires. This type of data may be subject to biases, such as social desirability bias or recall bias.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



How to Construct a Mixed Methods Research Design

Judith Schoonenboom

1 Institut für Bildungswissenschaft, Universität Wien, Sensengasse 3a, 1090 Wien, Austria

R. Burke Johnson

2 Department of Professional Studies, University of South Alabama, UCOM 3700, 36688-0002 Mobile, AL USA

This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.


What is a mixed methods design?

This article addresses the process of selecting and constructing mixed methods research (MMR) designs. The word “design” has at least two distinct meanings in mixed methods research (Maxwell 2013 ). One meaning focuses on the process of design; in this meaning, design is often used as a verb. Someone can be engaged in designing a study (in German: “eine Studie konzipieren” or “eine Studie designen”). Another meaning is that of a product, namely the result of designing. The result of designing as a verb is a mixed methods design as a noun (in German: “das Forschungsdesign” or “Design”), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing as an activity. Obeying these rules is not a guarantee of a strong design, but it does contribute to it. A mixed methods design is characterized by the combination of at least one qualitative and one quantitative research component. For the purpose of this article, we use the following definition of mixed methods research (Johnson et al. 2007 , p. 123):

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e. g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration.

Mixed methods research (“Mixed Methods” or “MM”) is the sibling of multimethod research (“Methodenkombination”) in which either solely multiple qualitative approaches or solely multiple quantitative approaches are combined.

In a commonly used mixed methods notation system (Morse 1991 ), the components are indicated as qual and quan (or QUAL and QUAN to emphasize primacy), respectively, for qualitative and quantitative research. As discussed below, plus (+) signs refer to concurrent implementation of components (“gleichzeitige Durchführung der Teilstudien” or “paralleles Mixed Methods-Design”) and arrows (→) refer to sequential implementation (“Sequenzielle Durchführung der Teilstudien” or “sequenzielles Mixed Methods-Design”) of components. Note that each research tradition receives an equal number of letters (four) in its abbreviation for equity. In this article, this notation system is used in some depth.
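To make the notation concrete, here is a minimal Python sketch. This is our illustration, not part of Morse's ( 1991 ) system; the class and function names are hypothetical. It models components and the two combination operators:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One research component in the Morse notation."""
    tradition: str  # "QUAL" or "QUAN"
    core: bool      # core components are written in capitals

    def __str__(self) -> str:
        return self.tradition.upper() if self.core else self.tradition.lower()

def concurrent(*parts) -> str:
    """A '+' joins components implemented (almost) simultaneously."""
    return " + ".join(str(p) for p in parts)

def sequential(*parts) -> str:
    """An arrow joins components implemented one after the other."""
    return " → ".join(str(p) for p in parts)

# A qualitatively driven design with a concurrent quantitative supplement:
print(concurrent(Component("QUAL", core=True), Component("QUAN", core=False)))
# QUAL + quan

# A quantitatively driven sequential design:
print(sequential(Component("QUAN", core=True), Component("QUAL", core=False)))
# QUAN → qual
```

Because both helpers return plain strings, they can be nested to express more complex, multiphase designs.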

A mixed methods design as a product has several primary characteristics that should be considered during the design process. As shown in Table  1 , the following primary design “dimensions” are emphasized in this article: purpose of mixing, theoretical drive, timing, point of integration, typological use, and degree of complexity. These characteristics are discussed below. We also provide some secondary dimensions to consider when constructing a mixed methods design (Johnson and Christensen 2017 ).

Table 1 List of Primary and Secondary Design Dimensions

Primary dimensions:

  • Purpose of mixing
  • Theoretical drive
  • Timing (simultaneity and dependence)
  • Point of integration
  • Typological versus interactive design approach
  • Planned versus emergent design
  • Degree of complexity

Secondary dimensions:

  • Phenomenon
  • Social scientific theory
  • Ideological drive
  • Combination of sampling methods
  • Degree to which the research participants are similar or different
  • Degree to which the researchers are similar or different
  • Implementation setting
  • Degree to which the methods are similar or different
  • Validity criteria and strategies
  • Full study versus multiple studies
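One way to keep these dimensions in view during planning is to treat them as fields of a structured record. The sketch below is a hypothetical illustration (the field names and allowed values are our assumptions, not a standard instrument); it is filled in with the characteristics of the first case study discussed later in this article:

```python
from dataclasses import dataclass, field

@dataclass
class MixedMethodsDesign:
    """Checklist of the primary design dimensions discussed in this article."""
    purpose: str                    # e.g. "triangulation", "complementarity", "explanation"
    theoretical_drive: str          # "qualitative", "quantitative", or "equal status"
    simultaneity: str               # "concurrent", "sequential", or a combination
    dependence: str                 # "independent" or "dependent"
    points_of_integration: list[str] = field(default_factory=list)
    design_approach: str = "typological"  # or "interactive"
    emergent: bool = False                # planned versus (partly) emergent
    complexity: str = "simple"            # "complex" when there are multiple points of integration

# Roth's (2006) gender-wage-gap study, as characterized in the first case study below:
roth_2006 = MixedMethodsDesign(
    purpose="explanation",
    theoretical_drive="qualitative",   # a QUAL + quan design
    simultaneity="concurrent",
    dependence="dependent",
    points_of_integration=["results"],
)
```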

On the basis of these dimensions, mixed methods designs can be classified into a mixed methods typology or taxonomy. In the mixed methods literature, various typologies of mixed methods designs have been proposed (for an overview see Creswell and Plano Clark 2011 , p. 69–72).

The overall goal of mixed methods research, of combining qualitative and quantitative research components, is to expand and strengthen a study’s conclusions and, therefore, contribute to the published literature. In all studies, the use of mixed methods should contribute to answering one’s research questions.

Ultimately, mixed methods research is about heightened knowledge and validity. The design as a product should be of sufficient quality to achieve multiple validities legitimation (Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ), which refers to the mixed methods research study meeting the relevant combination or set of quantitative, qualitative, and mixed methods validities in each research study.

Given this goal of answering the research question(s) with validity, a researcher can nevertheless have various reasons or purposes for wanting to strengthen the research study and its conclusions. Following is the first design dimension for one to consider when designing a study: Given the research question(s), what is the purpose of the mixed methods study?

A popular classification of purposes of mixed methods research was first introduced in 1989 by Greene, Caracelli, and Graham, based on an analysis of published mixed methods studies. This classification is still in use (Greene 2007 ). Greene et al. ( 1989 , p. 259) distinguished the following five purposes for mixing in mixed methods research:

1. Triangulation seeks convergence, corroboration, correspondence of results from different methods;
2. Complementarity seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method;
3. Development seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions;
4. Initiation seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method;
5. Expansion seeks to extend the breadth and range of inquiry by using different methods for different inquiry components.

In the past 28 years, this classification has been supplemented by several others. On the basis of a review of the reasons for combining qualitative and quantitative research mentioned by the authors of mixed methods studies, Bryman ( 2006 ) formulated a list of more concrete rationales for performing mixed methods research (see Appendix). Bryman’s classification breaks down Greene et al.’s ( 1989 ) categories into several aspects, and he adds a number of additional aspects, such as the following:

(a) Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
(b) Context – refers to cases in which the combination is justified in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
(c) Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
(d) Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
(e) Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
(f) Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research. (Bryman, p. 106)

Views can be diverse (f) in various ways. Some examples of mixed methods design that include a diversity of views are:

  • Iteratively/sequentially connecting local/idiographic knowledge with national/general/nomothetic knowledge;
  • Learning from different perspectives on teams and in the field and literature;
  • Achieving multiple participation, social justice, and action;
  • Determining what works for whom and the relevance/importance of context;
  • Producing interdisciplinary substantive theory, including/comparing multiple perspectives and data regarding a phenomenon;
  • Juxtaposition-dialogue/comparison-synthesis;
  • Breaking down binaries/dualisms (some of both);
  • Explaining interaction between/among natural and human systems;
  • Explaining complexity.

The number of possible purposes for mixing is very large and is increasing; hence, it is not possible to provide an exhaustive list. Greene et al.’s ( 1989 ) purposes, Bryman’s ( 2006 ) rationales, and our examples of a diversity of views were formulated as classifications on the basis of examination of many existing research studies. They indicate how the qualitative and quantitative research components of a study relate to each other. These purposes can be used post hoc to classify research or a priori in the design of a new study. When designing a mixed methods study, it is sometimes helpful to list the purpose in the title of the study design.

The key point of this section is for the researcher to begin a study with at least one research question and then carefully consider what the purposes for mixing are. One can use mixed methods to examine different aspects of a single research question, or one can use separate but related qualitative and quantitative research questions. In all cases, the mixing of methods, methodologies, and/or paradigms will help answer the research questions and make improvements over a more basic study design. Fuller and richer information will be obtained in the mixed methods study.

Theoretical drive

In addition to a mixing purpose, a mixed methods research study might have an overall “theoretical drive” (Morse and Niehaus 2009 ). When designing a mixed methods study, it is occasionally helpful to list the theoretical drive in the title of the study design. An investigation, in Morse and Niehaus’s ( 2009 ) view, is focused primarily on either exploration-and-description or on testing-and-prediction. In the first case, the theoretical drive is called “inductive” or “qualitative”; in the second case, it is called “deductive” or “quantitative”. In the case of mixed methods, the component that corresponds to the theoretical drive is referred to as the “core” component (“Kernkomponente”), and the other component is called the “supplemental” component (“ergänzende Komponente”). In Morse’s notation system, the core component is written in capitals and the supplemental component is written in lowercase letters. For example, in a QUAL → quan design, more weight is attached to the data coming from the core qualitative component. Due to the decisive character of the core component, the core component must be able to stand on its own, and should be implemented rigorously. The supplemental component does not have to stand on its own.

Although this distinction is useful in some circumstances, we do not advise applying it to every mixed methods design. First, Morse and Niehaus contend that the supplemental component can be done “less rigorously”, but they do not explain which aspects of rigor can be dropped. In addition, the idea of decreased rigor conflicts with one key theme of the present article, namely that mixed methods designs should always meet the criterion of multiple validities legitimation (Onwuegbuzie and Johnson 2006 ).

The idea of theoretical drive as explicated by Morse and Niehaus has been criticized. For example, we view a theoretical drive as a feature not of a whole study, but of a research question, or, more precisely, of an interpretation of a research question. For example, if one study includes multiple research questions, it might include several theoretical drives (Schoonenboom 2016 ).

Another criticism of Morse and Niehaus’ conceptualization of theoretical drive is that it does not allow for equal-status mixed methods research (“Mixed Methods Forschung, bei der qualitative und quantitative Methoden die gleiche Bedeutung haben” or “gleichrangige Mixed Methods-Designs”), in which both the qualitative and quantitative component are of equal value and weight; this same criticism applies to Morgan’s ( 2014 ) set of designs. We agree with Greene ( 2015 ) that mixed methods research can be integrated at the levels of method, methodology, and paradigm. In this view, equal-status mixed methods research designs are possible, and they result when both the qualitative and the quantitative components, approaches, and thinking are of equal value, they take control over the research process in alternation, they are in constant interaction, and the outcomes they produce are integrated during and at the end of the research process. Therefore, equal-status mixed methods research (that we often advocate) is also called “interactive mixed methods research”.

Mixed methods research can have three different drives, as formulated by Johnson et al. ( 2007 , p. 123):

Qualitative dominant [or qualitatively driven] mixed methods research is the type of mixed research in which one relies on a qualitative, constructivist-poststructuralist-critical view of the research process, while concurrently recognizing that the addition of quantitative data and approaches are likely to benefit most research projects. Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p. 124) The area around the center of the [qualitative-quantitative] continuum, equal status , is the home for the person that self-identifies as a mixed methods researcher. This researcher takes as his or her starting point the logic and philosophy of mixed methods research. These mixed methods researchers are likely to believe that qualitative and quantitative data and approaches will add insights as one considers most, if not all, research questions.

We leave it to the reader to decide if he or she desires to conduct a qualitatively driven study, a quantitatively driven study, or an equal-status/“interactive” study. According to the philosophies of pragmatism (Johnson and Onwuegbuzie 2004 ) and dialectical pluralism (Johnson 2017 ), interactive mixed methods research is very much a possibility. By successfully conducting an equal-status study, the pragmatist researcher shows that paradigms can be mixed or combined, and that the incompatibility thesis does not always apply to research practice. Equal status research is most easily conducted when a research team is composed of qualitative, quantitative, and mixed researchers, interacts continually, and conducts a study to address one superordinate goal.

Timing: simultaneity and dependence

Another important distinction when designing a mixed methods study relates to the timing of the two (or more) components. When designing a mixed methods study, it is usually helpful to include the word “concurrent” (“parallel”) or “sequential” (“sequenziell”) in the title of the study design; a complex design can be partially concurrent and partially sequential. Timing has two aspects: simultaneity and dependence (Guest 2013 ).

Simultaneity (“Simultanität”) forms the basis of the distinction between concurrent and sequential designs. In a  sequential design , the quantitative component precedes the qualitative component, or vice versa. In a  concurrent design , both components are executed (almost) simultaneously. In the notation of Morse ( 1991 ), concurrence is indicated by a “+” between components (e. g., QUAL + quan), while sequentiality is indicated with a “→” (QUAL → quan). Note that the use of capital letters for one component and lower case letters for another component in the same design suggests that one component is primary and the other is secondary or supplemental.

Some designs are sequential by nature. For example, in a  conversion design, qualitative categories and themes might be first obtained by collection and analysis of qualitative data, and then subsequently quantitized (Teddlie and Tashakkori 2009 ). Likewise, with Greene et al.’s ( 1989 ) initiation purpose, the initiation strand follows the unexpected results that it is supposed to explain. In other cases, the researcher has a choice. It is possible, e. g., to collect interview data and survey data of one inquiry simultaneously; in that case, the research activities would be concurrent. It is also possible to conduct the interviews after the survey data have been collected (or vice versa); in that case, research activities are performed sequentially. Similarly, a study with the purpose of expansion can be designed in which data on an effect and the intervention process are collected simultaneously, or they can be collected sequentially.

A second aspect of timing is dependence (“Abhängigkeit”) . We call two research components dependent if the implementation of the second component depends on the results of data analysis in the first component. Two research components are independent if their implementation does not depend on the results of data analysis in the other component. Often, a researcher has a choice to perform data analysis independently or not. A researcher could analyze interview data and questionnaire data of one inquiry independently; in that case, the research activities would be independent. It is also possible to let the interview questions depend upon the outcomes of the analysis of the questionnaire data (or vice versa); in that case, research activities are performed dependently. Similarly, the empirical outcome/effect and process in a study with the purpose of expansion might be investigated independently, or the process study might take the effect/outcome as given (dependent).

In the mixed methods literature, the distinction between sequential and concurrent usually refers to the combination of concurrent/independent and sequential/dependent, and to the combination of data collection and data analysis. It is said that in a concurrent design, the data collection and data analysis of both components occur (almost) simultaneously and independently, while in a sequential design, the data collection and data analysis of one component take place after the data collection and data analysis of the other component and depend on the outcomes of the other component.

In our opinion, simultaneity and dependence are two separate dimensions. Simultaneity indicates whether data collection is done concurrently or sequentially. Dependence indicates whether the implementation of one component depends upon the results of data analysis of the other component. As we will see in the example case studies, a concurrent design can include dependent data analysis, and a sequential design can include independent data analysis. It is conceivable that one simultaneously conducts interviews and collects questionnaire data (concurrent), while allowing the analysis focus of the interviews to depend on what emerges from the survey data (dependence).

Dependent research activities involve a redirection of the subsequent inquiry: using the outcomes of the first research component, the researcher decides what to do in the second component, and different outcomes lead to different follow-up activities. If this is so, the research activities involved are said to be sequential-dependent, and any component preceded by another component should appropriately build on the previous component (see sequential validity legitimation ; Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ).

It is at the discretion of the researcher, guided by the purpose of the study, to determine whether a concurrent-dependent design, a concurrent-independent design, a sequential-dependent design, or a sequential-independent design is needed to answer a particular research question or set of research questions in a given situation.
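Because the two timing aspects vary independently, they cross into exactly these four timing types. The short sketch below (our hypothetical labels) simply enumerates the combinations:

```python
from itertools import product

SIMULTANEITY = ("concurrent", "sequential")
DEPENDENCE = ("independent", "dependent")

# Crossing the two aspects of timing yields the four timing types named above.
for simultaneity, dependence in product(SIMULTANEITY, DEPENDENCE):
    print(f"{simultaneity}-{dependence} design")
```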

Point of integration

Each true mixed methods study has at least one “point of integration” – called the “point of interface” by Morse and Niehaus ( 2009 ) and Guest ( 2013 ) – at which the qualitative and quantitative components are brought together. Having one or more points of integration is the distinguishing feature of a design based on multiple components. It is at this point that the components are “mixed”, hence the label “mixed methods designs”. The term “mixing”, however, is misleading, as the components are not simply mixed, but have to be integrated very carefully.

Determining where the point of integration will be, and how the results will be integrated, is an important, if not the most important, decision in the design of mixed methods research. Morse and Niehaus ( 2009 ) identify two possible points of integration: the results point of integration and the analytical point of integration.

Most commonly, integration takes place in the results point of integration . At some point in writing down the results of the first component, the results of the second component are added and integrated. A  joint display (listing the qualitative and quantitative findings and an integrative statement) might be used to facilitate this process.
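As a rough illustration of what a joint display holds, the following sketch (hypothetical content, loosely based on the first case study discussed below) pairs a quantitative finding with the related qualitative finding and an integrative statement:

```python
# A joint display as a simple list of rows; real joint displays are
# usually presented as tables in a paper, not as program output.
joint_display = [
    {
        "quantitative finding": "a significant gender gap in wages remains after controls",
        "qualitative finding": "accounts of discriminatory practices in performance evaluation",
        "integrative statement": "the qualitative processes help explain the measured gap",
    },
]

for row in joint_display:
    for column, cell in row.items():
        print(f"{column:>22}: {cell}")
```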

In the case of an analytical point of integration , a first analytical stage of a qualitative component is followed by a second analytical stage, in which the topics identified in the first analytical stage are quantitized. The results of the qualitative component thus become quantitative before the results of the analytical phase as a whole are written down; qualitizing, the converse strategy, is also possible.
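Quantitizing can be made concrete with a small example. In the sketch below (the data and variable names are our hypothetical illustration), coded interview segments are converted into a per-participant count matrix that could then enter a quantitative analysis:

```python
from collections import Counter

# Hypothetical output of qualitative coding: (participant, theme) pairs.
coded_segments = [
    ("p1", "trust"), ("p1", "trust"), ("p1", "workload"),
    ("p2", "workload"), ("p3", "trust"), ("p3", "autonomy"),
]

themes = sorted({theme for _, theme in coded_segments})
participants = sorted({participant for participant, _ in coded_segments})
counts = Counter(coded_segments)

# Quantitized matrix: one row per participant, one column per theme.
matrix = {p: [counts[(p, theme)] for theme in themes] for p in participants}

print("themes:", themes)        # ['autonomy', 'trust', 'workload']
for participant, row in matrix.items():
    print(participant, row)     # e.g. p1 [0, 2, 1]
```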

Other authors assume more than two possible points of integration. Teddlie and Tashakkori ( 2009 ) distinguish four different stages of an investigation: the conceptualization stage, the methodological experiential stage (data collection), the analytical experiential stage (data analysis), and the inferential stage. According to these authors, mixing is possible in all four stages, and thus all four stages are potential points of integration.

However, the four possible points of integration used by Teddlie and Tashakkori ( 2009 ) are still too coarse to distinguish some types of mixing. Mixing in the experiential stage can take many different forms, for example the use of cognitive interviews to improve a questionnaire (tool development), or selecting people for an interview on the basis of the results of a questionnaire (sampling). Extending the definition by Guest ( 2013 ), we define the point of integration as “any point in a study where two or more research components are mixed or connected in some way”. Then, the point of integration in the two examples of this paragraph can be defined more accurately as “instrument development”, and “development of the sample”.

It is at the point of integration that qualitative and quantitative components are integrated. Some primary ways that the components can be connected to each other are as follows:

(1) merging the two data sets,
(2) connecting from the analysis of one set of data to the collection of a second set of data,
(3) embedding of one form of data within a larger design or procedure, and
(4) using a framework (theoretical or program) to bind together the data sets (Creswell and Plano Clark 2011 , p. 76).

More generally, one can consider mixing at any or all of the following research components: purposes, research questions, theoretical drive, methods, methodology, paradigm, data, analysis, and results. One can also include mixing views of different researchers, participants, or stakeholders. The creativity of the mixed methods researcher designing a study is extensive.

Substantively, it can be useful to think of integration or mixing as comparing and bringing together two (or more) components on the basis of one or more of the purposes set out in the first section of this article. For example, it is possible to use qualitative data to illustrate a quantitative effect, or to determine whether the qualitative and the quantitative component yield convergent results ( triangulation ). An integrated result could also consist of a combination of a quantitatively established effect and a qualitative description of the underlying process . In the case of development, integration consists of an adjustment of, for example, an instrument, a model, or an interpretation (often a quantitative one), based on qualitative assessments by members of the target group.

A special case is the integration of divergent results. The power of mixed methods research is its ability to deal with diversity and divergence. In the literature, we find two kinds of strategies for dealing with divergent results. A first set of strategies takes the detected divergence as the starting point for further analysis, with the aim of resolving the divergence. One possibility is to carry out further research (Cook 1985 ; Greene and Hall 2010 ). Further research is not always necessary, however. One can also look for a more comprehensive theory that is able to account for both the results of the first component and the deviating results of the second component. This is a form of abduction (Erzberger and Prein 1997 ).

A fruitful starting point in trying to resolve divergence through abduction is to determine which component has resulted in a finding that is somehow expected, logical, and/or in line with existing research. The results of this research component, called the “sense” (“Lesart”), are subsequently compared to the results of the other component, called the “anti-sense” (“alternative Lesart”), which are considered dissonant, unexpected, and/or contrary to what had been found in the literature. The aim is to develop an overall explanation that fits both the sense and the anti-sense (Bazeley and Kemp 2012 ; Mendlinger and Cwikel 2008 ). Finally, a reanalysis of the data can sometimes lead to resolving divergence (Creswell and Plano Clark 2011 ).

Alternatively, one can question the existence of the encountered divergence. In this regard, Mathison ( 1988 ) recommends determining whether deviating results shown by the data can be explained by knowledge about the research and/or knowledge of the social world. Differences between results from different data sources could also be the result of properties of the methods involved, rather than reflect differences in reality (Yanchar and Williams 2006 ). In general, the conclusions of the individual components can be subjected to an inference quality audit (Teddlie and Tashakkori 2009 ), in which the researcher investigates the strength of each of the divergent conclusions. We recommend that researchers first determine whether there is “real” divergence, using the strategies mentioned in this paragraph. Next, an attempt can be made to resolve cases of true divergence, using one or more of the methods mentioned in the previous paragraphs.

Design typology utilization

As already mentioned in Sect. 1, mixed methods designs can be classified into a mixed methods typology or taxonomy. A typology serves several purposes, including the following: guiding practice, legitimizing the field, generating new possibilities, and serving as a useful pedagogical tool (Teddlie and Tashakkori 2009 ). Note, however, that not all types of typologies are equally suitable for all purposes. For generating new possibilities, one will need a more exhaustive typology, while a useful pedagogical tool might be better served by a non-exhaustive overview of the most common mixed methods designs. Although some of the current MM design typologies include more designs than others, none of the current typologies is fully exhaustive. When designing a mixed methods study, it is often useful to borrow its name from an existing typology, or to construct a clear and nuanced name of your own when your design is a modification of one or more existing designs.

Various typologies of mixed methods designs have been proposed. Creswell and Plano Clark’s ( 2011 ) typology of some “commonly used designs” includes six “major mixed methods designs”. Our summary of these designs runs as follows:

  • Convergent parallel design (“paralleles Design”) (the quantitative and qualitative strands of the research are performed independently, and their results are brought together in the overall interpretation),
  • Explanatory sequential design (“explanatives Design”) (a first phase of quantitative data collection and analysis is followed by the collection of qualitative data, which are used to explain the initial quantitative results),
  • Exploratory sequential design (“exploratives Design”) (a first phase of qualitative data collection and analysis is followed by the collection of quantitative data to test or generalize the initial qualitative results),
  • Embedded design (“Einbettungs-Design”) (in a traditional qualitative or quantitative design, a strand of the other type is added to enhance the overall design),
  • Transformative design (“politisch-transformatives Design”) (a transformative theoretical framework, e. g. feminism or critical race theory, shapes the interaction, priority, timing and mixing of the qualitative and quantitative strand),
  • Multiphase design (“Mehrphasen-Design”) (more than two phases or both sequential and concurrent strands are combined over a period of time within a program of study addressing an overall program objective).

Most of their designs presuppose a specific juxtaposition of the qualitative and quantitative component. Note that the last design is a complex type that is required in many mixed methods studies.

The following are our adapted definitions of Teddlie and Tashakkori’s ( 2009 ) five sets of mixed methods research designs (adapted from Teddlie and Tashakkori 2009 , p. 151):

  • Parallel mixed designs (“paralleles Mixed-Methods-Design”) – In these designs, one has two or more parallel quantitative and qualitative strands, either with some minimal time lapse or simultaneously; the strand results are integrated into meta-inferences after separate analyses are conducted; related QUAN and QUAL research questions are answered, or aspects of the same mixed research question are addressed.
  • Sequential mixed designs (“sequenzielles Mixed-Methods-Design”) – In these designs, QUAL and QUAN strands occur across chronological phases, and the procedures/questions from the later strand emerge from, depend on, or build on the previous strand; the research questions are interrelated and sometimes evolve during the study.
  • Conversion mixed designs (“Transfer-Design” or “Konversionsdesign”) – In these parallel designs, mixing occurs when one type of data is transformed to the other type and then analyzed, and the additional findings are added to the results; this design answers related aspects of the same research question.
  • Multilevel mixed designs (“Mehrebenen-Mixed-Methods-Design”) – In these parallel or sequential designs, mixing occurs across multiple levels of analysis, as QUAN and QUAL data are analyzed and integrated to answer related aspects of the same research question or related questions.
  • Fully integrated mixed designs (“voll integriertes Mixed-Methods-Design”) – In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur. For example, rather than including integration only at the findings/results stage, or only across phases in a sequential design, mixing might occur at the conceptualization stage, the methodological stage, the analysis stage, and the inferential stage.

We recommend adding to Teddlie and Tashakkori’s typology a sixth design type, specifically, a  “hybrid” design type to include complex combinations of two or more of the other design types. We expect that many published MM designs will fall into the hybrid design type.

Morse and Niehaus ( 2009 ) listed eight mixed methods designs in their book (and suggested that authors create more complex combinations when needed). Our shorthand labels and descriptions (adapted from Morse and Niehaus 2009 , p. 25) run as follows:

  • QUAL + quan (inductive-simultaneous design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAL → quan (inductive-sequential design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAN + qual (deductive-simultaneous design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAN → qual (deductive-sequential design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAL + qual (inductive-simultaneous design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAL → qual (inductive-sequential design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAN + quan (deductive-simultaneous design, where both components are quantitative; this is a multimethod design rather than a mixed methods design)
  • QUAN → quan (deductive-sequential design, where both components are quantitative; this is a multimethod design rather than a mixed methods design).

Notice that Morse and Niehaus ( 2009 ) included four mixed methods designs (the first four designs shown above) and four multimethod designs (the second set of four designs shown above) in their typology. The reader can, therefore, see that the design notation also works quite well for multimethod research designs. Notably absent from Morse and Niehaus’s book are equal-status or interactive designs. In addition, they assume that the core component should always be performed either concurrent with or before the supplemental component.
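The defining difference is visible in the notation itself: a design string is mixed when it contains both traditions, and multimethod when it contains only one. A minimal, hypothetical check (our sketch, not part of the notation system):

```python
def is_mixed(design: str) -> bool:
    """True when a design in Morse notation combines at least one
    qualitative and one quantitative component."""
    tokens = design.replace("→", "+").split("+")
    traditions = {token.strip().upper() for token in tokens}
    return {"QUAL", "QUAN"} <= traditions

print(is_mixed("QUAL + quan"))  # True  (mixed methods)
print(is_mixed("QUAL → qual"))  # False (multimethod)
```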

Johnson, Christensen, and Onwuegbuzie constructed a set of mixed methods designs without these limitations. The resulting mixed methods design matrix (see Johnson and Christensen 2017 , p. 478) contains nine designs, which we can label as follows (adapted from Johnson and Christensen 2017 , p. 478):

  • QUAL + QUAN (equal-status concurrent design),
  • QUAL + quan (qualitatively driven concurrent design),
  • QUAN + qual (quantitatively driven concurrent design),
  • QUAL → QUAN (equal-status sequential design),
  • QUAN → QUAL (equal-status sequential design),
  • QUAL → quan (qualitatively driven sequential design),
  • qual → QUAN (quantitatively driven sequential design),
  • QUAN → qual (quantitatively driven sequential design), and
  • quan → QUAL (qualitatively driven sequential design).
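The structure of this matrix can be seen by generating it: crossing the three theoretical drives with one concurrent form and the two orderings of the sequential form reproduces the nine labels above. This is a hypothetical sketch of ours, not code from the cited authors:

```python
# Each drive fixes which component(s) are capitalized (the core);
# concurrency gives one design, sequencing gives two (order matters).
drives = {
    "equal-status": ("QUAL", "QUAN"),
    "qualitatively driven": ("QUAL", "quan"),
    "quantitatively driven": ("QUAN", "qual"),
}

for drive, (first, second) in drives.items():
    print(f"{first} + {second}  ({drive} concurrent design)")
    print(f"{first} → {second}  ({drive} sequential design)")
    print(f"{second} → {first}  ({drive} sequential design)")
```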

The above set of nine designs assumed only one qualitative and one quantitative component. However, this simplistic assumption can be relaxed in practice, allowing the reader to construct more complex designs. The Morse notation system is very powerful. For example, a three-stage equal-status concurrent-sequential design can be written as (QUAL + QUAN) → (QUAL + QUAN) → (QUAL + QUAN): three sequential phases, each containing concurrent equal-status qualitative and quantitative components.

The key point here is that the Morse notation provides researchers with a powerful language for depicting and communicating the design constructed for a specific research study.

When designing a mixed methods study, it is sometimes helpful to include the mixing purpose (or characteristic on one of the other dimensions shown in Table  1 ) in the title of the study design (e. g., an explanatory sequential MM design, an exploratory-confirmatory MM design, a developmental MM design). Much more important, however, than a design name is for the author to provide an accurate description of what was done in the research study, so the reader will know exactly how the study was conducted. A design classification label can never replace such a description.

The common complexity of mixed methods designs poses a problem for the above typologies of mixed methods research. The typologies were designed to classify whole mixed methods studies, and they are basically based on a classification of simple designs. In practice, however, many designs are complex. Complex designs are sometimes labeled “complex design”, “multiphase design”, “fully integrated design”, “hybrid design” and the like. Because complex designs occur very often in practice, the above typologies are not able to classify a large part of existing mixed methods research any further than by labeling it “complex”, which in itself is not very informative about the particular design. This problem does not fully apply to Morse’s notation system, which can be used to symbolize some more complex designs.

Something similar applies to the classification of the purposes of mixed methods research. The classifications of purposes mentioned in the “Purpose”-section, again, are basically meant for the classification of whole mixed methods studies. In practice, however, one single study often serves more than one purpose (Schoonenboom et al. 2017 ). The more purposes that are included in one study, the more difficult it becomes to select a design on the basis of the purpose of the investigation, as advised by Greene ( 2007 ). Of all purposes involved, then, which one should be the primary basis for the design? Or should the design be based upon all purposes included? And if so, how? For more information on how to articulate design complexity based on multiple purposes of mixing, see Schoonenboom et al. ( 2017 ).

It should be clear to the reader that, although much progress has been made in the area of mixed methods design typologies, no single typology yet comprehensively lists the possible designs for mixed methods research. This is why we emphasize in this article the importance of learning to build on simple designs and to construct one’s own design for one’s research questions. This will often result in a combination or “hybrid” design that goes beyond the basic designs found in typologies, and a methodology section that provides much more information than a design name.

Typological versus interactive approaches to design

In the introduction, we made a distinction between design as a product and design as a process. Related to this, two different approaches to design can be distinguished: typological/taxonomic approaches (“systematische Ansätze”), such as those in the previous section, and interactive approaches (“interaktive Ansätze”) (the latter were called “dynamic” approaches by Creswell and Plano Clark 2011 ). Whereas typological/taxonomic approaches view designs as a sort of mold, in which the inquiry can be fit, interactive approaches (Maxwell 2013 ) view design as a process, in which a certain design-as-a-product might be the outcome of the process, but not its input.

The most frequently mentioned interactive approach to mixed methods research is the approach by Maxwell and Loomis ( 2003 ). Maxwell and Loomis distinguish the following components of a design: goals, conceptual framework, research question, methods, and validity. They argue convincingly that the most important task of the researcher is to deliver as the end product of the design process a design in which these five components fit together properly. During the design process, the researcher works alternately on the individual components, and as a result, their initial fit, if it existed, tends to get lost. The researcher should therefore regularly check during the research and continuing design process whether the components still fit together, and, if not, should adapt one or the other component to restore the fit between them. In an interactive approach, unlike the typological approach, design is viewed as an interactive process in which the components are continually compared during the research study to each other and adapted to each other.

Typological and interactive approaches to mixed methods research have been presented as mutually exclusive alternatives. In our view, however, they are not mutually exclusive. The interactive approach of Maxwell is a very powerful tool for conducting research, yet this approach is not specific to mixed methods research. Maxwell’s interactive approach emphasizes that the researcher should keep and monitor a close fit between the five components of research design. However, it does not indicate how one should combine qualitative and quantitative subcomponents within one of Maxwell’s five components (e. g., how one should combine a qualitative and a quantitative method, or a qualitative and a quantitative research question). Essential elements of the design process, such as timing and the point of integration, are not covered by Maxwell’s approach. This is not a shortcoming of Maxwell’s approach, but it indicates that to support the design of mixed methods research, more is needed than Maxwell’s model currently has to offer.

Some authors state that design typologies are particularly useful for beginning researchers and interactive approaches are suited for experienced researchers (Creswell and Plano Clark 2011 ). However, like an experienced researcher, a research novice needs to align the components of his or her design properly with each other, and, like a beginning researcher, an advanced researcher should indicate how qualitative and quantitative components are combined with each other. This makes an interactive approach desirable, also for beginning researchers.

We see two merits of the typological/taxonomic approach . We agree with Greene ( 2007 ), who states that the value of the typological approach mainly lies in the different dimensions of mixed methods that result from its classifications. In this article, the primary dimensions include purpose, theoretical drive, timing, point of integration, typological vs. interactive approaches, planned vs. emergent designs, and complexity (also see secondary dimensions in Table  1 ). Unfortunately, all of these dimensions are not reflected in any single design typology reviewed here. A second merit of the typological approach is the provision of common mixed methods research designs, of common ways in which qualitative and quantitative research can be combined, as is done for example in the major designs of Creswell and Plano Clark ( 2011 ). Contrary to other authors, however, we do not consider these designs as a feature of a whole study, but rather, in line with Guest ( 2013 ), as a feature of one part of a design in which one qualitative and one quantitative component are combined. Although one study could have only one purpose, one point of integration, et cetera, we believe that combining “designs” is the rule and not the exception. Therefore, complex designs need to be constructed and modified as needed, and during the writing phase the design should be described in detail and perhaps given a creative and descriptive name.

Planned versus emergent designs

A mixed methods design can be thought out in advance, but can also arise during the course of the conduct of the study; the latter is called an “emergent” design (Creswell and Plano Clark 2011 ). Emergent designs arise, for example, when the researcher discovers during the study that one of the components is inadequate (Morse and Niehaus 2009 ). Addition of a component of the other type can sometimes remedy such an inadequacy. Some designs contain an emergent component by their nature. Initiation, for example, is the further exploration of unexpected outcomes. Unexpected outcomes are by definition not foreseen, and therefore cannot be included in the design in advance.

The question arises whether researchers should plan all these decisions beforehand, or whether they can make them during, and depending on the course of, the research process. The answer to this question is twofold. On the one hand, a researcher should decide beforehand which research components to include in the design, such that the conclusion that will be drawn will be robust. On the other hand, developments during research execution will sometimes prompt the researcher to decide to add additional components. In general, the advice is to be prepared for the unexpected. When one is able to plan for emergence, one should not refrain from doing so.

Dimension of complexity

Next, mixed methods designs are characterized by their complexity. In the literature, simple and complex designs are distinguished in various ways. A common distinction is between simple investigations with a single point of integration versus complex investigations with multiple points of integration (Guest 2013 ). When designing a mixed methods study, it can be useful to mention in the title whether the design of the study is simple or complex. The primary message of this section is as follows: It is the responsibility of the researcher to create more complex designs when needed to answer his or her research question(s) .

Teddlie and Tashakkori’s ( 2009 ) multilevel mixed designs and fully integrated mixed designs are both complex designs, but for different reasons. A multilevel mixed design is more complex ontologically, because it involves multiple levels of reality. For example, data might be collected both at the levels of schools and students, neighborhood and households, companies and employees, communities and inhabitants, or medical practices and patients (Yin 2013 ). Integration of these data does not only involve the integration of qualitative and quantitative data, but also the integration of data originating from different sources and existing at different levels. Little if any published research has discussed the possible ways of integrating data obtained in a multilevel mixed design (see Schoonenboom 2016 ). This is an area in need of additional research.

The fully-integrated mixed design is more complex because it contains multiple points of integration. As formulated by Teddlie and Tashakkori ( 2009 , p. 151):

In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur.

Complexity, then, not only depends on the number of components, but also on the extent to which they depend on each other (e. g., “one approach affects the formulation of the other”).

Many of our design dimensions ultimately refer to different ways in which the qualitative and quantitative research components are interdependent. Different purposes of mixing ultimately differ in the way one component relates to, and depends upon, the other component. For example, these purposes include dependencies, such as “x illustrates y” and “x explains y”. Dependencies in the implementation of x and y occur to the extent that the design of y depends on the results of x (sequentiality). The theoretical drive creates dependencies, because the supplemental component y is performed and interpreted within the context and the theoretical drive of core component x. As a general rule in designing mixed methods research, one should examine and plan carefully the ways in which and the extent to which the various components depend on each other.

The dependence among components, which may or may not be present, has been summarized by Greene ( 2007 ). It is seen in the distinction between component designs (“Komponenten-Designs”), in which the components are independent of each other, and integrated designs (“integrierte Designs”), in which the components are interdependent. Of these two design categories, integrated designs are the more complex designs.

Secondary design considerations

The primary design dimensions explained above have been the focus of this article. There are a number of secondary considerations for researchers to also think about when they design their studies (Johnson and Christensen 2017 ). Now we list some secondary design issues and questions that should be thoughtfully considered during the construction of a strong mixed methods research design.

  • Phenomenon: Will the study be addressing (a) the same part or different parts of one phenomenon, (b) different phenomena, or (c) the phenomenon/phenomena from different perspectives? Is the phenomenon (a) expected to be unique (e. g., a historical event or a particular group), (b) expected to be part of a more regular and predictable phenomenon, or (c) a complex mixture of these?
  • Social scientific theory: Will the study generate a new substantive theory, test an already constructed theory, or achieve both in a sequential arrangement? Or is the researcher not interested in substantive theory based on empirical data?
  • Ideological drive: Will the study have an explicitly articulated ideological drive (e. g., feminism, critical race paradigm, transformative paradigm)?
  • Combination of sampling methods: What specific quantitative sampling method(s) will be used? What specific qualitative sampling methods(s) will be used? How will these be combined or related?
  • Degree to which the research participants will be similar or different: For example, participants or stakeholders with known differences of perspective would provide participants that are quite different.
  • Degree to which the researchers on the research team will be similar or different: For example, an experiment conducted by one researcher would be high on similarity, but the use of a heterogeneous and participatory research team would include many differences.
  • Implementation setting: Will the phenomenon be studied naturalistically, experimentally, or through a combination of these?
  • Degree to which the methods are similar or different: For example, a structured interview and a questionnaire are fairly similar, but administration of a standardized test and participant observation in the field are quite different.
  • Validity criteria and strategies: What validity criteria and strategies will be used to address the defensibility of the study and the conclusions that will be drawn from it (see Chapter 11 in Johnson and Christensen 2017 )?
  • Full study: Will there be essentially one research study or more than one? How will the research report be structured?

Two case studies

The above design dimensions are now illustrated by examples. A nice collection of examples of mixed methods studies can be found in Hesse-Biber ( 2010 ), from which the following examples are taken. The description of the first case example is shown in Box 1.

Box 1

Summary of Roth ( 2006 ), research regarding the gender-wage gap within Wall Street securities firms. Adapted from Hesse-Biber ( 2010 , pp. 457–458)

Louise Marie Roth’s research, Selling Women Short: Gender and Money on Wall Street ( 2006 ), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. […] Roth conducted semistructured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews […] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. […] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. […] Her respondents’ lived experiences over time revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.

This example nicely illustrates the distinction we made between simultaneity and dependency. On the two aspects of the timing dimension, this study was a concurrent-dependent design answering a set of related research questions. The data collection in this example was conducted simultaneously, and was thus concurrent – the quantitative closed-ended questions were embedded into the qualitative in-depth interviews. In contrast, the analysis was dependent, as explained in the next paragraph.

One of the purposes of this study was explanation: The qualitative data were used to understand the processes underlying the quantitative outcomes. It is therefore an explanatory design, and might be labelled an “explanatory concurrent design”. Conceptually, explanatory designs are often dependent: The qualitative component is used to explain and clarify the outcomes of the quantitative component. In that sense, the qualitative analysis in the case study took the outcomes of the quantitative component (“the existence of the gender-wage gap” and “numerical understanding of the disparity”), and aimed at providing an explanation for that result of the quantitative data analysis , by relating it to the contextual circumstances in which the quantitative outcomes were produced. This purpose of mixing in the example corresponds to Bryman’s ( 2006 ) “contextual understanding”. On the other primary dimensions, (a) the design was ongoing over a three-year period but was not emergent, (b) the point of integration was results, and (c) the design was not complex with respect to the point of integration, as it had only one point of integration. Yet, it was complex in the sense of involving multiple levels; both the level of the individual and the organization were included. According to the approach of Johnson and Christensen ( 2017 ), this was a QUAL + quan design (that was qualitatively driven, explanatory, and concurrent). If we give this study design a name, perhaps it should focus on what was done in the study: “explaining an effect from the process by which it is produced”. Having said this, the name “explanatory concurrent design” could also be used.

The description of the second case example is shown in Box 2.

Box 2

Summary of McMahon’s ( 2007 ) explorative study of the meaning, role, and salience of rape myths within the subculture of college student athletes. Adapted from Hesse-Biber ( 2010 , pp. 461–462)

Sarah McMahon ( 2007 ) wanted to explore the subculture of college student athletes and specifically the meaning, role, and salience of rape myths within that culture. […] While she was looking for confirmation between the quantitative ([structured] survey) and qualitative (focus groups and individual interviews) findings, she entered this study skeptical of whether or not her quantitative and qualitative findings would mesh with one another. McMahon […] first administered a survey [instrument] to 205 sophomore and junior student athletes at one Northeast public university. […] The quantitative data revealed a very low acceptance of rape myths among this student population but revealed a higher acceptance of violence among men and individuals who did not know a survivor of sexual assault. In the second qualitative (QUAL) phase, “focus groups were conducted as semi-structured interviews” and facilitated by someone of the same gender as the participants (p. 360). […] She followed this up with a third qualitative component (QUAL), individual interviews, which were conducted to elaborate on themes discovered in the focus groups and determine any differences in students’ responses between situations (i. e., group setting vs. individual). The interview guide was designed specifically to address focus group topics that needed “more in-depth exploration” or clarification (p. 361). The qualitative findings from the focus groups and individual qualitative interviews revealed “subtle yet pervasive rape myths” that fell into four major themes: “the misunderstanding of consent, the belief in ‘accidental’ and fabricated rape, the contention that some women provoke rape, and the invulnerability of female athletes” (p. 363). She found that the survey’s finding of a “low acceptance of rape myths … was contradicted by the findings of the focus groups and individual interviews, which indicated the presence of subtle rape myths” (p. 362).

On the timing dimension, this is an example of a sequential-independent design. It is sequential, because the qualitative focus groups were conducted after the survey was administered. The analysis of the quantitative and qualitative data was independent: both were analyzed independently, to see whether they yielded the same results (which they did not). This purpose, therefore, was triangulation. On the other primary dimensions, (a) the design was planned, (b) the point of integration was results, and (c) the design was not complex, as it had only one point of integration and involved only the level of the individual. The author called this a “sequential explanatory” design. We doubt, however, whether this is the most appropriate label, because the qualitative component did not provide an explanation for quantitative results that were taken as given. On the contrary, the qualitative results contradicted the quantitative results. Thus, “sequential-independent design”, “sequential-triangulation design”, or “sequential-comparative design” would probably be a better name.

Notice further that the second case study had the same point of integration as the first case study. The two components were brought together in the results. Thus, although the case studies are very dissimilar in many respects, this does not become visible in their point of integration. It can therefore be helpful to determine whether their point of extension is different. A  point of extension is the point in the research process at which the second (or later) component comes into play. In the first case study, two related, but different research questions were answered, namely the quantitative question “How large is the gender-wage gap among highly performing Wall Street MBAs after controlling for any legitimate factors that might otherwise make a difference?”, and the qualitative research question “How do structural factors within the workplace setting contribute to the gender-wage gap and its persistence over time?” This case study contains one qualitative research question and one quantitative research question. Therefore, the point of extension is the research question. In the second case study, both components answered the same research question. They differed in their data collection (and subsequently in their data analysis): qualitative focus groups and individual interviews versus a quantitative questionnaire. In this case study, the point of extension was data collection. Thus, the point of extension can be used to distinguish between the two case studies.

Summary and conclusions

The purpose of this article is to help researchers to understand how to design a mixed methods research study. Perhaps the simplest approach is to design is to look at a single book and select one from the few designs included in that book. We believe that is only useful as a starting point. Here we have shown that one often needs to construct a research design to fit one’s unique research situation and questions.

First, we showed that there are there are many purposes for which qualitative and quantitative methods, methodologies, and paradigms can be mixed. This must be determined in interaction with the research questions. Inclusion of a purpose in the design name can sometimes provide readers with useful information about the study design, as in, e. g., an “explanatory sequential design” or an “exploratory-confirmatory design”.

The second dimension is theoretical drive in the sense that Morse and Niehaus ( 2009 ) use this term. That is, will the study have an inductive or a deductive drive, or, we added, a combination of these. Related to this idea is whether one will conduct a qualitatively driven, a quantitatively driven, or an equal-status mixed methods study. This language is sometimes included in the design name to communicate this characteristic of the study design (e. g., a “quantitatively driven sequential mixed methods design”).

The third dimension is timing , which has two aspects: simultaneity and dependence. Simultaneity refers to whether the components are to be implemented concurrently, sequentially, or a combination of these in a multiphase design. Simultaneity is commonly used in the naming of a mixed methods design because it communicates key information. The second aspect of timing, dependence , refers to whether a later component depends on the results of an earlier component, e. g., Did phase two specifically build on phase one in the research study? The fourth design dimension is the point of integration, which is where the qualitative and quantitative components are brought together and integrated. This is an essential dimension, but it usually does not need to be incorporated into the design name.

The fifth design dimension is that of typological vs. interactive design approaches . That is, will one select a design from a typology or use a more interactive approach to construct one’s own design? There are many typologies of designs currently in the literature. Our recommendation is that readers examine multiple design typologies to better understand the design process in mixed methods research and to understand what designs have been identified as popular in the field. However, when a design that would follow from one’s research questions is not available, the researcher can and should (a) combine designs into new designs or (b) simply construct a new and unique design. One can go a long way in depicting a complex design with Morse’s ( 1991 ) notation when used to its full potential. We also recommend that researchers understand the process approach to design from Maxwell and Loomis ( 2003 ), and realize that research design is a process and it needs, oftentimes, to be flexible and interactive.

The sixth design dimension or consideration is whether a design will be fully specified during the planning of the research study or if the design (or part of the design) will be allowed to emerge during the research process, or a combination of these. The seventh design dimension is called complexity . One sort of complexity mentioned was multilevel designs, but there are many complexities that can enter designs. The key point is that good research often requires the use of complex designs to answer one’s research questions. This is not something to avoid. It is the responsibility of the researcher to learn how to construct and describe and name mixed methods research designs. Always remember that designs should follow from one’s research questions and purposes, rather than questions and purposes following from a few currently named designs.

In addition to the six primary design dimensions or considerations, we provided a set of additional or secondary dimensions/considerations or questions to ask when constructing a mixed methods study design. Our purpose throughout this article has been to show what factors must be considered to design a high quality mixed methods research study. The more one knows and thinks about the primary and secondary dimensions of mixed methods design the better equipped one will be to pursue mixed methods research.

Acknowledgments

Open access funding provided by University of Vienna.

Biographies

1965, Dr., Professor of Empirical Pedagogy at University of Vienna, Austria. Research Areas: Mixed Methods Design, Philosophy of Mixed Methods Research, Innovation in Higher Education, Design and Evaluation of Intervention Studies, Educational Technology. Publications: Mixed methods in early childhood education. In: M. Fleer & B. v. Oers (Eds.), International handbook on early childhood education (Vol. 1). Dordrecht, The Netherlands: Springer 2017; The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research 10, 2016; The realist survey: How respondents’ voices can be used to test and revise correlational models. Journal of Mixed Methods Research 2015. Advance online publication.

1957, PhD, Professor of Professional Studies at University of South Alabama, Mobile, Alabama USA. Research Areas: Methods of Social Research, Program Evaluation, Quantitative, Qualitative and Mixed Methods, Philosophy of Social Science. Publications: Research methods, design and analysis. Boston, MA 2014 (with L. Christensen and L. Turner); Educational research: Quantitative, qualitative and mixed approaches. Los Angeles, CA 2017 (with L. Christensen); The Oxford handbook of multimethod and mixed methods research inquiry. New York, NY 2015 (with S. Hesse-Biber).

Bryman’s ( 2006 ) scheme of rationales for combining quantitative and qualitative research 1

  • Triangulation or greater validity – refers to the traditional view that quantitative and qualitative research might be combined to triangulate findings in order that they may be mutually corroborated. If the term was used as a synonym for integrating quantitative and qualitative research, it was not coded as triangulation.
  • Offset – refers to the suggestion that the research methods associated with both quantitative and qualitative research have their own strengths and weaknesses so that combining them allows the researcher to offset their weaknesses to draw on the strengths of both.
  • Completeness – refers to the notion that the researcher can bring together a more comprehensive account of the area of enquiry in which he or she is interested if both quantitative and qualitative research are employed.
  • Process – quantitative research provides an account of structures in social life but qualitative research provides sense of process.
  • Different research questions – this is the argument that quantitative and qualitative research can each answer different research questions but this item was coded only if authors explicitly stated that they were doing this.
  • Explanation – one is used to help explain findings generated by the other.
  • Unexpected results – refers to the suggestion that quantitative and qualitative research can be fruitfully combined when one generates surprising results that can be understood by employing the other.
  • Instrument development – refers to contexts in which qualitative research is employed to develop questionnaire and scale items – for example, so that better wording or more comprehensive closed answers can be generated.
  • Sampling – refers to situations in which one approach is used to facilitate the sampling of respondents or cases.
  • Credibility – refer s to suggestions that employing both approaches enhances the integrity of findings.
  • Context – refers to cases in which the combination is rationalized in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  • Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
  • Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  • Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  • Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research.
  • Enhancement or building upon quantitative/qualitative findings – this entails a reference to making more of or augmenting either quantitative or qualitative findings by gathering data using a qualitative or quantitative research approach.
  • Other/unclear.
  • Not stated.

1 Reprinted with permission from “Integrating quantitative and qualitative research: How is it done?” by Alan Bryman ( 2006 ), Qualitative Research, 6, pp. 105–107.

Contributor Information

Judith Schoonenboom, Email: [email protected] .

R. Burke Johnson, Email: ude.amabalahtuos@nosnhojb .

  • Bazeley, Pat, Lynn Kemp Mosaics, triangles, and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012; 6 :55–72. doi: 10.1177/1558689811419514. [ CrossRef ] [ Google Scholar ]
  • Bryman A. Integrating quantitative and qualitative research: how is it done? Qualitative Research. 2006; 6 :97–113. doi: 10.1177/1468794106058877. [ CrossRef ] [ Google Scholar ]
  • Cook TD. Postpositivist critical multiplism. In: Shotland RL, Mark MM, editors. Social science and social policy. Beverly Hills: SAGE; 1985. pp. 21–62. [ Google Scholar ]
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2. Los Angeles: SAGE; 2011. [ Google Scholar ]
  • Erzberger C, Prein G. Triangulation: Validity and empirically-based hypothesis construction. Quality and Quantity. 1997; 31 :141–154. doi: 10.1023/A:1004249313062. [ CrossRef ] [ Google Scholar ]
  • Greene JC. Mixed methods in social inquiry. San Francisco: Jossey-Bass; 2007. [ Google Scholar ]
  • Greene JC. Preserving distinctions within the multimethod and mixed methods research merger. Sharlene Hesse-Biber and R. Burke Johnson. New York: Oxford University Press; 2015. [ Google Scholar ]
  • Greene JC, Valerie J, Caracelli, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989; 11 :255–274. doi: 10.3102/01623737011003255. [ CrossRef ] [ Google Scholar ]
  • Greene JC, Hall JN. Dialectics and pragmatism. In: Tashakkori A, Teddlie C, editors. SAGE handbook of mixed methods in social & behavioral research. 2. Los Angeles: SAGE; 2010. pp. 119–167. [ Google Scholar ]
  • Guest, Greg Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research. 2013; 7 :141–151. doi: 10.1177/1558689812461179. [ CrossRef ] [ Google Scholar ]
  • Hesse-Biber S. Qualitative approaches to mixed methods practice. Qualitative Inquiry. 2010; 16 :455–468. doi: 10.1177/1077800410364611. [ CrossRef ] [ Google Scholar ]
  • Johnson BR. Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research. 2017; 11 :156–173. doi: 10.1177/1558689815607692. [ CrossRef ] [ Google Scholar ]
  • Johnson BR, Christensen LB. Educational research: Quantitative, qualitative, and mixed approaches. 6. Los Angeles: SAGE; 2017. [ Google Scholar ]
  • Johnson BR, Onwuegbuzie AJ. Mixed methods research: a research paradigm whose time has come. Educational Researcher. 2004; 33 (7):14–26. doi: 10.3102/0013189X033007014. [ CrossRef ] [ Google Scholar ]
  • Johnson BR, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007; 1 :112–133. doi: 10.1177/1558689806298224. [ CrossRef ] [ Google Scholar ]
  • Mathison S. Why triangulate? Educational Researcher. 1988; 17 :13–17. doi: 10.3102/0013189X017002013. [ CrossRef ] [ Google Scholar ]
  • Maxwell JA. Qualitative research design: An interactive approach. 3. Los Angeles: SAGE; 2013. [ Google Scholar ]
  • Maxwell, Joseph A., and Diane M. Loomis. 2003. Mixed methods design: An alternative approach. In Handbook of mixed methods in social & behavioral research , Eds. Abbas Tashakkori and Charles Teddlie, 241–271. Thousand Oaks: Sage.
  • McMahon S. Understanding community-specific rape myths: Exploring student athlete culture. Affilia. 2007; 22 :357–370. doi: 10.1177/0886109907306331. [ CrossRef ] [ Google Scholar ]
  • Mendlinger S, Cwikel J. Spiraling between qualitative and quantitative data on women’s health behaviors: A double helix model for mixed methods. Qualitative Health Research. 2008; 18 :280–293. doi: 10.1177/1049732307312392. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Morgan DL. Integrating qualitative and quantitative methods: a pragmatic approach. Los Angeles: Sage; 2014. [ Google Scholar ]
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991; 40 :120–123. doi: 10.1097/00006199-199103000-00014. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Morse JM, Niehaus L. Mixed method design: Principles and procedures. Walnut Creek: Left Coast Press; 2009. [ Google Scholar ]
  • Onwuegbuzie AJ, Burke Johnson R. The “validity” issue in mixed research. Research in the Schools. 2006; 13 :48–63. [ Google Scholar ]
  • Roth LM. Selling women short: Gender and money on Wall Street. Princeton: Princeton University Press; 2006. [ Google Scholar ]
  • Schoonenboom J. The multilevel mixed intact group analysis: a mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research. 2016; 10 :129–146. doi: 10.1177/1558689814536283. [ CrossRef ] [ Google Scholar ]
  • Schoonenboom, Judith, R. Burke Johnson, and Dominik E. Froehlich. 2017, in press. Combining multiple purposes of mixing within a mixed methods research design. International Journal of Multiple Research Approaches .
  • Teddlie CB, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: Sage; 2009. [ Google Scholar ]
  • Yanchar SC, Williams DD. Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher. 2006; 35 (9):3–12. doi: 10.3102/0013189X035009003. [ CrossRef ] [ Google Scholar ]
  • Yin RK. Case study research: design and methods. 5. Los Angeles: SAGE; 2013. [ Google Scholar ]
  • Methodology
  • Open access
  • Published: 28 March 2024

Leveraging human-centered design and causal pathway diagramming toward enhanced specification and development of innovative implementation strategies: a case example of an outreach tool to address racial inequities in breast cancer screening

  • Leah M. Marcotte   ORCID: orcid.org/0000-0003-3130-3525 1 ,
  • Raina Langevin 2 ,
  • Bridgette R. Hempstead 3 ,
  • Anisha Ganguly 4 ,
  • Aaron R. Lyon 5 ,
  • Bryan J. Weiner 6 , 7 ,
  • Nkem Akinsoto 8 ,
  • Paula L. Houston 8 ,
  • Victoria Fang 1 , 8 &
  • Gary Hsieh 2  

Implementation Science Communications volume  5 , Article number:  31 ( 2024 ) Cite this article

47 Accesses

Metrics details

Implementation strategies are strategies to improve uptake of evidence-based practices or interventions and are essential to implementation science. Developing or tailoring implementation strategies may benefit from integrating approaches from other disciplines; yet current guidance on how to effectively incorporate methods from other disciplines to develop and refine innovative implementation strategies is limited. We describe an approach that combines community-engaged methods, human-centered design (HCD) methods, and causal pathway diagramming (CPD)—an implementation science tool to map an implementation strategy as it is intended to work—to develop innovative implementation strategies.

We use a case example of developing a conversational agent or chatbot to address racial inequities in breast cancer screening via mammography. With an interdisciplinary team including community members and operational leaders, we conducted a rapid evidence review and elicited qualitative data through interviews and focus groups using HCD methods to identify and prioritize key determinants (facilitators and barriers) of the evidence-based intervention (breast cancer screening) and the implementation strategy (chatbot). We developed a CPD using key determinants and proposed strategy mechanisms and proximal outcomes based in conceptual frameworks.

We identified key determinants for breast cancer screening and for the chatbot implementation strategy. Mistrust was a key barrier to both completing breast cancer screening and using the chatbot. We focused design for the initial chatbot interaction to engender trust and developed a CPD to guide chatbot development. We used the persuasive health message framework and conceptual frameworks about trust from marketing and artificial intelligence disciplines. We developed a CPD for the initial interaction with the chatbot with engagement as a mechanism to use and trust as a proximal outcome leading to further engagement with the chatbot.

Conclusions

The use of interdisciplinary methods is core to implementation science. HCD is a particularly synergistic discipline with multiple existing applications of HCD to implementation research. We present an extension of this work and an example of the potential value in an integrated community-engaged approach of HCD and implementation science researchers and methods to combine strengths of both disciplines and develop human-centered implementation strategies rooted in causal perspective and healthcare equity.

Peer Review reports

Contributions to the literature

The integration of human-centered design and implementation science researchers and methods can synthesize strengths of both disciplines.

Human-centered design methods can be employed as part of an overarching co-creation approach to including partners/communities in research.

A human-centered design approach rooted in causal pathway diagramming can help to address challenges of both basing implementation strategies in theory and meeting the needs of partners and/or communities.

The field of implementation science was created to address the gap between what should be done based on existing evidence and what is done in practice. Implementation strategies—“methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice” [ 1 ]—are central to implementation science. In 2015, the Expert Recommendations for Implementing Change project compiled 73 different implementation strategies used in the field [ 2 ]. However, as implementation science has evolved, experts have recognized that (a) more implementation strategies exist than have been cataloged and (b) developing or tailoring implementation strategies may benefit from integrating approaches from other disciplines (e.g., behavioral economics and human-centered design) [ 3 , 4 , 5 ]. Yet, current guidance on how to effectively incorporate methods from other disciplines to develop and refine innovative implementation strategies is limited.

The causal pathway diagram (CPD) is an implementation science method that can be used to support development and refinement of implementation strategies [ 6 ]. CPDs help researchers to understand implementation strategies as they are intended to work. In building a CPD, researchers identify the implementation strategy, the mechanism(s) through which the strategy is thought to lead to the intended outcome, proximal outcomes which may provide signals of effect earlier than the intended outcome, and the distal (intended) outcome. CPDs also include moderators which may enhance or dampen pathway effect and pre-conditions which are necessary for the pathway to proceed. In developing and refining implementation strategies, CPDs help investigators map key determinants (barriers or facilitators) that implementation strategies address and mechanisms by which strategies are posited to effect change. Theory and existing evidence are typically used to construct CPDs [ 7 ]. One potential limitation of the CPD is that while there is an emphasis on incorporating theory and existing evidence, there are fewer examples of incorporating implementation partners’ and/or community needs and context in the initial creation of the CPD [ 8 ].

Co-creation—a collaborative process including people with a diversity of roles/positions to attain goals—is increasingly recognized as an approach for implementation scientists to integrate partners/communities in research [ 9 ]. In co-creation, researchers may employ several different methodologies and methods. For example, many researchers use community-engaged research approaches to meaningfully include community members in intervention development and implementation with goals to increase relevance, effectiveness, and sustainment of interventions [ 9 , 10 ]. Co-creation can be framed as an overarching concept that includes co-design (intervention development) and co-production (intervention implementation) [ 11 ]. Co-design is specifically relevant for the design of novel implementation strategies and researchers often use human-centered design methods for co-designing interventions [ 12 , 13 ]. In co-design, multiple methods may be used synergistically. For example, researchers can simultaneously use community-based participatory research methods to meaningfully include community members and human-centered design (HCD) methods to guide intervention design [ 14 , 15 ].

HCD methods are particularly useful in co-design for technology-based implementation strategies with key standards and principles for designing interactive systems [ 16 ]. HCD is a “flexible, yet disciplined and repeatable approach to innovation that puts people at the center of activity.” [ 17 ] HCD methods elicit information regarding user environment and experience through continuous partner/user engagement and draw from multidisciplinary expertise [ 18 ]. An initial phase of HCD is to establish the context of use and requirements of users [ 19 ]. In this exploratory phase, researchers often collect and analyze qualitative data through interviews, focus groups, and/or co-design sessions. In addition to questions about context and user requirements, interviews will often include “mockups” or early prototypes for initial reactions and feedback.

Establishing context of use and user requirements in HCD is synergistic to the needs for specifying implementation strategies and developing CPDs [ 20 ]. HCD methods can be used to build and inform CPDs by gaining understanding of key determinants to the desired program or practice within a specific context and identifying potential facilitators and barriers to the implementation strategy itself. Early qualitative data from HCD methods and identification of key determinants can help to inform use of theory in building CPDs. Finally, HCD methods can further help to understand and test assumptions related to mechanisms of an implementation strategy [ 20 ].

Haines et al. described the application of HCD methods in defining context and connecting to evidence-based practices and implementation strategies. [ 4 ] We build on this work by describing a way to explicitly include partners (organizational and community) in the co-design of implementation strategies while maintaining a causal basis and perspective. We present an example incorporating HCD methods in CPDs to design an innovative outreach strategy to address inequities in breast cancer screening using mammography among Black women. In this case study, we illustrate how HCD methods were used to (1) identify and prioritize key determinants, (2) select and apply conceptual frameworks, and (3) understand (and design for) strategy mechanisms.

Case example: designing an outreach tool to address breast cancer screening inequities

Inequities in breast cancer mortality among Black people have been recognized for decades and yet persist [ 21 , 22 ]. These inequities are partly due to later stage breast cancer diagnosis [ 21 ]. Regular interval breast cancer screening with mammography aligned with the United States Preventive Services Task Force (USPSTF) guidelines is an evidence-based intervention to improve earlier diagnosis of and mortality from breast cancer [ 23 ]. Therefore, addressing breast cancer screening inequities among Black people eligible for screening aligned with guidelines may facilitate early detection and improve breast cancer survival [ 24 , 25 , 26 ]. Research to date has demonstrated that Black women experience multiple barriers to breast cancer screening including reduced access to care, mistrust and decreased self-efficacy, fear of diagnosis, prior negative health care experiences, and lack of information regarding breast cancer risk [ 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ]. Black women may also not feel included or prioritized in breast cancer screening campaigns [ 30 ].

Tailored interventions to improve breast cancer screening among Black women have demonstrated modest effect in improving breast cancer screening rates, yet many of these interventions, such as use of health navigators, are resource intensive and must be repeated annually [ 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 ]. Mobile technology interventions can be culturally tailored and may address limitations related to cost and time [ 48 , 49 ]. Mobile technology interventions using short message service (SMS) text are accessible to individuals across a range of sociodemographic factors and have been shown to be effective in primary care behavioral and disease management interventions [ 50 ]. Black women have reported SMS text-based breast cancer screening interventions to be accessible and acceptable [ 51 ]. SMS text-based interventions are also more accessible than patient portal-based interventions which lack adequate reach due to substantial racial inequities in patient portal use [ 52 , 53 ]. Mobile health interventions using conversational interfaces such as chatbots via SMS text can act as virtual health navigators providing individualized information about and connecting individuals to healthcare [ 54 ]. Prior research has shown chatbots increase levels of trust in web-based information and are easy to use and scale [ 55 , 56 ].

While the use of health navigators is an established implementation strategy, there are little data on integrating conversational agents in primary care outreach and none that we are aware of that specifically address healthcare inequities in cancer screening [ 57 ]. Literature on digital health interventions emphasizes need for careful attention to and planning for implementation to optimize integration in the healthcare system and patient use [ 58 ]. Moreover, evidence of bias in artificial intelligence raises caution in the design of chatbot interventions [ 59 , 60 , 61 ]. We identified chatbots as a promising, innovative implementation strategy to address breast cancer screening inequities; however, one that warrants rigorous methods and community engagement to design and tailor.

In late 2020, we brought together a team of researchers and health system leaders at a large academic medical center to address inequities in breast cancer screening through the design of a chatbot that could facilitate outreach. Breast cancer screening rates in the health system at the time using the National Committee for Quality Assurance Healthcare Effectiveness Data and Information Set measure based on the USPSTF guidelines were 61.5% among Black women compared to 73.3% among White women (internal health system data) [ 23 , 62 ]. The chatbot implementation strategy was favored as an intervention among interdisciplinary team members because of its innovation and the low resource burden to primary care with better potential for sustainability. Usual care consisted of chart review and telephone outreach by a primary care health navigator and then connection to radiology for mammogram scheduling. The chatbot intervention could be sent to people due for screening and could be configured to schedule a mammogram during the chatbot interaction, expending less resources with greater efficiency. The study protocol was reviewed and determined exempt by the University of Washington Institutional Review Board. This manuscript adheres to the Enhancing the Quality and Transparency of health research (EQUATOR) Better Reporting of Interventions: template for intervention description and replication (TIDieR) checklist and guide and the Consolidated criteria for REporting Qualitative research (COREQ) guidelines [ 63 , 64 ].

To approach implementation-focused research questions, interdisciplinary teams of researchers, operational partners, and end-users are advantageous to develop optimized implementation strategies or innovations. Implementation science and human-centered design researchers co-leading efforts (e.g., as PI, Co-PI, or MPIs) can help to support integration of methods and perspectives. In patient-facing interventions—particularly those addressing inequities among marginalized communities—including patient/community partners can center intervention development on patient/community needs, facilitate participant recruitment, help refine study protocol, and support the analyses of collected data [ 10 , 65 ].

In designing a chatbot for breast cancer screening outreach to address racial inequities, our team included an HCD researcher (G.H.), a primary care physician and early-stage investigator with focus in implementation science (L.M.M.), an HCD PhD candidate (R.L.), and a community-based organization leader (B.R.H.) with expertise in conducting interviews and focus groups for qualitative research and extensive community connections. We received project mentorship from the Optimizing Implementation in Cancer Control (OPTICC) team that includes several leaders and experts in implementation science (e.g., B.J.W., A.R.L.) [ 66 ]. We drew input from key health system partners including health care equity leadership (P.L.H.), primary care and population health leadership (N.A., V.F.), and primary care health navigators. We held regular interdisciplinary team meetings, most frequently in the initial stages of innovation design. Throughout the development of the chatbot tool, we sought feedback from community members through interviews, focus groups, and (planned) co-design sessions.

Positionality statement

Our team included trainees, researchers, clinicians, and operational leaders at the University of Washington and a community-based organization leader. Previous research interests/experience included communication technologies to promote health and well-being (G.H.) and improving quality and equity in primary care services (L.M.M.). B.R.H. provided health equity expertise; she has led a Seattle-based survivor and support organization for African American women with cancer for over 25 years; in that time, she has collaborated with researchers on over 60 grants. Most of our team identifies as women and several of our team members identify as Black women. Research analysis was conducted primarily by R.L., G.H., and L.M.M. (none of whom are Black/African American); all data analysis/interpretation was reviewed with B.R.H. in bi-weekly research meetings.

Identify and prioritize key determinants

Key determinant (i.e., facilitators and barriers) identification is critical in implementation strategy development and a first step in creating a CPD. Determinants may be identified initially through evidence review and contextually through qualitative (e.g., interviews) and/or quantitative (e.g., survey) methods. The use of HCD methods can augment identification of key determinants and other components in CPD via mockups and/or early prototypes to elicit feedback on initial design and use. This approach is particularly useful because determinants can be elicited in the context of the implementation strategy—which may help to optimize determinant-strategy matching. We identified and prioritized key determinants through rapid evidence review of breast cancer screening determinants among Black women and HCD methods. We conducted qualitative analysis of semi-structured interviews including a chatbot mockup and focus groups with end-users who were shown an early prototype of the chatbot which was iterated based on qualitative data analysis of the interviews. Interviews and focus groups were led almost entirely by B.R.H., a community member, to provide comfortable environments to share perspectives. Our underlying interpretive framework most closely followed social constructivism; we focused on the content of participant words and experiences with the goal to minimize researcher interpretation [ 67 ]. Any question of participant meaning was reviewed with B.R.H.

Rapid evidence review

Our objective was to identify determinants to breast cancer screening among Black women emergent from recent literature.

We conducted rapid evidence review following established methods described in the National Collaborating Centre for Methods and Tools Rapid Review Guidebook [ 68 ]. We defined a research question—“among Black women in the United States, what are determinants (i.e., facilitators and barriers) to breast cancer screening?”, searched for research evidence, critically appraised information sources, and synthesized evidence.

Search strategy

Our search strategy prioritized evidence in the past 3 years and included search terms in or related to the research question: (Mammogram, Mammography, Cancer Screening, Breast Cancer Screening), (Breast Cancer), (Women), (Black, African American, African American, Minority), (Race, Ethnicity), (Disparities, Determinants), and (Facilitators, Barriers). Searches were conducted in PubMed, Health Evidence, Public Health + , and the National Institute of Health and Care Excellence.

Review criteria

We considered studies done in the USA as race is a social construct and the experience and impacts of individual and systemic racism differ geographically. We focused on results among Black/African American individuals given the research question and aim to identify specific determinants within this group; however, we did include studies with multiple racial groups represented. We focused on studies that included individuals aged 40–74 years to match the population eligible for average-risk breast cancer screening. Publications in the 3 years prior to evidence review were prioritized acknowledging determinants may change over time (e.g., with technology advancements such as online scheduling or policy changes allowing for mammogram scheduling without primary care provider (PCP) referral) and in keeping with methods in the National Collaborating Centre for Methods and Tools Rapid Review Guidebook [ 68 ].

Critical appraisal and evidence synthesis

Critical appraisal was guided by the 6S Pyramid framework developed and made available by the National Collaborating Centre for Methods and Tools [ 69 ]. We categorized data by source (i.e., search engine), study type (e.g., single study, meta-analysis), population, and results.

Interviews with mockup

Our objective was to elicit determinants to breast cancer screening among Black women living in western Washington as well as feedback about an initial mockup of the chatbot through interviews with community members.

Interview guide

The interview guide was developed by our research team with additional input from members of the Breast Health Equity committee—a health system committee including operational leaders, physicians, and researchers dedicated to addressing inequities in care related to breast cancer screening, diagnosis, and treatment (Additional file 1 : Appendix A). While we incorporated feedback from committee members after the guide was drafted, we did not pilot test with community members before starting interviews. Questions focused on determinants to breast cancer screening and past experiences with breast cancer screening. Additionally, two members of the research team (G.H. and R.L.) created a mockup of the chatbot tool including several mockups of a chatbot for breast cancer screening outreach (Fig. 1 ).

figure 1

Initial mockup

Sample and recruitment

We used convenience sampling through fliers posted in primary care clinics and email to the research team’s established community networks to identify and recruit individuals who identified as Black/African American women between the ages of 40 and 74 years and lived in either King or Pierce counties in Washington state. We recruited 21 individuals for interviews which we estimated would be adequate to provide sufficient data for understanding our research question [ 70 ].

Two members of the research team (B.R.H., A.G.) conducted interviews ( n  = 21); the community engagement lead on the team (B.R.H.) conducted the vast majority ( n  = 18). Interviews were conducted via Zoom videoconferencing technology and were audio recorded. In addition to questions regarding determinants to and experience of breast cancer screening, we showed participants screenshots of the initial mockup for the chatbot tool and asked specific questions for feedback. No field notes were collected during/after interviews. All interviews were transcribed.

Four members of the research team (R.L. and three undergraduate students listed in acknowledgements: A.A., N.S., X.S.) read and coded the transcripts to generate and refine themes through several iterations until consensus was reached. Each interview was analyzed and coded once by individuals on the research team using a directed content analysis approach; codes were then discussed as a team [ 5 , 71 ]. Deductive codes were created using prior research organizing breast cancer screening barriers as personal, structural, and clinical [ 34 ]. Inductive codes emerged from a close reading of an initial subset of the transcripts and were added to the codebook (Additional file 1 : Appendix Table B). Qualitative data analysis resulted in themes around the chatbot design, and determinants to breast cancer screening. We facilitated an ideation workshop with the research team and Breast Health Equity committee to brainstorm how this research might address the themes brought up in the interviews. We used a 2 × 2 prioritization matrix as a tool to identify the most impactful and feasible ideas that arose. This analysis was used to develop an early chatbot prototype. Results were shared with participants in a newsletter with invitation to respond to interpretation and/or presentation prior to manuscript submission (Additional file 1 : Appendix C).

Focus groups

The objectives were to elicit feedback on an early static prototype of the chatbot tool informed by the interviews.

Early prototype creation

The research team developed an early static prototype of the chatbot tool iterating on the initial mockup using themes and feedback that emerged from qualitative data analysis of the individual interviews (Fig. S 1 ). In addition to the prototype screens, we included short videos with questions and answers to questions such as—“Why should I get screened?”, “What can I expect from a mammogram?”, “What happens if the mammogram is abnormal?”.

Focus group guide

The focus group guide was developed by our research team with additional input from members of the Breast Health Equity committee (Additional file 1 : Appendix D). The guide included questions about perceptions of, engagement with, and usability of the chatbot based on the prototype screens and videos.

The same sampling and recruiting methods were used for the focus groups as were used for the individual interviews. We conducted three focus groups with a total of nine participants.

Focus groups were led by B.R.H. and joined by multiple members of the research team (V.F., A.G., R.L., L.M.M.). We showed participants three example interactions with the chatbot prototype, (1) patient-initiated scheduling of a mammogram, (2) system-initiated patient education, and (3) system-initiated re-scheduling, and asked specific questions for feedback. The same procedures were followed as for the individual interviews; team members debriefed after each focus group.

We used template analysis with pre-defined domains derived from focus group questions and interview themes to analyze focus group content [ 72 ]. Template analysis may be used as a rapid qualitative analysis approach for focus group data [ 73 ]. Template domains were agreed upon by investigators (L.M.M., G.H., R.L., B.R.H.). One investigator (L.M.M.) then reviewed focus groups and conducted content analysis using the templates. The completed templates were summarized in a matrix for data visualization and reviewed by all investigators; any disagreements were addressed and resolved. Results were shared with participants in a newsletter with invitation to respond to interpretation and/or presentation prior to manuscript submission (Additional file 1 : Appendix C).

Synthesis: developing a causal pathway diagram

CPDs can help to map out the pathway of an implementation strategy as it is intended to work [ 6 ]. For innovative implementation strategies, CPDs can help establish and test theorized mechanisms. Conceptual frameworks help to inform pathway components and may be drawn from disciplines outside of implementation research for novel strategies.

From our rapid evidence review, qualitative interviews, and focus groups, we identified key determinants—both in the context of breast cancer screening and the chatbot implementation strategy—and hypothesized mechanisms. Our research team prioritized determinants that were identified across data sources (e.g., in interviews and in rapid evidence review). We focused CPD development on the initial engagement with the chatbot.

Using the selected key determinants, we worked to identify conceptual frameworks to develop a CPD. We expanded the search for conceptual frameworks outside of healthcare to include disciplines such as marketing and computer science that are relevant to the implementation strategy. We selected frameworks based on relevance to and connection of our implementation strategy and proposed mechanism. We used the conceptual frameworks to inform mechanisms through which we hypothesize the implementation strategy to work and moderators which could increase or decrease strategy effect via the strategy mechanism. We iterated on the CPD as a team and received feedback from the OPTICC center team (B.J.W. and A.R.L.).

Informing CPD development: identifying and prioritizing key determinants

In the rapid evidence review, 41 relevant studies were identified out of 114 search results. A narrative synthesis was written summarizing determinants identified in the literature (Additional file 1 : Appendix E). Determinants identified were cataloged and prioritized based on relevance to the implementation strategy. For example, one study found perceptions of lower quality of care if mammograms were done in a mobile clinic setting; we did not include this as a priority determinant because this would not be particularly modifiable in the chatbot design [ 33 ]. Priority determinants included facilitators such as having personal or family history of breast cancer and recommendations from PCPs and barriers such as medical mistrust (Table  1 ).

One priority barrier that emerged from the rapid evidence review was lack of knowledge about breast cancer screening; prior literature recommended patient education to explain and help individuals learn about the process of getting a mammogram [ 33 , 76 , 78 ]. This informed our design of the initial mockup and early chatbot prototype as a patient education and scheduling tool.

For the qualitative data analysis, we interviewed 21 of 39 individuals who responded to recruitment and completed a screening survey. Only 4 of 39 were ineligible due to living outside the 2 designated counties. Several people invited for interviews had to cancel due to schedule conflicts. All interviews were completed once started (no one dropped out of the study after starting an interview). Interview participants all identified as Black/African American, were between the ages of 40 and 69 years, and several were multilingual. We did not collect specific demographic characteristics for focus group participants. Focus group participants signed up for 1 of 4 focus groups; we did not have anyone drop out of focus groups once started.

In the qualitative analysis of interviews and focus groups, we elicited facilitators and barriers to breast cancer screening. Most of the determinants that emerged from interviews and focus groups were also identified in the evidence review (Table  1 ). Facilitators that appeared in evidence review and interviews and/or focus groups included recommendations and/or advocacy from a PCP, health-related social support, family or personal history of breast cancer, and adequate preparation before a mammogram (lack of preparation was framed as a barrier in evidence review). Overlapping barriers included lack of resources (e.g., cost, insurance, transportation), anxiety about what to expect, fear about negative outcomes associated with the procedure (e.g., pain), medical mistrust, prior negative experiences with the health system (including experiences of racism), lack of knowledge about breast cancer screening, lack of discussion with family and friends, and lack of clear recommendation from a PCP. Some determinants that arose from the interview and focus group data were not present in the evidence review but were prioritized given relevance to the implementation strategy. For example, participants identified the time spent to make an appointment and the time until the appointment as moderators to scheduling a mammogram (i.e., a barrier if time to make an appointment and time until appointment is long and facilitator if time to make an appointment and time until appointment is relatively short). In terms of initial reactions to the chatbot mockup, 18 out of 20 participants asked thought that the chatbot would be useful for scheduling (one participant was not asked this question).

In the template analysis of focus groups, we elicited facilitators and barriers to and feedback about the chatbot implementation strategy (Table  2 ; Additional file 1 : Appendix F). Participants appreciated the purpose of the chatbot but thought that in many ways it fell short.

“I mean because that's what the app is for… To kind of make us feel… to draw us in and make us feel taken care of and informed. Educated.” (Participant, Focus Group 2).

Participants expressed mistrust in the chatbot persona, questioning the chatbot’s credibility and describing privacy concerns and intent. They emphasized the importance of cultural inclusivity and familiarity but did not feel like the chatbot prototype achieved these goals.

“I do agree with the fact that it needs to be more culturally inclusive and appropriate for us. I didn't feel like it was personalized outside of [B.R.H.’s] involvement, there was nothing that really spoke to our people.” (Participant, Focus Group 2).

The chatbot presented to the focus groups was named “Sesi” which means “sister” in Sotho, a Bantu language spoken mostly in Southern Africa. Participants expressed frustration about conflating African and Black American experience.

“Sometimes, because we're Black, other communities patronize on us being Black… they just patronize us as if we know what it is to be in Africa and we don't. We've never been to Africa. We still have the same issues, yes, but we've never been there so we can't relate to certain things or cultures that have because we don't have that. We've never, that was not brought along with us here.” (Participant, Focus Group 3).

They questioned the value-add of the chatbot presumed to be an app that would require effort to download onto a phone but might only be used once a year. Though participants did think that they would use the chatbot if it could be used to schedule a mammogram more efficiently than by telephone conversation.

Overall, mistrust was a major theme in both qualitative data analyses and rapid evidence review. In interviews and focus groups, mistrust arose both in the context of interactions with the health care system and the chatbot technology. At the same time, participants noted aspects of the chatbot that increased trust and engagement. For example, they felt reassured to see women who looked like themselves in the chatbot interaction, such as an image of a Black female mammography technician. Given these findings, we decided to focus on optimizing trust in the design of the initial chatbot engagement.

Causal pathway diagram

We drew from multiple theoretical frameworks regarding trust as a determinant to the evidence-based intervention (breast cancer screening) and the implementation strategy (chatbot) (Table  3 ). The persuasive health message framework for developing culturally specific messages details source, channel, and message as distinct components in health messaging and has been used in prior breast cancer screening campaigns [ 81 , 82 ]. We defined our implementation strategy components using these conventions—source (i.e., chatbot persona—communication style and identity), channel (i.e., form of message delivery, e.g., SMS text), and message (i.e., content of messages) (Fig. 2 ). As we focused first on the initial engagement with the chatbot, we attended to source credibility to engender trust. We drew from a conceptual framework in marketing that identifies expertise, homophily, and trustworthiness as characteristics of source credibility—or the belief that a source of information can be trusted [ 83 ]. Finally, to capture trustworthiness in technology, we used a conceptual framework regarding trust in artificial intelligence which includes personality and ability as human characteristics that are important drivers of trust in AI [ 84 ].

figure 2

We used the CPD to model how we might address the barrier of mistrust using initial engagement with the chatbot as a mechanism and trust as a proximal outcome. Using the conceptual frameworks described above, we posited moderators to be (1) chatbot expertise, (2) chatbot designed for familiarity (i.e., homophily), and (3) chatbot personality or communication style. With our components defined, we created a CPD (Fig.  2 ).

We presented a case study example of the use of HCD methods to inform and build CPDs to design an implementation strategy rooted in causal perspective and informed by community partners. This approach addresses gaps in the use of implementation strategies including identifying and prioritizing determinants and knowledge of strategy mechanisms [ 6 , 66 ].

Our work highlights the value of HCD methods which integrate end-users in the development of innovative implementation strategies and provides an approach for community/partner engagement that is especially useful when designing technology-based strategies. By using HCD methods, we were able to elicit determinants to both the evidence-based practice (breast cancer screening) and the proposed implementation strategy (chatbot). Moreover, we received nuanced feedback about the chatbot in its current design rather than as a hypothetical strategy (as visual tools in qualitative research can enhance the quality and clarity of data) [ 85 ]. By doing so, we gained important insights about the implementation strategy—e.g., our early prototype design lacked the depth of cultural inclusivity and familiarity needed to elicit trust and promote use of the chatbot despite being informed by breast cancer screening determinants elicited in our initial interviews. These insights were integral to the CPD development and prioritization of trust as a determinant to chatbot use and subsequent breast cancer screening.

Using HCD methods meant that we brought community partners into the design process in the earliest stages. Having text and visual content to react to allowed community partners to identify specifically what they liked and disliked about the design and how they would word chatbot messages differently—giving very concrete feedback to incorporate into future design iterations. There have been many calls to incorporate health equity in implementation science frameworks, strategies, and outcomes [ 65 , 86 , 87 , 88 , 89 ]. Integrating the perspectives of populations with marginalized identities into the development of implementation strategies can help to address health equity more effectively and mitigate intervention-generated inequities [ 90 , 91 ].

The CPD which was constructed by data from HCD methods directly informed our next steps in development of the chatbot prototype—(1) a factorial design experiment measuring degree of trust and engagement with different chatbot personas and (2) co-design sessions to craft chatbot messaging. While our case example details the design of an innovative implementation strategy, we believe this approach could be useful in tailoring a broad spectrum of implementation strategies and adds to existing literature on methods to tailor implementation strategies [ 92 ].

Limitations

Our case study example has several limitations. Topically, while most people will have access to SMS-based interventions, this intervention will not be accessible and/or acceptable for all eligible patients [ 51 ]. During this work, we received feedback that the chatbot may be especially effective among younger age groups, but that uptake may be lower among older adults. As the USPSTF guidelines are expected to change to recommend earlier screening starting at 40 years, this implementation strategy could be particularly acceptable to outreach to newly eligible patients [ 93 ]. We readily acknowledge that a single intervention will not fully address breast cancer screening inequities and should be implemented as one part of a multi-faceted health system approach.

Methodologically, the chatbot development case example could have been strengthened using a determinant framework. We would encourage investigators interested in this approach to incorporate determinant frameworks in initial evidence review and data collection. Our methodological approach could also be strengthened by increasing community engagement [ 10 , 14 ]. While we incorporated several elements of community-engaged research (including community members on the research team, having a trained community member conduct research with community participants, and community member co-authorship), we could have further expanded community participation (e.g., seek community input in initial intervention design, create a community advisory board).

The use of interdisciplinary methods is core to implementation science [ 94 ]. HCD is a particularly synergistic discipline with multiple existing applications of HCD to implementation research. We present an extension of this work and an example of the potential value in an integrated approach of HCD and IS researchers and methods to combine strengths of both disciplines and develop human-centered, co-designed implementation strategies rooted in causal perspective and healthcare equity.

Availability of data and materials

Aggregated qualitative data is available by request in accordance with a funding agreement. Disaggregated data is not available to maintain privacy and confidentiality of interview and focus group participants.

Abbreviations

Electronic health record

Enhancing the Quality and Transparency of health research

  • Human-centered design

Primary care provider

Short message service

Template for intervention description and replication

United States Preventive Services Task Force

Proctor,Enola. Implementation strategies: recommendations for specifying and reporting. https://doi.org/10.1186/1748-5908-8-139 .

Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21. https://doi.org/10.1186/s13012-015-0209-1 .

Article   PubMed   PubMed Central   Google Scholar  

Beidas RS, Buttenheim AM, Mandell DS. Transforming mental health care delivery through implementation science and behavioral economics. JAMA Psychiat. 2021;78(9):941–2. https://doi.org/10.1001/jamapsychiatry.2021.1120 .

Article   Google Scholar  

Haines ER, Dopp A, Lyon AR, et al. Harmonizing evidence-based practice, implementation context, and implementation strategies with user-centered design: a case example in young adult cancer care. Implement Sci Commun. 2021;2(1):45. https://doi.org/10.1186/s43058-021-00147-4 .

Lyon AR, Munson SA, Renn BN, et al. Use of human-centered design to improve implementation of evidence-based psychotherapies in low-resource communities: protocol for studies applying a framework to assess usability. JMIR Res Protoc. 2019;8(10):e14990. https://doi.org/10.2196/14990 .

Lewis CC, Klasnja P, Powell BJ, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136. https://doi.org/10.3389/fpubh.2018.00136 .

Lewis CC, Klasnja P, Lyon AR, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3(1):114. https://doi.org/10.1186/s43058-022-00358-3 .

Beidas RS, Dorsey S, Lewis CC, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci IS. 2022;17(1):55. https://doi.org/10.1186/s13012-022-01226-3 .

Article   PubMed   Google Scholar  

Pérez Jolles M, Willging CE, Stadnick NA, et al. Understanding implementation research collaborations from a co-creation lens: Recommendations for a path forward. Front Health Serv. 2022;2:942658. https://doi.org/10.3389/frhs.2022.942658 .

Key KD, Furr-Holden D, Lewis EY, et al. The continuum of community engagement in research: a roadmap for understanding and assessing progress. Prog Community Health Partnersh Res Educ Action. 2019;13(4):427–34. https://doi.org/10.1353/cpr.2019.0064 .

Vargas C, Whelan J, Brimblecombe J, Allender S. Co-creation, co-design, co-production for public health - a perspective on definition and distinctions. Public Health Res Pract. 2022;32(2):3222211. https://doi.org/10.17061/phrp3222211 .

Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Policy Syst. 2020;18(1):17. https://doi.org/10.1186/s12961-020-0528-9 .

Woodward M, Dixon-Woods M, Randall W, et al. How to co-design a prototype of a clinical practice tool: a framework with practical guidance and a case study. BMJ Qual Saf. Published online December 12, 2023:bmjqs-2023–016196. https://doi.org/10.1136/bmjqs-2023-016196 .

Henderson VA, Barr KL, An LC, et al. Community-based participatory research and user-centered design in a diabetes medication information and decision tool. Prog Community Health Partnersh Res Educ Action. 2013;7(2):171–84. https://doi.org/10.1353/cpr.2013.0024 .

Chen E, Leos C, Kowitt SD, Moracco KE. Enhancing community-based participatory research through human-centered design strategies. Health Promot Pract. 2020;21(1):37–48. https://doi.org/10.1177/1524839919850557 .

ISO 9241–210:2019(en), Ergonomics of human-system interaction — part 210: human-centred design for interactive systems. Accessed February 24, 2024. https://www.iso.org/obp/ui/#iso:std:iso:9241:-210:ed-2:v1:en .

Holeman I, Kane D. Human-centered design for global health equity. Inf Technol Dev. 2019;26(3):477–505. https://doi.org/10.1080/02681102.2019.1667289 .

Gasson S. Human-centered vs. user-centered approaches to information system design. J Inf Technol Theory Appl JITTA . 2003;5(2). https://aisel.aisnet.org/jitta/vol5/iss2/5 .

Harte R, Glynn L, Rodríguez-Molinero A, et al. A human-centered design methodology to enhance the usability, human factors, and user experience of connected health systems: a three-phase methodology. JMIR Hum Factors. 2017;4(1):e8. https://doi.org/10.2196/humanfactors.5443 .

Lyon AR, Brewer SK, Areán PA. Leveraging human-centered design to implement modern psychological science: Return on an early investment. Am Psychol. 2020;75(8):1067–79. https://doi.org/10.1037/amp0000652 .

Hardy D, Du DY. Socioeconomic and racial disparities in cancer stage at diagnosis, tumor size, and clinical outcomes in a large cohort of women with breast cancer, 2007–2016. J Racial Ethn Health Disparities. 2021;8(4):990–1001. https://doi.org/10.1007/s40615-020-00855-y .

Eley JW, Hill HA, Chen VW, et al. Racial differences in survival from breast cancer. Results of the National Cancer Institute Black/White Cancer Survival Study. JAMA. 1994;272(12):947–54. https://doi.org/10.1001/jama.272.12.947 .

Article   CAS   PubMed   Google Scholar  

Siu AL, U.S. Preventive Services Task Force. Screening for Breast Cancer: U.S. Preventive Services Task Force Recommendation Statement. Ann Intern Med. 2016;164(4):279–296. https://doi.org/10.7326/M15-2886 .

Ahmed AT, Welch BT, Brinjikji W, et al. Racial disparities in screening mammography in the United States: a systematic review and meta-analysis. J Am Coll Radiol JACR. 2017;14(2):157-165.e9. https://doi.org/10.1016/j.jacr.2016.07.034 .

Smith-Bindman R, Miglioretti DL, Lurie N, et al. Does utilization of screening mammography explain racial and ethnic differences in breast cancer? Ann Intern Med. 2006;144(8):541–53. https://doi.org/10.7326/0003-4819-144-8-200604180-00004 .

Chapman CH, Schechter CB, Cadham CJ, et al. Identifying equitable screening mammography strategies for Black women in the United States using simulation modeling. Ann Intern Med. 2021;174(12):1637–46. https://doi.org/10.7326/M20-6506 .

Ko NY, Hong S, Winn RA, Calip GS. Association of insurance status and racial disparities with the detection of early-stage breast cancer. JAMA Oncol. 2020;6(3):385–92. https://doi.org/10.1001/jamaoncol.2019.5672 .

Jones CE, Maben J, Jack RH, et al. A systematic review of barriers to early presentation and diagnosis with breast cancer among black women. BMJ Open. 2014;4(2):e004076. https://doi.org/10.1136/bmjopen-2013-004076 .

Katapodi MC, Pierce PF, Facione NC. Distrust, predisposition to use health services and breast cancer screening: results from a multicultural community-based survey. Int J Nurs Stud. 2010;47(8):975–83. https://doi.org/10.1016/j.ijnurstu.2009.12.014 .

Passmore SR, Williams-Parry KF, Casper E, Thomas SB. Message received: African American women and breast cancer screening. Health Promot Pract. 2017;18(5):726–33. https://doi.org/10.1177/1524839917696714 .

Thompson HS, Valdimarsdottir HB, Winkel G, Jandorf L, Redd W. The Group-Based Medical Mistrust Scale: psychometric properties and association with breast cancer screening. Prev Med. 2004;38(2):209–18. https://doi.org/10.1016/j.ypmed.2003.09.041 .

Orji CC, Kanu C, Adelodun AI, Brown CM. Factors that influence mammography use for breast cancer screening among African American women. J Natl Med Assoc. 2020;112(6):578–92. https://doi.org/10.1016/j.jnma.2020.05.004 .

Adegboyega A, Aroh A, Voigts K, Jennifer H. Regular mammography screening among African American (AA) women: qualitative application of the PEN-3 framework. J Transcult Nurs Off J Transcult Nurs Soc. 2019;30(5):444–52. https://doi.org/10.1177/1043659618803146 .

Young RF, Schwartz K, Booza J. Medical barriers to mammography screening of African American women in a high cancer mortality area: implications for cancer educators and health providers. J Cancer Educ. 2011;26(2):262–9. https://doi.org/10.1007/s13187-010-0184-9 .

Molina Y, Kim S, Berrios N, Calhoun EA. Medical mistrust and patient satisfaction with mammography: the mediating effects of perceived self-efficacy among navigated African American women. Health Expect Int J Public Particip Health Care Health Policy. 2015;18(6):2941–50. https://doi.org/10.1111/hex.12278 .

Copeland VC, Kim YJ, Eack SM. Effectiveness of interventions for breast cancer screening in African American women: a meta-analysis. Health Serv Res. 2018;53 Suppl 1(Suppl Suppl 1):3170–88. https://doi.org/10.1111/1475-6773.12806 .

Sung JF, Blumenthal DS, Coates RJ, Williams JE, Alema-Mensah E, Liff JM. Effect of a cancer screening intervention conducted by lay health workers among inner-city women. Am J Prev Med. 1997;13(1):51–7.

West DS, Greene P, Pulley L, et al. Stepped-care, community clinic interventions to promote mammography use among low-income rural African American women. Health Educ Behav Off Publ Soc Public Health Educ. 2004;31(4 Suppl):29S-44S. https://doi.org/10.1177/1090198104266033 .

Russell KM, Champion VL, Monahan PO, et al. Randomized trial of a lay health advisor and computer intervention to increase mammography screening in African American women. Cancer Epidemiol Biomark Prev Publ Am Assoc Cancer Res Cosponsored Am Soc Prev Oncol. 2010;19(1):201–10. https://doi.org/10.1158/1055-9965.EPI-09-0569 .

Marshall JK, Mbah OM, Ford JG, et al. Effect of patient navigation on breast cancer screening among African American Medicare beneficiaries: a randomized controlled trial. J Gen Intern Med. 2016;31(1):68–76. https://doi.org/10.1007/s11606-015-3484-2 .

Zhu K, Hunter S, Bernard L, et al. An intervention study on screening for breast cancer among single African-American women aged 65 and older. Ann Epidemiol. 2000;10(7):462–3. https://doi.org/10.1016/s1047-2797(00)00089-2 .

Goel A, George J, Burack RC. Telephone reminders increase re-screening in a county breast screening program. J Health Care Poor Underserved. 2008;19(2):512–21. https://doi.org/10.1353/hpu.0.0025 .

Hendren S, Winters P, Humiston S, et al. Randomized, controlled trial of a multimodal intervention to improve cancer screening rates in a safety-net primary care practice. J Gen Intern Med. 2014;29(1):41–9. https://doi.org/10.1007/s11606-013-2506-1 .

Jibaja-Weiss ML, Volk RJ, Kingery P, Smith QW, Holcomb JD. Tailored messages for breast and cervical cancer screening of low-income and minority women using medical records data. Patient Educ Couns. 2003;50(2):123–32. https://doi.org/10.1016/s0738-3991(02)00119-2 .

Gathirua-Mwangi WG, Monahan PO, Stump T, Rawl SM, Skinner CS, Champion VL. Mammography adherence in African-American women: results of a randomized controlled trial. Ann Behav Med Publ Soc Behav Med. 2016;50(1):70–8. https://doi.org/10.1007/s12160-015-9733-0 .

Kreuter MW, Sugg-Skinner C, Holt CL, et al. Cultural tailoring for mammography and fruit and vegetable intake among low-income African-American women in urban public health centers. Prev Med. 2005;41(1):53–62. https://doi.org/10.1016/j.ypmed.2004.10.013 .

Champion VL, Springston JK, Zollinger TW, et al. Comparison of three interventions to increase mammography screening in low income African American women. Cancer Detect Prev. 2006;30(6):535–44. https://doi.org/10.1016/j.cdp.2006.10.003 .

De Jesus M, Ramachandra S, De Silva A, et al. A mobile health breast cancer educational and screening intervention tailored for low-income, uninsured latina immigrants. Womens Health Rep New Rochelle N. 2021;2(1):325–36. https://doi.org/10.1089/whr.2020.0112 .

Ruco A, Dossa F, Tinmouth J, et al. Social media and mHealth technology for cancer screening: systematic review and meta-analysis. J Med Internet Res. 2021;23(7):e26759. https://doi.org/10.2196/26759 .

Free C, Phillips G, Watson L, et al. The effectiveness of mobile-health technologies to improve health care service delivery processes: a systematic review and meta-analysis. PLoS Med. 2013;10(1):e1001363. https://doi.org/10.1371/journal.pmed.1001363 .

Ntiri SO, Swanson M, Klyushnenkova EN. Text messaging as a communication modality to promote screening mammography in low-income African American women. J Med Syst. 2022;46(5):28. https://doi.org/10.1007/s10916-022-01814-2 .

Peacock S, Reddy A, Leveille SG, et al. Patient portals and personal health information online: perception, access, and use by US adults. J Am Med Inform Assoc JAMIA. 2017;24(e1):e173–7. https://doi.org/10.1093/jamia/ocw095 .

Anthony DL, Campos-Castillo C, Lim PS. Who isn’t using patient portals and why? Evidence and implications from a national sample of US adults. Health Aff (Millwood). 2018;37(12):1948–54. https://doi.org/10.1377/hlthaff.2018.05117 .

P G, Mb H, L R, et al. Reaching women through health information technology: the Gabby preconception care system. Am J Health Promot AJHP. 2013;27(3 Suppl). https://doi.org/10.4278/ajhp.1200113-QUAN-18 .

Rickenberg R, Reeves B. The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’00. Association for Computing Machinery; 2000:49–56. https://doi.org/10.1145/332040.332406 .

Bickmore TW, Pfeifer LM, Byron D, et al. Usability of conversational agents by patients with inadequate health literacy: evidence from two clinical trials. J Health Commun. 2010;15(Suppl 2):197–210. https://doi.org/10.1080/10810730.2010.499991 .

Graham AK, Lattie EG, Powell BJ, et al. Implementation strategies for digital mental health interventions in health care settings. Am Psychol. 2020;75(8):1080–92. https://doi.org/10.1037/amp0000686 .

Parker VA, Lemak CH. Navigating patient navigation: crossing health services research and clinical boundaries. Adv Health Care Manag. 2011;11:149–83. https://doi.org/10.1108/s1474-8231(2011)0000011010 .

Oca MC, Meller L, Wilson K, et al. Bias and inaccuracy in AI chatbot ophthalmologist recommendations. Cureus. 2023;15(9):e45911. https://doi.org/10.7759/cureus.45911 .

Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthc Basel Switz. 2023;11(6):887. https://doi.org/10.3390/healthcare11060887 .

Garcia Valencia OA, Suppadungsuk S, Thongprayoon C, et al. Ethical implications of chatbot utilization in nephrology. J Pers Med. 2023;13(9):1363. https://doi.org/10.3390/jpm13091363 .

HEDIS. NCQA. Accessed September 23, 2023. https://www.ncqa.org/hedis/ .

Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. https://doi.org/10.1136/bmj.g1687 .

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care J Int Soc Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042 .

Adsul P, Chambers D, Brandt HM, et al. Grounding implementation science in health equity for cancer prevention and control. Implement Sci Commun. 2022;3(1):56. https://doi.org/10.1186/s43058-022-00311-4 .

Lewis CC, Hannon PA, Klasnja P, et al. Optimizing Implementation in Cancer Control (OPTICC): protocol for an implementation science center. Implement Sci Commun. 2021;2(1):44. https://doi.org/10.1186/s43058-021-00117-w .

Creswell JW and Poth CN. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. 4th ed. Los Angeles | London | New Dehli | Singapore | Washington DC: Sage; 2017. p. 15–40.

Dobbins M. Rapid review guidebook. Hamilton, ON: National Collaborating Centre for Methods and Tools. Retrieved from Organization website. 2017. http://www.nccmt.ca/resources/rapid-review-guidebook . Accessed 18 Mar 2024.

Search | National Collaborating Centre for Methods and Tools. Accessed February 14, 2023. https://www.nccmt.ca/tools/eiph/search .

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608. https://doi.org/10.1177/1049732316665344 .

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88. https://doi.org/10.1177/1049732305276687 .

Brooks J, McCluskey S, Turley E, King N. The utility of template analysis in qualitative psychology research. Qual Res Psychol. 2015;12(2):202–22. https://doi.org/10.1080/14780887.2014.955224 .

Fox AB, Hamilton AB, Frayne SM, et al. Effectiveness of an evidence-based quality improvement approach to cultural competence training: The Veterans Affairs’ “Caring for Women Veterans” Program. J Contin Educ Health Prof. 2016;36(2):96–103. https://doi.org/10.1097/CEH.0000000000000073 .

Tsapatsaris A, Reichman M. Project ScanVan: mobile mammography services to decrease socioeconomic barriers and racial disparities among medically underserved women in NYC. Clin Imaging. 2021;78:60–3. https://doi.org/10.1016/j.clinimag.2021.02.040 .

Wang H, Gregg A, Qiu F, et al. Breast cancer screening for patients of rural accountable care organization clinics: a multi-level analysis of barriers and facilitators. J Community Health. 2018;43(2):248–58. https://doi.org/10.1007/s10900-017-0412-x .

Huq MR, Woodard N, Okwara L, Knott CL. Breast cancer educational needs and concerns of African American women below screening age. J Cancer Educ Off J Am Assoc Cancer Educ. 2022;37(6):1677–83. https://doi.org/10.1007/s13187-021-02012-3 .

Guo Y, Cheng TC, Yun LH. Factors associated with adherence to preventive breast cancer screenings among middle-aged African American women. Soc Work Public Health. 2019;34(7):646–56. https://doi.org/10.1080/19371918.2019.1649226 .

Ferreira CS, Rodrigues J, Moreira S, Ribeiro F, Longatto-Filho A. Breast cancer screening adherence rates and barriers of implementation in ethnic, cultural and religious minorities: a systematic review. Mol Clin Oncol. 2021;15(1):139. https://doi.org/10.3892/mco.2021.2301 .

Davis CM. Health beliefs and breast cancer screening practices among African American women in California. Int Q Community Health Educ. 2021;41(3):259–66. https://doi.org/10.1177/0272684X20942084 .

Agrawal P, Chen TA, McNeill LH, et al. Factors associated with breast cancer screening adherence among church-going African American women. Int J Environ Res Public Health. 2021;18(16):8494. https://doi.org/10.3390/ijerph18168494 .

Hall IJ, Johnson-Turbes A. Use of the Persuasive Health Message framework in the development of a community-based mammography promotion campaign. Cancer Causes Control CCC. 2015;26(5):775. https://doi.org/10.1007/s10552-015-0537-0 .

Witte K. Fishing for success: Using the persuasive health message framework to generate effective campaign messages. In: Designing Health Messages: Approaches from Communication Theory and Public Health Practice. Sage Publications, Inc; 1995:145–166. https://doi.org/10.4135/9781452233451.n8 .

Ismagilova E, Slade E, Rana NP, Dwivedi YK. The effect of characteristics of source credibility on consumer behaviour: a meta-analysis. J Retail Consum Serv. 2020;53: 101736. https://doi.org/10.1016/j.jretconser.2019.01.005 .

Siau K, Wang W. Building trust in artificial intelligence, machine learning, and robotics. Cut Bus Technol J. 2018;31(2):47–53.

Google Scholar  

Glegg SMN. Facilitating interviews in qualitative research with visual tools: a typology. Qual Health Res. 2019;29(2):301–10. https://doi.org/10.1177/1049732318786485 .

Adapting strategies to promote implementation reach and equity (ASPIRE) in school mental health services - Gaias - 2022 - Psychology in the Schools - Wiley Online Library. Accessed February 15, 2023. https://doi.org/10.1002/pits.22515 .

Allen M, Wilhelm A, Ortega LE, Pergament S, Bates N, Cunningham B. Applying a race(ism)-conscious adaptation of the CFIR framework to understand implementation of a school-based equity-oriented intervention. Ethn Dis. 2021;31(Suppl 1):375–88. https://doi.org/10.18865/ed.31.S1.375 .

Chinman M, Woodward EN, Curran GM, Hausmann LRM. Harnessing implementation science to increase the impact of health equity research. Med Care. 2017;55 Suppl 9 Suppl 2(Suppl 9 2):S16–23. https://doi.org/10.1097/MLR.0000000000000769 .

Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci IS. 2019;14(1):26. https://doi.org/10.1186/s13012-019-0861-y .

Veinot TC, Mitchell H, Ancker JS. Good intentions are not enough: how informatics interventions can worsen inequality. J Am Med Inform Assoc JAMIA. 2018;25(8):1080–8. https://doi.org/10.1093/jamia/ocy052 .

Lorenc T, Oliver K. Adverse effects of public health interventions: a conceptual framework. J Epidemiol Community Health. 2014;68(3):288–90. https://doi.org/10.1136/jech-2013-203118 .

Powell BJ, Beidas RS, Lewis CC, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94. https://doi.org/10.1007/s11414-015-9475-6 .

Draft Recommendation: Breast Cancer: Screening | United States Preventive Services Taskforce. Accessed July 8, 2023. https://www.uspreventiveservicestaskforce.org/uspstf/draft-recommendation/breast-cancer-screening-adults#bcei-recommendation-title-area .

Mitchell SA, Chambers DA. Leveraging implementation science to improve cancer care delivery and patient outcomes. J Oncol Pract. 2017;13(8):523–9. https://doi.org/10.1200/JOP.2017.024729 .

Download references

Acknowledgements

We thank Aiza Ali, Natasha Schmid, and Xuan Song for their work in qualitative data collection and analysis, Lorella Palazzo and Nora Henrikson for their support and feedback in rapid qualitative methods and rapid evidence review, respectively, and Predrag Klasnja for his advice and guidance in this work. We thank all the interview and focus group participants who gave us valuable feedback to inform this work.

Research reported in this publication was supported by the University of Washington Medicine Patients are First Innovation Pilot (institutional funding, no grant number), the National Cancer Institute of the National Institutes of Health under Award Number P50CA244432, and the Agency for Healthcare Research and Quality under Award Number K12HS026369. The content is solely the responsibility of the authors and does not necessarily represent the official views of University of Washington, the National Institutes of Health, or the Agency for Healthcare Research and Quality.

Author information

Authors and affiliations.

Department of Medicine, University of Washington School of Medicine, 908 Jefferson St, Seattle, WA, 98104, USA

Leah M. Marcotte & Victoria Fang

Department of Human Centered Design and Engineering, University of Washington, Seattle, WA, USA

Raina Langevin & Gary Hsieh

Cierra Sisters, Seattle, WA, USA

Bridgette R. Hempstead

Department of Medicine, University of Texas Southwestern Medical Center, Dallas, TX, USA

Anisha Ganguly

Department of Psychiatry and Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, USA

Aaron R. Lyon

Departments of Global Health, University of Washington School of Medicine, Seattle, WA, USA

Bryan J. Weiner

Health Systems and Population Health, University of Washington School of Medicine, Seattle, WA, USA

UW Medicine, enterprise health system of the University of Washington, Seattle, USA

Nkem Akinsoto, Paula L. Houston & Victoria Fang

You can also search for this author in PubMed   Google Scholar

Contributions

L.M.M. and G.H. had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. L.M.M., R.L., B.R.H., A.G., A.R.L., B.J.W., N.A., P.L.H., V.F., and G.H. were involved in concept and design. L.M.M., R.L., B.R.H., A.G., N.A., V.F., and G.H. were involved in the data collection process. L.M.M., R.L., B.R.H., and G.H. analyzed and interpreted the data. L.M.M. drafted the manuscript and L.M.M., R.L., B.R.H., A.G., A.R.L., B.J.W., N.A., P.L.H., V.F., and G.H. revised the manuscript. L.M.M., R.L., B.R.H., A.G., A.R.L., B.J.W., N.A., P.L.H., V.F., and G.H. approved the final submitted version, agreed to be accountable for the report, and had final responsibility for the decision to submit for publication. L.M.M. and G.H. confirm that they had access to the data in the study and accept responsibility to submit for publication.

Corresponding author

Correspondence to Leah M. Marcotte .

Ethics declarations

Ethics approval and consent to participate.

The study protocol was reviewed and determined exempt by the University of Washington Institutional Review Board. Participants were consented to participate prior to interviews and focus groups. All participants were sent a newsletter update summarizing manuscript and including all quotes for member checking with invitation to review full manuscript prior to submission.

Consent for publication

Competing interests.

Authors report no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1..

Appendices.

Additional file 2: Fig. S1.

Early Chatbot Prototype.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Marcotte, L.M., Langevin, R., Hempstead, B.R. et al. Leveraging human-centered design and causal pathway diagramming toward enhanced specification and development of innovative implementation strategies: a case example of an outreach tool to address racial inequities in breast cancer screening. Implement Sci Commun 5 , 31 (2024). https://doi.org/10.1186/s43058-024-00569-w

Download citation

Received : 05 May 2023

Accepted : 09 March 2024

Published : 28 March 2024

DOI : https://doi.org/10.1186/s43058-024-00569-w

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Implementation strategies
  • Causal pathway diagrams
  • Healthcare equity

Implementation Science Communications

ISSN: 2662-2211

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

example research design

What is artificial general intelligence (AGI)?

A profile of a 3d head made of concrete that is sliced in half creating two separate parts. Pink neon binary numbers travel from one half of the a head to the other by a stone bridge that connects the two parts.

You’ve read the think pieces. AI—in particular, the generative AI (gen AI) breakthroughs achieved in the past year or so—is poised to revolutionize not just the way we create content but the very makeup of our economies and societies as a whole. But although gen AI tools such as ChatGPT may seem like a great leap forward, in reality they are just a step in the direction of an even greater breakthrough: artificial general intelligence, or AGI.

Get to know and directly engage with senior McKinsey experts on AGI

Aamer Baig is a senior partner in McKinsey’s Chicago office; Federico Berruti is a partner in the Toronto office; Ben Ellencweig is a senior partner in the Stamford, Connecticut, office; Damian Lewandowski is a consultant in the Miami office; Roger Roberts is a partner in the Bay Area office, where Lareina Yee is a senior partner;  Alex Singla  is a senior partner in the Chicago office and the global leader of QuantumBlack, AI by McKinsey;  Kate Smaje  and Alex Sukharevsky  are senior partners in the London office;   Jonathan Tilley is a partner in the Southern California office; and Rodney Zemmel is a senior partner in the New York office.

AGI is AI with capabilities that rival those of a human . While purely theoretical at this stage, someday AGI may replicate human-like cognitive abilities including reasoning, problem solving, perception, learning, and language comprehension. When AI’s abilities are indistinguishable from those of a human, it will have passed what is known as the Turing test , first proposed by 20th-century computer scientist Alan Turing.

But let’s not get ahead of ourselves. AI has made significant strides in recent years, but no AI tool to date has passed the Turing test. We’re still far from reaching a point where AI tools can understand, communicate, and act with the same nuance and sensitivity of a human—and, critically, understand the meaning behind it. Most researchers and academics believe we are decades away from realizing AGI; a few even predict we won’t see AGI this century (or ever). Rodney Brooks, a roboticist at the Massachusetts Institute of Technology and cofounder of iRobot, believes AGI won’t arrive until the year 2300 .

If you’re thinking that AI already seems pretty smart, that’s understandable. We’ve seen gen AI  do remarkable things in recent years, from writing code to composing sonnets in seconds. But there’s a critical difference between AI and AGI. Although the latest gen AI technologies, including ChatGPT, DALL-E, and others, have been hogging headlines, they are essentially prediction machines—albeit very good ones. In other words, they can predict, with a high degree of accuracy, the answer to a specific prompt because they’ve been trained on huge amounts of data. This is impressive, but it’s not at a human level of performance in terms of creativity, logical reasoning, sensory perception, and other capabilities . By contrast, AGI tools could feature cognitive and emotional abilities (like empathy) indistinguishable from those of a human. Depending on your definition of AGI, they might even be capable of consciously grasping the meaning behind what they’re doing.

The timing of AGI’s emergence is uncertain. But when it does arrive—and it likely will at some point—it’s going to be a very big deal for every aspect of our lives, businesses, and societies. Executives can begin working now to better understand the path to machines achieving human-level intelligence and making the transition to a more automated world.

Learn more about QuantumBlack, AI by McKinsey .

What is needed for AI to become AGI?

Here are eight capabilities AI needs to master before achieving AGI. Click each card to learn more.

How will people access AGI tools?

Today, most people engage with AI in the same ways they’ve accessed digital power for years: via 2D screens such as laptops, smartphones, and TVs. The future will probably look a lot different. Some of the brightest minds (and biggest budgets) in tech are devoting themselves to figuring out how we’ll access AI (and possibly AGI) in the future. One example you’re likely familiar with is augmented reality and virtual reality headsets , through which users experience an immersive virtual world . Another example would be humans accessing the AI world through implanted neurons in the brain. This might sound like something out of a sci-fi novel, but it’s not. In January 2024, Neuralink implanted a chip in a human brain, with the goal of allowing the human to control a phone or computer purely by thought.

A final mode of interaction with AI seems ripped from sci-fi as well: robots. These can take the form of mechanized limbs connected to humans or machine bases or even programmed humanoid robots.

What is a robot and what types of robots are there?

The simplest definition of a robot is a machine that can perform tasks on its own or with minimal assistance from humans. The most sophisticated robots can also interact with their surroundings.

Programmable robots have been operational since the 1950s. McKinsey estimates that 3.5 million robots are currently in use, with 550,000 more deployed every year. But while programmable robots are more commonplace than ever in the workforce, they have a long way to go before they outnumber their human counterparts. The Republic of Korea, home to the world’s highest density of robots, still employs 100 times as many humans as robots.

Circular, white maze filled with white semicircles.

Introducing McKinsey Explainers : Direct answers to complex questions

But as hardware and software limitations become increasingly surmountable, companies that manufacture robots are beginning to program units with new AI tools and techniques. These dramatically improve robots’ ability to perform tasks typically handled by humans, including walking, sensing, communicating, and manipulating objects. In May 2023, Sanctuary AI, for example, launched Phoenix, a bipedal humanoid robot that stands 5’ 7” tall, lifts objects weighing as much as 55 pounds, and travels three miles per hour—not to mention it also folds clothes, stocks shelves, and works a register.

As we edge closer to AGI, we can expect increasingly sophisticated AI tools and techniques to be programmed into robots of all kinds. Here are a few categories of robots that are currently operational:

  • Stand-alone autonomous industrial robots : Equipped with sensors and computer systems to navigate their surroundings and interact with other machines, these robots are critical components of the modern automated manufacturing industry.
  • Collaborative robots : Also known as cobots, these robots are specifically engineered to operate in collaboration with humans in a shared environment. Their primary purpose is to alleviate repetitive or hazardous tasks. These types of robots are already being used in environments such as restaurant kitchens and more.
  • Mobile robots : Utilizing wheels as their primary means of movement, mobile robots are commonly used for materials handling in warehouses and factories. The military also uses these machines for various purposes, such as reconnaissance and bomb disposal.
  • Human–hybrid robots : These robots have both human and robotic features. This could include a robot with an appearance, movement capabilities, or cognition that resemble those of a human, or a human with a robotic limb or even a brain implant.
  • Humanoids or androids : These robots are designed to emulate the appearance, movement, communicative abilities, and emotions of humans while continuously enhancing their cognitive capabilities via deep learning models. In other words, humanoid robots will think like a human, move like a human, and look like a human.

What advances could speed up the development of AGI?

Advances in algorithms, computing, and data  have brought about the recent acceleration of AI. We can get a sense of what the future may hold by looking at these three capabilities:

Algorithmic advances and new robotics approaches . We may need entirely new approaches to algorithms and robots to achieve AGI. One way researchers are thinking about this is by exploring the concept of embodied cognition. The idea is that robots will need to learn very quickly from their environments through a multitude of senses, just like humans do when they’re very young. Similarly, to develop cognition in the same way humans do, robots will need to experience the physical world like we do (because we’ve designed our spaces based on how our bodies and minds work).

The latest AI-based robot systems are using gen AI technologies including large language models (LLMs) and large behavior models (LBMs). LLMs give robots advanced natural-language-processing capabilities like what we’ve seen with generative AI models and other LLM-enabled tools. LBMs allow robots to emulate human actions and movements. These models are created by training AI on large data sets of observed human actions and movements. Ultimately, these models could allow robots to perform a wide range of activities with limited task-specific training.

A real advance would be to develop new AI systems that start out with a certain level of built-in knowledge, just like a baby fawn knows how to stand and feed without being taught. It’s possible that the recent success of deep-learning-based AI systems may have drawn research attention away from the more fundamental cognitive work required to make progress toward AGI.

  • Computing advancements. Graphics processing units (GPUs) have made the major AI advances of the past few years possible . Here’s why. For one, GPUs are designed to handle multiple tasks related to visual data simultaneously, including rendering images, videos, and graphics-related computations. Their efficiency at handling massive amounts of visual data makes them useful in training complex neural networks. They also have a high memory bandwidth, meaning faster data transfer. Before AGI can be achieved, similar significant advancements will need to be made in computing infrastructure. Quantum computing  is touted as one way of achieving this. However, today’s quantum computers, while powerful, aren’t yet ready for everyday applications. But once they are, they could play a role in the achievement of AGI.

Growth in data volume and new sources of data . Some experts believe 5G  mobile infrastructure could bring about a significant increase in data. That’s because the technology could power a surge in connected devices, or the Internet of Things . But, for a variety of reasons, we think most of the benefits of 5G have already appeared . For AGI to be achieved, there will need to be another catalyst for a huge increase in data volume.

New robotics approaches could yield new sources of training data. Placing human-like robots among us could allow companies to mine large sets of data that mimic our own senses to help the robots train themselves. Advanced self-driving cars are one example: data is being collected from cars that are already on the roads, so these vehicles are acting as a training set for future self-driving cars.

What can executives do about AGI?

AGI is still decades away, at the very least. But AI is here to stay—and it is advancing extremely quickly. Smart leaders can think about how to respond to the real progress that’s happening, as well as how to prepare for the automated future. Here are a few things to consider:

  • Stay informed about developments in AI and AGI . Connect with start-ups and develop a framework for tracking progress in AGI that is relevant to your business. Also, start to think about the right governance, conditions, and boundaries for success within your business and communities.
  • Invest in AI now . “The cost of doing nothing,” says McKinsey senior partner Nicolai Müller , “is just too high  because everybody has this at the top of their agenda. I think it’s the one topic that every management board  has looked into, that every CEO  has explored across all regions and industries.” The organizations that get it right now will be poised to win in the coming era.
  • Continue to place humans at the center . Invest in human–machine interfaces, or “human in the loop” technologies that augment human intelligence. People at all levels of an organization need training and support to thrive in an increasingly automated world. AI is just the latest tool to help individuals and companies alike boost their efficiency.
  • Consider the ethical and security implications . This should include addressing cybersecurity , data privacy, and algorithm bias.
  • Build a strong foundation of data, talent, and capabilities . AI runs on data; having a strong foundation of high-quality data is critical to its success.
  • Organize your workers for new economies of scale and skill . Yesterday’s rigid organizational structures and operating models aren’t suited to the reality of rapidly advancing AI. One way to address this is by instituting flow-to-the-work models, where people can move seamlessly between initiatives and groups.
  • Place small bets to preserve strategic options in areas of your business that are exposed to AI developments . For example, consider investing in technology firms that are pursuing ambitious AI research and development projects in your industry. Not all these bets will necessarily pay off, but they could help hedge some of the existential risk your business may face in the future.

Learn more about QuantumBlack, AI by McKinsey . And check out AI-related job opportunities if you’re interested in working at McKinsey.

Articles referenced:

  • “ Generative AI in operations: Capturing the value ,” January 3, 2024, Marie El Hoyek and  Nicolai Müller
  • “ The economic potential of generative AI: The next productivity frontier ,” June 14, 2023, Michael Chui , Eric Hazan , Roger Roberts , Alex Singla , Kate Smaje , Alex Sukharevsky , Lareina Yee , and Rodney Zemmel
  • “ What every CEO should know about generative AI ,” May 12, 2023, Michael Chui , Roger Roberts , Tanya Rodchenko, Alex Singla , Alex Sukharevsky , Lareina Yee , and Delphine Zurkiya
  • “ An executive primer on artificial general intelligence ,” April 29, 2020, Federico Berruti , Pieter Nel, and Rob Whiteman
  • “ Notes from the AI frontier: Applications and value of deep learning ,” April 17, 2018, Michael Chui , James Manyika , Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra
  • “ Augmented and virtual reality: The promise and peril of immersive technologies ,” October 3, 2017, Stefan Hall and Ryo Takahashi

A profile of a 3d head made of concrete that is sliced in half creating two separate parts. Pink neon binary numbers travel from one half of the a head to the other by a stone bridge that connects the two parts.

Want to know more about artificial general intelligence (AGI)?

Related articles.

An executive primer on artificial general intelligence

An executive primer on artificial general intelligence

Moving illustration of wavy blue lines that was produced using computer code

What every CEO should know about generative AI

Visualizing the uses and potential impact of AI and other analytics

Notes from the AI frontier: Applications and value of deep learning

IMAGES

  1. What is Research Design in Qualitative Research

    example research design

  2. (PDF) Research Design

    example research design

  3. 😂 Sample research design paper. Research Design. 2019-02-13

    example research design

  4. PPT

    example research design

  5. Experimental Study Design: Research, Types of Design, Methods and

    example research design

  6. How to Write a Research Design

    example research design

VIDEO

  1. Types of Research Design- Exploratory Research Design

  2. RESEARCH METHODOLOGY| IGNOU| MMPC 15| MALAYALAM| PART 2| BLOCK 1| MBA|RESEARCH DESIGN TYPES|EXAMPLES

  3. concepts in research design

  4. Types of Research Design

  5. Introduction to research design #researchmethodology #research

  6. What is research design? #how to design a research advantages of research design

COMMENTS

  1. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  2. What Is Research Design? 8 Types + Examples

    Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimenta l designs. Research designs for qualitative studies include phenomenological ...

  3. Research Design

    Table of contents. Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies.

  4. Research Design

    This will guide your research design and help you select appropriate methods. Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.

  5. What is Research Design? Types, Elements and Examples

    Research design elements include the following: Clear purpose: The research question or hypothesis must be clearly defined and focused. Sampling: This includes decisions about sample size, sampling method, and criteria for inclusion or exclusion. The approach varies for different research design types.

  6. Research Design: What is Research Design, Types, Methods, and Examples

    There are various types of research design, each suited to different research questions and objectives: • Quantitative Research: Focuses on numerical data and statistical analysis to quantify relationships and patterns. Common methods include surveys, experiments, and observational studies. • Qualitative Research: Emphasizes understanding ...

  7. How to Write a Research Design

    A research design is a structure that combines different components of research. It involves the use of different data collection and data analysis techniques logically to answer the research questions. It would be best to make some decisions about addressing the research questions adequately before starting the research process, which is achieved with the help of the research design.

  8. Organizing Your Social Sciences Research Paper

    Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. ... It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model ...

  9. Types of Research Designs Compared

    Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes.Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do.. There are many ways to categorize different types of research.

  10. Research Design: What it is, Elements & Types

    Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success. Creating a research topic explains the type of research (experimental,survey research,correlational ...

  11. Guide to Experimental Design

    Step 1: Define your variables. You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Example question 1: Phone use and sleep. You want to know how phone use before bedtime affects sleep patterns.

  12. What is a Research Design? Definition, Types, Methods and Examples

    Research design methods refer to the systematic approaches and techniques used to plan, structure, and conduct a research study. The choice of research design method depends on the research questions, objectives, and the nature of the study. Here are some key research design methods commonly used in various fields: 1.

  13. Research Design Steps

    Chapter 2. Research Design Getting Started. When I teach undergraduates qualitative research methods, the final product of the course is a "research proposal" that incorporates all they have learned and enlists the knowledge they have learned about qualitative research methods in an original design that addresses a particular research question.

  14. The Four Types of Research Design

    In short, a good research design helps us to structure our research. Marketers use different types of research design when conducting research. There are four common types of research design — descriptive, correlational, experimental, and diagnostic designs. Let's take a look at each in more detail.

  15. Study designs: Part 1

    The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

  16. What Is a Research Design: Types, Characteristics & Examples

    A research design is the blueprint for any study. It's the plan that outlines how the research will be carried out. A study design usually includes the methods of data collection, the type of data to be gathered, and how it will be analyzed. Research designs help ensure the study is reliable, valid, and can answer the research question.

  17. Research design

    Research design refers to the overall strategy utilized to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of a project; a strategy for gathering data and information; and a strategy for producing answers from the data. A strong research design yields valid answers to research questions while weak designs ...

  18. PDF Research Design and Research Methods

    Research Design and Research Methods. CHAPTER 3. This chapter uses an emphasis on research design to discuss qualitative, quantitative, and mixed methods research as three major approaches to research in the social sciences. The first major section considers the role of research methods in each of these approaches.

  19. Research Design: What It Is (Plus 20 Types)

    Related: Research Skills: Definition and Examples 19. Longitudinal research design Another type of quantitative, observational research design is longitudinal design. The longitudinal research design involves observing the same sample repeatedly over a period of time. This time span might be anywhere from a few weeks to several decades ...

  20. Experimental Research Designs: Types, Examples & Advantages

    Experimental Research Design Example. In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

  21. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  22. Descriptive Research Design

    Descriptive Research Design Examples. Here are some real-time examples of descriptive research designs: A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction.

  23. (PDF) Research Design

    Research design is the plan, structure and strategy and investigation concaved so as to obtain search question and control variance" (Borwankar, 1995). ... For example, you can experiment with ...

  24. How to Construct a Mixed Methods Research Design

    The result of designing as a verb is a mixed methods design as a noun (in German: "das Forschungsdesign" or "Design"), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing ...

  25. Leveraging human-centered design and causal pathway diagramming toward

    Sample and recruitment. We used convenience sampling through fliers posted in primary care clinics and email to the research team's established community networks to identify and recruit individuals who identified as Black/African American women between the ages of 40 and 74 years and lived in either King or Pierce counties in Washington state.

  26. Searching for synergy between research and teaching

    Example 2: technology transfer. While using students as subjects in our research has historical difficulties with many reviewers simply rejecting the generalizability, Compeau et al. (Citation 2012) provide a discussion of appropriate generalizability from student subjects.In their review of articles published in MIS Quarterly and Information Systems Research from 1990-2010, 421 out of 971 ...

  27. What is Artificial General Intelligence (AGI)?

    AGI is AI with capabilities that rival those of a human. While purely theoretical at this stage, someday AGI may replicate human-like cognitive abilities including reasoning, problem solving, perception, learning, and language comprehension. When AI's abilities are indistinguishable from those of a human, it will have passed what is known as ...