
An introduction to different types of study design

Posted on 6th April 2021 by Hadi Abbas

""

Study designs are the set of methods and procedures used to collect and analyze data in a study.

Broadly speaking, there are 2 types of study designs: descriptive studies and analytical studies.

Descriptive studies

  • Describe specific characteristics in a population of interest
  • The most common forms are case reports and case series
  • In a case report, we discuss our experience with a patient’s symptoms, signs, diagnosis, and treatment
  • In a case series, several patients with similar experiences are grouped together

Analytical Studies

Analytical studies are of 2 types: observational and experimental.

Observational studies are studies that we conduct without any intervention or experiment; we purely observe the outcomes. In experimental studies, on the other hand, we apply an intervention and then observe its effects.

Observational studies

Observational studies include many subtypes. Below, I will discuss the most common designs.

Cross-sectional study:

  • This is a transverse design: we take a specific sample at a single point in time, with no follow-up
  • It allows us to calculate the frequency of a disease (prevalence) or the frequency of a risk factor
  • This design is easy to conduct
  • For example – if we want to know the prevalence of migraine in a population, we can conduct a cross-sectional study whereby we take a sample from the population and count the number of patients with migraine headaches.
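
Here is a minimal sketch (in Python, with made-up numbers) of the prevalence calculation for such a sample:

```python
# Minimal sketch: prevalence of migraine in a cross-sectional sample
# (illustrative numbers, not real data)

sample_size = 1_000   # people surveyed at one point in time
cases = 120           # of whom this many report migraine

prevalence = cases / sample_size
print(f"Prevalence: {prevalence:.1%}")   # -> Prevalence: 12.0%
```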

Cohort study:

  • We conduct this study by comparing two samples from the population: one with the risk factor and one without it
  • It shows us the risk of developing the disease in individuals with the risk factor compared to those without it (RR, relative risk)
  • Prospective: we follow the individuals into the future to see who develops the disease
  • Retrospective: we look to the past to see who developed the disease (e.g. using medical records)
  • This design is the strongest among the observational studies
  • For example – to find out the relative risk of developing chronic obstructive pulmonary disease (COPD) among smokers, we take a sample including smokers and non-smokers. Then, we calculate the number of individuals with COPD in both groups.
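
As a rough illustration (hypothetical counts, not real data), the relative risk for the COPD example could be computed as follows:

```python
# Minimal sketch: relative risk (RR) of COPD in smokers vs non-smokers
# from a hypothetical cohort (illustrative counts, not real data)

smokers_total, smokers_copd = 500, 75           # exposed group
nonsmokers_total, nonsmokers_copd = 500, 15     # unexposed group

risk_exposed = smokers_copd / smokers_total            # 0.15
risk_unexposed = nonsmokers_copd / nonsmokers_total    # 0.03

relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.1f}")   # -> RR = 5.0
```

An RR of 1 means the risk is the same in both groups; values above 1 indicate a higher risk among those with the risk factor.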

Case-Control Study:

  • We conduct this study by comparing 2 groups: one group with the disease (cases) and another group without the disease (controls)
  • This design is always retrospective
  • We aim to find out the odds of having a risk factor or an exposure if an individual has a specific disease (odds ratio)
  • Relatively easy to conduct
  • For example – we want to study the odds of being a smoker among hypertensive patients compared to normotensive ones. To do so, we choose a group of patients diagnosed with hypertension and another group that serves as the control (normal blood pressure). Then we study their smoking history to find out if there is an association.
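
Again as a rough illustration with hypothetical counts, the odds ratio for the smoking and hypertension example could be computed like this:

```python
# Minimal sketch: odds ratio (OR) of smoking among hypertensive (cases)
# vs normotensive (controls) participants (illustrative counts)

cases_smokers, cases_nonsmokers = 60, 40          # hypertensive group
controls_smokers, controls_nonsmokers = 30, 70    # normotensive group

odds_cases = cases_smokers / cases_nonsmokers              # 1.5
odds_controls = controls_smokers / controls_nonsmokers     # ~0.43

odds_ratio = odds_cases / odds_controls
print(f"OR = {odds_ratio:.1f}")   # -> OR = 3.5
```

An OR above 1 suggests the exposure is more common among cases than among controls.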

Experimental Studies

  • Also known as interventional studies
  • Can involve animals and humans
  • Pre-clinical trials involve animals
  • Clinical trials are experimental studies involving humans
  • In clinical trials, we study the effect of an intervention compared to another intervention or placebo. As an example, I have listed the four phases of a drug trial:

I: We aim to assess the safety of the drug (is it safe?)

II: We aim to assess the efficacy of the drug (does it work?)

III: We want to know if this drug is better than the old treatment (is it better?)

IV: We follow up to detect long-term side effects (can it stay on the market?)

  • In randomized controlled trials, one group of participants receives the control (e.g. a placebo or the standard treatment), while the other receives the tested drug or intervention. These studies are the best way to evaluate the efficacy of a treatment.
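
As a toy illustration of random allocation (real trials use pre-specified randomisation schemes, often blocked or stratified, so treat this purely as a sketch), a simple 1:1 assignment of hypothetical participant IDs might look like this:

```python
# Minimal sketch: simple 1:1 random allocation for a two-arm trial
# (hypothetical participant IDs, for illustration only)
import random

participants = [f"P{i:03d}" for i in range(1, 21)]
random.shuffle(participants)

half = len(participants) // 2
intervention_arm = participants[:half]
control_arm = participants[half:]

print("Intervention:", intervention_arm)
print("Control:     ", control_arm)
```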

Finally, the figure below will help you with your understanding of different types of study designs.

Figure: types of epidemiological studies. Epidemiological studies are either descriptive or analytical. Descriptive studies include case reports, case series, and descriptive surveys. Analytical studies are observational or experimental. Observational studies can be cross-sectional, case-control, or cohort studies; experimental studies can be lab trials or field trials.


You may also be interested in the following blogs for further reading:

An introduction to randomized controlled trials

Case-control and cohort studies: a brief overview

Cohort studies: prospective and retrospective designs

Prevalence vs Incidence: what is the difference?




Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Two common types of qualitative design are grounded theory and phenomenology. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
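
To make the distinction concrete, here is a small sketch (with a hypothetical sampling frame of student IDs) of a simple random sample – a probability method – alongside a convenience sample, a common non-probability method:

```python
# Minimal sketch: drawing a simple random sample (one probability method)
# from a hypothetical sampling frame of student IDs
import random

population = [f"student_{i}" for i in range(1, 5001)]   # sampling frame
sample = random.sample(population, k=100)               # each ID equally likely

# A non-probability alternative (convenience sampling) would instead take
# whoever is easiest to reach, e.g. the first 100 on the list:
convenience_sample = population[:100]
```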

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.


Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
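
As a small, purely hypothetical illustration, “satisfaction” might be operationalised as the average of a few 1–5 Likert items; the item names and scoring rule below are invented for the example:

```python
# Minimal sketch: operationalising "satisfaction" as the mean of three
# hypothetical 1-5 Likert items (item names and scoring are assumptions)

def satisfaction_score(responses: dict) -> float:
    items = ["enjoys_work", "would_recommend", "feels_valued"]
    return sum(responses[item] for item in items) / len(items)

participant = {"enjoys_work": 4, "would_recommend": 5, "feels_valued": 3}
print(satisfaction_score(participant))   # -> 4.0
```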

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
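
For example, one common internal-consistency (reliability) check on pilot questionnaire data is Cronbach’s alpha, which compares the variance of the individual items with the variance of the total score. A minimal sketch with illustrative responses:

```python
# Minimal sketch: Cronbach's alpha as an internal-consistency check on a
# pilot questionnaire (illustrative 1-5 Likert responses, not real data)
from statistics import pvariance

# rows = participants, columns = items
responses = [
    [4, 5, 4],
    [2, 3, 3],
    [5, 5, 4],
    [3, 3, 2],
    [4, 4, 5],
]

k = len(responses[0])                                   # number of items
item_vars = [pvariance(col) for col in zip(*responses)] # variance of each item
total_var = pvariance([sum(row) for row in responses])  # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values closer to 1 suggest the items are more internally consistent, though alpha is only one facet of reliability.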

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?
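
For the sample-size question above, a standard starting point when estimating a proportion is n = z²p(1 − p)/e². Here is a minimal sketch with assumed inputs (95% confidence, ±5 percentage points margin of error):

```python
# Minimal sketch: sample size needed to estimate a proportion with a given
# margin of error (standard formula; the inputs here are assumptions)
import math

z = 1.96   # z-score for 95% confidence
p = 0.5    # expected proportion (0.5 is the most conservative choice)
e = 0.05   # desired margin of error (±5 percentage points)

n = math.ceil((z ** 2) * p * (1 - p) / e ** 2)
print(n)   # -> 385
```

A smaller margin of error or a higher confidence level pushes the required sample size up.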

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
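
As a quick illustration, here is a minimal sketch that summarises a small set of hypothetical test scores in the three ways listed above:

```python
# Minimal sketch: descriptive statistics for hypothetical test scores
from statistics import mean, stdev
from collections import Counter

scores = [72, 85, 85, 64, 90, 78, 85, 70, 64, 88]

print(Counter(scores))   # distribution: frequency of each score
print(mean(scores))      # central tendency: the average score
print(stdev(scores))     # variability: sample standard deviation
```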

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
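
For instance, a comparison between two groups is often made with an independent-samples t test. A minimal sketch using SciPy and illustrative (made-up) data:

```python
# Minimal sketch: independent-samples t test comparing two hypothetical
# groups (requires scipy; data are illustrative, not real)
from scipy.stats import ttest_ind

group_a = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9]
group_b = [10.9, 11.2, 10.4, 11.6, 10.8, 11.1]

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```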

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding the different types of research design is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
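
As a rough sketch of that final step (using SciPy and invented numbers purely for illustration), a Pearson correlation between exercise frequency and resting heart rate could be computed like this:

```python
# Minimal sketch: correlational analysis of exercise frequency vs resting
# heart rate (requires scipy; numbers are illustrative, not real data)
from scipy.stats import pearsonr

exercise_days_per_week = [0, 1, 2, 3, 4, 5, 6, 7]
resting_heart_rate     = [78, 76, 74, 71, 69, 66, 64, 60]

r, p_value = pearsonr(exercise_days_per_week, resting_heart_rate)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```

A negative r here would indicate that more frequent exercise tends to go with a lower resting heart rate – but, as discussed below, it would not show that the exercise causes the lower heart rate.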

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful for developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a larger scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other variables, and measure the effect on the outcome of interest (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
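
As a small sketch of random assignment (reusing the fertiliser example above, with invented plant IDs), each unit gets an equal chance of landing in any condition:

```python
# Minimal sketch: randomly assigning plants to fertiliser conditions so each
# plant has an equal chance of ending up in any group (illustrative setup)
import random

plants = list(range(1, 31))   # 30 plant IDs
conditions = ["fertiliser_A", "fertiliser_B", "no_fertiliser"]

random.shuffle(plants)
groups = {c: plants[i::len(conditions)] for i, c in enumerate(conditions)}
print(groups)
```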

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.





Research Design – Types, Methods and Examples


Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An Example of Research Design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The research design will be a quasi-experimental design, with a pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design vs Research Methodology

  • Research design is the plan and structure for conducting research, outlining the procedures to be followed to collect and analyze data; research methodology is the set of principles, techniques, and tools used to carry out that plan and achieve the research objectives.
  • Research design describes the overall approach and strategy of the study, including the type of data to be collected, the sources of data, and the methods for collecting and analyzing it; research methodology refers to the techniques used to gather, analyze, and interpret data, such as sampling techniques, data collection methods, and data analysis techniques.
  • Research design helps ensure the research is conducted in a systematic, rigorous, and valid way so that the results are reliable and support sound conclusions; research methodology provides the procedures and tools that enable researchers to collect and analyze data in a consistent and valid manner, regardless of the design used.
  • Common research designs include experimental, quasi-experimental, correlational, and descriptive studies; common research methodologies include qualitative, quantitative, and mixed-methods approaches.
  • Research design determines the overall structure of the project and sets the stage for selecting appropriate methodologies; research methodology guides the researcher in choosing the most appropriate methods given the research question, the design, and other contextual factors.
  • Research design helps ensure the project is feasible, relevant, and ethical; research methodology helps ensure the data collected are accurate, valid, and reliable and that the findings can be interpreted and generalized to the population of interest.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



What is a Research Design? Definition, Types, Methods and Examples

By Nick Jain

Published on: September 8, 2023

What is a Research Design?

A research design is defined as the overall plan or structure that guides the process of conducting research. It is a critical component of the research process and serves as a blueprint for how a study will be carried out, including the methods and techniques that will be used to collect and analyze data. A well-designed research study is essential for ensuring that the research objectives are met and that the results are valid and reliable.

Key elements of research design include:

  • Research Objectives: Clearly define the goals and objectives of the research study. What is the research trying to achieve or investigate?
  • Research Questions or Hypotheses: Formulating specific research questions or hypotheses that address the objectives of the study. These questions guide the research process.
  • Data Collection Methods: Determining how data will be collected, whether through surveys, experiments, observations, interviews, archival research, or a combination of these methods.
  • Sampling: Deciding on the target population and selecting a sample that represents that population. Sampling methods can vary, such as random sampling, stratified sampling, or convenience sampling (a short sampling sketch follows this list).
  • Data Collection Instruments: Developing or selecting the tools and instruments needed to collect data, such as questionnaires, surveys, or experimental equipment.
  • Data Analysis: Defining the statistical or analytical techniques that will be used to analyze the collected data. This may involve qualitative or quantitative methods, depending on the research goals.
  • Time Frame: Establishing a timeline for the research project, including when data will be collected, analyzed, and reported.
  • Ethical Considerations: Addressing ethical issues, including obtaining informed consent from participants, ensuring the privacy and confidentiality of data, and adhering to ethical guidelines.
  • Resources: Identifying the resources needed for the research, including funding, personnel, equipment, and access to data sources.
  • Data Presentation and Reporting: Planning how the research findings will be presented and reported, whether through written reports, presentations, or other formats.
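
To make the sampling element more concrete, here is a minimal Python sketch of how a simple random sample and a proportionally stratified sample might be drawn from a sampling frame. The frame, the age strata, and the sample size are invented purely for illustration.

```python
import random

# Hypothetical sampling frame: 1,000 people, each tagged with an age stratum.
random.seed(42)
frame = [{"id": i, "stratum": random.choice(["18-34", "35-54", "55+"])} for i in range(1000)]

# Simple random sampling: every member of the frame has an equal chance of selection.
simple_sample = random.sample(frame, k=100)

# Stratified sampling: sample within each stratum in proportion to its size.
strata = {}
for person in frame:
    strata.setdefault(person["stratum"], []).append(person)

stratified_sample = []
for name, members in strata.items():
    n = round(100 * len(members) / len(frame))  # proportional allocation
    stratified_sample.extend(random.sample(members, k=n))

print(len(simple_sample), len(stratified_sample))
```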

There are various research designs, such as experimental, observational, survey, case study, and longitudinal designs, each suited to different research questions and objectives. The choice of research design depends on the nature of the research and the goals of the study.

A well-constructed research design is crucial because it helps ensure the validity, reliability, and generalizability of research findings, allowing researchers to draw meaningful conclusions and contribute to the body of knowledge in their field.

10 Types of Research Design

Each research design suits a different kind of question, so understanding the strengths and limits of each one is essential for planning a study that produces credible results. The ten designs below cover the approaches you are most likely to encounter, from tightly controlled experiments to syntheses of existing studies.

1. Experimental Research Design: Mastering Controlled Trials

Delve into the heart of experimentation with Randomized Controlled Trials (RCTs). By randomizing participants into experimental and control groups, RCTs meticulously assess the efficacy of interventions or treatments, establishing clear cause-and-effect relationships.
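
As a minimal sketch of that logic (not a complete trial protocol), the example below randomly assigns hypothetical participants to treatment and control arms and compares mean outcomes; the outcome values are simulated, and the assumed effect size is arbitrary.

```python
import random
import statistics

random.seed(1)

# Hypothetical participant IDs; outcomes are simulated purely for illustration.
participants = list(range(200))
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]
treated = set(treatment)

# Assume (for illustration) the intervention raises the outcome by about 1.5 units.
outcome = {pid: random.gauss(11.5 if pid in treated else 10.0, 2.0) for pid in participants}

effect = (statistics.mean(outcome[p] for p in treatment)
          - statistics.mean(outcome[p] for p in control))
print(f"Estimated treatment effect: {effect:.2f}")
```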

2. Quasi-Experimental Research Design: Bridging the Gap Ethically

When randomness isn’t feasible, embrace the pragmatic alternative of Non-equivalent Group Designs. These designs allow ethical comparison across multiple groups without random assignment, ensuring robust research conduct.

3. Observational Research Design: Capturing Real-world Dynamics

Capture snapshots of reality with Cross-Sectional Studies, unraveling intricate relationships and disparities between variables in a single moment. Embark on longitudinal journeys with Longitudinal Studies, tracking evolving trends and patterns over time.

4. Descriptive Research Design: Unveiling Insights Through Data

Plunge into the depths of data collection with Survey Research, extracting insights into attitudes, characteristics, and opinions. Engage in profound exploration through Case Studies, dissecting singular phenomena to unveil profound insights.

5. Correlational Research Design: Navigating Interrelationships

Traverse the realm of correlations with Correlational Studies, scrutinizing interrelationships between variables without inferring causality. Uncover insights into the dynamic web of connections shaping research landscapes.
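
A correlational analysis usually reduces to a coefficient such as Pearson's r. The sketch below computes it from a pair of invented variables; note that even a strong r does not by itself establish causality.

```python
# Hypothetical paired observations, e.g. hours studied and exam score.
hours = [2, 4, 5, 7, 8, 10, 12]
score = [52, 58, 60, 68, 71, 80, 85]

n = len(hours)
mean_x, mean_y = sum(hours) / n, sum(score) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, score))
var_x = sum((x - mean_x) ** 2 for x in hours)
var_y = sum((y - mean_y) ** 2 for y in score)

r = cov / (var_x * var_y) ** 0.5  # Pearson's r
print(f"Pearson r = {r:.3f}")
```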

6. Ex Post Facto Research Design: Retroactive Revelations

Explore existing conditions retrospectively with Retrospective Exploration, shedding light on potential causes where variable manipulation isn’t feasible. Uncover hidden insights through meticulous retrospective analysis.

7. Exploratory Research Design: Pioneering New Frontiers

Initiate your research odyssey with Pilot Studies, laying the groundwork for comprehensive investigations while refining research procedures. Blaze trails into uncharted territories and unearth groundbreaking discoveries.

8. Cohort Study: Chronicling Evolution

Embark on longitudinal expeditions with Cohort Studies, monitoring cohorts to elucidate the evolution of specific outcomes over time. Witness the unfolding narrative of change and transformation.

9. Action Research: Driving Practical Solutions

Collaboratively navigate challenges with Action Research, fostering improvements in educational or organizational settings. Drive meaningful change through actionable insights derived from collaborative endeavors.

10. Meta-Analysis: Synthesizing Knowledge

Combine perspectives gleaned from various studies through Meta-Analyses, providing a comprehensive panorama of research discoveries.
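
One common way such results are combined is inverse-variance weighting under a fixed-effect model. The sketch below pools three hypothetical effect estimates; the estimates and standard errors are invented, and a real meta-analysis would also need to assess heterogeneity and bias.

```python
# Hypothetical study results: (effect estimate, standard error).
studies = [(0.30, 0.10), (0.45, 0.15), (0.25, 0.08)]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```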

Matching one of these designs to your research question and objectives is the foundation of a credible study. The sections that follow move from overall designs to the specific methods used to carry them out.

Top 16 Research Design Methods

Research design methods refer to the systematic approaches and techniques used to plan, structure, and conduct a research study. The choice of research design method depends on the research questions, objectives, and the nature of the study. Here are some key research design methods commonly used in various fields:

1. Experimental Method

Controlled Experiments: In controlled experiments, researchers manipulate one or more independent variables and measure their effects on dependent variables while controlling for confounding factors.

2. Observational Method

Naturalistic Observation: Researchers observe and record behavior in its natural setting without intervening. This method is often used in psychology and anthropology.

Structured Observation: Observations are made using a predetermined set of criteria or a structured observation schedule.

3. Survey Method

Questionnaires: Researchers collect data by administering structured questionnaires to participants. This method is widely used for collecting quantitative research data.

Interviews: In interviews, researchers ask questions directly to participants, allowing for more in-depth responses. Interviews can take on structured, semi-structured, or unstructured formats.

4. Case Study Method

Single-Case Study: Focuses on a single individual or entity, providing an in-depth analysis of that case.

Multiple-Case Study: Involves the examination of multiple cases to identify patterns, commonalities, or differences.

5. Content Analysis

Researchers analyze textual, visual, or audio data to identify patterns, themes, and trends. This method is commonly used in media studies and social sciences.

6. Historical Research

Researchers examine historical documents, records, and artifacts to understand past events, trends, and contexts.

7. Action Research

Researchers work collaboratively with practitioners to address practical problems or implement interventions in real-world settings.

8. Ethnographic Research

Researchers immerse themselves in a particular cultural or social group to gain a deep understanding of their behaviors, beliefs, and practices.

9. Cross-sectional and Longitudinal Surveys

Cross-sectional surveys collect data from a sample of participants at a single point in time.

Longitudinal surveys collect data from the same participants over an extended period, allowing for the study of changes over time.

10. Meta-Analysis

Researchers conduct a quantitative synthesis of data from multiple studies to provide a comprehensive overview of research findings on a particular topic.

11. Mixed-Methods Research

Combines qualitative and quantitative research methods to provide a more holistic understanding of a research problem.

12. Grounded Theory

A qualitative research method that aims to develop theories or explanations grounded in the data collected during the research process.

13. Simulation and Modeling

Researchers use mathematical or computational models to simulate real-world phenomena and explore various scenarios.
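
As a toy illustration of the idea rather than any particular scientific model, the sketch below runs a small Monte Carlo simulation to compare how an outcome grows under three assumed scenarios; every parameter is invented.

```python
import random
import statistics

random.seed(7)

def simulate(growth_rate, years=20, runs=1000, start=100.0):
    """Simulate an outcome growing at a noisy annual rate; return the mean final value."""
    finals = []
    for _ in range(runs):
        value = start
        for _ in range(years):
            value *= 1 + random.gauss(growth_rate, 0.02)  # noisy yearly growth
        finals.append(value)
    return statistics.mean(finals)

for rate in (0.01, 0.02, 0.03):  # three hypothetical scenarios
    print(f"growth {rate:.0%}: mean value after 20 years = {simulate(rate):.1f}")
```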

14. Survey Experiments

Combines elements of surveys and experiments, allowing researchers to manipulate variables within a survey context.

15. Case-Control Studies and Cohort Studies

These epidemiological research methods are used to study the causes and risk factors associated with diseases and health outcomes.
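
In practice, case-control studies are typically summarized with an odds ratio and cohort studies with a relative risk. The sketch below computes both from a hypothetical 2x2 table of exposure and disease counts; the numbers are invented for illustration.

```python
# Hypothetical 2x2 table of exposure by disease status (counts are invented).
exposed_disease, exposed_healthy = 40, 160
unexposed_disease, unexposed_healthy = 20, 380

# Odds ratio (the case-control measure): cross-product of the 2x2 table.
odds_ratio = (exposed_disease * unexposed_healthy) / (exposed_healthy * unexposed_disease)

# Relative risk (the cohort measure): risk of disease in exposed vs unexposed.
risk_exposed = exposed_disease / (exposed_disease + exposed_healthy)
risk_unexposed = unexposed_disease / (unexposed_disease + unexposed_healthy)
relative_risk = risk_exposed / risk_unexposed

print(f"Odds ratio = {odds_ratio:.2f}")
print(f"Relative risk = {relative_risk:.2f}")
```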

16. Cross-Sequential Design

Combines elements of cross-sectional and longitudinal research to examine both age-related changes and cohort differences.

The selection of a specific research design method should align with the research objectives, the type of data needed, available resources, ethical considerations, and the overall research approach. Researchers often choose methods that best suit the nature of their study and research questions to ensure that they collect relevant and valid data.


Research Design Examples

Research designs can vary significantly depending on the research questions and objectives. Here are some examples of research designs across different disciplines:

  • Experimental Design: A pharmaceutical company conducts a randomized controlled trial (RCT) to test the efficacy of a new drug. Participants are randomly assigned to two groups: one receiving the new drug and the other a placebo. The company measures the health outcomes of both groups over a specific period.
  • Observational Design: An ecologist observes the behavior of a particular bird species in its natural habitat to understand its feeding patterns, mating rituals, and migration habits.
  • Survey Design: A market research firm conducts a survey to gather data on consumer preferences for a new product. They distribute a questionnaire to a representative sample of the target population and analyze the responses.
  • Case Study Design: A psychologist conducts a case study on an individual with a rare psychological disorder to gain insights into the causes, symptoms, and potential treatments of the condition.
  • Content Analysis: Researchers analyze a large dataset of social media posts to identify trends in public opinion and sentiment during a political election campaign.
  • Historical Research: A historian examines primary sources such as letters, diaries, and official documents to reconstruct the events and circumstances leading up to a significant historical event.
  • Action Research: A school teacher collaborates with colleagues to implement a new teaching method in their classrooms and assess its impact on student learning outcomes through continuous reflection and adjustment.
  • Ethnographic Research: An anthropologist lives with and observes an indigenous community for an extended period to understand their culture, social structures, and daily lives.
  • Cross-Sectional Survey: A public health agency conducts a cross-sectional survey to assess the prevalence of smoking among different age groups in a specific region during a particular year.
  • Longitudinal Study: A developmental psychologist follows a group of children from infancy through adolescence to study their cognitive, emotional, and social development over time.
  • Meta-Analysis: Researchers aggregate and analyze the results of multiple studies on the effectiveness of a specific type of therapy to provide a comprehensive overview of its outcomes.
  • Mixed-Methods Research: A sociologist combines surveys and in-depth interviews to study the impact of a community development program on residents’ quality of life.
  • Grounded Theory: A sociologist conducts interviews with homeless individuals to develop a theory explaining the factors that contribute to homelessness and the strategies they use to cope.
  • Simulation and Modeling: Climate scientists use computer models to simulate the effects of various greenhouse gas emission scenarios on global temperatures and sea levels.
  • Case-Control Study: Epidemiologists investigate a disease outbreak by comparing a group of individuals who contracted the disease (cases) with a group of individuals who did not (controls) to identify potential risk factors.

These examples demonstrate the diversity of research designs used in different fields to address a wide range of research questions and objectives. Researchers select the most appropriate design based on the specific context and goals of their study.



USC Libraries Research Guides

Organizing Your Social Sciences Research Paper: Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable (illustrated in the sketch after this list).
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the  actual effect.
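
As a minimal illustration of the nonspuriousness condition, the sketch below shows how an apparent association between an exposure and an outcome can vanish once the data are stratified by a third, confounding variable; the counts are invented.

```python
# Hypothetical counts per stratum: (exposed_events, exposed_total, unexposed_events, unexposed_total)
strata = {
    "summer": (16, 80, 4, 20),
    "winter": (1, 20, 4, 80),
}

def rate(events, total):
    return events / total

# Crude comparison (ignoring the third variable) suggests a strong association...
crude_exposed = rate(sum(s[0] for s in strata.values()), sum(s[1] for s in strata.values()))
crude_unexposed = rate(sum(s[2] for s in strata.values()), sum(s[3] for s in strata.values()))
print(f"Crude: exposed {crude_exposed:.2f} vs unexposed {crude_unexposed:.2f}")

# ...but within each stratum the rates are identical, so the crude association is spurious.
for name, (ee, et, ue, ut) in strata.items():
    print(f"{name}: exposed {rate(ee, et):.2f} vs unexposed {rate(ue, ut):.2f}")
```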

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population which the subject or representative member comes from, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof (see the incidence-rate sketch after this list).
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
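
As a minimal sketch of the rate-based calculation available in open cohorts, the example below computes an incidence rate as new cases divided by person-time at risk; the follow-up times and case indicators are invented.

```python
# Hypothetical open-cohort follow-up: (years of follow-up, developed the outcome?)
participants = [(2.0, False), (5.5, True), (3.0, False), (1.5, True), (4.0, False), (6.0, False)]

person_years = sum(years for years, _ in participants)
new_cases = sum(1 for _, case in participants if case)

incidence_rate = new_cases / person_years  # cases per person-year
print(f"{new_cases} cases over {person_years} person-years "
      f"= {incidence_rate * 1000:.1f} per 1,000 person-years")
```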

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike some other observational designs, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population (see the prevalence sketch after this list).
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
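
As a minimal sketch of the prevalence estimation mentioned in the list above, the example below computes a point prevalence and an approximate 95% confidence interval from a hypothetical cross-sectional sample; the counts are invented.

```python
import math

# Hypothetical cross-sectional sample: how many respondents report the outcome today.
sample_size = 500
with_outcome = 85

prevalence = with_outcome / sample_size
# Approximate 95% confidence interval using the normal approximation.
se = math.sqrt(prevalence * (1 - prevalence) / sample_size)
low, high = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"Prevalence = {prevalence:.1%} (95% CI {low:.1%} to {high:.1%})")
```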

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from a descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings. They provide insight but not definitive conclusions.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you?

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research are directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.
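
A minimal sketch of that repeated-measurement logic: the same hypothetical individuals are measured at two waves and their change scores are summarized; the values are invented for illustration.

```python
import statistics

# Hypothetical panel data: the same individuals measured at wave 1 and wave 2.
panel = {
    "p01": (62, 70),
    "p02": (55, 58),
    "p03": (71, 69),
    "p04": (48, 57),
    "p05": (66, 72),
}

changes = [wave2 - wave1 for wave1, wave2 in panel.values()]
print(f"Mean change between waves: {statistics.mean(changes):.1f}")
print(f"Individuals who improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```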

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • A large sample size and accurate sampling are needed to achieve representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample of studies reviewed (a minimal numerical sketch follows this list); and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret and/or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
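
To illustrate the heterogeneity requirement noted in the list above, here is a minimal numerical sketch (the effect estimates and standard errors are invented, and the fixed-effect pooling shown is only one of several approaches) of Cochran's Q and the I-squared statistic:

```python
# Hypothetical per-study effect estimates and their standard errors.
effects = [0.30, 0.45, 0.10, 0.52]
std_errors = [0.10, 0.12, 0.15, 0.11]

# Inverse-variance weights and a fixed-effect pooled estimate.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q and Higgins' I^2 as rough indices of between-study heterogeneity.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"pooled estimate = {pooled:.3f}, Q = {q:.2f}, I^2 = {i_squared:.0%}")
```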

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Micheal W. Kattan. "Meta-Analysis: It's Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is time consuming, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility of determining "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton Michael Quinn. Qualitiative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010;Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor.(Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis (see the sketch after this list).
  • The sampling method is not representative of the entire population. The only way to approach representativeness is to use a sample large enough to capture a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
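
As a rough sketch of this serial, analyze-as-you-go logic (hypothetical data and a crude precision-based stopping rule; real sequential designs use formally derived stopping boundaries):

```python
import random
import statistics

random.seed(1)

def draw_batch(n=20):
    """Hypothetical data source: one batch of n observations."""
    return [random.gauss(50, 10) for _ in range(n)]

sample = []
for stage in range(1, 11):                       # at most 10 stages
    sample.extend(draw_batch())
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / len(sample) ** 0.5
    print(f"stage {stage}: n={len(sample)}, mean={mean:.1f}, SEM={sem:.2f}")
    if sem < 1.0:                                # stop once the estimate is precise enough
        break
```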

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Cresswell, John W. Et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddle, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provide reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] provides the broadest possible basis for analyzing and interpreting research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated to the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, and internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.

Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a  research design , you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods , timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Table of contents

  • Types of research aims
  • Types of research data
  • Types of sampling, timescale, and location
  • Other interesting articles

Types of research aims

The first thing to consider is what kind of knowledge your research aims to contribute.

  • Basic vs. applied: Basic research aims to expand knowledge and theory, while applied research aims to solve a practical problem. Ask whether you want to expand scientific understanding or solve a practical problem.
  • Exploratory vs. explanatory: Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem. Ask how much is already known about your research problem: are you conducting initial research on a newly-identified issue, or seeking precise conclusions about an established issue?
  • Inductive vs. deductive: Inductive research aims to develop a theory from observations, while deductive research aims to test an existing theory. Ask whether there is already some theory on your research problem that you can use to develop hypotheses, or whether you want to propose new theories based on your findings.


Types of research data

The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

  • Primary vs. secondary research: Primary data is collected directly by the researcher (e.g., through surveys, interviews, or experiments), while secondary data has already been collected by someone else (e.g., in government sources or scientific publications). Ask how much data is already available on your topic and whether you want to collect original data or analyze existing data (e.g., through a literature review or meta-analysis).
  • Qualitative vs. quantitative research: Qualitative research focuses on words and meanings, while quantitative research focuses on numbers and statistics. Ask whether your research is more concerned with measuring something or interpreting something; you can also create a research design that has elements of both.
  • Descriptive vs. experimental research: Descriptive research gathers data without controlling any variables, while experimental research manipulates and controls variables to determine cause and effect. Ask whether you want to identify characteristics, patterns, and correlations, or test causal relationships between variables.

Types of sampling, timescale, and location

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias. Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

  • Probability vs. non-probability sampling: Probability sampling allows you to generalize your results to a broader population, while non-probability sampling allows you to draw conclusions only about the specific group you study. Ask whether you want to produce knowledge that applies to many contexts or detailed knowledge about a specific context (e.g., in a case study).
  • Cross-sectional vs. longitudinal studies: Cross-sectional studies collect data at a single point in time, while longitudinal studies follow the same sample over an extended period. Ask whether your research question is focused on understanding the current situation or tracking changes over time.
  • Field research vs. laboratory research: Field research takes place in a natural or real-world setting, while laboratory research takes place in a controlled environment. Ask whether you want to find out how something occurs in the real world or draw firm conclusions about cause and effect; laboratory experiments have higher internal validity but lower external validity.
  • Fixed design vs. flexible design: In a fixed research design the subjects, timescale, and location are set before data collection begins, while in a flexible design these aspects may evolve as data collection progresses. Ask whether you want to test hypotheses and establish generalizable facts, or explore concepts and develop understanding; for measuring, testing, and making generalizations, a fixed research design has higher validity.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.

Read more about creating a research design

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Understanding Research Study Designs

In order to find the best possible evidence, it helps to understand the basic designs of research studies. The following basic definitions and examples of clinical research designs follow the “levels of evidence.”

Case Series and Case Reports

These consist either of collections of reports on the treatment of individual patients with the same condition, or of reports on a single patient.

  • Case series/reports are used to illustrate an aspect of a condition, the treatment or the adverse reaction to treatment.
  • Example : You have a patient that has a condition that you are unfamiliar with. You would search for case reports that could help you decide on a direction of treatment or to assist on a diagnosis.
  • Case series/reports have no control group (a group against which outcomes can be compared), so they have no statistical validity.
  • The benefits of case series/reports are that they are easy to understand and can be written up in a very short period of time.

Case Control Studies

Patients who already have a certain condition are compared with people who do not.

  • Case control studies are generally designed to estimate the odds (using an odds ratio) of developing the studied condition/disease. They can determine if there is an associational relationship between the condition and a risk factor.
  • Example: A study in which colon cancer patients are asked what kinds of food they have eaten in the past and the answers are compared with a selected control group.
  • Case control studies are less reliable than either randomized controlled trials or cohort studies.
  • A major drawback to case control studies is that one cannot directly obtain absolute risk (i.e. incidence) of a bad outcome.
  • The advantages of case control studies are they can be done quickly and are very efficient for conditions/diseases with rare outcomes.

Cohort Studies

Also called longitudinal studies, cohort studies involve a case-defined population who presently have a certain exposure and/or receive a particular treatment and who are followed over time and compared with another group who are not affected by the exposure under investigation.

  • Cohort studies may be either prospective (i.e., exposure factors are identified at the beginning of a study and a defined population is followed into the future), or historical/retrospective (i.e., past medical records for the defined population are used to identify exposure factors).
  • Cohort studies are used to establish causation of a disease or to evaluate the outcome/impact of treatment, when randomized controlled clinical trials are not possible.
  • Example: One of the more well-known examples of a cohort study is the Framingham Heart Study, which followed generations of residents of Framingham, Massachusetts.
  • Cohort studies are not as reliable as randomized controlled studies, since the two groups may differ in ways other than the variable under study.
  • Other problems with cohort studies are that they require a large sample size, are inefficient for rare outcomes, and can take long periods of time. 

Randomized Controlled Studies

This is a study in which: 1) there are two groups, one treatment group and one control group, where the treatment group receives the treatment under investigation and the control group receives either no treatment (placebo) or standard treatment; and 2) patients are randomly assigned to the groups.

  • Randomized controlled trials are considered the “gold standard” in medical research. They lend themselves best to answering questions about the effectiveness of different therapies or interventions.
  • Randomization helps avoid the bias in choice of patients-to-treatment that a physician might be subject to. It also increases the probability that differences between the groups can be attributed to the treatment(s) under study.
  • Having a  control group allows for a comparison of treatments – e.g., treatment A produced favorable results 56% of the time versus treatment B in which only 25% of patients had favorable results.
  • There are certain types of questions on which randomized controlled studies cannot be done for ethical reasons, for instance, if patients were asked to undertake harmful experiences (like smoking) or denied any treatment beyond a placebo when there are known effective treatments.

Double-Blind Method

A type of randomized controlled clinical trial/study in which neither medical staff/physician nor the patient knows which of several possible treatments/therapies the patient is receiving.

  • Example : Studies of treatments that consist essentially of taking pills are very easy to do double blind – the patient takes one of two pills of identical size, shape, and color, and neither the patient nor the physician needs to know which is which.
  • A double blind study is the most rigorous clinical research design because, in addition to the randomization of subjects, which reduces the risk of bias, it can eliminate or minimize the placebo effect which is a further challenge to the validity of a study.

Meta-Analyses

Meta-analysis is a systematic, objective way to combine data from many studies, usually from randomized controlled clinical trials, and arrive at a pooled estimate of treatment effectiveness and statistical significance.

  • Meta-analysis can also combine data from case/control and cohort studies. The advantage to merging these data is that it increases sample size and allows for analyses that would not otherwise be possible.
  • They should not be confused with reviews of the literature or systematic reviews. 
  • Two problems with meta-analysis are publication bias (studies showing no effect or little effect are often not published and just “filed” away) and the quality of the design of the studies from which data is pulled. This can lead to misleading results when all the data on the subject from “published” literature are summarized.

Systematic Reviews

A systematic review is a comprehensive survey of a topic that takes great care to find all relevant studies of the highest level of evidence, published and unpublished, assess each study, synthesize the findings from individual studies in an unbiased, explicit and reproducible way and present a balanced and impartial summary of the findings with due consideration of any flaws in the evidence. In this way it can be used for the evaluation of either existing or new technologies and practices.   

A systematic review is more rigorous than a traditional literature review and attempts to reduce the influence of bias. In order to do this, a systematic review follows a formal process:

  • Clearly formulated research question
  • Published & unpublished (conferences, company reports, “file drawer reports”, etc.) literature is carefully searched for relevant research
  • Identified research is assessed according to an explicit methodology
  • Results of the critical assessment of the individual studies are combined
  • Final results are placed in context, addressing such issues as the quality of the included studies, the impact of bias, and the applicability of the findings
  • The difference between a systematic review and a meta-analysis is that a systematic review looks at the whole picture (qualitative view), while a meta-analysis looks for the specific statistical picture (quantitative view). 

Research Process in the Health Sciences (35:37 min): Overview of the scientific research process in the health sciences. Follows the seven steps: defining the problem, reviewing the literature, formulating a hypothesis, choosing a research design, collecting data, analyzing the data and interpretation and report writing. Includes a set of additional readings and library resources.

Research Study Designs in the Health Sciences  (29:36 min): An overview of research study designs used by health sciences researchers. Covers case reports/case series, case control studies, cohort studies, correlational studies, cross-sectional studies, experimental studies (including randomized control trials), systematic reviews and meta-analysis.  Additional readings and library resources are also provided.

Types of Study Design

Introduction

Study designs are frameworks used in medical research to gather data and explore a specific research question .

Choosing an appropriate study design is one of many essential considerations before conducting research to minimise bias and yield valid results .

This guide provides a summary of study designs commonly used in medical research, their characteristics, advantages and disadvantages.

Case-report and case-series

A case report is a detailed description of a patient’s medical history, diagnosis, treatment, and outcome. A case report typically documents unusual or rare cases or reports  new or unexpected clinical findings .

A case series is a similar study that involves a group of patients sharing a similar disease or condition. A case series involves a comprehensive review of medical records for each patient to identify common features or disease patterns. Case series help better understand a disease’s presentation, diagnosis, and treatment.

While a case report focuses on a single patient, a case series involves a group of patients to provide a broader perspective on a specific disease. Both case reports and case series are important tools for understanding rare or unusual diseases .

Advantages of case series and case reports include:

  • Able to describe rare or poorly understood conditions or diseases
  • Helpful in generating hypotheses and identifying patterns or trends in patient populations
  • Can be conducted relatively quickly and at a lower cost compared to other research designs

Disadvantages of case series and case reports include:

  • Prone to selection bias , meaning that the patients included in the series may not be representative of the general population
  • Lack a control group, which makes it difficult to draw conclusions about the effectiveness of different treatments or interventions
  • They are descriptive and cannot establish causality or control for confounding factors

Cross-sectional study

A cross-sectional study aims to measure the prevalence or frequency of a disease in a population at a specific point in time. In other words, it provides a “ snapshot ” of the population at a single moment in time.

Cross-sectional studies differ from other study designs in that they collect data on the exposure and the outcome of interest from a sample of individuals in the population at the same time. This type of data is used to investigate the distribution of health-related conditions and behaviours in different populations, which is especially useful for guiding the development of public health interventions .

Example of a cross-sectional study

A cross-sectional study might investigate the prevalence of hypertension (the outcome) in a sample of adults in a particular region. The researchers would measure blood pressure levels in each participant and gather information on other factors that could influence blood pressure, such as age, sex, weight, and lifestyle habits (exposure).
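
As a rough numerical sketch of this “snapshot” calculation (the counts below are invented, and the normal-approximation interval is only one of several options):

```python
import math

# Hypothetical cross-sectional sample of adults examined at one point in time.
n_sampled = 1200      # adults surveyed
n_with_htn = 312      # of whom this many had hypertension at that moment

prevalence = n_with_htn / n_sampled

# Simple normal-approximation 95% confidence interval for the prevalence.
se = math.sqrt(prevalence * (1 - prevalence) / n_sampled)
ci_low, ci_high = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"prevalence = {prevalence:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")
```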

Advantages of cross-sectional studies include:

  • Relatively quick and inexpensive to conduct compared to other study designs, such as cohort or case-control studies
  • They can provide a snapshot of the prevalence and distribution of a particular health condition in a population
  • They can help to identify patterns and associations between exposure and outcome variables, which can be used to generate hypotheses for further research

Disadvantages of cross-sectional studies include:

  • They cannot establish causality , as they do not follow participants over time and cannot determine the temporal sequence between exposure and outcome
  • Prone to selection bias , as the sample may not represent the entire population being studied
  • They cannot account for confounding variables , which may affect the relationship between the exposure and outcome of interest

Case-control study

A case-control study compares people who have developed a disease of interest ( cases ) with people who have not developed the disease ( controls ) to identify potential risk factors associated with the disease.

Once cases and controls have been identified, researchers then collect information about related risk factors , such as age, sex, lifestyle factors, or environmental exposures, from individuals. By comparing the prevalence of risk factors between the cases and the controls, researchers can determine the association between the risk factors and the disease.

Example of a case-control study

A case-control study design might involve comparing a group of individuals with lung cancer (cases) to a group of individuals without lung cancer (controls) to assess the association between smoking (risk factor) and the development of lung cancer.
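
Staying with that hypothetical example, the association is usually summarised as an odds ratio computed from a 2x2 table. The counts below are invented, and the interval uses the standard log-odds-ratio (Woolf) approximation:

```python
import math

# Hypothetical 2x2 table: smoking history among lung cancer cases and controls.
a, b = 80, 20   # cases:    smokers, non-smokers
c, d = 40, 60   # controls: smokers, non-smokers

odds_ratio = (a * d) / (b * c)

# Approximate 95% confidence interval on the log scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```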

Advantages of case-control studies include:

  • Useful for studying rare diseases , as they allow researchers to selectively recruit cases with the disease of interest
  • Useful for investigating potential risk factors for a disease, as the researchers can collect data on many different factors from both cases and controls
  • Can be helpful in situations where it is not ethical or practical to manipulate exposure levels or randomise study participants

Disadvantages of case-control studies include:

  • Prone to selection bias , as the controls may not be representative of the general population or may have different underlying risk factors than the cases
  • Cannot establish causality , as they can only identify associations between factors and disease
  • May be limited by the availability of suitable controls , as finding appropriate controls who have similar characteristics to the cases can be challenging

Cohort study

A cohort study follows a group of individuals (a cohort) over time to investigate the relationship between an exposure or risk factor and a particular outcome or health condition. Cohort studies can be further classified into prospective or retrospective cohort studies.

Prospective cohort study

A prospective cohort study is a study in which the researchers select a group of individuals who do not have a particular disease or outcome of interest at the start of the study.

They then follow this cohort over time to track the number of patients who develop the outcome. Before the start of the study, information on exposure(s) of interest may also be collected.

Example of a prospective cohort study

A prospective cohort study might follow a group of individuals who have never smoked and measure their exposure to tobacco smoke over time to investigate the relationship between smoking and lung cancer .

Retrospective cohort study

In contrast, a retrospective cohort study is a study in which the researchers select a group of individuals who have already been exposed to something (e.g. smoking) and look back in time (for example, through patient charts) to see if they developed the outcome (e.g. lung cancer ).

The key difference in retrospective cohort studies is that data on exposure and outcome are collected after the outcome has occurred.

Example of a retrospective cohort study

A retrospective cohort study might look at the medical records of smokers and see if they developed a particular adverse event such as lung cancer.
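
As a numerical sketch of how cohort data yield incidence and relative risk, the quantities highlighted in the advantages below (the counts are invented):

```python
# Hypothetical cohort counts at the end of follow-up.
exposed_total, exposed_cases = 500, 60        # smokers, of whom 60 developed lung cancer
unexposed_total, unexposed_cases = 500, 10    # never-smokers, of whom 10 developed it

risk_exposed = exposed_cases / exposed_total          # incidence among the exposed
risk_unexposed = unexposed_cases / unexposed_total    # incidence among the unexposed
relative_risk = risk_exposed / risk_unexposed

print(f"incidence (exposed)   = {risk_exposed:.1%}")    # 12.0%
print(f"incidence (unexposed) = {risk_unexposed:.1%}")  # 2.0%
print(f"relative risk         = {relative_risk:.1f}")   # 6.0
```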

Advantages of cohort studies include:

  • Generally considered to be the most appropriate study design for investigating the temporal relationship between exposure and outcome
  • Can provide estimates of incidence and relative risk , which are useful for quantifying the strength of the association between exposure and outcome
  • Can be used to investigate multiple outcomes or endpoints associated with a particular exposure, which can help to identify unexpected effects or outcomes

Disadvantages of cohort studies include:

  • Can be expensive and time-consuming to conduct, particularly for long-term follow-up
  • May suffer from selection bias , as the sample may not be representative of the entire population being studied
  • May suffer from attrition bias , as participants may drop out or be lost to follow-up over time

Meta-analysis

A meta-analysis is a type of study that involves extracting outcome data from all relevant studies in the literature and combining the results of multiple studies to produce an overall estimate of the effect size of an intervention or exposure.

Meta-analysis is often conducted alongside a systematic review and can be considered a study of studies . By doing this, researchers provide a more comprehensive and reliable estimate of the overall effect size and its confidence interval (a measure of precision).

Meta-analyses can be conducted for a wide range of research questions , including evaluating the effectiveness of medical interventions, identifying risk factors for disease, or assessing the accuracy of diagnostic tests. They are particularly useful when the results of individual studies are inconsistent or when the sample sizes of individual studies are small, as a meta-analysis can provide a more precise estimate of the true effect size.
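
As a minimal sketch of the pooling idea (invented numbers; a simple fixed-effect, inverse-variance model is shown, whereas random-effects models are often preferred in practice when studies differ):

```python
import math

# Hypothetical per-study treatment effects (e.g. mean differences) and standard errors.
effects = [1.8, 2.4, 0.9, 2.1, 1.5]
std_errors = [0.6, 0.8, 0.5, 0.9, 0.7]

# Fixed-effect, inverse-variance pooling: more precise studies get more weight.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```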

When conducting a meta-analysis, researchers must carefully assess the risk of bias in each study to enhance the validity of the meta-analysis. Many aspects of research studies are prone to bias , such as the methodology and the reporting of results. Where studies exhibit a high risk of bias, authors may opt to exclude the study from the analysis or perform a subgroup or sensitivity analysis.

Advantages of a meta-analysis include:

  • Combine the results of multiple studies, resulting in a larger sample size and increased statistical power, to provide a more comprehensive and precise estimate of the effect size of an intervention or outcome
  • Can help to identify sources of heterogeneity or variability in the results of individual studies by exploring the influence of different study characteristics or subgroups
  • Can help to resolve conflicting results or controversies in the literature by providing a more robust estimate of the effect size

Disadvantages of a meta-analysis include:

  • Susceptible to publication bias , where studies with statistically significant or positive results are more likely to be published than studies with nonsignificant or negative results. This bias can lead to an overestimation of the treatment effect in a meta-analysis
  • May not be appropriate if the studies included are too heterogeneous , as this can make it difficult to draw meaningful conclusions from the pooled results
  • Depend on the quality and completeness of the data available from the individual studies and may be limited by the lack of data on certain outcomes or subgroups

Ecological study

An ecological study assesses the relationship between outcome and exposure at a population level or among groups of people rather than studying individuals directly.

The main goal of an ecological study is to observe and analyse patterns or trends at the population level and to identify potential associations or correlations between environmental factors or exposures and health outcomes.

Ecological studies focus on collecting data on population health outcomes , such as disease or mortality rates, and environmental factors or exposures, such as air pollution, temperature, or socioeconomic status.

Example of an ecological study

An ecological study might be used when comparing smoking rates and lung cancer incidence across different countries.
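
A rough sketch of that kind of country-level comparison (wholly invented aggregate figures), computing a population-level correlation which, as noted below, says nothing about individual-level risk:

```python
# Hypothetical country-level aggregates (five countries).
smoking_rate = [18.0, 25.0, 30.0, 12.0, 22.0]         # % of adults who smoke
lung_cancer_rate = [32.0, 41.0, 55.0, 25.0, 38.0]     # cases per 100,000 per year

# Pearson correlation between the aggregate exposure and the aggregate outcome.
n = len(smoking_rate)
mean_x = sum(smoking_rate) / n
mean_y = sum(lung_cancer_rate) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(smoking_rate, lung_cancer_rate))
sd_x = sum((x - mean_x) ** 2 for x in smoking_rate) ** 0.5
sd_y = sum((y - mean_y) ** 2 for y in lung_cancer_rate) ** 0.5

r = cov / (sd_x * sd_y)
print(f"country-level correlation = {r:.2f}")   # a group-level association only
```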

Advantages of an ecological study include:

  • Provide insights into how social, economic, and environmental factors may impact health outcomes in real-world settings , which can inform public health policies and interventions
  • Cost-effective and efficient, often using existing data or readily available data, such as data from national or regional databases

Disadvantages of an ecological study include:

  • Ecological fallacy occurs when conclusions about individual-level associations are drawn from population-level differences
  • Ecological studies rely on population-level (i.e. aggregate) rather than individual-level data; they cannot establish causal relationships between exposures and outcomes, as the studies do not account for differences or confounders at the individual level

Randomised controlled trial

A randomised controlled trial (RCT) is an important study design commonly used in medical research to determine the effectiveness of a treatment or intervention . It is considered the gold standard in research design because it allows researchers to draw cause-and-effect conclusions about the effects of an intervention.

In an RCT, participants are randomly assigned to two or more groups. One group receives the intervention being tested, such as a new drug or a specific medical procedure. In contrast, the other group is a control group and receives either no intervention or a placebo .

Randomisation ensures that each participant has an equal chance of being assigned to either group, thereby minimising selection bias . To reduce bias, an RCT often uses a technique called blinding , in which study participants, researchers, or analysts are kept unaware of participant assignment during the study. The participants are then followed over time, and outcome measures are collected and compared to determine if there is any statistical difference between the intervention and control groups.
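
As a minimal sketch of simple (unstratified) randomisation (hypothetical participant IDs; real trials typically generate and conceal the allocation sequence centrally and often use block or stratified schemes):

```python
import random

random.seed(2024)  # in a real trial the sequence would be generated and concealed centrally

# Hypothetical participants awaiting allocation.
participants = [f"P{i:03d}" for i in range(1, 13)]

# Simple randomisation: every participant has an equal chance of either arm.
allocation = {pid: random.choice(["intervention", "control"]) for pid in participants}

for pid, arm in allocation.items():
    print(pid, arm)
```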

Example of a randomised controlled trial

An RCT might be employed to evaluate the effectiveness of a new smoking cessation program in helping individuals quit smoking compared to the existing standard of care.

Advantages of an RCT include:

  • Considered the most reliable study design for establishing causal relationships between interventions and outcomes and determining the effectiveness of interventions
  • Randomisation of participants to intervention and control groups ensures that the groups are similar at the outset, reducing the risk of selection bias and enhancing internal validity
  • Using a control group allows researchers to compare outcomes in the intervention group against those in the control group while controlling for confounding factors

Disadvantages of an RCT include:

  • Can raise ethical concerns ; for example, it may be considered unethical to withhold an intervention from a control group, especially if the intervention is known to be effective
  • Can be expensive and time-consuming to conduct, requiring resources for participant recruitment, randomisation, data collection, and analysis
  • Often have strict inclusion and exclusion criteria , which may limit the generalisability of the findings to broader populations
  • May not always be feasible or practical for certain research questions, especially in rare diseases or when studying long-term outcomes

Dr Chris Jefferies

  • Yuliya L, Qazi MA (eds.). Toronto Notes 2022. Toronto: Toronto Notes for Medical Students Inc; 2022.
  • Le T, Bhushan V, Qui C, Chalise A, Kaparaliotis P, Coleman C, Kallianos K. First Aid for the USMLE Step 1 2023. New York: McGraw-Hill Education; 2023.
  • Rothman KJ, Greenland S, Lash T. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008.

What Is a Research Design? | Definition, Types & Guide

Introduction

A research design in qualitative research is a critical framework that guides the methodological approach to studying complex social phenomena. Qualitative research designs determine how data is collected, analyzed, and interpreted, ensuring that the research captures participants' nuanced and subjective perspectives. Research designs also address ethical considerations, such as obtaining informed consent, ensuring confidentiality, and handling sensitive topics with the utmost respect and care. These considerations are crucial in qualitative research and other contexts where participants may share personal or sensitive information. A research design should convey coherence, which is essential for producing high-quality qualitative research, and it often follows a recursive and evolving process.

Parts of a research design

Theoretical concepts and research question

The first step in creating a research design is identifying the main theoretical concepts. To identify these concepts, a researcher should ask which theoretical keywords are implicit in the investigation. The next step is to develop a research question using these theoretical concepts. This can be done by identifying the relationship of interest among the concepts that catch the focus of the investigation. The question should address aspects of the topic that need more knowledge, shed light on new information, and specify which aspects should be prioritized before others. This step is essential in identifying which participants to include or which data collection methods to use. Research questions also put into practice the conceptual framework and make the initial theoretical concepts more explicit. Once the research question has been established, the main objectives of the research can be specified. For example, these objectives may involve identifying shared experiences around a phenomenon or evaluating perceptions of a new treatment.

Methodology

After identifying the theoretical concepts, research question, and objectives, the next step is to determine the methodology that will be implemented. This is the lifeline of a research design and should be coherent with the objectives and questions of the study. The methodology will determine how data is collected, analyzed, and presented. Popular qualitative research methodologies include case studies, ethnography , grounded theory , phenomenology, and narrative research . Each methodology is tailored to specific research questions and facilitates the collection of rich, detailed data. For example, a narrative approach may focus on only one individual and their story, while phenomenology seeks to understand participants' lived common experiences. Qualitative research designs differ significantly from quantitative research, which often involves experimental research, correlational designs, or variance analysis to test hypotheses about relationships between two variables, a dependent variable and an independent variable while controlling for confounding variables.


Literature review

After the methodology is identified, conducting a thorough literature review is integral to the research design. This review identifies gaps in knowledge, positioning the new study within the larger academic dialogue and underlining its contribution and relevance. Meta-analysis, a form of secondary research, can be particularly useful in synthesizing findings from multiple studies to provide a clear picture of the research landscape.

Data collection

The sampling method in qualitative research is designed to delve deeply into specific phenomena rather than to generalize findings across a broader population. The data collection methods—whether interviews, focus groups, observations, or document analysis—should align with the chosen methodology, ethical considerations, and other factors such as sample size. In some cases, repeated measures may be collected to observe changes over time.

Data analysis

Analysis in qualitative research typically involves methods such as coding and thematic analysis to distill patterns from the collected data. This process delineates how the research results will be systematically derived from the data. The researcher should ensure that the final interpretations are coherent with the observations and analyses, making clear connections between the data and the conclusions drawn. Reporting should be narrative-rich, offering a comprehensive view of the context and findings.
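
To make the bookkeeping side of coding concrete, here is a minimal, invented sketch of descriptive coding in Python. Real qualitative coding is interpretive and iterative, and is usually done in dedicated analysis software rather than with keyword matching; the codebook, keywords, and excerpts below are purely hypothetical.

```python
# Toy illustration of descriptive coding: tag interview excerpts with codes
# using simple keyword matching, then tally how often each code occurs.
# All data here are invented; keyword matching only stands in for the
# interpretive work a researcher would actually do.

from collections import Counter

codebook = {                      # hypothetical codes and indicator keywords
    "workload": ["overtime", "shifts", "exhausted"],
    "support":  ["mentor", "team", "helped"],
    "autonomy": ["decide", "my own", "freedom"],
}

excerpts = [                      # invented interview fragments
    "I was exhausted after the extra shifts last month.",
    "My mentor really helped me settle into the team.",
    "I like that I can decide how to plan my own day.",
    "Working overtime again meant I missed the team meeting.",
]

coded = []                        # (excerpt, [codes]) pairs
for text in excerpts:
    lowered = text.lower()
    codes = [code for code, keywords in codebook.items()
             if any(word in lowered for word in keywords)]
    coded.append((text, codes))

theme_counts = Counter(code for _, codes in coded for code in codes)
for code, count in theme_counts.most_common():
    print(f"{code}: {count} coded segment(s)")
```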

Overall, a coherent qualitative research design that incorporates these elements facilitates a study that not only adds theoretical and practical value to the field but also meets high standards of quality. This methodological thoroughness is essential for achieving significant, insightful findings. Examples of well-executed research designs can be valuable references for other researchers conducting qualitative or quantitative investigations. An effective research design is critical for producing robust and impactful research outcomes.

Each qualitative research design is unique, diverse, and meticulously tailored to answer specific research questions, meet distinct objectives, and explore the unique nature of the phenomenon under investigation. The methodology is the wider framework that a research design follows. Each methodology in a research design consists of methods, tools, or techniques that compile data and analyze it following a specific approach.

The methods enable researchers to collect data effectively across individuals, different groups, or observations, ensuring they are aligned with the research design. The following list includes the most commonly used methodologies employed in qualitative research designs, highlighting how they serve different purposes and utilize distinct methods to gather and analyze data.

Narrative research designs

The narrative approach in research focuses on the collection and detailed examination of life stories, personal experiences, or narratives to gain insights into individuals' lives as told from their perspectives. It involves constructing a cohesive story out of the diverse experiences shared by participants, often using chronological accounts. It seeks to understand human experience and social phenomena through the form and content of the stories. These can include spontaneous narrations such as memoirs or diaries from participants or diaries solicited by the researcher. Narration helps construct the identity of an individual or a group and can rationalize, persuade, argue, entertain, confront, or make sense of an event or tragedy. To conduct a narrative investigation, it is recommended that researchers follow these steps:

Identify if the research question fits the narrative approach. Its methods are best employed when a researcher wants to learn about the lifestyle and life experience of a single participant or a small number of individuals.

Select the best-suited participants for the research design and spend time compiling their stories using different methods such as observations, diaries, interviewing their family members, or compiling related secondary sources.

Compile the information related to the stories. Narrative researchers collect data based on participants' stories concerning their personal experiences, for example about their workplace or homes, their racial or ethnic culture, and the historical context in which the stories occur.

Analyze the participant stories and "restore" them within a coherent framework. This involves collecting the stories, analyzing them based on key elements such as time, place, plot, and scene, and then rewriting them in a chronological sequence (Ollerenshaw & Creswell, 2000). The framework may also include elements such as a predicament, conflict, or struggle; a protagonist; and a sequence with implicit causality, where the predicament is somehow resolved (Carter, 1993).

Collaborate with participants by actively involving them in the research. Both the researcher and the participant negotiate the meaning of their stories, adding a credibility check to the analysis (Creswell & Miller, 2000).

A narrative investigation involves collecting a large amount of data from the participants, and the researcher needs to understand the context of the individual's life. A keen eye is needed to collect the particular stories that capture the individual experiences. Active collaboration with the participant is necessary, and researchers need to discuss and reflect on their own beliefs and backgrounds. Multiple questions can arise in the collection, analysis, and storytelling of individual stories that need to be addressed, such as: Whose story is it? Who can tell it? Who can change it? Which version is compelling? What happens when narratives compete? And, within a community, what do the stories accomplish? (Pinnegar & Daynes, 2006).

Phenomenological research designs


A research design based on phenomenology aims to understand the essence of the lived experiences of a group of people regarding a particular concept or phenomenon. Researchers gather deep insights from individuals who have experienced the phenomenon, striving to describe "what" they experienced and "how" they experienced it. This approach to a research design typically involves detailed interviews and aims to reach a deep existential understanding. The purpose is to reduce individual experiences to a description of the universal essence, that is, an understanding of the phenomenon's very nature (van Manen, 1990). In phenomenology, the following steps are usually followed:

Identify a phenomenon of interest . For example, the phenomenon might be anger, professionalism in the workplace, or what it means to be a fighter.

Recognize and specify the philosophical assumptions of phenomenology. For example, one could reflect on the nature of objective reality and individual experiences.

Collect data from individuals who have experienced the phenomenon . This typically involves conducting in-depth interviews, including multiple sessions with each participant. Additionally, other forms of data may be collected using several methods, such as observations, diaries, art, poetry, music, recorded conversations, written responses, or other secondary sources.

Ask participants two general questions that encompass the phenomenon and how the participant experienced it (Moustakas, 1994). For example, what have you experienced in this phenomenon? And what contexts or situations have typically influenced your experiences within the phenomenon? Other open-ended questions may also be asked, but these two questions particularly focus on collecting research data that will lead to a textural description and a structural description of the experiences, and ultimately provide an understanding of the common experiences of the participants.

Review data from the questions posed to participants . It is recommended that researchers review the answers and highlight "significant statements," phrases, or quotes that explain how participants experienced the phenomenon. The researcher can then develop meaningful clusters from these significant statements into patterns or key elements shared across participants.

Write a textural description of what the participants experienced based on the answers and themes from the two main questions. The answers are also used to write about the characteristics of the experience and to describe the context that influenced the way the participants experienced the phenomenon, called imaginative variation or structural description. Researchers should also write about their own experiences and the contexts or situations that influenced them.

Write a composite description from the structural and textural description that presents the "essence" of the phenomenon, called the essential and invariant structure.

A phenomenological approach to a research design requires the strict and careful selection of participants, and bracketing the researcher's personal experiences can be difficult to implement; the researcher must decide how and in what way their own knowledge will be introduced into the study. The approach also involves some understanding and identification of the broader philosophical assumptions.

Grounded theory research designs

Grounded theory is used in a research design when the goal is to inductively develop a theory "grounded" in data that has been systematically gathered and analyzed. Starting from the data collection, researchers identify characteristics, patterns, themes, and relationships, gradually forming a theoretical framework that explains relevant processes, actions, or interactions grounded in the observed reality. A grounded theory study goes beyond description; its objective is to generate a theory, an abstract analytical scheme of a process. A theory does not emerge "out of nothing" but is constructed from, and grounded in, clearly documented data collection. We suggest the following steps to follow a grounded theory approach in a research design:

Determine if grounded theory is the best for your research problem . Grounded theory is a good design when a theory is not already available to explain a process.

Develop questions that aim to understand how individuals experienced or enacted the process (e.g., What was the process? How did it unfold?). Data collection and analysis occur in tandem, so that researchers can ask more detailed questions that shape further analysis, such as: What was the focal point of the process (central phenomenon)? What influenced or caused this phenomenon to occur (causal conditions)? What strategies were employed during the process? What effect did it have (consequences)?

Gather relevant data about the topic in question . Data gathering involves questions that are usually asked in interviews, although other forms of data can also be collected, such as observations, documents, and audio-visual materials from different groups.

Carry out the analysis in stages . Grounded theory analysis begins with open coding, where the researcher forms codes that inductively emerge from the data (rather than preconceived categories). Researchers can thus identify specific properties and dimensions relevant to their research question.

Assemble the data in new ways and proceed to axial coding . Axial coding involves using a coding paradigm or logic diagram, such as a visual model, to systematically analyze the data. Begin by identifying a central phenomenon, which is the main category or focus of the research problem. Next, explore the causal conditions, which are the categories of factors that influence the phenomenon. Specify the strategies, which are the actions or interactions associated with the phenomenon. Then, identify the context and intervening conditions—both narrow and broad factors that affect the strategies. Finally, delineate the consequences, which are the outcomes or results of employing the strategies.

Use selective coding to construct a "storyline" that links the categories together. Alternatively, the researcher may formulate propositions or theory-driven questions that specify predicted relationships among these categories.

Develop and visually present a matrix that clarifies the social, historical, and economic conditions influencing the central phenomenon. This optional step encourages viewing the model from the narrowest to the broadest perspective.

Write a substantive-level theory that is closely related to a specific problem or population. This step is optional but provides a focused theoretical framework that can later be tested with quantitative data to explore its generalizability to a broader sample.

Allow theory to emerge through the memo-writing process, where ideas about the theory evolve continuously throughout the stages of open, axial, and selective coding.

The researcher should initially set aside any preconceived theoretical ideas to allow for the emergence of analytical and substantive theories. This is a systematic research approach, particularly when following the methodological steps outlined by Strauss and Corbin (1990). For those seeking more flexibility in their research process, the approach suggested by Charmaz (2006) might be preferable.

One of the challenges when using this method in a research design is determining when categories are sufficiently saturated and when the theory is detailed enough. To achieve saturation, discriminant sampling may be employed, in which additional information is gathered from individuals similar to those initially interviewed to verify that the theory applies to these new participants. Ultimately, the goal of grounded theory is to develop a theory that comprehensively describes the central phenomenon, causal conditions, strategies, context, and consequences.
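
As a purely illustrative sketch of how the products of open, axial, and selective coding might be organised, the Python snippet below arranges invented codes into the paradigm categories described above and assembles a one-sentence storyline. It is a data-structure illustration only, not a substitute for the interpretive work grounded theory requires; every code and category shown is hypothetical.

```python
# Minimal sketch of grounded theory coding products, loosely following the
# Strauss and Corbin-style paradigm described above. All content is invented.

open_codes = [
    "asks colleagues for advice", "double-checks dosages",
    "feels unprepared on night shifts", "relies on checklists",
]
print("Open codes:", open_codes)

# Axial coding: relate codes to a central phenomenon via the coding paradigm.
axial_model = {
    "central_phenomenon": "building confidence as a new nurse",
    "causal_conditions":  ["feels unprepared on night shifts"],
    "strategies":         ["asks colleagues for advice", "relies on checklists",
                           "double-checks dosages"],
    "context":            ["understaffed ward", "rotating schedules"],
    "consequences":       ["fewer medication errors", "growing independence"],
}

# Selective coding: a one-sentence storyline linking the categories together.
storyline = (
    f"When {', '.join(axial_model['causal_conditions'])}, new nurses respond by "
    f"{', '.join(axial_model['strategies'])}, which over time leads to "
    f"{', '.join(axial_model['consequences'])}."
)
print(storyline)
```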


Ethnographic research design

An ethnographic approach in research design involves the extended observation and data collection of a group or community. The researcher immerses themselves in the setting, often living within the community for long periods. During this time, they collect data by observing and recording behaviours, conversations, and rituals to understand the group's social dynamics and cultural norms. We suggest following these steps for ethnographic methods in a research design:

Assess whether ethnography is the best approach for the research design and questions. It's suitable if the goal is to describe how a cultural group functions and to delve into their beliefs, language, behaviours, and issues like power, resistance, and domination, particularly if there is limited literature due to the group’s marginal status or unfamiliarity to mainstream society.

Identify and select a cultural group for your research design. Choose one that has a long history together, forming distinct languages, behaviours, and attitudes. This group often might be marginalized within society.

Choose cultural themes or issues to examine within the group. Analyze interactions in everyday settings to identify pervasive patterns such as life cycles, events, and overarching cultural themes. Culture is inferred from the group members' words, actions, and the tension between their actual and expected behaviours, as well as the artifacts they use.

Conduct fieldwork to gather detailed information about the group’s living and working environments. Visit the site, respect the daily lives of the members, and collect a diverse range of materials, considering ethical aspects such as respect and reciprocity.

Compile and analyze cultural data to develop a set of descriptive and thematic insights. Begin with a detailed description of the group based on observations of specific events or activities over time. Then, conduct a thematic analysis to identify patterns or themes that illustrate how the group functions and lives. The final output should be a comprehensive cultural portrait that integrates both the participants' (emic) and the researcher's (etic) perspectives, potentially advocating for the group's needs or suggesting societal changes to better accommodate them.

Researchers engaging in ethnography need a solid understanding of cultural anthropology and the dynamics of sociocultural systems, which are commonly explored in ethnographic research. The data collection phase is notably extensive, requiring prolonged periods in the field. Ethnographers often employ a literary, quasi-narrative style in their narratives, which can pose challenges for those accustomed to more conventional social science writing methods.

Another potential issue is the risk of researchers "going native," where they become overly assimilated into the community under study, potentially jeopardizing the objectivity and completion of their research. It's crucial for researchers to be aware of their impact on the communities and environments they are studying.

Case study research design

The case study approach in a research design focuses on a detailed examination of a single case or a small number of cases. Cases can be individuals, groups, organizations, or events. Case studies are particularly useful for research designs that aim to understand complex issues in real-life contexts. The aim is to provide a thorough description and contextual analysis of the cases under investigation. We suggest following these steps in a case study design:

Assess if a case study approach suits your research questions . This approach works well when you have distinct cases with defined boundaries and aim to deeply understand these cases or compare multiple cases.

Choose your case or cases. These could involve individuals, groups, programs, events, or activities. Decide whether an individual or collective, multi-site or single-site case study is most appropriate, focusing on specific cases or themes (Stake, 1995; Yin, 2003).

Gather data extensively from diverse sources . Collect information through archival records, interviews, direct and participant observations, and physical artifacts (Yin, 2003).

Analyze the data holistically or in focused segments . Provide a comprehensive overview of the entire case or concentrate on specific aspects. Start with a detailed description including the history of the case and its chronological events then narrow down to key themes. The aim is to delve into the case's complexity rather than generalize findings.

Interpret and report the significance of the case in the final phase . Explain what insights were gained, whether about the subject of the case in an instrumental study or an unusual situation in an intrinsic study (Lincoln & Guba, 1985).

The investigator must carefully select the case or cases to study, recognizing that multiple potential cases could illustrate a chosen topic or issue. This selection process involves deciding whether to focus on a single case for deeper analysis or multiple cases, which may provide broader insights but less depth per case. Each choice requires a well-justified rationale for the selected cases. Researchers face the challenge of defining the boundaries of a case, such as its temporal scope and the events and processes involved. This decision in a research design is crucial as it affects the depth and value of the information presented in the study, and therefore should be planned to ensure a comprehensive portrayal of the case.

Important reminders when designing a research study

Qualitative and quantitative research designs are distinct in their approach to data collection and data analysis. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research prioritizes understanding the depth and richness of human experiences, behaviours, and interactions.

Qualitative methods in a research design must have internal coherence, meaning that all elements of the research project—research question, data collection, data analysis, findings, and theory—are well-aligned and consistent with each other. This coherence is especially crucial in inductive qualitative research, where the research process often follows a recursive and evolving path. Ensuring that each component of the research design fits seamlessly with the others enhances the clarity and impact of the study, making the research findings more robust and compelling. Whether it is a descriptive, explanatory, diagnostic, or correlational research design, coherence is an important element in both qualitative and quantitative research.

Finally, a good research design ensures that the research is conducted ethically and considers the well-being and rights of participants when managing collected data. The research design guides researchers in providing a clear rationale for their methodologies, which is crucial for justifying the research objectives to the scientific community. A thorough research design also contributes to the body of knowledge, enabling researchers to build upon past research studies and explore new dimensions within their fields. At the core of the design, there is a clear articulation of the research objectives. These objectives should be aligned with the underlying concepts being investigated, offering a concise method to answer the research questions and guiding the direction of the study with proper qualitative methods.

Carter, K. (1993). The place of a story in the study of teaching and teacher education. Educational Researcher, 22(1), 5-12, 18.

Charmaz, K. (2006). Constructing grounded theory. London: Sage.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124-130.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Ollerenshaw, J. A., & Creswell, J. W. (2000, April). Data analysis in narrative research: A comparison of two “restoring” approaches. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

van Manen, M. (1990). Researching lived experience: Human science for an action sensitive pedagogy. Ontario, Canada: University of Western Ontario.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage


Sacred Heart University Library

Organizing Academic Research Papers: Types of Research Designs


Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that your research problem determines the type of design you can use, not the other way around!

General Structure and Writing Style


Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus . New York University, Spring 2006; Trochim, William M.K. Research Methods Knowledge Base . 2006.

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem as unambiguously as possible. In social sciences research, obtaining evidence relevant to the research problem generally entails specifying the type of evidence needed to test a theory, to evaluate a program, or to accurately describe a phenomenon. However, researchers often begin their investigations far too early, before they have thought critically about what information is required to answer the study's research questions. Without attending to these design issues beforehand, the conclusions drawn risk being weak and unconvincing and, consequently, will fail to adequately address the overall research problem.

 Given this, the length and complexity of research designs can vary considerably, but any sound design will do the following things:

  • Identify the research problem clearly and justify its selection,
  • Review previously published literature associated with the problem area,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem selected,
  • Effectively describe the data which will be necessary for an adequate test of the hypotheses and explain how such data will be obtained, and
  • Describe the methods of analysis which will be applied to the data in determining whether or not the hypotheses are true or false.

Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus. New York University, Spring 2006.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, in which an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out (the "action" in action research), during which time pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of (or an implementable solution for) the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • A collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research rather than testing theories.
  • When practitioners use action research it has the potential to increase the amount they learn consciously from their experience. The action research cycle can also be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to practice.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional studies because the researcher takes on responsibilities for encouraging change as well as for research.
  • Action research is much harder to write up because you probably can’t use a standard format to report your findings effectively.
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action (e.g. change) and research (e.g. understanding) is time-consuming and complex to conduct.

Gall, Meredith. Educational Research: An Introduction. Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research. Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually apply to phenomena in the real world. It is a useful design when not much is known about a phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and extension of methods.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • The intense exposure to study of the case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association--a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order--to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness--a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs help researchers understand why the world works the way it does through the process of proving a causal link between variables and eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and therefore to establish which variable is the actual cause and which is the  actual effect.

Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed.  Thousand Oaks, CA: Pine Forge Press, 2007; Causal Research Design: Experimentation. Anonymous SlideShare Presentation ; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base . 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population from which the subject or representative member comes, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors  often relies on cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Because of the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36;  Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Study Design 101 . Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study . Wikipedia.
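
To illustrate the kind of rate-based comparison a cohort design supports, the following Python sketch computes the cumulative incidence in an exposed and an unexposed group followed over the same period, and the ratio between them. All counts are invented for illustration.

```python
# Hypothetical closed-cohort example: follow an exposed and an unexposed group
# for the same period, compute the cumulative incidence in each group, and
# compare them as a risk ratio. Every number here is made up.

exposed_total, exposed_cases = 500, 60        # e.g. workers exposed to a hazard
unexposed_total, unexposed_cases = 1000, 40   # comparable unexposed workers

incidence_exposed = exposed_cases / exposed_total        # 0.12
incidence_unexposed = unexposed_cases / unexposed_total  # 0.04

risk_ratio = incidence_exposed / incidence_unexposed     # 3.0

print(f"Incidence (exposed):   {incidence_exposed:.2%}")
print(f"Incidence (unexposed): {incidence_unexposed:.2%}")
print(f"Risk ratio:            {risk_ratio:.1f}")
```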

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and selection of groups based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike the experimental design where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • Provide only a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design, Application, Strengths and Weaknesses of Cross-Sectional Studies . Healthknowledge, 2009. Cross-Sectional Study . Wikipedia.
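
Because cross-sectional designs are often used to estimate prevalence, the short Python sketch below computes a prevalence estimate and a simple normal-approximation 95% confidence interval from invented survey counts.

```python
# Hypothetical cross-sectional snapshot: estimate the prevalence of an outcome
# from a single sample taken at one point in time. Numbers are invented.

import math

sample_size = 1200          # people surveyed at one time point
cases = 180                 # people found to have the outcome of interest

prevalence = cases / sample_size                       # 0.15
standard_error = math.sqrt(prevalence * (1 - prevalence) / sample_size)
ci_low = prevalence - 1.96 * standard_error
ci_high = prevalence + 1.96 * standard_error

print(f"Prevalence: {prevalence:.1%} "
      f"(95% CI {ci_low:.1%} to {ci_high:.1%})")
```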

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject.
  • Descriptive research is often used as a precursor to more quantitative research designs, the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999;  McNabb, Connie. Descriptive Research Methodologies . Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design , September 26, 2008. Explorable.com website.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “what causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter subject behaviors or responses.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed research studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs . School of Psychology, University of New England, 2000; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Trochim, William M.K. Experimental Design . Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research . Slideshare presentation.
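
Randomization is one of the defining features of a true experiment. The sketch below shows simple (unrestricted) random allocation of hypothetical participants to two parallel arms; real trials typically add allocation concealment and often use block or stratified randomization instead, so treat this only as an illustration of the basic idea.

```python
# Sketch of simple randomisation for a parallel two-arm experiment:
# each (hypothetical) participant is randomly allocated to the
# intervention or control group.

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 invented participant IDs

random.seed(42)            # fixed seed only so the example is reproducible
shuffled = participants[:]
random.shuffle(shuffled)

half = len(shuffled) // 2
allocation = {
    "intervention": sorted(shuffled[:half]),
    "control":      sorted(shuffled[half:]),
}

for arm, members in allocation.items():
    print(f"{arm:12s} (n={len(members)}): {', '.join(members)}")
```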

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when problems are in a preliminary stage of investigation.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions, and development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • Exploratory studies help establish research priorities.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value in decision-making.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research . Wikipedia.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute your hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as logs, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation; therefore, gaps need to be acknowledged.

Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. With longitudinal surveys, for example, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study and is sometimes referred to as a panel study.

  • Longitudinal data allow the analysis of duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research to explain fluctuations in the data.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • A large sample size and accurate sampling are needed to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study . Wikipedia.
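
Because a longitudinal (panel) design measures the same individuals repeatedly, change can be computed within each person rather than inferred from differences between separate samples. The toy Python sketch below uses invented scores from two measurement waves.

```python
# Toy longitudinal (panel) example: the same individuals are measured at two
# waves, so change is computed within each person. Scores are invented.

baseline =  {"A": 22, "B": 30, "C": 27, "D": 35}   # wave 1 measurements
follow_up = {"A": 25, "B": 28, "C": 31, "D": 36}   # wave 2, same participants

changes = {pid: follow_up[pid] - baseline[pid] for pid in baseline}
mean_change = sum(changes.values()) / len(changes)

for pid, delta in changes.items():
    print(f"Participant {pid}: change = {delta:+d}")
print(f"Mean within-person change: {mean_change:+.2f}")
```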

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe (data is emergent rather than pre-existing).
  • The researcher is able to collect a depth of information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observational research designs account for the complexity of group behaviors.
  • Reliability of data is low because seeing behaviors occur over and over again may be a time consuming task and difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is studied is altered to some degree by the very presence of the researcher, therefore skewing to some degree any data collected (the Heisenberg uncertainty principle).

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refine concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Chapter 4, Research Methodology and Design . Unisa Institutional Repository (UnisaIR), University of South Africa;  Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, D.C.: Falmer Press, 1994; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

In a sequential design, data are collected and analyzed in a series of stages or samples rather than all at once; the researcher examines each sample before deciding whether to collect another, continuing until enough information has been obtained.

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method. Useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce extensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses to use a very large sample size significant enough to represent a significant portion of the entire population. In this case, moving on to study a second or more sample can be difficult.
  • Because the sampling technique is not randomized, the design cannot be used to create conclusions and interpretations that pertain to an entire population. Generalizability from findings is limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Rebecca Betensky, Harvard University, Course Lecture Note slides; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Bovaird, James A. and Kevin A. Kupzyk. “Sequential Design.” In Encyclopedia of Research Design. Neil J. Salkind, ed. Thousand Oaks, CA: Sage, 2010; Sequential Analysis. Wikipedia.

How to choose your study design

Affiliation: Department of Medicine, Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia.

PMID: 32479703; DOI: 10.1111/jpc.14929

Research designs are broadly divided into observational studies (i.e. cross-sectional, case-control and cohort studies) and experimental studies (randomised controlled trials, RCTs). Each design has a specific role, and each has both advantages and disadvantages. Moreover, while the typical RCT is a parallel group design, there are now many variants to consider. It is important that both researchers and paediatricians are aware of the role of each study design, their respective pros and cons, and the inherent risk of bias with each design. While there are numerous quantitative study designs available to researchers, the final choice is dictated by two key factors. First, by the specific research question. That is, if the question is one of 'prevalence' (disease burden), the ideal is a cross-sectional study; if it is a question of 'harm', a case-control study; of prognosis, a cohort study; and of therapy, an RCT. Second, by what resources are available to you. This includes budget, time, feasibility regarding patient numbers, and research expertise. All these factors will severely limit the choice. While paediatricians would like to see more RCTs, these require a huge amount of resources, and in many situations will be unethical (e.g. potentially harmful intervention) or impractical (e.g. rare diseases). This paper gives a brief overview of the common study types; those embarking on such studies will need far more comprehensive, detailed sources of information.

Keywords: experimental studies; observational studies; research method.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

Research Methods Guide: Research Design & Method

Tutorial videos accompanying this guide cover research methods (sociology-focused) and qualitative vs. quantitative methods at introductory and advanced levels.

FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal. It depends on what subjects (and whom) you want to study. Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus. To answer these questions, you need to make a decision about how to collect your data. The most frequently used methods include:

  • Observation / Participant Observation
  • Surveys
  • Interviews
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider. An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time. Using a survey helps you collect more data quickly, yet it may lack details. So, you will need to consider the time you have for research and the balance between strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

Study designs: Part 1 – an overview and classification

Ranganathan, Priya; Aggarwal, Rakesh 1

Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India

1 Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

Address for correspondence: Dr. Priya Ranganathan, Department of Anaesthesiology, Tata Memorial Centre, Ernest Borges Road, Parel, Mumbai - 400 012, Maharashtra, India. E-mail: [email protected]

This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on “study designs,” we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

INTRODUCTION

Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem.

Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the nature of the question, the goal of research, and the availability of resources. Since the design of a study can affect the validity of its results, it is important to understand the different types of study designs and their strengths and limitations.

Some terms that are used frequently while classifying study designs are described in the following sections.

Variables

A variable represents a measurable attribute that varies across study units, for example, individual participants in a study, or at times even when measured in an individual person over time. Some examples of variables include age, sex, weight, height, health status, alive/dead, diseased/healthy, annual income, smoking yes/no, and treated/untreated.

Exposure (or intervention) and outcome variables

A large proportion of research studies assess the relationship between two variables. Here, the question is whether one variable is associated with or responsible for change in the value of the other variable. Exposure (or intervention) refers to the risk factor whose effect is being studied. It is also referred to as the independent or the predictor variable. The outcome (or predicted or dependent) variable develops as a consequence of the exposure (or intervention). Typically, the term “exposure” is used when the “causative” variable is naturally determined (as in observational studies – examples include age, sex, smoking, and educational status), and the term “intervention” is preferred where the researcher assigns some or all participants to receive a particular treatment for the purpose of the study (experimental studies – e.g., administration of a drug). If a drug had been started in some individuals but not in the others, before the study started, this counts as exposure, and not as intervention – since the drug was not started specifically for the study.

Observational versus interventional (or experimental) studies

Observational studies are those where the researcher is documenting a naturally occurring relationship between the exposure and the outcome that he/she is studying. The researcher does not do any active intervention in any individual, and the exposure has already been decided naturally or by some other factor. For example, looking at the incidence of lung cancer in smokers versus nonsmokers, or comparing the antenatal dietary habits of mothers of normal and low-birth-weight babies. In these studies, the investigator did not play any role in determining the smoking or dietary habit in individuals.

For an exposure to determine the outcome, it must precede the latter. Any variable that occurs simultaneously with or following the outcome cannot be causative, and hence is not considered as an “exposure.”

Observational studies can be either descriptive (nonanalytical) or analytical (inferential) – this is discussed later in this article.

Interventional studies are experiments where the researcher actively performs an intervention in some or all members of a group of participants. This intervention could take many forms – for example, administration of a drug or vaccine, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. For example, a study could randomly assign persons to receive aspirin or placebo for a specific duration and assess the effect on the risk of developing cerebrovascular events.

Descriptive versus analytical studies

Descriptive (or nonanalytical) studies, as the name suggests, merely try to describe the data on one or more characteristics of a group of individuals. These do not try to answer questions or establish relationships between variables. Examples of descriptive studies include case reports, case series, and cross-sectional surveys (please note that cross-sectional surveys may be analytical studies as well – this will be discussed in the next article in this series). For instance, a survey of dietary habits among pregnant women or a case series of patients with an unusual reaction to a drug would both be descriptive studies.

Analytical studies attempt to test a hypothesis and establish causal relationships between variables. In these studies, the researcher assesses the effect of an exposure (or intervention) on an outcome. As described earlier, analytical studies can be observational (if the exposure is naturally determined) or interventional (if the researcher actively administers the intervention).

Directionality of study designs

Based on the direction of inquiry, study designs may be classified as forward-direction or backward-direction. In forward-direction studies, the researcher starts with determining the exposure to a risk factor and then assesses whether the outcome occurs at a future time point. This design is known as a cohort study. For example, a researcher can follow a group of smokers and a group of nonsmokers to determine the incidence of lung cancer in each. In backward-direction studies, the researcher begins by determining whether the outcome is present (cases vs. noncases [also called controls]) and then traces the presence of prior exposure to a risk factor. These are known as case–control studies. For example, a researcher identifies a group of normal-weight babies and a group of low-birth weight babies and then asks the mothers about their dietary habits during the index pregnancy.
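To make the directionality concrete, here is a minimal Python sketch using a hypothetical 2 × 2 table for the smoking and lung cancer example; the counts are invented for illustration and are not taken from any real study. A cohort (forward-direction) analysis summarises such a table as a relative risk, whereas a case-control (backward-direction) analysis can only estimate an odds ratio.

```python
# Hypothetical 2x2 table (illustrative counts only):
#                 disease   no disease
# exposed            a=80        b=920
# unexposed          c=20        d=980
a, b, c, d = 80, 920, 20, 980

# Forward direction (cohort study): compare the risk of disease between exposure groups.
relative_risk = (a / (a + b)) / (c / (c + d))

# Backward direction (case-control study): compare the odds of prior exposure
# among cases and controls; only the odds ratio can be estimated from this design.
odds_ratio = (a * d) / (b * c)

print(f"RR = {relative_risk:.1f}, OR = {odds_ratio:.1f}")
```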

Prospective versus retrospective study designs

The terms “prospective” and “retrospective” refer to the timing of the research in relation to the development of the outcome. In retrospective studies, the outcome of interest has already occurred (or not occurred – e.g., in controls) in each individual by the time s/he is enrolled, and the data are collected either from records or by asking participants to recall exposures. There is no follow-up of participants. By contrast, in prospective studies, the outcome (and sometimes even the exposure or intervention) has not occurred when the study starts and participants are followed up over a period of time to determine the occurrence of outcomes. Typically, most cohort studies are prospective studies (though there may be retrospective cohorts), whereas case–control studies are retrospective studies. An interventional study has to be, by definition, a prospective study since the investigator determines the exposure for each study participant and then follows them to observe outcomes.

The terms “prospective” and “retrospective” can be confusing. Let us think of an investigator who starts a case–control study. To him/her, the process of enrolling cases and controls over a period of several months appears prospective. Hence, the use of these terms is best avoided. Or, at the very least, one must be clear that the terms relate to the work flow for each individual study participant, and not to the study as a whole.

Classification of study designs

Figure 1 depicts a simple classification of research study designs. The Centre for Evidence-based Medicine has put forward a useful three-point algorithm which can help determine the design of a research study from its methods section:[ 1 ]


  • Does the study describe the characteristics of a sample or does it attempt to analyze (or draw inferences about) the relationship between two variables? – If no, then it is a descriptive study, and if yes, it is an analytical (inferential) study
  • If analytical, did the investigator determine the exposure? – If no, it is an observational study, and if yes, it is an experimental study
  • If observational, when was the outcome determined? – at the start of the study (case–control study), at the end of a period of follow-up (cohort study), or simultaneously (cross-sectional study).
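As a rough illustration, the three-point algorithm above can be expressed as a small decision function. This is only a sketch; the function and parameter names are invented for the example and are not part of the cited algorithm.

```python
def classify_study(analyses_relationship: bool,
                   investigator_assigns_exposure: bool = False,
                   outcome_determined: str = "simultaneously") -> str:
    """Classify a study design from three questions about its methods section."""
    if not analyses_relationship:
        return "descriptive study"
    if investigator_assigns_exposure:
        return "experimental study (e.g. randomised controlled trial)"
    # Observational analytical study: the timing of outcome determination decides the subtype.
    return {
        "at start": "case-control study",
        "after follow-up": "cohort study",
        "simultaneously": "cross-sectional study",
    }[outcome_determined]

print(classify_study(True, False, "after follow-up"))  # -> cohort study
```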

In the next few pieces in the series, we will discuss various study designs in greater detail.

Conflicts of interest: There are no conflicts of interest.

Keywords: epidemiologic methods; research design; research methodology


How to... Design a research study

The design of a piece of research refers to the practical way in which the research was conducted according to a systematic attempt to generate evidence to answer the research question. The term "research methodology" is often used to mean something similar; however, different writers use both terms in slightly different ways: some writers, for example, use the term "methodology" to describe the tools used for data collection, which others (more properly) refer to as methods.

On this page:

  • What is research design?
  • Sampling techniques
  • Quantitative approaches to research design
  • Qualitative approaches to research design
  • Planning your research design

What is research design?

The following are some definitions of research design by researchers:

Design is the deliberately planned 'arrangement of conditions for analysis and collection of data in a manner that aims to combine relevance to the research purpose with economy of procedure'.

(Selltiz C.S., Wrightsman L.S. and Cook S.W. (1981) Research Methods in Social Relations, Holt, Rinehart & Winston, London; quoted in Jankowicz, A.D., Business Research Methods, Thomson Learning, p. 190.)

The idea behind a design is that different kinds of issues logically demand different kinds of data-gathering arrangement so that the data will be:

  • relevant to your thesis or the argument you wish to present;
  • an adequate test of your thesis (i.e. unbiased and reliable);
  • accurate in establishing causality, in situations where you wish to go beyond description to provide explanations for whatever is happening around you;
  • capable of providing findings that can be generalised to situations other than those of your immediate organisation.

(Jankowicz, A.D.,  Business Research Methods  , Thomson Learning, p. 190)

The design of the research involves consideration of the best method of collecting data to provide a relevant and accurate test of your thesis, one that can establish causality if required (see  What type of study are you undertaking? ), and one that will enable you to generalise your findings.

Design of the research should take account of the following factors, which are briefly discussed below with links to subsequent pages or other parts of the site where there is fuller information.

What is your theoretical and epistemological perspective?

Although management research is much concerned with observation of humans and their behaviour, to a certain extent the epistemological framework derives from that of science. Positivism assumes the independent existence of measurable facts in the social world, and researchers who assume this perspective will want to have a fairly exact system of measurement. On the other hand, interpretivism assumes that humans interpret events and researchers employing this method will adopt a more subjective approach.

What type of study are you undertaking?

Are you conducting an exploratory study, obtaining an initial grasp of a phenomenon, or a descriptive study, providing a profile of a topic or institution?

Karin Klenke provides an exploratory study of issues of gender in management decisions in  Gender influences in decision-making processes in top management teams  ( Management Decision , Volume 41 Number 10)

Damien McLoughlin provides a descriptive study of action learning as a case study in  There can be no learning without action and no action without learning  in ( European Journal of Marketing , Volume 38 Number 3/4)

Or it can be explanatory, examining the causal relationship between variables: this can include the testing of hypotheses or examination of causes:

Martin  et al.  examined ad zipping and repetition in  Remote control marketing: how ad fast-forwarding and ad repetition affect consumers  ( Marketing Intelligence & Planning , Volume 20 Number 1) with a number of hypotheses e.g. that people are more likely to remember an ad that they have seen repeatedly.

What is your research question?

The most important issue here is that the design you use should be appropriate to your initial question. Implicit within your question will be issues of size, breadth, the relationship between variables, how easy it is to measure variables, etc.

The two different questions below call for very different types of design:

The example  Dimensions of library anxiety and social interdependence: implications for library services  (Jiao and Onwuegbuzie,  Library Review , Volume 51 Number 2) looks at attitudes and the relationship between variables, and uses very precise measurement instruments in the form of two questionnaires, with 43 and 22 items respectively.

In the example  Equity in Corporate Co-branding  (Judy Motion  et al.,  European Journal of Marketing, Volume 37 Number 7), the research questions posit a need to describe rather than to link variables, and the methodology used is one of discourse theory, which involves looking at material within the context of its use by the company.

What sample size will you base your data on?

The sample is the source of your data, and it is important to decide how you are going to select it.

See  Sampling techniques .

What research methods will you use and why?

We referred above to the distinction between methods and methodology. There are two main approaches to methodology – qualitative and quantitative.

The two main approaches to methodology

Quantitative approaches:

  • typically use surveys and questionnaires
  • are objective
  • involve the researcher as, ideally, an objective observer
  • may focus on cause and effect
  • require a relatively large sample
  • have the disadvantage that they may force people into categories, and cannot go into much depth about subjects and issues.

Qualitative approaches:

  • typically use interviews, observation and focus groups
  • are subjective
  • require more involvement and interpretation on the part of the researcher
  • focus on understanding of phenomena in their social, institutional, political and economic context
  • require a smaller sample
  • have the disadvantage that they focus on a few individuals, and may therefore be difficult to generalise.

For more detail on each of the approaches, see  Quantitative approaches to design  and  Qualitative approaches to design  later in this feature.

Note, you do not have to stick to one methodology (although some writers recommend that you do). Combining methodologies is a matter of seeing which part of the design of your research is better suited to which methodology.

How will you triangulate your research?

Triangulation refers to the process of ensuring that any defects in a particular methodology are compensated for by the use of another at appropriate points in the design. For example, if you carry out a quantitative survey and need more in-depth information about particular aspects of the survey, you may decide to use in-depth interviews, a qualitative method.

Here are a couple of useful articles to read which cover the issue of triangulation:

  • Combining quantitative and qualitative methodologies in logistics research  by John Mangan, Chandra Lalwani and Bernard Gardner ( International Journal of Physical Distribution & Logistics Management , Volume 34 Number 7) looks at ways of combining methodologies in a particular area of research, but much of what they say is generally applicable.
  • Quantitative and qualitative research in the built environment: application of "mixed" research approach  by Dilanthi Amaratunga, David Baldry, Marjan Sarshar and Rita Newton ( Work Study , Volume 51 Number 1) looks at the relative merits of the two research approaches, and despite reference to the built environment in the title acts as a very good introduction to quantitative and qualitative methodology and their relative research literatures. The section on triangulation comes under the heading 'The mixed (or balanced) approach'. 

What steps will you take to ensure that your research is ethical?

Ethics in research is a very important issue. You should design the research in such a way that you take account of such ethical issues as:

  • informed consent (have the participants had the nature of the research explained to them)?
  • checking whether you have permission to transcribe conversations with a tape recorder
  • always treating people with respect, consideration and concern.

How will you ensure the reliability of your research?

Reliability

This is about the replicability of your research and the accuracy of the procedures and research techniques. Will the same results be repeated if the research is repeated? Are the measurements of the research methods accurate and consistent? Could they be used in other similar contexts with equivalent results? Would the same results be achieved by another researcher using the same instruments? Is the research free from error or bias on the part of the researcher, or the participants? (E.g. do the participants say what they believe the management, or the researcher, wants? For example, in a survey done on some course material, that on a mathematical module received glowing reports – which led the researcher to wonder whether this had anything to do with the author being the Head of Department!)

Validity

How successfully has the research actually achieved what it set out to achieve? Can the results of the study be transferred to other situations? Does x really cause y; in other words, is the researcher correct in maintaining a causal link between these two variables? Is the research design sufficiently rigorous, and have alternative explanations been considered? Have the findings really been accurately interpreted? Have other events intervened which might impact on the study, e.g. a large-scale redundancy programme? (For example, in an evaluation of the use of CDs for self study with a world-wide group of students, it was established that some groups had not had sufficient explanation from the tutors as to how to use the CD. This could have affected their rather negative views.)

Generalisability

Are the findings applicable in other research settings? Can a theory be developed that can apply to other populations? For example, can a particular study about dissatisfaction amongst lecturers in a particular university be applied generally? This is particularly applicable to research which has a relatively wide sample, as in a questionnaire, or which adopts a scientific technique, as with the experiment.

Transferability

Can the research be applied to other situations? Particularly relevant when applied to case studies.

In addition, each of the sections in this feature on quantitative and qualitative approaches to research design contain notes on how to ensure that the research is reliable.

Sampling techniques

Some basic definitions

In order to answer a particular research question, the researcher needs to investigate a particular area or group, to which the conclusions from the research will apply. This may comprise a geographical location such as a city, an industry (for example the clothing industry), an organisation/group of organisations such as a particular firm/type of firm, a particular group of people defined by occupation (e.g. student, manager etc.), consumption of a particular product or service (e.g. users of a shopping mall, a new library system etc.), gender etc. This group is termed the  research population .

The  unit of analysis  is the level at which the data is aggregated: for example, it could be a study of individuals as in a study of women managers, of dyads, as in a study of mentor/mentee relationships, of groups (as in studies of departments in an organisation), of organisations, or of industries.

Unless the research population is very small, we need to study a subset of it, which needs to be general enough to be applicable to the whole. This is known as a  sample , and the selection of components of the sample that will give a representative view of the whole is known as  sampling technique  . It is from this sample that you will collect your data.

In order to draw up a sample, you need first to identify the total number of people in the research population. This information may be available in a telephone directory, a list of company members, or a list of companies in the area. It is known as a  sampling frame .

In  Networking for female managers' career development  (Margaret Linehan,  Journal of Management Development , Volume 20 Number 10), the sampling technique is described as follows:

"A total of 50 senior female managers were selected for inclusion in this study. Two sources were used for targeting interviewees, the first was a listing of Fortune 500 top companies in England, Belgium, France and Germany, and, second, The Marketing Guide to Ireland. The 50 managers who participated in the study were representative of a broad range of industries and service sectors including: mining, software engineering, pharmaceutical manufacturing, financial services, car manufacturing, tourism, oil refining, medical and state-owned enterprises."

Sampling may be done on either a  probability  or a  non-probability  basis. This is an important research design decision, and one which will depend on such factors as whether the theory behind the research is positivist or idealist, whether qualitative or quantitative methods are used etc. Note that the two methods are not mutually exclusive, and may be used for different purposes at different points in the research, say purposive sampling to find out key attitudes, followed by a more general, random approach.

Note that there is a very good section from an online textbook on sampling: see William Trochim's  Research Methods Knowledge Base .

Probability sampling

In  probability  sampling, each member of a given research population has an equal chance of being selected. It involves, literally, the selection of respondents at random from the sampling frame, having decided on the sample size. This type of sampling is more likely if the theoretical orientation of the research is  positivist , and the methodology used is likely to be  quantitative .

Probability sampling can be:

  • random  – the selection is completely arbitrary, and a given number of the total population is selected completely at random.
  • systematic  – every  nth element  of the population is selected. This can cause a problem if the interval of selection means that the elements share a characteristic: for example, if every fourth seat of a coach is selected it is likely that all the seats will be beside a window.
  • stratified random  – the population is divided into segments, for example, in a University, you could divide the population into academic, administrators, and academic-related (related professional staff). A random number of each group is then selected. It has the advantage of allowing you to categorise your population according to particular features. A.D. Jankowicz provides useful advice (Business Research Methods, Thomson Learning, 2000, p. 197).

The concept of fit in services flexibility and research: an empirical approach  (Antonio J Verdú-Jover  et al. ,  International Journal of Service Industry Management , Volume 15 Number 5) uses stratified sampling: the study concentrates on three sectors within the EU, chemicals, electronics and vehicles, with the sample being stratified within this sector.

  • cluster  – a particular subgroup is chosen at random. The subgroup may be based on a particular geographical area; for example, you may decide to sample particular areas of the country.
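The probability-based techniques above can be illustrated with Python's standard random module. This is a minimal sketch on an invented sampling frame of 1,000 units; the strata and sample sizes are assumptions made for the example only.

```python
import random

population = list(range(1, 1001))   # a hypothetical sampling frame of 1,000 units
n = 50                              # desired sample size

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, n)

# Systematic sampling: every k-th element after a random starting point.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k][:n]

# Stratified random sampling: sample each (hypothetical) stratum in proportion to its size.
strata = {"academic": list(range(1, 601)),
          "administrative": list(range(601, 901)),
          "academic-related": list(range(901, 1001))}
stratified = {name: random.sample(members, round(n * len(members) / len(population)))
              for name, members in strata.items()}
```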

Non-probability sampling

Here, the population does not have an equal chance of being selected; instead, selection happens according to some factor such as:

  • convenience/accidental  – being present at a particular time e.g. at lunch in the canteen. This is an easy way of getting a sample, but may not be strictly accurate, because the factor you have chosen is based on your convenience rather than on a true understanding of the characteristics of the sample.

In  "Saying is one thing; doing is another": the role of observation in marketing research  ( Qualitative Market Research: An International Journal , Volume 2 Number 1), Matthews and Boote use a two-stage sampling process, with convenience sampling followed by time sampling: see their methodology.

  • "key informant technique" – i.e. people with specialist knowledge
  • using people at selected points in the organisational hierarchy 
  • snowball, with one person being approached and then suggesting others.

In "The benefits of the implementation of the ISO 9000 standard: empirical research in 288 Spanish companies", a sample was selected based on all certified companies in a particular area, because this was where the highest number of certified companies could be found.

  • quota  – the assumption is made that there are subgroups in the population, and a quota of respondents is chosen to reflect this diversity. This subgroup should be reasonably representative of the whole, but care should be taken in drawing conclusions for the whole population. For example, a quota sample taken in New York State would not be representative of the whole of the United States.

In  Monitoring consumer confidence in food safety: an exploratory study , de Jonge  et al.  use quota sampling, with age, gender, household size and region as selection variables in a food safety survey. Read about the methodology under Materials and methods.

Non-probability sampling methods are more likely to be used in qualitative research, where the greater degree of collaboration with the respondents affords the opportunity for more detailed data gathering. The researcher is more likely to be involved in the process and be adopting an  interpretivist theoretical  stance.

Calculating the sample size

In purposive sampling, this will be determined by judgement; in other more random types of sample it is calculated as a  proportion  of the sampling frame, the key criterion being to ensure that it is representative of the whole. (E.g. 10 per cent is fine for a large population, say over 1000, but for a small population you would want a larger proportion.)

If you are using stratified sampling you may need to adjust your strata and collapse into smaller strata if you find that some of your sample sizes are too small.
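As a very rough sketch of the rule of thumb described above (a proportion of the sampling frame, with a larger proportion for small populations), the snippet below is illustrative only; a formal sample-size calculation would also take account of expected effect size, variability and the required confidence level, and the 10 per cent and minimum-of-30 figures are assumptions for the example.

```python
def sample_size(frame_size: int, proportion: float = 0.10, minimum: int = 30) -> int:
    """Sample size as a proportion of the sampling frame, with a floor for small frames."""
    return max(round(frame_size * proportion), min(minimum, frame_size))

print(sample_size(5000))  # 500 -> 10 per cent is adequate for a large frame
print(sample_size(120))   # 30  -> a small frame needs a larger proportion
```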

The response rate

It is important to keep track of the response rate against your sample frame. If you are depending on postal questionnaires, you will need to plan into your design time to follow up the questionnaires. What is considered to be a good response rate varies according to the type of survey: if you are, say, surveying managers, then a good response would be 50 per cent; for consumer surveys, the response rate is likely to be lower, say 10 to 20 per cent.

Quantitative approaches to research design

The thing that characterises quantitative research is that it is objective. The assumption is that facts exist totally independently and the researcher is a totally  objective  observer of situations, and has no power to influence them. As such, it probably starts from a positivist or empiricist position.

The research design is based on one iteration in collection of the data: the categories are isolated prior to the study, and the design is planned out and generally not changed during the study (as it may be in qualitative research).

What is my research question? What variables am I interested in exploring?

It is usual to start your research by carrying out a  literature review , which should help you formulate a research question.

Part of the task of the above is to help you determine what  variables  you are considering. What are the key variables for your research and what is the relationship between them – are you looking to  explore  issues, to  compare  two variables or to look at  cause and effect ?

The Dutch heart health community intervention "Hartslag Limburg": evaluation design and baseline data  (Gaby Ronda  et al. ,  Health Education , Volume 103 Number 6) describes a trial of a cardiovascular prevention programme which indicated the importance of its further implementation. The key variables are the types of health related behaviours which affect a person's chance of heart disease.

The following studies compare variables:

Service failures away from home: benefits in intercultural service encounters  (Clyde A Warden  et al. ,  International Journal of Service Industry Management , Volume 14 Number 4) compares service encounters (the independent variable) inside and outside Taiwan (the dependent variable) in order to look at certain aspects of 'critical incidents' in intercultural service encounters.

The concept of fit in services flexibility and research: an empirical approach  (Antonio J Verdú-Jover  et al. ,  International Journal of Service Industry Management , Volume 15 Number 5) looks at managerial flexibility in relation to different types of business, service and manufacturing.

They can also look at cause and effect:

In  Remote control marketing: how ad fast-forwarding and ad repetition affect consumers  (Brett A.S. Martin  et al. ,  Marketing Intelligence & Planning , Volume 20 Number 1), the authors look at two variables associated with advertising, notably zipping and fast forwarding, and in their effect on a third variable, consumer behaviour - i.e. ability to remember ads. Furthermore, it looks at the interaction between the first two variables - i.e. whether they interact on one another to help increase recall.

What is the hypothesis?

It is usual with quantitative research to proceed from a particular hypothesis. The object of research would then be to test the hypothesis.

In the example quoted above,  Remote control marketing: how ad fast-forwarding and ad repetition affect consumers , the researchers decided to explore a neglected area of the literature: the interaction between ad zipping and repetition, and came up with three hypotheses:

The influence of zipping H1 . Individuals viewing advertisements played at normal speed will exhibit higher ad recall and recognition than those who view zipped advertisements.

Ad repetition effects H2 . Individuals viewing a repeated advertisement will exhibit higher ad recall and recognition than those who see an advertisement once.

Zipping and ad repetition H3 . Individuals viewing zipped, repeated advertisements will exhibit higher ad recall and recognition than those who see a normal speed advertisement that is played once.

What are the appropriate measures to use?

It is very important, when designing your research, to understand  what  you are measuring. This will call for a close examination of the issues involved: is your measure suitable to the hypothesis and research question under consideration? The type of scale you will use will dictate the statistical procedure which you can use to analyse your data, and it is important to have an understanding of the latter at the outset in order to obtain the correct level of analysis, and one that will throw the best light on your research question, and help test your hypothesis.

It is also important to understand what type of data you are trying to collect. Do you want to collect data that relates simply to different categories, for example, men and women (as in, say, differences in decision-making between men and women managers), or do you want to rank the data in some way? Choices about the nature of the data again dictate the type of statistical analysis.

Data can be categorised as follows:

  • Nominal – Representing particular categories, e.g. men or women.
  • Ordinal – Ranked in some way such as order of passing a particular point in a shopping centre.
  • Interval – Ranked according to the interval between the data, which remains the same. Most typical of this type of data is temperature.
  • Ratio – Interval data with a true zero point, so that ratios between values are meaningful; for example, length, weight or income.
  • Scalar – Data measured on a scale (for example, an attitude scale) where the intervals between points are not strictly quantifiable.

Note that some of the above categories, especially 'interval' and 'ratio' are drawn from a scientific model which assumes exact measurement of data (temperature, length etc.). In management research, you are unlikely to want to or be able to apply such a high degree of exactitude, and are more likely to be measuring less exact criteria which do not have an exact interval between them.
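The categories above can be illustrated with a small, invented dataset in Python (using pandas); the values are hypothetical, and the point is simply that the level of measurement determines which summaries are meaningful.

```python
import pandas as pd

responses = pd.DataFrame({
    "gender": ["F", "M", "F", "F"],              # nominal: categories only
    "queue_position": [3, 1, 4, 2],              # ordinal: rank order
    "temperature_c": [21.5, 19.0, 22.0, 20.5],   # interval: equal intervals, no true zero
    "annual_income": [42000, 55000, 38000, 61000]  # ratio: true zero, ratios meaningful
})

# Nominal data support only counts and modes; means are meaningless.
print(responses["gender"].value_counts())
# Ordinal data can be ordered and summarised with a median.
print(responses["queue_position"].median())
# Interval and ratio data support means and differences.
print(responses[["temperature_c", "annual_income"]].mean())
```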

Here are some examples of use of data in management research. This one illustrates the use of different categories:

The concept of fit in services flexibility and research: an empirical approach  (see above) uses an approach which itemises the different aspects which the researchers wished to measure: flexibility mix, performance, and the firm's general data.

This one looks at categories and also at ranked data (ordinal):

In  Remote control marketing: how ad fast-forwarding and ad repetition affect consumers  (also see above), the measure involved a 2 (speed of ad presentation: normal, fast-forwarded) × 2 (repetition: none, one repetition) between-subjects factorial design.

The following examples look at measures on a scale, which may relate to tangible factors such as frequency, or more intangible ones which relate to attitude or opinion:

How many holidays do you take in a year?

One __  Between 2 and 5 __  Between 5 and 10 __  More than 10 __

Tick the option which most agrees with your views.

Navigating my way around the CD was:

Very easy __  Easy __  Neither easy nor hard __  Hard __  Very hard __

The latter type of data is very common in management research and is known as scalar data. A very common measure for such data is known as the Likert scale:

Strongly agree __________ Agree __________ Neither agree nor disagree __________ Disagree __________ Strongly disagree __________
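A common way of handling such responses at the analysis stage is to code them numerically. The snippet below is a minimal, hypothetical example in Python (pandas); the 1-to-5 coding scheme is an assumption, and because Likert data are ordinal, the median and frequency counts are usually safer summaries than the mean.

```python
import pandas as pd

# Coding Likert-scale responses numerically (1 = Strongly disagree ... 5 = Strongly agree)
likert_map = {"Strongly disagree": 1, "Disagree": 2,
              "Neither agree nor disagree": 3, "Agree": 4, "Strongly agree": 5}

answers = pd.Series(["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"])
scores = answers.map(likert_map)

print(scores.median())        # the median is the safer summary for ordinal data
print(scores.value_counts())  # frequency distribution of responses
```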

How will I analyse the data?

Quantitative data are invariably analysed by some sort of statistical means, such as a t-test, a chi-square test, cluster analysis, etc. It is very important to decide at the planning stage what your method of analysis will be: this will in turn affect your choice of measure. Both your analysis and measure should be suitable to test your hypothesis.

You also need to consider what type of package you will need to analyse your data. It may be sufficient to enter it into an Excel spreadsheet, or you may wish to use a statistical package such as SPSS or Minitab.
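If you are working in Python rather than SPSS or Minitab, a basic comparison of two groups might look like the sketch below (using scipy); the job-satisfaction scores are invented for illustration, and the choice of test still has to match your measures and hypothesis.

```python
from scipy import stats

# Hypothetical job-satisfaction scores (1-5 Likert) from two companies
company_a = [4, 5, 3, 4, 4, 5, 3, 4]
company_b = [3, 2, 4, 3, 3, 2, 3, 4]

# Independent-samples t-test: do the mean scores differ between the two groups?
t_stat, p_value = stats.ttest_ind(company_a, company_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests a difference in means
```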

What are the instruments used in quantitative research?

Or, put more simply, what methods will you use to collect your data?

In scientific research, it is possible to be reasonably precise by generating experiments in laboratory conditions. Whilst the  field experiment  has a place in management research, as does  observation , the most usual instrument for producing quantitative data is the  survey , most often carried out by means of a  questionnaire .

You will find numerous examples of questionnaires and surveys in research published by Emerald, as you will in any database of management research. Questionnaires will be discussed at a later stage but here are some key issues:

  • It is important to know exactly what questions you want answers to. A common failing is to realise, once you have got the questionnaire back, that you really need answers to a question which you never asked. Thus the questionnaire should be rigorously researched and the questions phrased as precisely as possible.
  • You are more likely to get a response if you give people a reason to respond - commercial companies sometimes offer a prize, which may not be possible or appropriate if you are a researcher in a university, but it is usual in that case to give the reason behind your research, which gives your respondent a context. Even more motivational is the ease with which the questionnaire can be filled in.
  • How many responses will I need? This concerns the eventual size of your dataset and depends upon the degree of complexity of your planned analysis, how you are treating your variables (for example, if you are wanting to show the effect of a variable, you will need a larger response size, likewise if you are showing changes in variables).

Other instruments that are used in quantitative research to generate data are experiments, historical records and documents, and observation.

Note that some authors claim that for a design to be a  true experiment , items must be randomly assigned to groups; if there is no random assignment but there is some sort of control group or multiple measures, then it may be  quasi-experimental . If your survey fits neither of these descriptions, it may according to these authors be sufficient for descriptive purposes, but not if you seek to establish a causal relationship.

For more information on types of design, see William Trochim's Research Methods Knowledge Base section on  types of design .

What are the advantages and drawbacks of quantitative research?

The main advantage of quantitative research is that it is easy to determine its rigour: because of the objectivity of quantitative studies, it is easy to replicate them in another situation. For example, a well-constructed questionnaire can be used to analyse job satisfaction in two different companies; likewise, an observation studying consumer behaviour in a shopping centre can take place in two different such centres.

Quantitative methods are also good at obtaining a good deal of reliable data from a large number of sources. Their drawback is that they are heavily dependent on the reliability of the instrument: that is, in the case of the questionnaire, it is vital to ask the right questions in the right way. This in turn is dependent upon having sufficient information about a situation, which is not always possible. In addition, quantitative studies may generate a large amount of data, but the data may lack depth and fail to explain complex human processes such as attitudes to organisational change, or how learning takes place.

For example, a quantitative study on a piece of educational software may show that on the whole people felt that they had learnt something, but may not necessarily show how they learnt, which an observation could.

For this reason, quantitative methods are often used in conjunction with qualitative methods: for example, qualitative methods of interviewing may be used as a way of finding out more about a situation in order to draw up an informed quantitative instrument; or to explore certain issues which have appeared in the quantitative study in greater depth.

Qualitative approaches to research design

Qualitative research operates from a different epistemological perspective from quantitative research, which is essentially objective. It is a perspective that acknowledges the essential difference between the social world and the scientific one, recognising that people do not always observe the laws of nature, but rather comprise a whole range of feelings, observations and attitudes which are essentially subjective in nature. The theoretical framework is thus likely to be interpretivist or realist. Indeed, the researcher and the research instrument are often combined, with the former being the interviewer, or observer – as opposed to quantitative studies where the research instrument may be a survey and the subjects may never see the researcher.

In an  interview for Emerald ,  Professor Slawomir Magala , Editor of the  Journal of Organizational Change Management , has this to say about qualitative methods:

"We follow the view that the social construction of reality is personal, experienced by individuals and between individuals – in fact, the interactions which connect us are the building blocks of reality, and there is much meaning in the space between individuals."

As opposed to the statistical reliance of quantitative research, data from qualitative research is based on observation and words, and analysis is based on interpretation and pattern recognition rather than statistical analysis.

Miles and Huberman list the following as typical criteria of qualitative research:

  • Intense and prolonged contact in the field.
  • Designed to achieve a holistic or systemic picture.
  • Perception is gained from the inside based on actors' understanding.
  • Little standardised instrumentation is used.
  • Most analysis is done with words.
  • There are multiple interpretations available in the data.

Miles, M. and Huberman, A.M. (1994) Qualitative Data Analysis: An Expanded Sourcebook , Sage, London

To what types of research questions is qualitative research relevant?

Qualitative research is best suited to the types of questions which require exploration of data  in depth  over a not particularly large sample. For example, it would be too time-consuming to ask questions such as "Please describe in detail your reaction to colour x" to a large number of people; it would be more appropriate simply to ask "Do you like colour x?" and give people a "yes/no" option. By asking the former question to a smaller number of people, you would get a more detailed result.

Qualitative research is also best suited to  exploratory  and  comparative  studies; to a more limited extent, it can also be used for  "cause-effect"  type questions, providing these are fairly limited in scope.

One of the strengths of qualitative research is that it allows the researcher to gain an in-depth perspective, and to grapple with complexity and ambiguity. This is what makes it suitable to analysis of  particular  groups or situations, or unusual events.

What is the relationship of qualitative research to hypotheses?

Qualitative research is usually inductive: that is, researchers gather data, and then formulate a hypothesis which can be applied to other situations.

In fact, one of the strengths of qualitative research is that it can proceed from a relatively small understanding of a particular situation, and generate new questions during the course of data collection, as opposed to needing to have all the questions set out beforehand. Indeed, it is good practice in qualitative research to go into a situation as free from preconceptions as possible.

How will you analyse the data?

There is not the same need with qualitative research to determine the measure and the method of analysis at an early stage of the research process, mainly because there are no standard ways of analysing data as there are for quantitative research: it is usual to go with whatever is appropriate for the research question. However, because qualitative data usually involves a large amount of transcription (e.g. of taped interviews, videos of focus groups etc.) it is a good idea to have a plan of how this should be done, and to allow time for the transcription process.

There are a couple of attested methods of qualitative data analysis:  content analysis , which involves looking at emerging patterns, and  grounded analysis , which involves going through a number of guided stages and which is closely linked to  grounded theory .

What are the main instruments of qualitative research?

Or put another way, what are the main methods used to collect data? These can be organised according to their methodology (note, the following is not an exhaustive list, for which you should consult a good book on qualitative research):

Ethnographic methods

As the name suggests, this methodology derives from anthropology and involves observing people as a participant within their social and cultural system. Most common methods of data collection are:

  • Interviewing, which means discussions with people either on the phone, by email or in person when the purpose is to collect data which is by its nature unquantifiable and more difficult to analyse by statistical means, but which provides in-depth information. The interview can be either:  Structured , which means that the interviewer has a set number of questions.  Semi-structured , which means that the interviewer has a number of questions or a purpose, but the interview can still go off in unanticipated directions.
  • Focus groups, which is where a group of people are assembled at one time to give their reaction to a product, or to discuss an issue. There is usually some sort of facilitation which involves either guided discussion or some sort of product demonstration.
  • Participant observation – the researcher observes the behaviour of people in the organisation: their language, actions, interactions, etc.

For some examples of participant observation, see Methods of empirical research, and for examples of interview technique, see Techniques of data collection and analysis.

Historical analysis

This is, literally, the analysis of historical documents of a particular company, industry, etc. It is important to understand exactly what your focus is, and also which historical school or theoretical perspective you are drawing on.

Grounded theory

This is an essentially inductive approach, applied when an understanding of a particular phenomenon is sought. A distinctive feature is that the design of the research has several iterations: initial exploration is followed by a theory, which is then tested.

In Grounded theory methodology and practitioner reflexivity in TQM research (International Journal of Quality & Reliability Management, Volume 18 Number 2), Leonard and McAdam use grounded theory to explore TQM, on the grounds that quantitative methods "fail to give deep insights and rich data into TQM in practice within organizations", and that it is much more appropriate to listen to the individual experiences of participants.

Action research

This is a highly participative form of research, carried out in collaboration with those involved in a particular process, and is often concerned with some sort of change.

Narrative methods

This is when the researcher listens to the stories of people in the organisation and triangulates them against official documents.

Discourse theory

This methodology draws on a theory of language in which meaning is not fixed but is negotiated through social context.

Helen Francis in  The power of "talk" in HRM-based change  ( Personnel Review , Volume 31 Number 4) describes her use of discourse theory as follows:

"The approach to discourse analysis drew upon Fairclough's seminal work in which discourse is treated as a form of social practice and meaning is something that is essentially fluid and negotiated rather than being authored individually (Fairclough, 1992, 1995).

"For Fairclough (1992, 1995) the analysis of discursive events is three dimensional and includes simultaneously a piece of text, an instance of discursive practice, and an instance of social practice. Text refers to written and spoken language in use, while "discursive practices" allude to the processes by which texts are produced and interpreted. The social practice dimension refers to the institutional and organisational factors surrounding the discursive event and how they might shape the nature of the discursive practice.

"For the purposes of this research, the method of analysis included a description of the language text and how it was produced or interpreted amongst managers and their subordinates. Particular emphasis was placed on investigating the import of metaphors that are characteristic of HRM, and the introduction of HRM-based techniques adopted by change leaders in their attempt to privilege certain themes and issues over others."

Fairclough, N., 1992,  Discourse and Social Change , Polity Press, Cambridge.

Fairclough, N., 1995,  Critical Discourse Analysis: Papers in the Critical Study of Language , Longman, London.

Discourse theory can be applied to the written as well as the spoken word and can be used to analyse marketing literature as in the following example:

Equity in corporate co-branding: the case of Adidas and the all-blacks  by Judy Motion  et al.  ( European Journal of Marketing , Volume 37 Number 7), where discourse theory is used to analyse branding messages.

How rigorous is qualitative research?

It is often considered harder to demonstrate the rigour of qualitative research, simply because it may be harder to replicate the conditions of the study, and apply the data in other similar circumstances. The rigour may partly lie in the ability to generate a theory which can be applied in other situations, and which takes our understanding of a particular area further.

Rigour in qualitative research is greatly aided by:

  • confirmability – this does not necessarily mean that someone else would reach the same conclusion, but rather that there is a clear audit trail between your data and your interpretation, and that interpretations are based on a wide range of data (for example, from several interviews rather than just one). (This is related to  triangulation , see below.)
  • authenticity – are you drawing on a sufficiently wide range of rich data? Do the interpretations ring true? Have you considered rival interpretations? Do your informants agree with your interpretation?

In Cultural assumptions in career management: practice implications from Germany (Hansen and Willcox, Career Development International, Volume 2 Number 4), the main method used is ethnographic interviews, and findings are verified by comparing data from the two samples.

Reliability is also enhanced if you can triangulate your data from a number of different sources or methods of data collection, at different times and from different participants.

Dennis Cahill, in  When to use qualitative methods: a new approach  ( Marketing Intelligence & Planning , Volume 14 Number 6), has this to say about the reliability of qualitative research:

"While there are times when qualitative techniques are inappropriate to the research goal, or appropriate only in certain portions of a research project, quantitative techniques do not have universal applicability, either. Although these techniques may be used to measure "reality" rather precisely, they often suffer from a lack of good descriptive material of the type which brings the information to life. This lack is particularly felt in corporate applications where implementation of the results is sought. Therefore, whether one has any interest in the specific research described above, if one is involved in implementation of research results – something we all should be involved in – the use of qualitative research at midpoint is a technique with which we should become familiar.

"It is at this point that some qualitative follow up – interviews or focus groups for example – can serve to flesh out the results, making it possible for people at the firm to understand and internalize those results."

Can qualitative research be used with quantitative research?

Whereas some researchers use only qualitative or only quantitative methodologies, the two are frequently combined, for example when qualitative methods are used exploratively to obtain further information before developing a quantitative research instrument. In other cases, qualitative methods are used to complement quantitative methods and obtain a greater degree of descriptive richness:

In When to use qualitative methods: a new approach, Dennis Cahill describes how qualitative methods were used after an extensive questionnaire had been used to carry out research for a new publication dedicated to the needs of the real estate market. The analysis of the questionnaire produced a five-segment typology (winners, authentics, heartlanders, wannabes and maintainers), which was then tested by means of an EYE-TRAC test, in which a selected sample was videotaped looking at a magazine of houses for sale.

Once you have established the key features of your design, you need to create an outline project plan which will include a budget and a timetable. In order to do this, you need to think first about the activities of your data collection: how much data you are collecting, where, and so on. (See the section on  Sampling techniques .) You also need to consider your time period for data collection.

Over what time period will you collect your data?

This refers to two types of issues:

Type of study

Should the research be a 'snapshot', examining a particular phenomenon at a particular time, or should it be  longitudinal , examining an issue over a period of time? If the latter, the object will be to explore changes over the period.

A longitudinal study of corporate social reporting in Singapore  (Eric W K Tsang,  Accounting, Auditing & Accountability Journal , Volume 11 Number 5) examines social reporting in that country from 1986 to 1995.

Methodology

Sometimes, you may have 'one shot' at the collection of your data - in other words, you plan your sample, your method of data collection, and then analyse the result. This is more likely to be the case if your research approach is more quantitative.

However, other types of research approach involve stages in the collection of data. For example, in  grounded theory  research, data is collected and analysed and then the process is repeated as more is discovered about the subject. Likewise in  action research , there is a cyclical process of data collection, reflection and more collection and analysis.

If you adopt an approach where you  combine quantitative and qualitative methods , then this methodology will dictate that you do a series of studies, whether qualitative followed by quantitative, or vice versa, or qualitative/quantitative/qualitative.

Grounded theory methodology and practitioner reflexivity in TQM research  (Leonard and McAdam,  International Journal of Quality & Reliability Management , Volume 18 Number 2) adopts a three-stage approach to the collection of data.

Doing the plan

The following are some of the costs which need to be considered:

  • Travel to interview people.
  • Postal surveys, including follow-up.
  • The design and printing of the questionnaire, especially if there is use of Optical Mark Reader (OMR) and Optical Character Recognition (OCR) technology.
  • Programming to "read" the above.
  • Programming the data into meaningful results.
  • Transcription of any tape recorded interviews.
  • Cost of design of any internet survey.
  • Employment of a research assistant.

Timetabling

Make a list of the key stages of your research. Does it have several phases, for example, a questionnaire, then interviews?

How long will each phase take? Take account of factors such as:

  • Sourcing your sampling frame
  • Determining the sample
  • Approaching interview subjects
  • Preparations for interviews
  • Writing questionnaires
  • Response time for questionnaires (include a follow-up stage)
  • Analysing the responses
  • Writing the report

When drawing up a schedule, it is tempting to make it as short as possible in the belief that you can achieve more in the available time than is realistic. It is therefore very important to be as accurate as possible in your scheduling.
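As an illustration of what such an outline plan might look like in practice, the following Python sketch totals some hypothetical cost items and phase durations into a first-pass budget and finish date. All figures and phase names are placeholders, not recommendations.

```python
# A minimal sketch of an outline project plan: summing hypothetical cost
# items and phase durations to get a first-pass budget and end date.
from datetime import date, timedelta

costs = {  # item -> estimated cost (placeholder figures)
    "travel to interviews": 400,
    "questionnaire design and printing": 250,
    "postal survey and follow-up": 300,
    "transcription of recorded interviews": 600,
    "research assistant": 1500,
}

phases = [  # (phase, estimated working days) - placeholder durations
    ("source sampling frame and determine sample", 10),
    ("write and pilot questionnaire", 15),
    ("questionnaire response and follow-up", 30),
    ("interviews and transcription", 25),
    ("analysis", 20),
    ("write up report", 15),
]

total_cost = sum(costs.values())
total_days = sum(days for _, days in phases)
start = date(2025, 1, 6)  # hypothetical start date
end = start + timedelta(days=total_days * 7 // 5)  # rough working-to-calendar-day conversion

print(f"Estimated budget: {total_cost}")
print(f"Estimated duration: {total_days} working days (finish around {end})")
```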

Planning is particularly important if you are working to a specific budget and timetable as for example if you are doing a PhD, or if you are working on a funded research project, which has a specific amount of money available and probably also specific deadlines.

Indian J Anaesth, 60(9), September 2016

Types of studies and research design

Mukul Chandra Kapoor

Department of Anesthesiology, Max Smart Super Specialty Hospital, New Delhi, India

Medical research has evolved from individual experts describing opinions and techniques to scientifically designed, methodology-based studies. Evidence-based medicine (EBM) was established to re-evaluate medical facts and remove various myths in clinical practice. Research methodology is now protocol based, with predefined steps. Studies are classified based on the method of collection and evaluation of data. Clinical study methodology now needs to comply with strict standards of ethics, morality, truth, and transparency, ensuring that no conflict of interest is involved. A medical research pyramid has been designed to grade the quality of evidence and help physicians determine the value of the research. Randomised controlled trials (RCTs) have become the gold standard for quality research. EBM now places systematic reviews and meta-analyses at a level higher than RCTs, to overcome deficiencies in randomised trials due to errors in methodology and analyses.

INTRODUCTION

Expert opinion, experience, and authoritarian judgement were the norm in clinical medical practice. At scientific meetings, one often heard senior professionals emphatically expressing ‘In my experience,…… what I have said is correct!’ In 1981, articles published by Sackett et al . introduced ‘critical appraisal’ as they felt a need to teach methods of understanding scientific literature and its application at the bedside.[ 1 ] To improve clinical outcomes, clinical expertise must be complemented by the best external evidence.[ 2 ] Conversely, without clinical expertise, good external evidence may be used inappropriately [ Figure 1 ]. Practice gets outdated, if not updated with current evidence, depriving the clientele of the best available therapy.

[Figure 1: Triad of evidence-based medicine]

EVIDENCE-BASED MEDICINE

In 1971, in his book ‘Effectiveness and Efficiency’, Archibald Cochrane highlighted the lack of reliable evidence behind many accepted health-care interventions.[ 3 ] This triggered re-evaluation of many established ‘supposed’ scientific facts and awakened physicians to the need for evidence in medicine. Evidence-based medicine (EBM) thus evolved, which was defined as ‘the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients.’[ 2 ]

The goal of EBM was scientific endowment to achieve consistency, efficiency, effectiveness, quality, safety, reduction in dilemma and limitation of idiosyncrasies in clinical practice.[ 4 ] EBM required the physician to diligently assess the therapy, make clinical adjustments using the best available external evidence, ensure awareness of current research and discover clinical pathways to ensure best patient outcomes.[ 5 ]

With widespread internet use, a phenomenally large number of publications, training materials and media resources are available, but determining the quality of this literature is difficult for a busy physician. Abstracts are freely available on the internet, but full-text articles require a subscription. To complicate matters, contradictory studies are published, making decision-making difficult.[ 6 ] Publication bias, especially against negative studies, makes matters worse.

In 1993, the Cochrane Collaboration was founded by Ian Chalmers and others to create and disseminate up-to-date review of randomised controlled trials (RCTs) to help health-care professionals make informed decisions.[ 7 ] In 1995, the American College of Physicians and the British Medical Journal Publishing Group collaborated to publish the journal ‘Evidence-based medicine’, leading to the evolution of EBM in all spheres of medicine.

MEDICAL RESEARCH

Medical research needs to be conducted to increase knowledge about the human species and its social/natural environment and to combat disease/infirmity in humans. Research should be conducted in a manner conducive to and consistent with the dignity and well-being of the participant; in a professional and transparent manner; and ensuring minimal risk.[ 8 ] Research thus must be subjected to careful evaluation at all stages, i.e., research design/experimentation; results and their implications; the objective of the research sought; anticipated benefits/dangers; potential uses/abuses of the experiment and its results; and ensuring the safety of human life. Table 1 lists the principles any research should follow.[ 8 ]

[Table 1: General principles of medical research]

Types of study design

Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research and epidemiological research [ Figure 2 ]. Basic research includes fundamental research in fields shown in Figure 2 . In almost all studies, at least one independent variable is varied, whereas the effects on the dependent variables are investigated. Clinical studies include observational studies and interventional studies and are subclassified as in Figure 2 .

[Figure 2: Classification of types of medical research]

Interventional clinical study is performed with the purpose of studying or demonstrating clinical or pharmacological properties of drugs/devices, their side effects and to establish their efficacy or safety. They also include studies in which surgical, physical or psychotherapeutic procedures are examined.[ 9 ] Studies on drugs/devices are subject to legal and ethical requirements including the Drug Controller General India (DCGI) directives. They require the approval of DCGI recognized Ethics Committee and must be performed in accordance with the rules of ‘Good Clinical Practice’.[ 10 ] Further details are available under ‘Methodology for research II’ section in this issue of IJA. In 2004, the World Health Organization advised registration of all clinical trials in a public registry. In India, the Clinical Trials Registry of India was launched in 2007 ( www.ctri.nic.in ). The International Committee of Medical Journal Editors (ICMJE) mandates its member journals to publish only registered trials.[ 11 ]

Observational clinical study is a study in which knowledge from treatment of persons with drugs is analysed using epidemiological methods. In these studies, the diagnosis, treatment and monitoring are performed exclusively according to medical practice and not according to a specified study protocol.[ 9 ] They are subclassified as per Figure 2 .

Epidemiological studies have two basic approaches, the interventional and observational. Clinicians are more familiar with interventional research, whereas epidemiologists usually perform observational research.

Interventional studies are experimental in character and are subdivided into field and group studies, for example, iodine supplementation of cooking salt to prevent hypothyroidism. Many interventions are unsuitable for RCTs, as the exposure may be harmful to the subjects.

Observational studies can be subdivided into cohort, case–control, cross-sectional and ecological studies.

  • Cohort studies are suited to detect connections between exposure and development of disease. They are normally prospective studies of two healthy groups of subjects observed over time, in which one group is exposed to a specific substance, whereas the other is not. The occurrence of the disease can be determined in the two groups. Cohort studies can also be retrospective
  • Case–control studies are retrospective analyses that compare the prevalence of an exposure (risk factor) between a group with the disease (cases) and a group without it (controls). The incidence rate cannot be calculated, and there is also a risk of selection bias and faulty recall. (A small numeric sketch of the measures produced by cohort and case–control designs follows below.)
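To illustrate the measures these two designs yield, here is a minimal Python sketch, using hypothetical counts rather than data from any study cited here, that computes a relative risk from a cohort-style 2×2 table and an odds ratio from a case-control-style table.

```python
# A minimal sketch of the headline measures from cohort and case-control
# studies, computed from hypothetical 2x2-table counts.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of disease in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    odds_cases = cases_exposed / cases_unexposed
    odds_controls = controls_exposed / controls_unexposed
    return odds_cases / odds_controls

# Hypothetical cohort: 1,000 exposed (80 develop disease) vs 1,000 unexposed (20 develop disease)
print(f"RR = {relative_risk(80, 1000, 20, 1000):.1f}")   # prints RR = 4.0

# Hypothetical case-control: 60 of 100 cases exposed vs 30 of 100 controls exposed
print(f"OR = {odds_ratio(60, 40, 30, 70):.1f}")          # prints OR = 3.5
```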

Secondary research

Narrative review

An expert senior author writes about a particular field, condition or treatment, including an overview, fortified by their own experience. The article is in a narrative format. Its limitation is that one cannot tell whether the recommendations are based on the author's clinical experience or the available literature, nor why some studies were given more emphasis than others. It can be biased, with selective citation of reports that reinforce the author's views of a topic.[ 12 ]

Systematic review

Systematic reviews methodically and comprehensively identify studies focused on a specified topic, appraise their methodology, summate the results, identify key findings and reasons for differences across studies, and cite limitations of current knowledge.[ 13 ] They adhere to reproducible methods and recommended guidelines.[ 14 ] The methods used to compile data are explicit and transparent, allowing the reader to gauge the quality of the review and the potential for bias.[ 15 ]

A systematic review can be presented in text or graphic form. In graphic form, data of different trials can be plotted with the point estimate and 95% confidence interval for each study, presented on an individual line. A properly conducted systematic review presents the best available research evidence for a focused clinical question. The review team may obtain information, not available in the original reports, from the primary authors. This ensures that findings are consistent and generalisable across populations, environment, therapies and groups.[ 12 ] A systematic review attempts to reduce bias identification and studies selection for review, using a comprehensive search strategy and specifying inclusion criteria. The strength of a systematic review lies in the transparency of each phase and highlighting the merits of each decision made, while compiling information.

Meta-analysis

A review team compiles aggregate-level data from each primary study and, in some cases, solicits individual patient data from the primary studies.[ 16 , 17 ] Although difficult to perform, individual patient meta-analyses offer advantages over aggregate-level analyses.[ 18 ] These mathematically pooled results are referred to as a meta-analysis. Combining data from well-conducted primary studies provides a precise estimate of the "true effect".[ 19 ] Pooling the samples of individual studies increases the overall sample size, enhances the statistical power of the analysis, narrows the confidence interval and thereby improves the statistical value.
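A minimal sketch of the pooling step, assuming a simple fixed-effect, inverse-variance model and hypothetical effect estimates, may help show why pooling narrows the confidence interval: each study is weighted by the inverse of its variance, so more precise studies count for more and the pooled standard error shrinks.

```python
# A minimal sketch of fixed-effect, inverse-variance pooling.
# Effect estimates and standard errors below are hypothetical.
import math

studies = [  # (effect estimate, standard error), e.g. log odds ratios
    (0.40, 0.20),
    (0.25, 0.15),
    (0.55, 0.30),
]

weights = [1 / se**2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                    # pooled standard error

lower, upper = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled estimate: {pooled:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Note that real meta-analyses often use random-effects models and heterogeneity statistics; this sketch shows only the basic weighted average behind a forest plot's pooled estimate.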

The structured process of Cochrane Collaboration systematic reviews has contributed to the improvement of their quality. For the meta-analysis to be definitive, the primary RCTs should have been conducted methodically. When the existing studies have important scientific and methodological limitations, such as smaller sized samples, the systematic review may identify where gaps exist in the available literature.[ 20 ] RCTs and systematic review of several randomised trials are less likely to mislead us, and thereby help judge whether an intervention is better.[ 2 ] Practice guidelines supported by large RCTs and meta-analyses are considered as ‘gold standard’ in EBM. This issue of IJA is accompanied by an editorial on Importance of EBM on research and practice (Guyat and Sriganesh 471_16).[ 21 ] The EBM pyramid grading the value of different types of research studies is shown in Figure 3 .

[Figure 3: The evidence-based medicine pyramid]

In the last decade, a number of studies and guidelines brought about path-breaking changes in anaesthesiology and critical care. Some guidelines such as the ‘Surviving Sepsis Guidelines-2004’[ 22 ] were later found to be flawed and biased. A number of large RCTs were rejected as their findings were erroneous. Another classic example is that of ENIGMA-I (Evaluation of Nitrous oxide In the Gas Mixture for Anaesthesia)[ 23 ] which implicated nitrous oxide for poor outcomes, but ENIGMA-II[ 24 , 25 ] conducted later, by the same investigators, declared it as safe. The rise and fall of the ‘tight glucose control’ regimen was similar.[ 26 ]

Although RCTs are considered the 'gold standard' in research, their status is at a crossroads today. RCTs may be affected by conflicts of interest and thus must be evaluated with careful scrutiny. EBM can promote evidence reflected in RCTs and meta-analyses, but it cannot promulgate evidence that is not reflected in them. Flawed RCTs and meta-analyses may bring forth erroneous recommendations. EBM should therefore not be restricted to RCTs and meta-analyses but must involve tracking down the best external evidence with which to answer our clinical questions.

Conflicts of interest

There are no conflicts of interest.


  • Open access
  • Published: 28 August 2024

The design, implementation, and evaluation of a blended (in-person and virtual) Clinical Competency Examination for final-year nursing students

Rita Mojtahedzadeh, Tahereh Toulabi & Aeen Mohammadi

BMC Medical Education, volume 24, Article number: 936 (2024)


Introduction

Studies have reported different results of evaluation methods of clinical competency tests. Therefore, this study aimed to design, implement, and evaluate a blended (in-person and virtual) Competency Examination for final-year Nursing Students.

This interventional study was conducted in two semesters of 2020–2021 using an educational action research method in the nursing and midwifery faculty. Thirteen faculty members and 84 final-year nursing students were included in the study using a census method. Eight programs and related activities were designed and conducted during the examination process. Students completed the Spielberger Anxiety Inventory before the examination, and both faculty members and students completed the Acceptance and Satisfaction questionnaire.

The results of the analysis of the focus group discussions and reflections indicated that the virtual CCE could not adequately assess clinical skills. Therefore, it was decided that the CCE for final-year nursing students would be conducted using a blended method. The activities required to run the examination were designed and implemented based on action plans. Anxiety and satisfaction were also evaluated as outcomes of the study. There was no statistically significant difference in overt, covert, or overall anxiety scores between the in-person and virtual sections of the examination ( p  > 0.05). The mean (SD) acceptance and satisfaction scores for students in the virtual, in-person, and blended sections were 25.49 (4.73), 27.60 (4.70), and 25.57 (4.97) out of 30 points, respectively, with a significantly higher score in the in-person section than in the other sections ( p  = 0.008). The mean (SD) acceptance and satisfaction scores for faculty members were 30.31 (4.47) in the virtual, 29.86 (3.94) in the in-person, and 30.00 (4.16) in the blended section, out of 33, with no significant difference between the three sections ( p  = 0.864).

Evaluating nursing students’ clinical competency with a blended method proved feasible and solved the problem of students’ graduation. It is therefore suggested that the blended method be used instead of traditional in-person or entirely virtual exams during epidemics, or depending on conditions, facilities, and human resources. The use of patient simulation and virtual reality, and the development of the necessary virtual and in-person training infrastructure for students, are also recommended for future research. Furthermore, considering that students’ acceptance of traditional in-person exams is higher, virtual teaching strategies need to be developed.


The primary mission of the nursing profession is to educate competent, capable, and qualified nurses with the necessary knowledge and skills to provide quality nursing care to preserve and improve the community’s health [ 1 ]. Clinical education is one of the most essential and fundamental components of nursing education, in which students gain clinical experience by interacting with actual patients and addressing real problems. Therefore, assessing clinical skills is very challenging. The main goal of educational evaluation is to improve, ensure, and enhance the quality of the academic program. In this regard, evaluating learners’ performance is one of the critical and sensitive aspects of the teaching and learning process. It is considered one of the fundamental elements of the educational program [ 2 ]. The study area is educational evaluation.

Various methods are used to evaluate nursing students. The Objective Structured Clinical Examination (OSCE) is a valid and reliable method for assessing clinical competence [ 1 , 2 ]. In the last twenty years, the use of OSCE has increased significantly in evaluating medical and paramedical students to overcome the limitations of traditional practical evaluation systems [ 3 , 4 ]. The advantages of this method include providing rapid feedback, uniformity for all examinees, and providing conditions close to reality. However, the time-consuming nature and the need for a lot of personnel and equipment are some disadvantages of OSCE [ 5 , 6 ]. Additionally, some studies have shown that this method is anxiety-provoking for some students and, due to time constraints, being observed by the evaluator and other factors can cause dissatisfaction among students [ 7 , 8 ].

However, some studies have also reported that this method is not only not associated with high levels of stress among students [ 9 ] but also has higher satisfaction than traditional evaluation methods [ 4 ]. In addition, during the COVID-19 pandemic, problems such as overcrowding and student quarantine during the exam have arisen. Therefore, reducing time and costs, eliminating or reducing the tiring quarantine time, optimizing the exam, utilizing all facilities for simulating the clinical environment, using innovative methods for conducting the exam, reducing stress, increasing satisfaction, and ultimately preventing the transmission of COVID-19 are significant problems that need to be further investigated.

Studies show that the need to use virtual space as an alternative solution is strongly felt [ 10 , 11 , 12 ]. In the fall of 2009, following the H1N1 outbreak, educational classes in the United States were held virtually [ 13 ]. Also, in 2005, during Hurricane Katrina, 27 universities in the Gulf of Texas used emergency virtual education and evaluation [ 14 ].

One of the challenges faced by healthcare providers in Iran, like most countries in the world, especially during the COVID-19 outbreak, was the shortage of nursing staff [ 15 , 16 ]. Also, in evaluating and conducting CCE for final-year students and subsequent job seekers in the Clinical Skills Center, problems such as student overcrowding and the need for quarantine during the implementation of OSCE existed. This problem has been reported not only for us but also in other countries [ 17 ]. The intelligent use of technology can solve many of these problems. Therefore, almost all educational institutions have quickly started changing their policies’ paradigms to introduce online teaching and evaluation methods [ 18 , 19 ].

During the COVID-19 pandemic, for the first time, this exam was held virtually in our school. However, feedback from professors and students and the experiences of researchers have shown that the virtual exam can only partially evaluate clinical and practical skills in some stations, such as basic skills, resuscitation, and pediatrics [ 20 ].

Additionally, using OSCE in skills assessment facilitates the evaluation of psychological-motor knowledge and attitudes and helps identify strengths and weaknesses [ 21 ]. Clinical competency is a combination of theoretical knowledge and clinical skills. Therefore, using an effective blended method focusing on the quality and safety of healthcare that measures students’ clinical skills and theoretical expertise more accurately in both in-person and virtual environments is essential. The participation of students, professors, managers, education and training staff, and the Clinical Skills Center was necessary to achieve this important and inevitable goal. Therefore, the Clinical Competency Examination (CCE) for nursing students in our nursing and midwifery school was held in the form of an educational action research process to design, implement, and evaluate a blended method. Implementing this process during the COVID-19 pandemic, when it was impossible to hold an utterly in-person exam, helped improve the quality of the exam and address its limitations and weaknesses while providing the necessary evaluation for students.

The innovation of this research lies in evaluating the clinical competency of final-year nursing students using a blended method that focuses on clinical and practical aspects. In the searches conducted, only a few studies have been done on virtual exams and simulations, and a similar study using a blended method was not found.

The research investigates the scientific and clinical abilities of nursing students through the clinical competency exam. This exam, traditionally administered in person, is a crucial milestone for final-year nursing students, marking their readiness for graduation. However, the unforeseen circumstances of the COVID-19 pandemic and the resulting restrictions rendered in-person exams impractical in 2020. This necessitated a swift and significant transition to an online format, a decision that has profound implications for the future of nursing education. While the adoption of online assessment was a necessary step to ensure student graduation and address the nursing workforce shortage during the pandemic, it was not without its challenges. The accurate assessment of clinical skills, such as dressing and CPR, proved to be a significant hurdle. This underscored the urgent need for a change in the exam format, prompting a deeper exploration of innovative solutions.

To address these problems, the research was conducted collaboratively with stakeholders, considering the context and necessity for change in exam administration. Employing an Action Research (AR) approach, a blend of online and in-person exam modalities was adopted. Necessary changes were implemented through a cyclic process involving problem identification, program design, implementation, reflection, and continuous evaluation.

The research began by posing the following questions:

  • What are the problems of conducting the CCE for final-year nursing students during COVID-19?
  • How can these problems be addressed?
  • What are the solutions and suggestions from the involved stakeholders?
  • How can the CCE be designed, implemented, and evaluated?
  • What is the impact of exam type on student anxiety and satisfaction?

These questions guided the research in exploring the complexities of administering the CCE amidst the COVID-19 pandemic and in devising practical solutions to ensure the validity and reliability of the assessment while meeting stakeholders’ needs.

Materials and methods

Research setting, expert panel members, job analysis, and role delineation

This action research was conducted at the Nursing and Midwifery School of Lorestan University of Medical Sciences, with a history of approximately 40 years. The school accommodates 500 undergraduate and graduate nursing students across six specialized fields, with 84 students enrolled in their final year of undergraduate studies. Additionally, the school employs 26 full-time faculty members in nursing education departments.

An expert panel was assembled, consisting of faculty members specializing in various areas, including medical-surgical nursing, psychiatric nursing, community health nursing, pediatric nursing, and intensive care nursing. The panel also included educational department managers and the examination department supervisor. Through focused group discussions, the panel identified and examined issues regarding the exam format, and members proposed various solutions. Subsequently, after analyzing the proposed solutions and drawing upon the panel members’ experiences, specific roles for each member were delineated.

Sampling and participant selection

Given the nature of the research, purposive sampling was employed, ensuring that all individuals involved in the design, implementation, and evaluation of the exam participated in this study.

The participants in this study included final-year nursing students, faculty members, clinical skills center experts, the dean of the school, the educational deputy, group managers, and the exam department head. In the outcome evaluation phase, 13 faculty members took part in person and virtually (26 times), and 84 final-year nursing students, enrolled using a census method over the two semesters of 2020–2021, completed the questionnaires; these comprised 37 females and 47 males. Of the 13 faculty members, three were male and ten female; 2 were instructors and 11 were assistant professors.

Data collection tools

In order to enhance the validity and credibility of the study and thoroughly examine the results, this study utilized a triangulation method consisting of demographic information, focus group discussions, the Spielberger Anxiety Scale questionnaire, and an Acceptance and Satisfaction Questionnaire.

Demographic information

A questionnaire was used to gather demographic information from both students and faculty members. For students, this included age, gender, and place of residence, while for faculty members, it included age, gender, field of study, and employment status.

Focus group discussion

Multiple focused group discussions were conducted with the participation of professors, administrators, experts, and students. These discussions were held through various platforms such as WhatsApp, Skype, and in-person meetings while adhering to health protocols. The researcher guided the talks toward the research objectives and raised fundamental questions, such as describing the strengths and weaknesses of the previous exam, determining how to conduct the CCE considering the COVID-19 situation, deciding on virtual and in-person stations, specifying the evaluation checklists for stations, and explaining the weighting and scoring of each station.

Spielberger anxiety scale questionnaire

This study used the Spielberger Anxiety Questionnaire to measure students’ overt and covert anxiety levels. This questionnaire is an internationally standardized tool known as the STAI that measures both overt (state) and covert (trait) anxiety [ 22 ]. The state anxiety scale (Form Y-1 of the STAI) comprises twenty statements that assess how the individual feels at the moment of responding. The trait anxiety scale (Form Y-2 of the STAI) also includes twenty statements, which measure individuals’ general and typical feelings. In the current study, scores on each of the two scales ranged from 20 to 80. The reliability coefficient of the test for the overt and covert anxiety scales, based on Cronbach’s alpha, has been reported as 0.9084 and 0.9025, respectively [ 23 , 24 ]. Furthermore, in the present study, Cronbach’s alpha values for the total anxiety questionnaire, the overt anxiety scale, and the covert anxiety scale were 0.935, 0.921, and 0.760, respectively.
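For readers unfamiliar with how reliability coefficients such as these are obtained, the following Python sketch computes Cronbach's alpha from made-up item-level responses. The 4-item, 6-respondent data are purely illustrative and much smaller than the 20-item STAI scales.

```python
# A minimal sketch of computing Cronbach's alpha from item-level responses.
# The data below are invented purely for illustration.
import numpy as np

# rows = respondents, columns = items (scored 1-4 here)
scores = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 1],
    [4, 3, 4, 4],
    [1, 2, 1, 1],
    [3, 4, 3, 3],
    [2, 1, 2, 2],
])

k = scores.shape[1]                                  # number of items
item_variances = scores.var(axis=0, ddof=1).sum()    # sum of item variances
total_variance = scores.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```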

Acceptance and satisfaction questionnaire

The Acceptability and Satisfaction Questionnaire for the Clinical Competency Test was developed by Farajpour et al. (2012). The student questionnaire consists of ten questions and the professor questionnaire of eleven questions, using a four-point Likert scale. Experts have confirmed the validity of these questionnaires, and their Cronbach’s alpha coefficients have been determined to be 0.85 and 0.87 for the professor and student questionnaires, respectively [ 6 ]. In the current study, ten medical education experts also confirmed the validity of the questionnaires. Regarding internal reliability, Cronbach’s alpha coefficients for the student satisfaction questionnaire were 0.76 and 0.87 for the virtual and in-person sections, respectively, and 0.84 and 0.87 for the professor satisfaction questionnaires. An online platform was used to collect data for the virtual exam.

Data analysis and rigor of study

Qualitative data analysis was conducted using the method proposed by Graneheim and Lundman. Additionally, the criteria established by Lincoln and Guba (1985) were employed to confirm the rigor and validity of the data, including credibility, transferability, dependability, and confirmability [ 26 ].

In this research, data synthesis was performed by combining the collected data with various tools and methods. The findings of this study were reviewed and confirmed by participants, supervisors, mentors, and experts in qualitative research, reflecting their opinions on the alignment of findings with their experiences and perspectives on clinical competence examinations. Therefore, the member check method was used to validate credibility.

Moreover, efforts were made in this study to provide a comprehensive description of the research steps, create a suitable context for implementation, assess the views of others, and ensure the transferability of the results.

Furthermore, researchers’ interest in identifying and describing problems, reflecting, designing, implementing, and evaluating clinical competence examinations, along with the engagement of stakeholders in these examinations, was ensured by the researchers’ long-term engagement of over 25 years with the environment and stakeholders, seeking their opinions and considering their ideas and views. These factors contributed to ensuring confirmability.

In this research, by reflecting the results to the participants and making revisions by the researchers, problem clarification and solution presentation, design, implementation, and evaluation of operational programs with stakeholder participation and continuous presence were attempted to prevent biases, assumptions, and research hypotheses, and to confirm dependability.

Data analysis was performed using SPSS version 21, with descriptive statistics (absolute and relative frequency, mean, and standard deviation) and inferential tests (paired t-test, independent t-test, and analysis of variance). The significance level was set at 0.05. Parametric tests were used because the data were normally distributed according to the Kolmogorov-Smirnov test.
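The study reports using SPSS; purely as an illustration of the same family of tests, the following Python/scipy sketch runs a Kolmogorov-Smirnov normality check, a paired t-test, an independent t-test, and a one-way ANOVA on hypothetical score vectors. The numbers and groupings are invented and are not the study's data.

```python
# A minimal sketch of the analysis pipeline described above, on hypothetical scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
virtual = rng.normal(47, 10, 84)      # hypothetical anxiety scores, virtual section
in_person = rng.normal(46, 10, 84)    # same (hypothetical) students, in-person section

# Normality check on standardised scores (simplified; ignores parameter estimation caveats)
print(stats.kstest((virtual - virtual.mean()) / virtual.std(ddof=1), "norm"))

print(stats.ttest_rel(virtual, in_person))           # paired t-test (same students, two sections)
print(stats.ttest_ind(virtual[:47], virtual[47:]))   # independent t-test (two arbitrary subgroups)
print(stats.f_oneway(rng.normal(25.5, 4.7, 84),      # one-way ANOVA across three hypothetical sections
                     rng.normal(27.6, 4.7, 84),
                     rng.normal(25.6, 5.0, 84)))
```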

Given that conducting the CCE for final-year nursing students required the active participation of managers, faculty members, staff, and students, and to answer the research question “How can the CCE for final-year nursing students be conducted?” and achieve the research objective of “designing, implementing, and evaluating the clinical competency exam,” the action research method was employed.

The present study was conducted based on the Dickens & Watkins model. There are four primary stages (Fig.  1 ) in the cyclical action research process: reflect, plan, act, observe, and then reflect to continue through the cycle [ 27 ].

[Figure 1: The cyclical process of action research [27]]

Stage 1: Reflection

Identification of the problem

According to the educational regulations, final semester nursing students must complete the clinical competency exam. However, due to the COVID-19 pandemic and the critical situation in most provinces, inter-city travel restrictions, and insufficient dormitory space, conducting the CCE in-person was not feasible.

This exam was conducted virtually at our institution. However, based on the reflections from experts, researchers have found that virtual exams can only partially assess clinical and practical skills in certain stations, such as basic skills, resuscitation, and pediatrics. Furthermore, utilizing Objective Structured Clinical Examination (OSCE) in skills assessment facilitates the evaluation of psychomotor skills, knowledge, and attitudes, aiding in identifying strengths and weaknesses.

P3, “Due to the COVID-19 pandemic and the critical situation in most provinces, inter-city travel restrictions, and insufficient dormitory space, conducting the CCE in-person is not feasible.”

Stage 2: Planning

Based on the reflections gathered from the participants, the exam was designed using a blended approach (combining in-person and virtual components) as per the schedule outlined in Fig.  2 . All planned activities for the blended CCE for final-year nursing students were executed over two semesters.

P5, “Taking the exam virtually might seem easier for us and the students, but in my opinion, it’s not realistic. For instance, performing wound dressing or airway management is very practical, and it’s not possible to assess students with a virtual scenario. We need to see them in person.”

P6: “I believe it’s better to conduct those activities that are highly practical in person, but for those involving communication skills like report writing, professional ethics, etc., we can opt for virtual assessment.”

[Figure 2: Design and implementation of the blended CCE]

Stage 3: Act

CCE implementation steps

The CCE was conducted based on the flowchart in Fig.  3 and the following steps:

[Figure 3: Steps for conducting the CCE for final-year nursing students using a blended method]

Step 1: Designing the framework for conducting the blended Clinical Competency Examination

The panelists were guided to design the blended exam in focused group sessions and virtual panels based on the ADDIE (Analysis, Design, Development, Implementation, Evaluation) model [ 28 ]. Initially, needs assessment and opinion polling were conducted, followed by the operational planning of the exam, including the design of the blueprint table (Table  1 ), determination of station types (in-person or virtual), designing question stems in the form of scenarios, creating checklists and station procedure guides by expert panel groups based on participant analysis, and the development of exam implementation guidelines with participant input [ 27 ]. The design, execution, and evaluation were as follows:

In-person and virtual meetings with professors were held to determine the exam schedule and the deadlines for submitting checklists, to decide whether each station would be virtual or in-person based on the type of skill (practical, communication), and to present problems and solutions. Based on these decisions, the basic skills station and the cardiac and pediatric resuscitation stations were held in person, whereas the health, nursing ethics, nursing report, nursing diagnosis, physical examination, and psychiatric nursing stations were held virtually.

News about the exam was communicated to students through the college website and text messages. An online orientation session was then held with students on Skype covering the needs assessment for pre-exam educational workshops, virtual and in-person exam standards, how to use the exam software, how the virtual exam would be conducted, the infrastructure students would need in order to participate, completion of the anxiety and satisfaction questionnaires, rules and regulations, how students who failed would be handled, and exam testing with questions and answers. Additionally, a pre-exam in-person orientation session was held.

To keep students informed throughout the educational process, the resources and educational content recommended by the professors, including PDF files, photos and videos, instructions, and links, were shared through a virtual group on a social media messenger, and scientific questions were also asked and answered through this platform.

Correspondence and necessary coordination were made with the university clinical skills center to conduct in-person workshops and exams.

Following the test-centered approach, the modified Angoff method [ 29 , 30 ] was used by the panelists tasked with assigning scores to determine the scoring criteria for each station.

Additionally, in setting standards for this blended CCE for fourth-year nursing students, for whom passing the exam was a prerequisite for graduation, the panelists were involved as experienced clinical educators familiar with the performance and future roles of these students and with the assessment method of the blended exam [ 29 , 30 ] (Table 1 ).
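As a rough illustration of how a modified Angoff standard-setting calculation works (the ratings below are hypothetical, not those of the panel described here): each panelist estimates the probability that a borderline, minimally competent student would perform each checklist item correctly, and the cut score is derived from the average of those estimates.

```python
# A minimal sketch of a modified Angoff cut-score calculation with
# hypothetical panelist ratings.
import numpy as np

# rows = panelists, columns = checklist items at one station (probabilities 0-1)
ratings = np.array([
    [0.70, 0.60, 0.80, 0.50, 0.90],
    [0.65, 0.55, 0.75, 0.60, 0.85],
    [0.75, 0.50, 0.70, 0.55, 0.80],
])

item_cut = ratings.mean(axis=0)      # expected borderline performance per item
station_cut = item_cut.sum()         # expected borderline total score
max_score = ratings.shape[1]         # one point per item in this sketch
print(f"Cut score: {station_cut:.1f} / {max_score} ({100 * station_cut / max_score:.0f}%)")
```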

Step 2: Preparing the necessary infrastructure for conducting the exam

Software infrastructure

The pre- and post-virtual exam questions, scenarios, and questionnaires were uploaded using online software.

The exam was conducted on a trial basis in multiple sessions with the participation of several faculty members, and any issues were addressed. Students were authenticated to enter the exam environment via email and personal information verification. The questions for each station were designed and entered into the software by the respective station instructors and the examination coordinator, who facilitated the exam. The questions were formatted as clinical scenarios, images, descriptive questions, and multiple-choice questions, emphasizing the clinical and practical aspects. This software had various features for administering different types of exams and various question formats, including multiple-choice, descriptive, scenario-based, image-based, video-based, matching, Excel output, and graphical and descriptive statistical analyses. It also had automatic questionnaire completion, notification emails, score addition to questionnaires, prevention of multiple answer submissions, and the ability to upload files up to 4 gigabytes. Student authentication was based on national identification numbers and student IDs, serving as user IDs and passwords. Students could enter the exam environment using their email and multi-level personal information verification. If the information did not match, individuals could not access the exam environment.

Checklists and questionnaires

A student list was prepared, and checklists for the in-person exam and anxiety and satisfaction questionnaires were reproduced.

Empowerment workshops for professors and education staff

The educational needs of faculty members and academic staff included: conducting clinical competency exams using the OSCE method; simulating and evaluating OSCE exams; designing standardized questions, checklists, and scenarios; innovative approaches in clinical evaluation; designing physical spaces and setting up stations; and assessing ethics and professional commitment in clinical competency exams.

Student empowerment programs

According to the students’ needs assessment results, in-person workshops on cardiopulmonary resuscitation and airway management were held, along with online workshops on health, pediatrics, cardiopulmonary resuscitation, ethics, nursing diagnosis, and report writing via Skype messenger. In addition, instructors recorded educational files on vaccination, psychiatric nursing, clinical examinations, and basic skills and made them available to students via virtual groups.

Step 3: CCE implementation

The CCE was held in two parts, in-person and virtual.

In-person exam

The OSCE method was used for this section of the exam. The basic skills station (dressing and injections), the CPR station, and the pediatrics station were conducted in person. The students were divided into two groups of 21 each semester, and the exam was held in two shifts. While adhering to quarantine protocols, the students performed the procedures for seven minutes at each station and were evaluated by instructors using a checklist. An additional minute was allotted for transitioning to the next station.

Virtual exam

The professional ethics, nursing diagnosis, nursing report, health, psychiatric nursing, and physical examination stations were conducted virtually after the in-person exam. The exam was made available to students via a primary and a secondary link in a virtual space at the scheduled time. Students were first verified; when the allotted time elapsed, the questions were deactivated and the submitted answers were recorded. During the exam, full support was provided by the examination center.

The examination coordinator conducted the entire virtual exam process. The exam results were announced 48 h after the exam. A passing grade was considered to be a score higher than 60% at each station. Students who failed at various stations were given the opportunity for remediation based on faculty feedback, either through additional study or participation in educational workshops. Re-examinations were held one week after the initial exam. It was stipulated that students who failed more than half of the stations would be evaluated in the following semester, and that if a student failed a station more than three times, the faculty’s educational council would decide the outcome. However, no students met these conditions.
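The pass/fail rule described above can be illustrated with a short Python sketch; the station names, maximum scores, and the student's scores are hypothetical, and only the 60% threshold comes from the text.

```python
# A minimal sketch of the station-level pass/fail rule: a student passes a
# station when the checklist score exceeds 60% of that station's maximum.
station_max = {"basic skills": 20, "CPR": 20, "pediatrics": 15,
               "nursing ethics": 10, "nursing report": 10}   # hypothetical maxima
student_scores = {"basic skills": 16, "CPR": 11, "pediatrics": 10,
                  "nursing ethics": 8, "nursing report": 7}  # hypothetical scores

PASS_THRESHOLD = 0.60
failed = [station for station, score in student_scores.items()
          if score / station_max[station] <= PASS_THRESHOLD]

if failed:
    print("Remediation and re-examination needed at:", ", ".join(failed))
else:
    print("Passed all stations")
```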

Step 4: Evaluation

The evaluation of the exam was conducted by examiners using a checklist, and the results were announced as pass or fail.

Stage 4: Observation / evaluation

In this study, both process and outcome evaluations were conducted:

Process evaluation

All programs and activities implemented during the test design and administration process were evaluated in the process evaluation. This evaluation was based on operational program control and reflections received from participants through group discussion sessions and virtual groups.

Sample reflections received from faculty members, managers, experts, and students through group discussions and social messaging platforms after the changes:

P7: “The implementation of the blended virtual exam, in the conditions of the COVID-19 crisis where the possibility of holding in-person exams was not fully available, in my opinion, was able to improve the quality of exam administration and address the limitations and weaknesses of the exam entirely virtually.”

P5: “In my opinion, this blended method was able to better evaluate students in terms of clinical readiness for entering clinical practice.”

Outcomes evaluation

The study outcomes were student anxiety, student acceptance and satisfaction, and faculty acceptance and satisfaction. Before the start of the in-person and virtual exams, the Spielberger Anxiety Questionnaire was provided to students. Additionally, immediately after the exam, students and instructors completed the acceptance and satisfaction questionnaire for the relevant section. After the exam, students and instructors completed the acceptance and satisfaction questionnaire again for the entire exam process, including feasibility, satisfaction with its implementation, and educational impact.

Design framework and implementation for the blended Clinical Competency Examination

The exam was planned using a blended method (part in-person, part virtual) according to the Fig.  2 schedule, and all planned programs for the blended CCE for final-year nursing students were implemented in two semesters.

Evaluation results

In this study, 84 final-year nursing students participated, including 37 females (44.05%) and 47 males (55.95%). Among them, 28 (33.3%) were dormitory residents, and 56 (66.7%) were non-dormitory residents.

In this study, both process and outcome evaluations were conducted.

All programs and activities implemented during the test design and administration process were evaluated in the process evaluation (Table  2 ). This evaluation was based on operational program control and reflections received from participants through group discussion sessions and virtual groups on social media.

Anxiety and satisfaction were examined and evaluated as study outcomes, and the results are presented below.

The paired t-test results in Table  3 showed no statistically significant difference in overt anxiety ( p  = 0.56), covert anxiety ( p  = 0.13), and total anxiety scores ( p  = 0.167) between the in-person and virtual sections before the blended Clinical Competency Examination.

However, the mean (SD) overt anxiety score in the in-person section was 49.27 (11.16) for males and 43.63 (13.60) for females, a statistically significant difference ( p  = 0.03). Likewise, the mean (SD) overt anxiety score in the virtual section was 45.70 (11.88) for males and 51.00 (9.51) for females, also a statistically significant difference ( p  = 0.03). However, there was no significant difference between males and females in covert anxiety in either the in-person ( p  = 0.94) or the virtual ( p  = 0.60) section. In addition, the highest percentages of overt anxiety were seen in the virtual section among women (15.40%) and in the in-person section among men (21.28%), and anxiety was prevalent at a moderate to high level.

According to Table 4, one-way analysis of variance showed a significant difference between the virtual, in-person, and blended sections in terms of acceptance and satisfaction scores.

The results of the one-way analysis of variance showed that the mean (SD) acceptance and satisfaction scores of nursing students with the CCE in the virtual, in-person, and blended sections were 25.49 (4.73), 27.60 (4.70), and 25.57 (4.97) out of 30, respectively. There was a significant difference between the three sections (p = 0.008).
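As a rough illustration of this analysis, the sketch below runs a one-way ANOVA on simulated satisfaction scores centered on the reported means; the data, sample sizes per group, and variable names are assumptions, not the study's dataset.

```python
# Hypothetical sketch: one-way ANOVA comparing satisfaction scores across the
# virtual, in-person, and blended sections. Scores are simulated around the
# reported means; they are NOT the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
virtual = rng.normal(loc=25.49, scale=4.73, size=84)
in_person = rng.normal(loc=27.60, scale=4.70, size=84)
blended = rng.normal(loc=25.57, scale=4.97, size=84)

f_stat, p_value = stats.f_oneway(virtual, in_person, blended)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

If the same students rated all three sections, a repeated-measures design (for example, statsmodels' AnovaRM) would account for the pairing; the article reports a one-way ANOVA, so that is what is sketched here.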

In addition, 3 (23.1%) male and 10 (76.9%) female faculty members participated in this study; of these, 2 (15.38%) were instructors and 11 (84.62%) were assistant professors. They were between 29 and 50 years old, with a mean (SD) age of 41.37 (6.27) years, and had 4 to 20 years of work experience, with a mean (SD) of 13.22 (4.43) years.

The results of the analysis of variance showed that the mean (SD) acceptance and satisfaction scores of faculty members with the CCE in the virtual, in-person, and blended sections were 30.31 (4.47), 29.86 (3.94), and 30.00 (4.16) out of 33, respectively. There was no significant difference between the three sections (p = 0.864).

This action research study showed that the blended CCE for nursing students is feasible and, depending on the conditions and objectives, evaluation stations can be designed and implemented virtually or in person.

The blended exam, combining in-person and virtual elements, addressed some of the weaknesses of the entirely virtual exams conducted in previous terms because of the COVID-19 pandemic. Under pandemic conditions, holding all stations in person was not feasible due to the risk of students and evaluators contracting the virus and the need for prolonged quarantine. Additionally, to meet the staffing needs of hospitals, nursing students needed to graduate. By implementing the blended exam and conducting in-person evaluations at clinical stations, the assessment of nursing students’ clinical competence was brought closer to reality compared with the entirely virtual method.

Furthermore, the need for human resources, station setup costs, and time spent were lower than with an entirely in-person method. Therefore, in pandemics or in settings where sufficient financial and human resources are not available, the blended approach can be utilized.

Additionally, the evaluation results showed that students’ total and overt anxiety did not differ significantly between the virtual and in-person sections of the blended CCE. However, the overt anxiety of female students in the virtual section and of male students in the in-person section was considerably higher. Students’ covert anxiety, which relates to personal characteristics, did not differ between the virtual and in-person exam sections. Students’ acceptance and satisfaction were significantly higher in the in-person section than in the virtual and blended sections. The acceptance and satisfaction of faculty members with the CCE were similar and relatively high across the in-person, virtual, and blended sections.

No blended nursing CCE was found in our literature review. However, recent studies, especially during the COVID-19 pandemic, have designed and implemented this exam as a virtual OSCE. Previously, the CCE was held in person or through traditional OSCE methods.

During the COVID-19 pandemic, nursing schools worldwide faced difficulties administering clinical competency exams for students. In the United States, virtual simulation, including standard videos, home videos, and clinical scenarios, was used to evaluate clinical competency and develop nursing students’ clinical skills. Additionally, an online virtual simulation program was designed to assess the clinical competency of senior nursing students in Hong Kong as a potential alternative to traditional clinical training [31].

A traditional in-person OSCE was also redesigned and delivered through a virtual conferencing platform for nursing students at the University of Texas Medical Branch in Galveston. Survey findings showed that most professors and students considered the virtual OSCE a highly effective tool for evaluating communication skills, obtaining a medical history, making differential diagnoses, and managing patients. However, professors noted that evaluating examination techniques in a virtual environment is challenging [32].

However, Biranvand reported that fewer than half of nursing students found the in-person OSCE stressful [33], whereas another study found that 96.2% of nursing students perceived the exam as anxiety-provoking [1]. Students attribute the stress of this exam primarily to exam time, complexity, and the execution of techniques, as well as confusion about exam methods [7]. In contrast, in a study conducted in Egypt, 75% of students reported that the OSCE method was less stressful than other examination methods [9]. There is, therefore, no consensus across studies on the causes and extent of anxiety in the OSCE. One study found that, in addition to the factors mentioned above, the evaluator’s presence could also be a source of stress [34]. Another survey showed that students perceived the OSCE as more stressful than the traditional method, mainly because of the large number of stations, exam items, and time constraints [7]. A further study in Egypt, which designed a two-stage OSCE for 75 nursing students, found that 65.6% of students reported the second-stage exam as stressful because of the problem-solving station, while only 38.9% considered the first-stage exam stressful [35].

Given that various studies have reported anxiety as one of the disadvantages of the OSCE, one of the outcomes evaluated in this study was the anxiety of final-year nursing students. There was no significant difference in total or overt anxiety between students in the in-person and virtual sections of the blended Clinical Competency Examination. Overt anxiety was higher among male students in the in-person section and among female students in the virtual section, which may reflect personality traits, although further research is needed to confirm this. Since students’ total and overt anxiety were the same in the in-person and virtual sections, the blended CCE is suggested as a suitable alternative to the traditional OSCE in resource and workforce shortages or pandemics. However, for the results to be generalizable, future studies should consider three intervention groups, with all OSCE stations conducted virtually in the first group, in person in the second group, and in a blend of in-person and virtual formats in the third group.

Furthermore, Rafati et al. showed that a clinical competency exam using the OSCE method is acceptable, valid, and reliable for assessing nursing skills: 50% of students were completely satisfied and 34.6% were relatively satisfied with the OSCE clinical competency exam, and 57.7% believed the exam revealed learning weaknesses [1]. Another survey showed that, despite higher anxiety about the OSCE, students thought the exam provides equal opportunities for everyone, is less complicated than the traditional method, and encourages active participation [7]. In another study on maternal and infant care, 95% of students believed the traditional exam evaluates only memory or practical skills, whereas the OSCE assesses knowledge, understanding, cognitive and analytical skills, communication, and emotional skills. They regarded explicit evaluation goals, appropriate implementation guidelines, appropriate scheduling, wearing uniforms, equipping the workroom, evaluating many skills, and providing fast feedback as advantages of this exam [36]. Moreover, in a survey study, most students were satisfied with the clinical environment offered by the OSCE-based CCE, which is close to reality and involves a hypothetical patient in situations that increase work safety; on the other hand, factors such as station scheduling and time constraints led to dissatisfaction among students [37].

Furthermore, another study showed that virtual simulations effectively improve students’ skills in tracheostomy suctioning, triage, patient evaluation, life-saving interventions, clinical reasoning, clinical judgment, intravenous catheterization, role-based nursing care, individual readiness, and critical thinking, while reducing anxiety levels and increasing confidence in the laboratory, clinical nursing education, interactive communication, and health evaluation. In addition to knowledge and skills, recent findings indicate that virtual simulations can increase confidence, change attitudes and behaviors, and offer an innovative, flexible, and promising approach for new nurses and nursing students [38].

Various studies have evaluated the satisfaction of students and faculty members with the OSCE Clinical Competency Examination. In this study, one of the evaluated outcomes was the acceptability of, and satisfaction with, the blended, virtual, and in-person sections of the CCE among students and faculty members, which was relatively high and consistent with other studies. One crucial factor influencing satisfaction in this study was the provision of virtual orientation sessions for students and coordination sessions with faculty members. Social messaging groups were formed through virtual and in-person communication, instructions were explained, expectations and tasks were clarified, and questions were answered. Students and faculty members could access the required information with minimal presence in medical education centers and with minimal time and cost. Moreover, with the blended evaluation, the researcher’s communication with participants was easier.

The written guidelines and uploaded educational content of the workshops enabled students to save the desired topics and review them later if needed. Students had easy access to scientific and up-to-date information, and social messengers and Skype allowed photos and videos to be sent, workshops to be conducted, and questions to be asked and answered. However, the clinical workshops and examinations were held in person to ensure accuracy. The virtual part of the examination was conducted through online software, with questions focused on each station’s clinical and practical aspects. Students answered various question types, including multiple-choice, descriptive, scenario, picture, and puzzle questions, within a specified time.

The blended examination evaluated clinical competency without delaying these individuals’ entry into the job market. Moreover, during the severe human resource shortage faced by the healthcare system, the examination allowed a number of nurses to enter the country’s healthcare system. The blended examination can substitute for in-person examination in pandemic and non-pandemic situations, saving facilities, equipment, and human resources. The results of this study can also serve as a model to guide other nursing departments that require appropriate planning and arrangements for conducting Clinical Competency Examinations in blended formats. This examination can also be developed to evaluate students’ clinical performance.

One practical limitation of the study was the possibility that participants might not complete the questionnaires accurately out of concern about losing marks. Therefore, in a virtual session before the in-person exam, the objectives and importance of the study were explained, and participants were assured that their responses would not affect their evaluation and that they need not worry about losing marks. Additionally, active participation from all nursing students, faculty members, and staff was necessary for implementing this plan; this was achieved through prior coordination, virtual meetings, virtual group formation, and continuous feedback of results, which created the motivation for continued collaboration and participation.

Another limitation of this study was the use of the Spielberger Anxiety Questionnaire to measure students’ anxiety; future studies should use a questionnaire designed explicitly for measuring pre-exam anxiety. A further limitation of the current research was its implementation in a single nursing and midwifery faculty. Therefore, it is recommended that similar studies be conducted in nursing and midwifery faculties of other universities, as well as in related fields, and over multiple consecutive semesters. Additionally, for a more precise assessment of effectiveness, intervention studies with three separate virtual, in-person, and blended groups using electronic checklists are proposed. Furthermore, it is recommended that students be evaluated on other dimensions and variables, such as awareness, clinical skill acquisition, self-confidence, and self-efficacy.

Conducting an in-person Clinical Competency Examination (CCE) during critical situations, such as the COVID-19 pandemic, is challenging. Rather than entirely virtual exams, blended evaluation is a feasible approach that overcomes the shortcomings of virtual exams and more closely mimics in-person scenarios. Using a blended method in pandemics or resource shortages, the traditional OSCE can be replaced by designing, implementing, and evaluating stations that assess basic and advanced clinical skills in an in-person section, alongside scenario-based stations that address communication, reporting, nursing diagnosis, professional ethics, mental health, and community health in a virtual section. In this study, the blended approach reduced the need for physical space for in-person exams, helped maintain participant quarantine and health safety, and provided a more accurate assessment of nursing students’ practical abilities than a solely virtual exam. Furthermore, the use of patient simulators, virtual reality, virtual practice, and the development of virtual and in-person training infrastructure are recommended to improve the quality of clinical education and evaluation and to help students obtain the necessary clinical competencies. Also, since few studies have used the blended method, it is suggested that future research be conducted with three intervention groups, over longer periods, based on clinical evaluation models, and that it examine other outcomes such as awareness, clinical skill acquisition, self-efficacy, confidence, obtained grades, and estimates of material and human resource costs.

Data availability

The datasets generated and analyzed during the current study are available on request from the corresponding author.

References

1. Rafati F, Pilevarzade M, Kiani A. Designing, implementing and evaluating OSCE to assess nursing students’ clinical competence in Jiroft faculty of nursing and midwifery. Nurs Midwifery J. 2020;18(2):118–28.
2. Sadeghi T, Ravari A, Shahabinejad M, Hallakoei M, Shafiee M, Khodadadi H. Performing of OSCE method in nursing students of Rafsanjan University of Medical Science before entering the clinical field in the year 2010: a process for quality improvement. Community Health J. 2012;6(1):1–8.
3. Ali GA, Mehdi AY, Ali HA. Objective structured clinical examination (OSCE) as an assessment tool for clinical skills in Sohag University: nursing students’ perspective. J Environ Stud. 2012;8(1):59–69.
4. Bolourchifard F, Neishabouri M, Ashktorab T, Nasrollahzadeh S. Satisfaction of nursing students with two clinical evaluation methods: objective structured clinical examination (OSCE) and practical examination of clinical competence. Adv Nurs Midwifery. 2010;19(66):38–42.
5. Noohi E, Motesadi M, Haghdoost A. Clinical teachers’ viewpoints towards Objective Structured Clinical Examination in Kerman University of Medical Science. Iran J Med Educ. 2008;8(1):113–20.
6. Reza Masouleh S, Zare A, Chehrzad M, Atrkarruoshan Z. Comparing two methods of evaluation, objective structured practical examination and traditional examination, on the satisfaction of students in Shahid Beheshti faculty of nursing and midwifery. J Holist Nurs Midwifery. 2008;18(1):22–30.
7. Bagheri M, Sadeghineajad Forotagheh M, Shaghayee Fallah M. The comparison of stressors in the assessment of basic clinical skills with traditional method and OSCE in nursing students. Life Sci J. 2012;9(4):1748–52.
8. Eldarir SH, El Sebaae HA, El Feky HA, Hussein HA, El Fadil NA, El Shaeer IH. An introduction of OSCE versus the traditional method in nursing education: faculty capacity building and students’ perspectives. J Am Sci. 2010;6(12):1002–14.
9. Al-Zeftawy AM, Khaton SE. Student evaluation of an OSCE in Community Health Nursing clinical course at Faculty of Nursing, Tanta University. J Nurs Health Sci. 2016;5(4):68–76.
10. Hayter M, Jackson D. Pre-registration undergraduate nurses and the COVID-19 pandemic: students or workers? J Clin Nurs. 2020;29(17–18):3115–6.
11. Bayham J, Fenichel EP. Impact of school closures for COVID-19 on the US health-care workforce and net mortality: a modeling study. Lancet Public Health. 2020;5(5):e271–8.
12. Murphy MPA. COVID-19 and emergency eLearning: consequences of the securitization of higher education for post-pandemic pedagogy. Contemp Secur Policy. 2020;41(3):492–505.
13. Allen IE, Seaman J. Learning on demand: online education in the United States, 2009.
14. Meyer KA, Wilson JL. The role of online learning in the emergency plans of flagship institutions. Online J Distance Learn Adm. 2011;14(1):110–8.
15. Kursumovic E, Lennane S, Cook TM. Deaths in healthcare workers due to COVID-19: the need for robust data and analysis. Anaesthesia. 2020;75(8):989–92.
16. Malekshahi Beiranvand F, Hatami Varzaneh A. Health care workers challenges during coronavirus outbreak: the qualitative study. J Res Behav Sci. 2020;18(2):180–90.
17. Boursicot K, Kemp S, Ong TH, Wijaya L, Goh SH, Freeman K, Curran I. Conducting a high-stakes OSCE in a COVID-19 environment. MedEdPublish. 2020;9:285–89.
18. Atwa H, Shehata MH, Al-Ansari A, Kumar A, Jaradat A, Ahmed J, Deifalla A. Online, face-to-face, or blended learning? Faculty and medical students’ perceptions during the COVID-19 pandemic: a mixed-method study. Front Med. 2022;9:791352.
19. Chan MMK, Yu DS, Lam VS, Wong JY. Online clinical training in the COVID-19 pandemic. Clin Teach. 2020;17(4):445–6.
20. Toulabi T, Yarahmadi S. Conducting a clinical competency test for nursing students in a virtual method during the Covid-19 pandemic: a case study. J Nurs Educ. 2021;9(5):33–42.
21. Meskell P, Burke E, Kropmans TJB, Byrne E, Setyonugroho W, Kennedy KM. Back to the future: an online OSCE management information system for nursing OSCEs. Nurse Educ Today. 2015;35(11):1091–6.
22. Lichtenberg PA. Handbook of assessment in clinical gerontology. 2nd ed. Academic Press; 2010. https://doi.org/10.1016/B978-0-12-374961-1.10030-2
23. Gholami Booreng F, Mahram B, Kareshki H. Construction and validation of a scale of research anxiety for students. IJPCP. 2017;23(1):78–93.
24. Esmaili M. A survey of the influence of Murita therapy on reducing the rate of anxiety in clients of counseling centers. Res Clin Psychol Couns. 2011;1(1):15–30.
25. Farajpour A, Amini M, Pishbin E, Arshadi H, Sanjarmusavi N, Yousefi J, Sarafrazyazdi M. Teachers’ and students’ satisfaction with DOPS examination in Islamic Azad University of Mashhad, a study in year 2012. Iran J Med Educ. 2014;14(2):165–73.
26. Strauss AC, Corbin JM. Basics of qualitative research: grounded theory procedures and techniques. 2nd ed. Newbury Park, London: Sage; 1998.
27. Dickens L, Watkins K. Action research: rethinking Lewin. Manage Learn. 1999;30(2):127–40.
28. Rezaeerad M, Nadri Kh, Mohammadi Etergoleh R. The effect of ADDIE (analysis, design, development, implementation, evaluation) designing method with emphasizing on mobile learning on students’ self-conception, development motivation and academic development in English course. Educational Adm Res Q. 2013;4(15):15–32.
29. Ben-David MF. AMEE Guide 18: standard setting in student assessment. Med Teach. 2000;22(2):120–30.
30. McKinley DW, Norcini JJ. How to set standards on performance-based examinations: AMEE Guide 85. Med Teach. 2014;36(2):97–110.
31. Fung JTC, Zhang W, Yeung MN, Pang MTH, Lam VSF, Chan BKY, Wong JYH. Evaluation of students’ perceived clinical competence and learning needs following an online virtual simulation education programme with debriefing during the COVID-19 pandemic. Nurs Open. 2021;8(6):3045–54.
32. Luke S, Petitt E, Tombrella J, McGoff E. Virtual evaluation of clinical competence in nurse practitioner students. Med Sci Educ. 2021;31:1267–71.
33. Beiranvand SH, Hosseinabadi R, Ghasemi F, Anbari KH. An assessment of nursing and midwifery students’ viewpoints, performance, and feedback with an objective structured clinical examination. J Nurs Educ. 2017;6(1):63–7.
34. Sheikh Abumasoudi R, Moghimian M, Hashemi M, Kashani F, Karimi T, Atashi V. Comparison of the effect of objective structured clinical evaluation (OSCE) with direct and indirect supervision on nursing students’ test anxiety. J Nurs Educ. 2015;4(2):1–8.
35. Zahran EM, Taha EE. Students’ feedback on objective structured clinical examinations (OSCEs) experience in emergency nursing. J High Inst Public Health. 2009;39(2):370–87.
36. Na A-G. Assessment of students’ knowledge, clinical performance and satisfaction with objective structured clinical exam. Med J Cairo Univ. 2009;77(4):287–93.
37. Adib-Hajbaghery M, Yazdani M. Effects of OSCE on learning, satisfaction and test anxiety of nursing students: a review study. Iran J Med Educ. 2018;18:70–83.
38. Purwanti LE, Sukartini T, Kurniawati ND, Nursalam N, Susilowati T. Virtual simulation in clinical nursing education to improve knowledge and clinical skills: literature review. Open Access Maced J Med Sci. 2022;10(F):396–404.

Acknowledgements

We want to thank the Research and Technology deputy of Smart University of Medical Sciences, Tehran, Iran, the faculty members, staff, and officials of the School of Nursing and Midwifery, Lorestan University of Medical Sciences, Khorramabad, Iran, and all individuals who participated in this study.

All steps of the study, including study design and data collection, analysis, interpretation, and manuscript drafting, were supported by the Deputy of Research of Smart University of Medical Sciences.

Author information

Authors and affiliations

Department of E-Learning in Medical Education, Center of Excellence for E-learning in Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran

Rita Mojtahedzadeh & Aeen Mohammadi

Department of Medical Education, Smart University of Medical Sciences, Tehran, Iran

Tahereh Toulabi

Cardiovascular Research Center, School of Nursing and Midwifery, Lorestan University of Medical Sciences, Khorramabad, Iran

Contributions

RM: Participated in study design, accrual of study participants, review of the manuscript, and critical revisions for important intellectual content. TT: The investigator; participated in study design, data collection, accrual of study participants, and writing and reviewing the manuscript. AM: Participated in study design, data analysis, accrual of study participants, and reviewing the manuscript. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Tahereh Toulabi .

Ethics declarations

Ethics approval and consent to participate

This action research was conducted following a participatory method. All methods were performed according to the relevant guidelines and regulations of the Declaration of Helsinki (ethics approval and consent to participate). The study’s aims and procedures were explained to all participants, and necessary assurances were given regarding the anonymity and confidentiality of their information. The results were continuously provided to the participants as feedback. Informed consent (explaining the goals and methods of the study) was obtained from participants. The Smart University of Medical Sciences Ethics Committee approved the study protocol (IR.VUMS.REC.1400.011).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Mojtahedzadeh, R., Toulabi, T. & Mohammadi, A. The design, implementation, and evaluation of a blended (in-person and virtual) Clinical Competency Examination for final-year nursing students. BMC Med Educ 24 , 936 (2024). https://doi.org/10.1186/s12909-024-05935-9

Download citation

Received : 21 July 2023

Accepted : 20 August 2024

Published : 28 August 2024

DOI : https://doi.org/10.1186/s12909-024-05935-9

Keywords

  • Clinical Competency Examination (CCE)
  • Objective Structured Clinical Examination (OSCE)
  • Blended method
  • Satisfaction


