
Survey Research – Types, Methods, Examples

Survey Research

Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey Research Methods are as follows:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response (IVR) surveys: A survey research method where respondents answer questions through a touch-tone or automated telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, surveys are divided into two types:

  • Quantitative survey research
  • Qualitative survey research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several survey research data analysis methods that researchers may use, including the following (a short analysis sketch follows the list):

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
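
To make the first two methods concrete, here is a minimal sketch in Python using pandas and SciPy. The column names and response values are hypothetical, and the snippet only illustrates the kind of descriptive and correlational summaries described above; it is not a complete analysis plan.

```python
# A minimal sketch of descriptive and inferential analysis of survey data.
# Column names and values are hypothetical; replace them with your own
# survey export (e.g. a CSV downloaded from your survey tool).
import pandas as pd
from scipy import stats

# Hypothetical survey export: one row per respondent, two 1-5 Likert items
df = pd.DataFrame({
    "satisfaction": [4, 5, 3, 4, 2, 5, 4, 3],
    "ease_of_use":  [3, 5, 2, 4, 2, 5, 4, 3],
})

# Descriptive statistics: mean, median, mode, and standard deviation
print("Mean:", df["satisfaction"].mean())
print("Median:", df["satisfaction"].median())
print("Mode:", df["satisfaction"].mode().iloc[0])
print("Std dev:", df["satisfaction"].std())

# Inferential statistics: correlation between two survey items
r, p_value = stats.pearsonr(df["satisfaction"], df["ease_of_use"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```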

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis (a brief sketch of both follows this list).
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
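
As a concrete illustration of the analysis step, the sketch below assumes two hypothetical numeric survey variables (weekly product usage and a 1-10 satisfaction rating) and runs a frequency count and a simple linear regression with SciPy. It is illustrative only; the right statistical method depends on your research question and your data.

```python
# A minimal sketch of the "Analyze the data" step, assuming two hypothetical
# numeric survey variables. Illustrative only; the appropriate method depends
# on your research question and the measurement level of your data.
from collections import Counter
from scipy import stats

usage_hours  = [1, 3, 5, 2, 8, 6, 4, 7]   # hypothetical weekly usage responses
satisfaction = [3, 5, 7, 4, 9, 8, 6, 8]   # hypothetical 1-10 satisfaction ratings

# Frequency distribution of satisfaction scores
print(Counter(satisfaction))

# Simple linear regression: does usage predict satisfaction?
result = stats.linregress(usage_hours, satisfaction)
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.4f}")
```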

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Survey Sample

The following is an example of a survey sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  • A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

(Open-ended response)

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.

Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from: face-to-face interviews, telephone surveys, focus groups (though these are closer to interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard; the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or want to ask multiple-choice and open-ended questions.

The downsides are that they can take a long time to complete, depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they are cost-effective and enable researchers to survey a large population quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.

Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g. discounts, coupons, or money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design is thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.
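
One common reliability check is Cronbach's alpha, which many statistical packages can compute for you. The sketch below shows the calculation by hand for three hypothetical Likert items assumed to measure the same construct, purely to illustrate what such software does under the hood.

```python
# A minimal sketch of one common reliability check, Cronbach's alpha,
# computed by hand. The three "item" columns are hypothetical Likert items
# assumed to measure the same underlying construct.
import pandas as pd

items = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2, 5, 4],
    "item2": [4, 4, 3, 5, 2, 5, 3],
    "item3": [5, 5, 2, 4, 3, 4, 4],
})

k = items.shape[1]                                    # number of items
sum_item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
total_score_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Values above roughly 0.7 are commonly treated as acceptable internal consistency
print(f"Cronbach's alpha = {alpha:.2f}")
```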

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you’re asking respondents to choose a preference of one option over another.

Ratio scale

Ratio scales can be used to judge the order and difference between responses. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are ordered and the difference between two values is meaningful, but there is no true zero point. Examples include temperature or a credit score measured between one value and another.
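
To illustrate how these four measurement types affect the statistics you can sensibly report, here is a short sketch using hypothetical survey columns in pandas. The column names and values are invented for illustration.

```python
# A minimal sketch of how the four measurement types constrain analysis,
# using hypothetical survey columns. Which statistics are meaningful depends
# on the scale, not on the software.
import pandas as pd

df = pd.DataFrame({
    "brand_chosen":    ["A", "B", "A", "C", "A"],        # nominal
    "preference_rank": [1, 3, 2, 1, 2],                  # ordinal
    "temperature_f":   [68, 72, 75, 71, 69],             # interval (no true zero)
    "weekly_spend":    [40.0, 55.5, 32.0, 80.0, 47.5],   # ratio (true zero)
})

# Nominal: counts and the most frequent category only
print(df["brand_chosen"].value_counts())

# Ordinal: order-based statistics such as the median rank
print("Median rank:", df["preference_rank"].median())

# Interval: means and differences are meaningful, ratios are not
print("Mean temperature:", df["temperature_f"].mean())

# Ratio: all of the above, plus meaningful ratios
print("Max/min spend ratio:", df["weekly_spend"].max() / df["weekly_spend"].min())
```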

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answering to test your theories or beliefs. Be wary about framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank in order, etc.) not only increases the reliability of your data but also reduces survey fatigue and the risk of respondents simply answering questions quickly without thinking. A rough sketch of how mixed question types can be represented is shown below.
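
As a rough illustration of mixing question types, the sketch below represents a short survey as a plain data structure. The schema is hypothetical and not tied to any particular survey tool; real platforms define their own formats.

```python
# An illustrative sketch of mixing question types in one survey. The schema
# below is hypothetical and not tied to any particular survey platform.
questions = [
    {"id": "q1", "type": "yes_no",
     "text": "Have you used the product in the last 30 days?"},
    {"id": "q2", "type": "multiple_choice",
     "text": "Which feature do you use most often?",
     "options": ["Dashboard", "Reports", "Alerts", "Other"]},
    {"id": "q3", "type": "rating_scale",
     "text": "How satisfied are you overall?",
     "scale": {"min": 1, "max": 5}},
    {"id": "q4", "type": "open_ended",
     "text": "What is one thing we could improve?"},
]

for q in questions:
    print(f'{q["id"]} ({q["type"]}): {q["text"]}')
```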

Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Survey Research: Definition, Examples and Methods

Survey Research

Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been perhaps one of the most used methodologies in the industry for several years due to the multiple benefits and advantages that it has when collecting and analyzing data.

In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what their customers think about their products or services and make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and the analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods like surveys/polls or qualitative research methods like focus groups/on-call interviews can follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.

Survey Research Methods

Survey research methods can be classified based on two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI surveys) can be useful in collecting data from a more extensive section of the target population. The money invested in phone surveys is likely to be higher than for other mediums, and the time required will be higher as well.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time and spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study .
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling methods used to form samples for research: probability and non-probability sampling. Ideally, every individual in a population should be considered equally for inclusion in the survey research sample. Probability sampling is a sampling method in which the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples (a short sampling sketch follows the list of non-probability techniques below).

The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
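
The sketch below illustrates the probability side of this distinction, drawing a simple random sample and a stratified random sample from a hypothetical frame of potential respondents with pandas. The frame, column names, and sample sizes are invented for illustration; a real study would also plan for non-response.

```python
# A minimal sketch of probability sampling from a hypothetical frame of
# potential respondents. Simple random and stratified random sampling are
# shown; a real study would also plan for non-response.
import pandas as pd

frame = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Simple random sampling: every individual has an equal chance of selection
simple_random = frame.sample(n=100, random_state=42)

# Stratified random sampling: draw 10% from each region
stratified = frame.groupby("region").sample(frac=0.10, random_state=42)

print(len(simple_random), stratified["region"].value_counts().to_dict())
```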

Process of implementing survey research methods:

  • Decide survey questions: Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. Whereas, if researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use question formats like the Likert scale, semantic scale, Net Promoter Score question, etc., to avoid fence-sitting.
  • Finalize a target audience: Send out relevant surveys to the target audience and filter out irrelevant questions as per the requirement. Survey research is most effective when a representative sample is drawn from the target population. This way, results reflect the desired market and can be generalized to the entire population.
  • Send out surveys via decided mediums: Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results: Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (a brief cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
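
As a small illustration of cross tabulation, the sketch below breaks hypothetical satisfaction responses down by customer segment with pandas. Column names and values are invented; methods such as conjoint or TURF analysis require dedicated tooling and are not shown.

```python
# A small illustration of cross tabulation on survey feedback, assuming
# hypothetical columns for customer segment and satisfaction response.
import pandas as pd

responses = pd.DataFrame({
    "segment":      ["new", "returning", "new", "returning", "new", "returning"],
    "satisfaction": ["satisfied", "satisfied", "neutral",
                     "dissatisfied", "satisfied", "satisfied"],
})

# Raw counts broken down by segment
print(pd.crosstab(responses["segment"], responses["satisfaction"]))

# Row percentages make segments of different sizes comparable
print(pd.crosstab(responses["segment"], responses["satisfaction"], normalize="index"))
```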

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.

Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries: If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have established a reputation for protecting privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements: An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale: The ratio scale is the most advanced measurement scale; its variables are labeled, ordered, and have a calculated difference between them. In addition to the properties of the interval scale, the ratio scale has a fixed starting point, i.e., a true zero value is present.

Benefits of survey research

If survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment: Mobile surveys and online surveys require minimal financial investment per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection: You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents: Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from target population:  Who to target? is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample is essential for the results received for research and not the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results: Once the researcher decides on which questions to include in a study, they can send it across to the selected sample. Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals: Before conducting any market research or creating a particular plan, set your SMART goals. What is it that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts and share them via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should mention all the responses gathered from a survey in a fixed format. Also, the reader/customer must get clarity about your goal, which you were trying to gain from the study. Questions such as – whether the product or service has been used/preferred or not. Do respondents prefer some other product to another? Any recommendations?
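The segmentation step mentioned above can be done in any spreadsheet or statistics package. Purely as an illustration, the minimal pandas sketch below groups responses by a demographic variable; the column names (age_group, has_purchased, satisfaction) are invented for the example and are not tied to any specific survey tool.

    # Minimal sketch (invented column names) of segmenting survey responses
    # by a demographic variable before analysis, using pandas.
    import pandas as pd

    responses = pd.DataFrame({
        "age_group":     ["18-24", "25-34", "18-24", "35-44", "25-34"],
        "has_purchased": [True, False, True, True, False],
        "satisfaction":  [4, 3, 5, 2, 4],   # 1-5 rating question
    })

    # Response counts and average satisfaction per demographic segment
    print(responses.groupby("age_group")["satisfaction"].agg(["count", "mean"]))

    # Compare the groups created by a yes/no screening question
    print(responses.groupby("has_purchased")["satisfaction"].mean())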

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world collect data simply and effectively, and we offer a wide range of solutions for making the best possible use of that data.

From dashboards and advanced analysis tools to automation and dedicated functions, QuestionPro has everything you need to execute your research projects effectively. Uncover the insights that matter most!



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how large the population is, how confident you want to be in the results, and how much margin of error you can accept. You can use an online sample size calculator to work out how many responses you need.
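Such calculators typically implement a standard formula. As a rough illustration only, the sketch below uses Cochran's formula with a finite-population correction; the 95% confidence level, 5% margin of error, and p = 0.5 are conventional defaults rather than values taken from this guide.

    # Illustrative sample-size calculation using Cochran's formula with a
    # finite-population correction. Inputs are conventional defaults.
    import math

    def sample_size(population, z=1.96, margin_of_error=0.05, p=0.5):
        n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite population
        n = n0 / (1 + (n0 - 1) / population)                 # finite-population correction
        return math.ceil(n)

    print(sample_size(2_000_000))   # large population -> about 385 responses
    print(sample_size(500))         # small population -> about 218 responses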

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
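SPSS and Stata are the packages named above; purely as an illustrative alternative, the sketch below shows what a minimal processing-and-analysis pass might look like in Python with pandas. The file name, column names, and keyword-to-theme rules are all invented for the example.

    # Minimal sketch of processing survey data: drop incomplete responses,
    # code an open-ended question into themes, and summarise a closed question.
    # File and column names are placeholders.
    import pandas as pd

    df = pd.read_csv("survey_responses.csv")

    # Remove incomplete or incorrectly completed responses
    df = df.dropna(subset=["q1_rating", "q2_rating"])
    df = df[df["q1_rating"].between(1, 5)]

    # Simple keyword-based coding of an open-ended question into themes
    def code_theme(text):
        text = str(text).lower()
        if "price" in text or "cost" in text:
            return "price"
        if "support" in text or "help" in text:
            return "customer service"
        return "other"

    df["q3_theme"] = df["q3_open"].apply(code_theme)

    # Descriptive statistics and a cross-tabulation
    print(df["q1_rating"].describe())
    print(pd.crosstab(df["q3_theme"], df["q2_rating"]))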

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
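To make the ordinal-versus-interval distinction concrete, here is a small illustrative sketch that combines individual Likert items into an overall scale score, reverse-coding one negatively worded item. The item names and values are invented for the example.

    # Illustrative sketch: four Likert items (1 = strongly disagree ... 5 =
    # strongly agree) combined into one scale score. Item names are invented.
    import pandas as pd

    items = pd.DataFrame({
        "item1": [5, 4, 2],
        "item2": [4, 4, 1],
        "item3_negative": [1, 2, 5],   # negatively worded item
        "item4": [5, 3, 2],
    })

    # Reverse-code the negatively worded item so higher always means agreement
    items["item3"] = 6 - items["item3_negative"]

    # Individual items are ordinal; the averaged scale score is often treated
    # as interval data for analysis.
    scale_score = items[["item1", "item2", "item3", "item4"]].mean(axis=1)
    print(scale_score)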

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 14 May 2024, from https://www.scribbr.co.uk/research-methods/surveys/



9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, survey as a formal research method was pioneered in the 1930–40s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.

Most questionnaire surveys tend to be self-administered postal surveys , where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, track and send non-respondents repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire . A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blanks type.
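One way to see how these response formats differ is to look at how each would be stored for analysis. The sketch below, with invented variable names and category labels, maps the five formats onto pandas data types.

    # Illustrative mapping of the five response formats onto analysis-ready
    # types. Column names and category labels are invented for the example.
    import pandas as pd

    df = pd.DataFrame({
        "death_penalty_justified": ["yes", "no", "yes"],            # dichotomous
        "industry": ["retail", "education", "healthcare"],          # nominal
        "education": ["high school", "bachelor", "postgraduate"],   # ordinal
        "satisfaction": [4, 2, 5],                                  # interval (Likert)
        "tenure_years": [3.5, 10.0, 0.5],                           # continuous (ratio)
    })

    df["death_penalty_justified"] = df["death_penalty_justified"] == "yes"  # boolean
    df["industry"] = df["industry"].astype("category")                      # unordered categories
    df["education"] = pd.Categorical(
        df["education"],
        categories=["high school", "bachelor", "postgraduate"],
        ordered=True,                                                       # ordered categories
    )
    print(df.dtypes)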

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable?: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner?: Negatively worded questions such as ‘Should your local government not raise taxes?’ tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages or also to dividend, rental, and other income, and whether you mean personal income, family income (including a spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms had the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled?: Double-barrelled questions ask about two things at once, and therefore can have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ‘Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general?: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book on a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what does that really mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others?’ or ‘Do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of detail than generality.

Is the question presumptuous?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes they would get only about $640,000 or so in the United States, and in many cases that amount is spread over a 20-year period—so their answers tend to be quite random, such as taking a tour around the world, buying a restaurant or bar, spending it on education, saving for retirement, helping parents or children, or having a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3’.
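In an online survey tool, such filter or contingency questions are usually implemented as skip logic. The minimal sketch below is only an illustration of the idea; the question ID and section names are invented.

    # Illustrative skip-logic sketch for a filter question: respondents who
    # answer "yes" to question 5 see Section 2, everyone else goes to Section 3.
    def next_section(answers):
        if answers.get("q5") == "yes":
            return "section_2"
        return "section_3"

    print(next_section({"q5": "yes"}))   # -> section_2
    print(next_section({"q5": "no"}))    # -> section_3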

Other golden rules. Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practise the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview , where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group . In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by a dominant personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly being used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
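Random digit dialling is normally handled by the CATI software itself. Purely to illustrate the idea, the sketch below keeps a plausible area code and exchange fixed and randomises the final digits so that unlisted numbers also have a chance of being selected; the prefix shown is made up.

    # Illustrative random digit dialling sketch: randomise the last four digits
    # of a known area code and exchange so unlisted numbers can be reached.
    import random

    def random_phone_number(area_code="617", exchange="555", rng=random):
        suffix = rng.randint(0, 9999)
        return f"({area_code}) {exchange}-{suffix:04d}"

    sample_frame = [random_phone_number() for _ in range(5)]
    print(sample_frame)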

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themselves, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without going into the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives: Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe his/her utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of independent variables.
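Harman's single-factor test is normally run in a dedicated statistics package. As a rough sketch of the underlying idea only, the Python snippet below loads all survey items into a single unrotated principal component solution and checks how much variance the first component explains; PCA is used here as an approximation, and the file name and 50% threshold are illustrative assumptions rather than fixed rules.

    # Rough sketch of the idea behind Harman's single-factor test: if one
    # unrotated factor/component explains most of the variance across all
    # survey items, common method bias may be a concern. PCA is used as an
    # approximation; the file name and threshold are illustrative.
    import pandas as pd
    from sklearn.decomposition import PCA

    items = pd.read_csv("survey_items.csv").dropna()   # one column per survey item
    pca = PCA().fit(items)

    first_share = pca.explained_variance_ratio_[0]
    print(f"Variance explained by the first component: {first_share:.1%}")

    if first_share > 0.5:   # common rule of thumb
        print("A single factor dominates - common method bias may be a concern.")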

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394.
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Survey Research Methods

  • Floyd J Fowler, Jr - Center for Survey Research, University of Massachusetts Boston, USA


“The text is clear, logical and straightforward. It provides extensive applied information, coupled with a good understanding of the rationale underlying survey construction.”

“[The book] is accessible enough for the masters students but theoretical enough for the doctoral students.” 

“Even students with very little background in research methods can follow this text easily.”

“The greatest strength [of this book] is the articulation of the concept of total survey design and how the different components affect the credibility of the survey.”

GREAT book for my students and me!!

Concise information on developing surveys for DNP projects.

Have used the book for 4 years and the graduate students do like it. Easy to read, short chapters that allow the professor to decide the topics requiring elaboration. Thank you

This book, despite having a total of 171 pages, is rich with so much details on survey research and survey research methods. The simplistic approach in laying out what many new research students consider complicated terms and topics, is a big plus in this book. Dr. Fowler covered what I truly consider a comprehensive list of areas needed to be covered for the students embarking on survey research for the first time. I, as an MA tutor and adviser will always refer students needing information on survey research methods to this book. One of the best characteristics of this book is having very concise sections for every concept needed in survey research methods.

Was a good resource for DBA students conducting research.

In a concrete way, this book provides a good overview of the essentials of survey research. It offers good discussions on sampling, implementing a sample design, questionnaire construction, interviewing (and training interviewers), data analysis, and so on.

NEW TO THIS EDITION:

  • Coverage of the latest developments in the field helps readers understand how to best use new and changing technology, including the Internet, smartphones, and interactive voice response systems (IVRs), singly or in combination, to collect high quality data.
  • New studies and publications from the past five years are integrated throughout the text.
  • Updated survey examples offer students and researchers exemplary models.

KEY FEATURES:

  • Compact, yet comprehensive coverage of survey research makes this book an ideal companion or beginning text.
  • Error in surveys is described using a clear and accessible approach.
  • A list of strengths and weaknesses for each of the different types of survey data collection, including the more recent Web-based approaches, is included.
  • Coverage of the expansion of cell phone use is included.
  • Up-to-date coverage of Web-based and online surveys, as well as the latest resources available to the beginning and expert researcher, prepares readers for practice.
  • In-depth discussions of non-response and sample size issues are included.


Ann R Coll Surg Engl. 2013 Jan; 95(1).

A quick guide to survey research

University of Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, UK

Questionnaires are a very useful survey tool that allow large populations to be assessed with relative ease. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

Medical research questionnaires or surveys are vital tools used to gather information on individual perspectives in a large cohort. Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1


Clear research goal

The first and most important step in designing a survey is to have a clear idea of what you are looking for. It will always be tempting to take a blanket approach and ask as many questions as possible in the hope of getting as much information as possible. This type of approach does not work as asking too many irrelevant or incoherent questions reduces the response rate 2 and therefore reduces the power of the study. This is especially important when surveying physicians as they often have a lower response rate than the rest of the population. 3 Instead, you must carefully consider the important data you will be using and work on a ‘need to know’ rather than a ‘would be nice to know’ model. 4

After considering the question you are trying to answer, deciding whom you are going to ask is the next step. With small populations, attempting to survey them all is manageable but as your population gets bigger, a sample must be taken. The size of this sample is more important than you might expect. After lost questionnaires, non-responders and improper answers are taken into account, this sample must still be big enough to be representative of the entire population. If it is not big enough, the power of your statistics will drop and you may not get any meaningful answers at all. It is for this reason that getting a statistician involved in your study early on is absolutely crucial. Data should not be collected until you know what you are going to do with them.
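
As a rough illustration of the arithmetic a statistician would run at this stage, the sketch below applies the standard sample-size formula for estimating a proportion at roughly 95% confidence, with a finite population correction. The population figure, 5% margin of error and 60% expected response rate are invented for the example; they are not taken from the article.

```python
import math

def sample_size_for_proportion(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Completed responses needed to estimate a proportion within the given
    margin of error at ~95% confidence; p=0.5 is the most conservative guess."""
    n_infinite = (z ** 2) * p * (1 - p) / margin_of_error ** 2          # infinite-population formula
    return math.ceil(n_infinite / (1 + (n_infinite - 1) / population))  # finite population correction

# Hypothetical survey of 2,000 clinicians with an expected 60% response rate
needed = sample_size_for_proportion(population=2000)
print(f"Completed questionnaires needed: {needed}")                # 323
print(f"Questionnaires to distribute: {math.ceil(needed / 0.6)}")  # 539
```

The point of the example is that non-response has to be built in from the start: the number of questionnaires you send out is the required completed sample inflated by the expected response rate.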

Directed questions

After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic (Table 1).

Table 1. Advantages and disadvantages of survey methods

Collected data are only useful if they convey information accurately and consistently about the topic in which you are interested. This is where a validated survey instrument comes into the questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. They can therefore be assumed to be accurate. 1 It may be possible to modify a previously validated instrument but you should seek specialist advice as this is likely to reduce its power. Examples of validated models are the Beck Hopelessness Scale 5 or the Addenbrooke’s Cognitive Examination. 6

The next step is choosing the type of question you are going to ask. The questionnaire should be designed to answer the question you want answered. Each question should be clear, concise and without bias. Normalising statements should be included and the language level targeted towards those at the lowest educational level in your cohort. 1 You should avoid open, double-barrelled questions and questions that include negative items or assign causality. 1 The questions you use may elicit either an open (free text answer) or closed response. Open responses are more flexible but require more time and effort to analyse, whereas closed responses require more initial input in order to exhaust all possible options but are easier to analyse and present.

Questionnaire

Two more aspects come into questionnaire design: aesthetics and question order. While this is not relevant to telephone or personal questionnaires, in self-administered surveys the aesthetics of the questionnaire are crucial. Having spent a large amount of time fine-tuning your questions, presenting them in such a way as to maximise response rates is pivotal to obtaining good results. Visual elements to think of include smooth, simple and symmetrical shapes, soft colours and repetition of visual elements. 7

Once you have attracted your subject’s attention and willingness with a well designed and attractive survey, the order in which you put your questions is critical. To do this you should focus on what you need to know: place easier, important questions at the beginning, group common themes in the middle and keep demographic questions near the end. The questions should be arranged in a logical order, with questions on the same topic grouped together and sensible sections if the survey is long enough to warrant them. Introductory and summary questions to mark the start and end of the survey are also helpful.

Pilot study

Once a completed survey has been compiled, it needs to be tested. The ideal next step should highlight spelling errors, ambiguous questions and anything else that impairs completion of the questionnaire. 8 A pilot study, in which you apply your work to a small sample of your target population in a controlled setting, may highlight areas in which work still needs to be done. Where possible, being present while the pilot is going on will allow a focus group-type atmosphere in which you can discuss aspects of the survey with those who are going to be filling it in. This step may seem non-essential but detecting previously unconsidered difficulties needs to happen as early as possible and it is important to use your participants’ time wisely as they are unlikely to give it again.

Distribution and collection

While it should be considered quite early on, we will now discuss routes of survey administration and ways to maximise results. Questionnaires can be self-administered electronically or by post, or administered by a researcher by telephone or in person. The advantages and disadvantages of each method are summarised in Table 1 . Telephone and personal surveys are very time and resource consuming whereas postal and electronic surveys suffer from low response rates and response bias. Your route should be chosen with care.

Methods for maximising response rates for self-administered surveys are listed in Table 2, taken from a Cochrane review. 2 The differences between methods of maximising responses to postal or e-surveys are considerable but common elements include keeping the questionnaire short and logical as well as including incentives.

Methods for improving response rates in postal and electronic questionnaires 2

  • Involve a statistician early on.
  • Run a pilot study to uncover problems.
  • Consider using a validated instrument.
  • Only ask what you ‘need to know’.
  • Consider guidelines on improving response rates.

The collected data will come in a number of forms depending on the method of collection. Data from telephone or personal interviews can be directly entered into a computer database whereas postal data can be entered at a later stage. Electronic questionnaires can allow responses to go directly into a computer database. Problems arise from errors in data entry and when questionnaires are returned with missing data fields. As mentioned earlier, it is essential to have a statistician involved from the beginning for help with data analysis. He or she will have helped to determine the sample size required to ensure your study has enough power. The statistician can also suggest tests of significance appropriate to your survey, such as Student’s t-test or the chi-square test.
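
As a hedged illustration of the kind of tests a statistician might suggest, the sketch below runs a chi-squared test on a hypothetical cross-tabulation of answers to a yes/no question and an independent-samples t-test on hypothetical satisfaction scores, using scipy.stats. All counts and scores are invented for the example.

```python
from scipy import stats

# Hypothetical cross-tabulation: rows are respondent groups, columns are yes/no answers
observed = [
    [45, 30],   # group A: yes, no
    [25, 50],   # group B: yes, no
]
chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# Hypothetical 0-100 satisfaction scores for two groups, compared with a t-test
group_a = [72, 65, 80, 75, 68, 71, 77]
group_b = [60, 58, 66, 62, 64, 59, 63]
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
```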

Conclusions

Survey research is a unique way of gathering information from a large cohort. Advantages of surveys include having a large population and therefore a greater statistical power, the ability to gather large amounts of information and having the availability of validated models. However, surveys are costly, there is sometimes discrepancy in recall accuracy and the validity of a survey depends on the response rate. Proper design is vital to enable analysis of results and pilot studies are critical to this process.


Chapter 9: Survey Research

Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research  is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents  in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.  Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1] . By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called  Literary Digest  conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest  before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies which has measured opinions of Canadian voters around federal elections since 1965.  Anyone can access the data and read about the results of the experiments in these studies.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in  Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States. In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.
  • The lifetime prevalence of a disorder is the percentage of people in the population that develop that disorder at any time in their lives.

Survey research: A quantitative approach in which variables are measured using self-reports from a sample of the population.

Respondents: Participants of a survey.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

A Comprehensive Guide to Survey Research Methodologies

For decades, researchers and businesses have used survey research to produce statistical data and explore ideas. The survey process is simple: ask questions and analyze the responses to make decisions. Data is what makes the difference between a valid and an invalid statement, and as the American statistician W. Edwards Deming said:

“Without data, you’re just another person with an opinion.” - W. Edwards Deming

In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey.

What is Survey Research

A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It’s an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts.

Brief History of Survey Research

Survey research may have its roots in the American and English “social surveys” conducted around the turn of the 20th century. The surveys were mainly conducted by researchers and reformers to document the extent of social issues such as poverty. (1) Despite being a relatively young field compared to many scientific domains, survey research has gone through three eras of development (2):

  • First era (1930–1960)
  • Second era (1960–1990)
  • Third era (1990 onwards)

Over the years, survey research adapted to the changing times and technologies. By exploiting the latest technologies, researchers can gain access to the right population from anywhere in the world, analyze the data like never before, and extract useful information.

Survey Research Methods & Types

Survey research can be classified in seven ways: by objective, concept-testing approach, data source, research method, deployment method, distribution channel, and frequency of deployment.


Surveys based on Objective

Exploratory Survey Research

Exploratory survey research is aimed at diving deeper into research subjects and finding out more about their context. It’s important for marketing or business strategy and the focus is to discover ideas and insights instead of gathering statistical data.

Generally, exploratory survey research is composed of open-ended questions that allow respondents to express their thoughts and perspectives. The final responses present information from various sources that can lead to fresh initiatives.

Predictive Survey Research

Predictive survey research is also called causal survey research. It’s preplanned, structured, and quantitative in nature. It’s often referred to as conclusive research as it tries to explain the cause-and-effect relationship between different variables. The objective is to understand which variables are causes and which are effects and the nature of the relationship between both variables.

Descriptive Survey Research

Descriptive survey research is largely observational and is ideal for gathering numeric data. Due to its quantitative nature, it’s often compared to exploratory survey research. The difference between the two is that descriptive research is structured and pre-planned.

The idea behind descriptive research is to describe the mindset and opinion of a particular group of people on a given subject. The questions are everyday multiple-choice questions, and respondents must choose from predefined categories. With predefined choices you don’t get unique insights; rather, you get statistically inferable data.

Survey Research Types based on Concept Testing

Monadic Concept Testing

Monadic testing is a survey research methodology in which the respondents are split into multiple groups and each group is asked questions about a separate concept in isolation. Generally, monadic surveys are hyper-focused on a particular concept and shorter in duration. The important thing in monadic surveys is to avoid getting off-topic or exhausting the respondents with too many questions.

Sequential Monadic Concept Testing

Another approach to monadic testing is sequential monadic testing. In sequential monadic surveys, groups of respondents are still surveyed in isolation. However, instead of surveying three separate groups on three different concepts, the researchers survey the same group of people on three distinct concepts one after another. In a sequential monadic survey, at least two topics are included (in random order), and the same questions are asked for each concept to eliminate bias.

Based on Data Source

Primary Data

Data obtained directly from the source or target population is referred to as primary survey data. When it comes to primary data collection, researchers usually devise a set of questions and invite people with knowledge of the subject to respond. The main sources of primary data are interviews, questionnaires, surveys, and observation methods.

 Compared to secondary data, primary data is gathered from first-hand sources and is more reliable. However, the process of primary data collection is both costly and time-consuming.

Secondary Data

Survey research is generally used to collect first-hand information from respondents. However, surveys can also be designed to collect and process secondary data, which is obtained from third-party sources or from primary research conducted in the past.

 This type of data is usually generic, readily available, and cheaper than primary data collection. Some common sources of secondary data are books, data collected from older surveys, online data, and data from government archives. Beware that you might compromise the validity of your findings if you end up with irrelevant or inflated data.

Based on Research Method

Quantitative Research

Quantitative research is a popular research methodology that is used to collect numeric data in a systematic investigation. It’s frequently used in research contexts where statistical data is required, such as sciences or social sciences. Quantitative research methods include polls, systematic observations, and face-to-face interviews.

Qualitative Research

Qualitative research is a research methodology where you collect non-numeric data from research participants. In this context, the participants are not restricted to a specific system and provide open-ended information. Some common qualitative research methods include focus groups, one-on-one interviews, observations, and case studies.

Based on Deployment Method

Online Surveys

With technology advancing rapidly, the most popular method of survey research is an online survey. With the internet, you can not only reach a broader audience but also design and customize a survey and deploy it from anywhere. Online surveys have outperformed offline survey methods as they are less expensive and allow researchers to easily collect and analyze data from a large sample.

Paper or Print Surveys

As the name suggests, paper or print surveys use the traditional paper and pencil approach to collect data. Before the invention of computers, paper surveys were the survey method of choice.

Though many would assume that surveys are no longer conducted on paper, it's still a reliable method of collecting information during field research and data collection. However, unlike online surveys, paper surveys are expensive and require extra human resources.

Telephonic Surveys

Telephonic surveys are conducted over the telephone, with a researcher asking a series of questions to the respondent on the other end. Contacting respondents over the telephone requires less effort and fewer human resources, and it is less expensive.

What makes telephonic surveys debatable is that people are often reluctant to give information over a phone call. Additionally, the success of such surveys depends largely on whether people are willing to invest their time in answering questions over the phone.

One-on-one Surveys

One-on-one surveys, also known as face-to-face surveys, are interviews in which the researcher and respondent interact directly. This direct interaction introduces the human factor into the survey.

Face-to-face interviews are useful when the researcher wants to discuss something personal with the respondent. The response rates in such surveys are always higher as the interview is being conducted in person. However, these surveys are quite expensive, and their success depends on the knowledge and experience of the researcher.

Based on Distribution

Email Surveys

The easiest and most common way of conducting online surveys is sending out an email. Sending out surveys via email has a higher response rate as your target audience already knows about your brand and is likely to engage.

Buy Survey Responses

Purchasing survey responses also yields higher response rates, as the respondents have signed up to take surveys. Businesses often purchase survey samples to conduct extensive research. Here, the target audience is often pre-screened to check whether they are qualified to take part in the research.

Embedding Survey on a Website

Embedding surveys on a website is another excellent way to collect information. It allows your website visitors to take part in a survey without ever leaving the website and can be done while a person is entering or exiting the website.

Post the Survey on Social Media

Social media is an excellent medium to reach a broad range of audiences. You can publish your survey as a link on social media, and people who follow the brand can take part and answer questions.

Based on Frequency of Deployment

Cross-sectional Studies

Cross-sectional studies are administered to a small sample from a large population within a short period of time. This provides researchers a peek into what the respondents are thinking at a given time. The surveys are usually short, precise, and specific to a particular situation.

Longitudinal Surveys

Longitudinal surveys are an extension of cross-sectional studies where researchers make an observation and collect data over extended periods of time. This type of survey can be further divided into three types:

  • Trend surveys are employed to allow researchers to understand the change in the thought process of the respondents over some time.
  • Panel surveys are administered to the same group of people over multiple years. These are usually expensive, and researchers must stick to their panel to gather unbiased opinions.
  • In cohort surveys, researchers identify a specific category of people and regularly survey them. Unlike panel surveys, the same people do not need to take part over the years, but each individual must fall into the researcher’s primary interest category.

Retrospective Survey

Retrospective surveys allow researchers to ask questions that gather data about the respondents’ past events and beliefs. Since retrospective surveys also cover data spanning years, they are similar to longitudinal surveys, except that retrospective surveys are shorter and less expensive.

Why Should You Conduct Research Surveys?

“In God we trust. All others must bring data” - W. Edwards Deming

In the information age, survey research is of utmost importance and essential for understanding the opinion of your target population. Whether you’re launching a new product or conducting a social survey, surveys can be used to collect specific information from a defined set of respondents. The data collected via surveys can then be used by organizations to make informed decisions.

Furthermore, compared to other research methods, surveys are relatively inexpensive, even if you’re giving out incentives. Compared to older methods such as telephonic or paper surveys, online surveys cost less and yield more responses.

What makes surveys useful is that they describe the characteristics of a large population. With a larger sample size, you can rely on getting more accurate results. However, you also need honest and open answers for accurate results. Since surveys can be anonymous and the responses kept confidential, respondents are more likely to provide candid and accurate answers.

Common Uses of a Survey

Surveys are widely used in many sectors, but the most common uses of the survey research include:

  • Market research: surveying a potential market to understand customer needs, preferences, and market demand.
  • Customer satisfaction: finding out your customers’ opinions about your services, products, or company.
  • Social research: investigating the characteristics and experiences of various social groups.
  • Health research: collecting data about patients’ symptoms and treatments.
  • Politics: evaluating public opinion regarding policies and political parties.
  • Psychology: exploring personality traits, behaviors, and preferences.

6 Steps to Conduct Survey Research

An organization, person, or company conducts a survey when they need the information to make a decision but have insufficient data on hand. Following are six simple steps that can help you design a great survey.

Step 1: Objective of the Survey

The first step in survey research is defining an objective. The objective helps you define your target population and samples. The target population is the specific group of people you want to collect data from and since it’s rarely possible to survey the entire population, we target a specific sample from it. Defining a survey objective also benefits your respondents by helping them understand the reason behind the survey.

Step 2: Number of Questions

The number of questions or the size of the survey depends on the survey objective. However, it’s important to ensure that there are no redundant queries and that the questions are in a logical order. Rephrased and repeated questions in a survey are almost as frustrating as in real life. For a higher completion rate, keep the questionnaire small so that the respondents stay engaged to the very end. The ideal length of an interview is less than 15 minutes. (2)

Step 3: Language and Voice of Questions

While designing a survey, you may feel compelled to use fancy language. However, remember that difficult language is associated with higher survey dropout rates. You need to speak to the respondent in a clear, concise, and neutral manner, and ask simple questions. If your survey respondents are bilingual, then adding an option to translate your questions into another language can also prove beneficial.

Step 4: Type of Questions

In a survey, you can include any type of question, both closed-ended and open-ended. However, opt for the question types that are the easiest for respondents to understand and that offer the most value. For example, compared to open-ended questions, people prefer to answer closed-ended questions such as MCQs (multiple-choice questions) and NPS (net promoter score) questions.
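
NPS, for example, is simple arithmetic over a single 0-10 rating question: the share of promoters (ratings of 9-10) minus the share of detractors (ratings of 0-6). A minimal sketch with invented ratings:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical answers to "How likely are you to recommend us?"
ratings = [10, 9, 8, 7, 9, 6, 10, 5, 9, 8]
print(f"NPS: {net_promoter_score(ratings):.0f}")  # 30
```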

Step 5: User Experience

Designing a great survey is about more than just questions. A lot of researchers underestimate the importance of user experience and how it affects their response and completion rates. An inconsistent, difficult-to-navigate survey with technical errors and poor color choice is unappealing for the respondents. Make sure that your survey is easy to navigate for everyone and if you’re using rating scales, they remain consistent throughout the research study.

Additionally, don’t forget to design a good survey experience for both mobile and desktop users. According to the Pew Research Center, nearly half of smartphone users access the internet mainly from their mobile phones, and 14 percent of American adults are smartphone-only internet users. (3)

Step 6: Survey Logic

Last but not least, logic is another critical aspect of the survey design. If the survey logic is flawed, respondents may not continue in the right direction. Make sure to test the logic to ensure that selecting one answer leads to the next logical question instead of a series of unrelated queries.
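
One way to check branching before launch is to model the logic as data and walk every possible path, so that dead ends or unrelated jumps show up immediately. The sketch below is not any particular survey tool’s API, just a hypothetical illustration with made-up question IDs.

```python
# Hypothetical skip logic: for each question, map every answer to the next question (None ends the survey)
survey_logic = {
    "q1_uses_product": {"yes": "q2_satisfaction", "no": "q4_reasons_not_using"},
    "q2_satisfaction": {"satisfied": "q3_recommend", "unsatisfied": "q3_recommend"},
    "q3_recommend": {"yes": None, "no": None},
    "q4_reasons_not_using": {"price": None, "features": None, "other": None},
}

def walk_paths(question, path=()):
    """Print every route a respondent could take through the survey."""
    if question is None:
        print(" -> ".join(path))
        return
    for answer, next_question in survey_logic[question].items():
        walk_paths(next_question, path + (f"{question}[{answer}]",))

walk_paths("q1_uses_product")
```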

How to Effectively Use Survey Research with Starlight Analytics

Designing and conducting a survey is almost as much science as it is art. To craft great survey research, you need technical skills, a grasp of the psychological elements at play, and a broad understanding of marketing.

The ultimate goal of the survey is to ask the right questions in the right manner to acquire the right results.

Bringing a new product to the market is a long process and requires a lot of research and analysis. In your journey to gather information or ideas for your business, Starlight Analytics can be an excellent guide. Starlight Analytics' product concept testing helps you measure your product's market demand and refine product features and benefits so you can launch with confidence. The process starts with custom research to design the survey according to your needs, execute the survey, and deliver the key insights on time.

  • 1. Survey research in the United States: Roots and emergence, 1890–1960. https://searchworks.stanford.edu/view/10733873
  • 2. How to create a survey questionnaire that gets great responses. https://luc.id/knowledgehub/how-to-create-a-survey-questionnaire-that-gets-great-responses/
  • 3. Internet/broadband fact sheet. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/



Survey Research Methods

Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English that discuss methodological issues related to survey research.

Three types of papers are in-scope:

  • Papers discussing methodological issues in substantive research using survey data
  • Papers that discuss methodological issues that are more or less independent of the specific field of substantive research
  • Replication studies of studies published in SRM

Topics of particular interest include survey design, sample design, question and questionnaire design, data collection, nonresponse, data capture, data processing, coding and editing, measurement errors, imputation, weighting and survey data analysis methods.

Survey Research Methods focuses on data collection methods for large-scale surveys, but papers on special populations are welcome. We do not publish purely mathematical papers or simulations. We also normally do not publish papers based on student samples or on small experiments. Papers on larger experiments, or papers based on data from non-probability samples, are welcome provided that the authors take steps to assess external validity or discuss limitations concerning population estimates in detail.

Survey Research Methods publishes replication studies of articles published in the journal. Replication studies will undergo the same reviewing process as original journal articles. In case of publication, replication studies will be marked as such.

Survey Research Methods is indexed by the Social Sciences Citation Index (SSCI), Scopus, and the Directory of Open Access Journals (DOAJ). The journal has signed the Transparency and Openness Promotion Guidelines of the Center for Open Science; see the Author Guidelines for SRM’s adoption of these guidelines.


Survey Methods: Definition, Types, and Examples

busayo.longe

Data gathering is a flexible and exciting process, especially when you use surveys. There are different survey methods that allow you to collect relevant information from research participants or from the people who have access to the required data.

For instance, you can conduct an interview or simply observe the research participants as they interact in their environment. Typically, your research context, the type of systematic investigation, and many other factors should determine the survey method you adopt. 

In this article, we will discuss different types of survey methods and also show you how to conduct online surveys using Formplus . 

What is a Survey Method?

A survey method is a process, tool, or technique that you can use to gather information in research by asking questions to a predefined group of people. Typically, it facilitates the exchange of information between the research participants and the person or organization carrying out the research. 

Survey methods can be qualitative or quantitative depending on the type of research and the type of data you want to gather in the end. For instance, you can choose to create and administer an online survey with Formplus that allows you to collect statistical information from respondents. For qualitative research, you can conduct a face-to-face interview or organize a focus group. 

Types of Survey Methods  

Interviews

An interview is a survey research method where the researcher facilitates some sort of conversation with the research participant to gather useful information about the research subject. This conversation can happen physically as a face-to-face interview or virtually as a telephone interview or via video and audio-conferencing platforms.  

During an interview, the researcher has the opportunity to connect personally with the research subject and establish some sort of relationship. This connection allows the interviewer (researcher) to gain more insight into the information provided by the research participant in the course of the conversation. 

An interview can be structured, semi-structured, or unstructured. In a structured interview, the researcher strictly adheres to a sequence of premeditated questions throughout the conversation. This is also known as a standardized interview or a researcher-administered interview, and it often results in quantitative research findings.

In a semi-structured interview , the researcher has a set of premeditated interview questions but he or she can veer off the existing interview sequence to get more answers and gain more clarity from the interviewee. The semi-structured interview method is flexible and allows the researcher to work outside the scope of the sequence while maintaining the basic interview framework. 

Just as the name suggests, an unstructured interview is one that doesn’t restrict the researcher to a set of premeditated questions or the interview sequence. Here, the researcher is allowed to leverage his or her knowledge and to creatively weave questions to help him or her to get useful information from the participant. This is why it is also called an in-depth interview. 

Advantages of Interviews

  • Interviews, especially face-to-face interviews, allow you to capture non-verbal nuances that provide more context around the interviewee’s responses. For instance, the interviewee may act in a certain way that suggests he or she is uncomfortable with a particular question.
  • Interviews are more flexible as a method of survey research. With semi-structured and unstructured interviews, you can adjust the conversation sequence to suit prevailing circumstances. 

Disadvantages of Interviews

  • It is expensive and time-consuming; especially when you have to interview large numbers of people. 
  • It is subject to researcher bias which can affect the quality of data gathered at the end of the process. 

Surveys

A survey is a data collection tool that lists a set of structured questions to which respondents provide answers based on their knowledge and experiences. It is a standard data-gathering process that allows you to access information from a predefined group of respondents during research.

In a survey, you would find different types of questions based on the research context and the type of information you want to have access to. Many surveys combine open-ended and closed-ended questions including rating scales and semantic scales. This means you can use them for qualitative and quantitative research. 

Surveys come in two major formats: paper forms and online forms. A paper survey is a more traditional method of data collection, and it can easily result in loss of data. Paper forms are also cumbersome to organize and process.

Online surveys, on the other hand, are usually created via data collection platforms like Formplus. These platforms have form builders where you can create your survey from scratch using different form fields and features. On Formplus, you can also find different online survey templates for data collection. 

One of the many advantages of online surveys is accuracy, as they typically record a lower margin of error than paper surveys. Also, online surveys are easier to administer as you can share them with respondents via email or social media channels.

Advantages of Surveys

  • Surveys allow you to gather data from a large sample size or research population. This helps to improve the validity and accuracy of your research findings. 
  • The cost of creating and administering a survey is usually lower compared to other research methods. 
  • It is a convenient method of data collection for the researcher and the respondents. 

Disadvantages of Surveys

  • The validity of the research data can be affected by survey response bias. 
  • High survey dropout rates can also affect the number of responses received in your survey. 

Observation  

Just as the name suggests, observation is a method of gathering data by paying attention to the actions and behaviors of the research subjects as they interact in their environment. This qualitative research method allows you to get first-hand information about the research subjects in line with the aims and objectives of your systematic investigation. 

If you have tried out this survey method, then you must have come across one or more of the four types of observation in research: the complete observer method, the observer-as-participant method, the participant-as-observer method, and the complete participant method.

In the complete observer method, the researcher is entirely detached from the research environment. This means that the participants are completely unaware of the researcher’s presence, and this allows them to act naturally as they interact with their environment. You can think of it as a remote observation.

The observer as participant method requires the researcher to be involved in the research environment; albeit with limited interaction with the participants. The participants typically know the researcher and may also be familiar with the goals and objectives of the systematic investigation. 

A good example of this is when a researcher visits a school to understand how students interact with each other during extra-curricular activities. In this case, the students may be fully aware of the research process; although they may not interact with the researcher. 

In the participant as observer method , the researcher has some kind of relationship with the participants and interacts with them often as he or she carries out the investigation. For instance, when an anthropologist goes to a host community for research, s/he builds a relationship with members of the community while the host community is aware of the research. 

In the complete participant method , the researcher interacts with the research participants and is also an active member of the research environment. However, the research participants remain unaware of the research process; they do not know that a researcher is among them and they also do not know that they are being observed. 

Advantages of Observation Method

  • It is one of the simplest methods of data collection as it does not require specialization or expertise in many cases.
  • The observation method helps you to formulate a valid research hypothesis for your systematic investigation. You can test this hypothesis via experimental research to get valid findings.  

Disadvantages of Observation Method

  • When the participants know they are being observed, they may act differently and this can affect the accuracy of the information you gather. 
  • Because observation is done in the participant’s natural environment, that is, an environment without controls, the findings from this process are not very reliable.

Focus Groups

A focus group is an open conversation with a small number of carefully-selected participants who provide useful information for research. The selected participants are a subset of your research population and should represent the different groups in the larger population. 

In a focus group, the researcher can act as the moderator who sets the tone of the conversation and guides the discourse. The moderator ensures that the overall conversations are in line with the aims and objectives of the research and he or she also reduces the bias in the discussions.  

If you are conducting qualitative research with a large and diverse research population, then adopting focus groups is an effective and cost-efficient method of data collection. Typically, your focus group should have 6-10 participants (usually 8), including the moderator.

Based on the focus of your research, you can adopt one or more types of focus groups for your investigation. Common types of focus groups you should consider include:

  • Dual-moderator focus group
  • Mini focus group
  • Client-involvement focus group
  • Virtual or online focus groups. 

Advantages of Focus Groups

  • Focus groups are open-ended and this allows you to explore a variety of opinions and ideas that may come up during the discussions. 
  • Focus groups help you to discover other salient points that you may not have considered in the systematic investigation. 

Disadvantages of Focus Groups

  • Participants may not communicate their true thoughts and experiences and this affects the validity of the entire process.
  • Participants can be easily influenced by the opinions of other people in the group. 

How to Conduct Online Surveys with Formplus  

As we’ve mentioned earlier, an online survey allows you to gather data from a large pool of respondents easily and conveniently. Unlike paper forms, online surveys are secure and it is also easy to distribute them and collate responses for valid research data. 

Formplus allows you to create your online survey in a few easy steps. It also has several features that make data collection and organization easy for you. Let’s show you how to conduct online surveys with Formplus.

  • Create your Formplus account here. If you already have a Formplus account, you can log in at www.formpl.us.

  • On your Formplus dashboard, you will find several buttons and options. Click on the “create new form” button located at the top left corner of the dashboard to begin. 
  • Now, you should have access to the form builder. The Formplus builder allows you to add different form fields to your survey by simply dragging and dropping them from the builder’s fields section into your form. You will find the fields section at the left corner of the form builder. 

  • First, add the title of your form by clicking on the title tab just at the top of the builder. 
  • Next, click on the different fields you’d like to have in your survey. You can add rating fields, number fields, and more than 30 other form fields as you like. 

  • After adding the fields to your survey, it is time to populate them with questions and answer-options as needed. Click on the small pencil icon located beside each field to access their unique editing tab. 
  • Apart from adding questions and answer-options to the fields, you can also make preferred fields to be compulsory or make them read-only. 
  • Save all the changes you have made to the form by clicking on the save icon at the top right corner. This gives you immediate access to the builder’s customization section. 

  • Formplus has numerous customization options that you can use to change the outlook and layout of your online survey without any knowledge of CSS. You can change your form font, add your organization’s logo, and also add preferred background images among other things. 

  • To start collecting responses in your online survey, you can use any of the Formplus multiple form sharing options. Go to the builder’s “share” section, choose your preferred option, and follow the prompt provided. If you have a WordPress website, you can add the survey to it via the WordPress plugin. 

  • Don’t forget to track your form responses and other important data in our form analytics dashboard. 

Advantages of Online Surveys

  • Online surveys are a faster method of data collection: they help you save time by accelerating your data collection process. Typically, respondents spend about a third of the time needed to complete a paper survey when they take the same survey online. This means you will record almost-immediate responses from participants.
  • Apart from saving time, you also get to save cost. For instance, you do not have to spend money on printing paper surveys and transporting them to respondents. Also, many online survey tools have a free subscription plan and also support affordable premium subscription plans. You can check out Formplus pricing here . 
  • Online surveys reduce the margin of error in data collection. This allows you to gather more accurate information and arrive at objective research findings. 
  • It is flexible and allows participants to respond as is convenient. For instance, Formplus has a save and resume later feature that allows respondents to save an incomplete survey and finish up when it is more convenient. The order of the questions in an online survey can also be changed. 
  • Online surveys make the data collection process easy and seamless. By leveraging the internet for distribution, you can gather information from thousands of people in your target population. 
  • Because online surveys are very convenient, they result in increased response rates, as participants can complete the survey at their own pace, at a time of their choosing, and according to their preferences.

Conclusion  

When conducting research, many survey methods can help you to gather, analyze and process data effectively. In this article, we have looked at some of these methods in detail including interviews, focus groups, and the observation approach. 

As we’ve shown you, each of these survey methods has its strengths and weaknesses. This is why your choice should be informed by the type of research you are conducting and what you want to get out of it. While some of these methods work best for qualitative research, others are better suited for quantitative data collection . 


Research survey guide: Benefits, examples, and templates


Kimberly Houston

You likely already know the essential role market research plays in helping your business or nonprofit succeed. Gathering insights about your audience, especially their pain points, desires, behavior, and motivations, is necessary for making critical business decisions.

Market research can inform just about everything your organization does, including how to improve your current offerings, which new products and services to launch and new markets to enter, and what brand and marketing strategies are best to help you stand out in the marketplace.

So, how do you gather this critical intel?

Conducting a research survey is one of the most effective ways to gather feedback in real time from a large audience, then transform that feedback into robust, usable data. And these days, it’s easier than ever to do with the availability of multiple survey tools for research.

In this article, we’ll explain what survey research is, discuss its benefits, highlight different types of survey research, review how to design a research survey, and share how you can use Jotform to create a survey that suits your needs.

The importance of survey research

Survey research makes it possible to understand audience preferences on everything from social policy to product development to employee initiatives, giving you the ability to gauge your audience’s opinions and make important improvements to your operation.

Let’s look at a few examples of how survey research is used:

  • Education: Survey research allows schools and universities to gauge students’ and parents’ opinions on the educational institution overall. It can also gather specific feedback on courses, student engagement and satisfaction, academic programs, the post-graduation student experience, and more.
  • Business: Companies can conduct market research to assess what current and potential customers think about their products, services, pricing, and customer service. It can also provide data on how customers feel about a company’s marketing strategies, messaging, position in the marketplace relative to similar brands, and so on. The data collected will help inform decisions about areas for improvement in all these categories.
  • Public opinion polling: Survey research is useful for measuring and understanding public opinions and attitudes on elections, voter preferences, policy initiatives, the economy, recent news events, healthcare and social services in a community, and more. Public organizations can use the data they collect to improve community services, allocate tax dollars, apprise candidates of issues of concern to their voters, and so on.
  • Healthcare: Providers can collect data about medical symptoms and conditions, risk factors, treatment preferences, and the overall patient experience.
  • Human resources: HR teams can use survey research to measure employee satisfaction, evaluate employee engagement, identify obstacles to productivity, and determine areas for improvement across the organization.

A few other uses of survey research include customer satisfaction surveys , social research, community surveys , industry-based surveys (such as the retail or hotel industry), and nonprofit surveys that gather feedback on volunteer satisfaction or community impact.

A brief overview of survey research

Survey research is a method of gathering data from a group of people that represents your target market or audience. The goal is to develop insights about your products, services, marketing campaigns, and other factors that are crucial to the success of your organization.

Types of survey research can include face-to-face interviews, telephone surveys, product research surveys, brand surveys, online surveys, and panel surveys. We’ll cover more on that shortly.

2 types of survey research: Quantitative and qualitative

The type of data you collect will depend on your research objectives. In most cases, combining both quantitative and qualitative approaches to data collection will yield the best results.

Quantitative research involves collecting numerical data that you can count or measure, such as website conversion rates, business revenue, or demographic data like respondent age or education level. Analyzing quantitative data can help identify trends and patterns in large populations.

Qualitative research involves gathering non-numerical information by asking open-ended questions . This approach provides data that gives more context about behaviors, preferences, and opinions.

Surveys can be qualitative, quantitative, or a combination of the two, depending on the questions you ask. For example, quantitative surveys include closed-ended questions, such as multiple-choice, rating scale, or yes/no questions. Qualitative surveys, on the other hand, feature open-ended questions that allow respondents to answer at length and in their own words.
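To make the distinction concrete, here is a minimal Python sketch; the question wording, scale, and responses are invented for illustration. A closed-ended rating item produces numbers you can summarize immediately, while an open-ended item produces text that has to be read and coded before it can be counted.

```python
from statistics import mean

# Hypothetical responses to one closed-ended (quantitative) item:
# "How satisfied are you with our service?" on a 1-5 scale.
satisfaction_scores = [5, 4, 4, 3, 5, 2, 4]

# Hypothetical responses to one open-ended (qualitative) item:
# "What could we improve?"
open_ended_answers = [
    "Faster checkout would help.",
    "Support replies took too long.",
    "Nothing, the service was great.",
]

# Quantitative data can be summarized numerically right away.
print(f"Mean satisfaction: {mean(satisfaction_scores):.2f} out of 5")
print(f"Share rating 4 or 5: {sum(s >= 4 for s in satisfaction_scores) / len(satisfaction_scores):.0%}")

# Qualitative data needs reading and coding before it can be counted.
for answer in open_ended_answers:
    print("To code:", answer)
```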

Types of survey research

The survey research method you choose will depend on a few factors, including the budget, time frame, survey distribution channels, and type of data you’re hoping to gather.

Here are six survey research types to consider.

Face-to-face interviews

This research method involves using a researcher to gather information directly from respondents. Though conducting face-to-face interviews can be more costly than other methods, it’s considered an effective approach that yields high-quality data, as trained researchers are able to gather detailed insights and more in-depth, nuanced responses.

Telephone surveys

In this type of survey research, known as a computer-assisted telephonic interview (CATI), respondents answer questions that a researcher poses over the telephone. This approach can yield rich data because respondents often feel more comfortable answering questions over the phone than they do in person. On the other hand, many people don’t answer calls from unknown numbers, and if they do, they might answer questions quickly just to get off the phone.

Product research surveys

A product research survey gathers attitudes and opinions about a product from a section of the target audience. The goal is to identify product strengths and weaknesses and discover what current and potential customers like, don’t like, or feel needs improvement. The data these surveys gather helps inform decisions about current product improvements and new product development.

Brand surveys

Brand surveys measure how the target audience feels about your brand overall. Data gathered can include insights on how your audience views your brand in relation to its competitors, what your brand strengths and weaknesses are, which words people associate with your brand, and how customers experience your brand, among other things. Brand survey types include brand awareness surveys , brand perception surveys , brand identity surveys , and brand loyalty surveys.

Online surveys

Online surveys are by far one of the most popular approaches to survey research. This data collection method consists of questionnaires and surveys that anyone with an internet connection can access, including those using mobile devices. Online surveys allow you to gather, process, and analyze a large amount of data in a less labor-intensive and more cost-effective way than other methods. And today’s online survey tools make the process easier than ever.

Panel surveys

Panel surveys involve enlisting respondents who have previously agreed to answer questions and are on a list that a research company maintains. Such companies typically vet respondents who represent a sample of your target audience.

The benefits of survey research

Survey research delivers many benefits:

  • Reliable data gathered from surveys helps inform key business decisions and strategy.
  • Quantitative data is useful for persuading stakeholders to make important product improvements and other essential enhancements they might not make otherwise.
  • Quantitative and qualitative data can help improve the end-user experience and increase customer satisfaction.
  • Dependable quantitative and qualitative data prevents organizations from making misguided decisions about product development, marketing campaigns, employee retention, and other areas of operation.

The benefits of online survey research

Conducting research using online surveys has additional benefits:

  • Online surveys allow for anonymity, which encourages honest feedback. This leads to context-rich, candid answers from respondents.
  • Online surveys can reach large audiences, which improves research validity.
  • Online surveys are generally cost-effective and easy to conduct.
  • Online surveys are particularly effective for capturing both quantitative and qualitative data on your target audience through a mix of question types.
  • Online surveys are easier for respondents to access and complete than other survey methods.
  • Survey software often makes it easy to analyze results from an online survey instantly.

5 steps to designing a research survey

To get started with designing your survey, follow these basic steps:

1. Clarify your research goals

Before you conduct research, it’s important to define your goals and objectives. Start by asking yourself a few key questions, such as

  • What specific data do you hope to collect?
  • How many respondents will you need?
  • How will you use the data?
  • What are you hoping to learn?
  • Which types of questions will elicit the data you’re looking for?

One approach to defining goals is to apply the SMART goals framework to your survey and survey questions. SMART stands for specific, measurable, achievable, relevant, and time-bound.

For example, let’s say you’re conducting a brand survey. Here are some examples of SMART goals you might set:

  • Specific: Conduct a survey to determine whether your brand is appealing to a specific target audience.
  • Measurable: You need to receive 300 completed surveys to get enough insights to achieve this goal.
  • Achievable: You've conducted similar brand surveys in the past, so you know you can achieve this goal.
  • Relevant: You want to gather opinions from the target audience before launching a new marketing campaign.
  • Time-bound: To initiate the marketing campaign by the fourth quarter, you know you need to conduct the survey and analyze results in the third quarter.

2. Choose appropriate questions

Be sure to select questions that are relevant to your research goals and objectives. You may want to choose a mix of question types that will allow you to gather both quantitative and qualitative data. You can collect qualitative data by using open-ended questions or gather quantitative data by posing yes/no, multiple-choice, rating scale, and other such questions.

A good approach is to begin your survey with a general question to determine which respondents use your product or service. You can also use yes/no questions to divide respondents into groups of those who have purchased and those who haven’t, then ask different series of questions based on this information.
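As a rough sketch of that screening-and-branching idea, the example below routes respondents to different follow-up questions based on a yes/no screener. The screener wording, follow-up questions, and branches are invented; survey platforms typically provide this as built-in conditional logic, so this standalone code only illustrates the routing concept.

```python
# A minimal sketch of yes/no screening with branched follow-ups.
# Question wording and branches are hypothetical.
SCREENER = "Have you purchased from us in the past 12 months?"

FOLLOW_UPS = {
    "yes": [
        "How satisfied were you with your most recent purchase? (1-5)",
        "What, if anything, would you improve?",            # open-ended
    ],
    "no": [
        "What has kept you from purchasing so far?",         # open-ended
        "Which of these factors matters most: price, selection, or trust?",
    ],
}

def build_question_path(screener_answer: str) -> list[str]:
    """Return the ordered list of questions a respondent should see."""
    branch = FOLLOW_UPS.get(screener_answer.strip().lower(), [])
    return [SCREENER] + branch

if __name__ == "__main__":
    for answer in ("yes", "no"):
        print(f"\nScreener answer: {answer}")
        for question in build_question_path(answer):
            print(" -", question)
```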

3. Distribute the survey

Once you’ve designed your survey and selected relevant questions and question types, it’s time to get your survey out to your target audience. You can distribute your online survey by sending it through email, sharing it on social media, or embedding it in a website.

4. Analyze survey data and draw conclusions

Once you’ve gathered enough responses (according to the goal you set in step one), you can organize your data according to demographics, psychographics, behavior, and other categories relevant to your research goals. You can use statistics for quantitative data, and you can analyze qualitative data by themes, content, and narrative.

Then compare your analysis to your original assumptions and use the data to identify how to improve customer service, product development, marketing initiatives, or other aspects of your business.
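As one possible illustration of this step, the sketch below uses pandas with invented responses and hypothetical column names (age_group, purchased, satisfaction, comments). It summarizes a quantitative rating by demographic segment and gathers the open-ended comments so a researcher can code them into themes.

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent.
responses = pd.DataFrame({
    "age_group":    ["18-24", "25-34", "25-34", "35-44", "18-24", "35-44"],
    "purchased":    ["yes", "yes", "no", "yes", "no", "no"],
    "satisfaction": [4, 5, 3, 4, 2, 3],          # 1-5 rating (quantitative)
    "comments":     ["fast shipping", "great support", "pricey",
                     "easy to use", "confusing site", "limited stock"],
})

# Quantitative analysis: summarize the rating by a demographic category.
print(responses.groupby("age_group")["satisfaction"].agg(["mean", "count"]))

# Simple segmentation: compare purchasers vs. non-purchasers.
print(responses.groupby("purchased")["satisfaction"].mean())

# Qualitative analysis starts by grouping the open-ended comments
# so they can be read and coded into themes by a researcher.
for group, subset in responses.groupby("purchased"):
    print(f"\nComments from '{group}' respondents:")
    for comment in subset["comments"]:
        print(" -", comment)
```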

5. Create a report of your survey results

You may want to present your findings in a written report. You’ll use this report to share the survey results along with your analysis and recommendations. You may also choose to include details about the methodology you used, an explanation of the types of questions you presented, information on the survey audience, and the response rate.

Research survey examples and templates

Surveys are an ideal way to evaluate customer, employee, or public opinion about your brand or organization. Whether you need to conduct a market research survey , understand customer demographics, gauge employee satisfaction, or conduct a poll, a survey template (a ready-made sample survey with built-in questions) is the perfect solution.

Jotform offers over 1,000 survey templates for different industries and use cases, including surveys for HR , marketing , product development, customer satisfaction, education , healthcare , and more.

Gain reliable insights with Jotform’s suite of tools

Jotform provides several powerful tools and resources for creating and sending surveys, as well as analyzing survey results.

You can design a survey quickly and easily using one of our 1,000-plus survey templates. Use our free drag-and-drop online survey maker to customize a template or build your own survey from scratch. Add your own questions, set up conditional logic, and share your survey online to start collecting responses instantly.

Jotform surveys are super versatile, and you can share them through multiple channels. Embed the survey on your website, share it via QR code, or email it directly to survey participants. You can even use the conditional logic feature to create targeted, dynamic surveys that change based on how respondents answer. You can send automated thank-you messages or follow-up emails to respondents about your surveys.

Jotform integrates with multiple analytics and CRM tools, allowing users to easily track and analyze their survey data. Plus, Jotform Report Builder turns form submissions into stunning visual reports with a few clicks. Generate bar graphs, pie charts, and submission grids automatically, all without coding. Then share, embed, or print your reports. Jotform Form Analytics can also help you analyze data, learn from customer behavior, and increase conversion rates.

Then there’s Jotform’s Popup Form Maker , which allows you to create and publish a lightbox form that pops up on your site automatically. You can use this feature to add a popup contact form, signup form, or feedback survey to your website easily.

And finally, Jotform’s kiosk mode feature turns your smartphone or tablet into a mobile survey station, so you can gather form submissions, payments, e-signatures, and more at conferences and trade shows. Your form will refresh automatically after every submission, allowing you to collect online form responses safely and securely.





Measuring partisanship in Europe: How online survey questions compare with phone polls


The majority of Pew Research Center’s international survey work is conducted either through face-to-face or telephone interviews. As the survey landscape changes internationally, we are exploring the possibility of using self-administered online polls in certain countries, as we do in the United States .

If we move our surveys online, we may need to make other changes so that data from new projects is comparable to data from older ones. For instance, online surveys do not use an interviewer, an important difference from our traditional in-person and phone surveys. Those differences could bias our results. Bridging the gap between interviewing formats raises many questions, so we embedded some experiments into recent surveys to get some answers.

In the sections below, we walk through an experiment where we tested online surveys against phone surveys, drawing on data from France, Germany and the United Kingdom. We compare the three ways we asked about partisanship on the web with a traditional measurement obtained over the phone, concluding with what we learned.


How we’ve asked about partisanship in past telephone surveys

Our annual phone polls ask people in France, Germany and the UK an open-ended question about their partisan affiliation. In the UK, for example, a phone interviewer would ask a respondent, “Which political party do you feel closest to?” then wait for a response and record the answer.

We compared three different online survey methods to see which one would most closely replicate our phone results.

How we asked about partisanship in our new online survey experiments

Open-ended questions

The first method we tested asked respondents about their partisan affiliation in an open-ended format. We gave people a textbox and asked them to enter the name of the party they feel closest to. One benefit of this method is that people can offer relatively new parties or alliances that we have yet to include in our lists and they won’t feel limited by the options listed, even if an “other, please specify” box is included.

But open-ends can be messy. They require a researcher to comb through them, standardizing spelling and making choices, such as whether saying things like “the right” or the name of a leader should be counted together. In the UK, a respondent may say Tories, Conservatives, the Conservative Party, Rishi Sunak or other variations on these answers – and researchers have to determine what they meant and code it accordingly. This takes time and, in the case of France and Germany, some foreign language knowledge. Open-ended questions also tend to have higher nonresponse rates than closed-ended questions.
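As an illustration of what that standardization step involves, here is a toy Python sketch; the mapping and the free-text answers are invented and are not Pew Research Center's actual codebook. Anything the mapping doesn't recognize (such as "the right") is flagged for a human coder.

```python
# Illustrative only: a toy version of coding open-ended UK party answers
# into standardized categories.
RAW_TO_PARTY = {
    "tories": "Conservative Party",
    "conservatives": "Conservative Party",
    "the conservative party": "Conservative Party",
    "rishi sunak": "Conservative Party",
    "labour": "Labour Party",
    "lib dems": "Liberal Democrats",
    "snp": "Scottish National Party",
}

def code_open_end(raw_answer: str) -> str:
    """Standardize a free-text answer; flag anything a human must review."""
    cleaned = raw_answer.strip().lower().rstrip(".!")
    return RAW_TO_PARTY.get(cleaned, "NEEDS MANUAL REVIEW")

answers = ["Tories", "the Conservative party", "Labour", "the right", "SNP"]
for raw in answers:
    print(f"{raw!r:30} -> {code_open_end(raw)}")
```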

Closed-ended questions

We also asked about partisan affiliation using two different close-ended survey question formats. Each format listed the parties in a different order:

  • Alphabetical order: Alphabetical lists present a clear ordering principle for respondents, making it easy to know where to look for their preferred party. But there are some complications with this approach, such as the fact that many parties are known by multiple names or abbreviations. In France, President Emmanuel Macron’s party is currently called Renaissance, but was formerly known as En Marche or LREM – all of which would appear in different places in an alphabetical list. Also, people who feel some allegiance to multiple parties might choose a party that appears earlier in the list, simply because they saw it first. For example, if a Scot were reading the list and saw the Labour Party first, they might select this, despite feeling closer to the Scottish Labour Party.
  • Random order: We also tested randomly ordering the parties. While this might make it harder for respondents to find their party in the list, it also might make them more likely to read all response options in detail.
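Purely as an illustration of the two orderings above, the short Python sketch below uses an abbreviated, invented party list to produce an alphabetical list and a per-respondent randomized list. Seeding the shuffle by a respondent ID is one plausible way to keep each respondent's order stable across page loads; it is not necessarily how any particular survey platform implements randomization.

```python
import random

# Illustrative, abbreviated party list.
PARTIES = ["Conservative Party", "Green Party", "Labour Party",
           "Liberal Democrats", "Scottish National Party"]

def alphabetical_order(options: list[str]) -> list[str]:
    return sorted(options)

def randomized_order(options: list[str], respondent_id: int) -> list[str]:
    # Seeding by respondent ID keeps each person's order stable while
    # still varying the order across respondents.
    rng = random.Random(respondent_id)
    shuffled = options[:]
    rng.shuffle(shuffled)
    return shuffled

print("Alphabetical:", alphabetical_order(PARTIES))
print("Respondent 1:", randomized_order(PARTIES, respondent_id=1))
print("Respondent 2:", randomized_order(PARTIES, respondent_id=2))
```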

In the sections below, we compare these three methods with our phone trend question, which we fielded while we were experimenting with the web panels.

What we found

In all three countries, the closed-ended web formats we tested – both the alphabetical list and the randomized list – were more similar to our traditional phone trends than the open-ended question.

United Kingdom

A table showing how partisan identification in the UK is affected by the way the question is administered on web panels.

In the UK, both the alphabetical and the randomized closed-ended web formats produced results most similar to our traditional phone trends. For example, in both closed-ended formats and on the phone, around a quarter say no party represents them; only 8% say this when given an open-ended box online.

Estimates for many of the largest parties are also quite comparable across question formats – though the share who say the Conservative Party most represents them in the random list (16%) is somewhat lower than in the alphabetical list (23%). The alphabetical results tend to be closer to what we find in our phone polling.

France

A table showing how partisan identification in France is affected by the way the question is administered on web panels.

In France, too, the closed-ended web formats appear to mirror our traditional phone trends more closely. This is most evident when we look at the share who say no party represents them.

Also, in the open-ended format, the share saying “other” is quite a bit higher than with the other methods. Some of these volunteered answers included NUPES – the name of a left-wing coalition that includes France Insoumise, Europe Ecology, the Socialist Party and the Communist Party – and right-wing Reconquête! Neither of these were included in the closed-ended lists.

The alphabetical format appears to yield results more similar to traditional phone polling than the randomly ordered list.

Germany

Findings are similar in Germany. The open-ended format produces a lower share of people who say no party represents them than either the closed-ended web formats or our traditional phone polling.

The alphabetical list also appears to be most similar to our traditional phone polling.

A table showing how partisan identification in Germany is affected by the way the question is administered on web panels.

The connection between ideology and partisanship

A table showing how ideology and partisan identification relate across multiple methods of asking party ID on web and phone.

We also examined whether ideology and partisan identification are related in similar ways across the various question formats. For example, we know that in our UK phone polls, 45% of Britons on the ideological left describe themselves as Labour Party supporters. Our goal was to find a measure on the web panels that replicates this relationship.

After examining the relationship between ideology and partisan identification across the three web question styles and how each relates to the phone survey, it appears that the alphabetical format compares most favorably with the phone.
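To show the kind of comparison involved, here is a sketch with invented microdata (not Pew Research Center data and not its analysis code): it computes the share of left-leaning respondents who identify with Labour under each mode, which is the conditional relationship being compared across question formats.

```python
import pandas as pd

# Invented microdata for illustration only. Each row is one respondent,
# with the survey mode, self-placed ideology, and reported party ID.
df = pd.DataFrame({
    "mode":     ["phone", "phone", "phone", "web_alpha", "web_alpha",
                 "web_alpha", "web_random", "web_random", "web_random"],
    "ideology": ["left", "left", "right", "left", "left",
                 "right", "left", "right", "right"],
    "party":    ["Labour", "Green", "Conservative", "Labour", "Labour",
                 "Conservative", "Green", "Conservative", "None"],
})

# Share of left-leaning respondents identifying with Labour, by mode.
left = df[df["ideology"] == "left"]
labour_share_by_mode = (
    left.assign(is_labour=left["party"] == "Labour")
        .groupby("mode")["is_labour"]
        .mean()
)
print(labour_share_by_mode.round(2))
```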

In the UK, identical shares of the left identify as Labour supporters in the alphabetical list and on the phone.

Still, there are some large differences between the two question formats. This is the case with right-leaning Britons: On the phone, 37% identify with the Conservative Party, while all web versions differ from this by at least 10 percentage points.

The alphabetical list format performs better than the random or open-ended questions, but no method consistently matches the phone results.

Conclusion: How should we ask about partisanship on online surveys going forward?

These results suggest that an alphabetical list is the best way to replicate our existing phone trends in online surveys.

Still, more experimentation may be warranted. For example, we could potentially address some of the difficulties presented by the open-ended question with a more interactive textbox that could search and present a pre-coded list to respondents. This would leave a fully blank space for text entry only for people who could not find their preferred answer on the existing list.

This method might provide some of the benefits presented by an open-end – such as the ability to mention new political parties or alliances – but without the effort required for the research team to hand code and analyze all of the answers that respondents volunteered.


  • Open access
  • Published: 14 May 2024

Developing a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in Medical Assistance in Dying (MAiD): a mixed method modified e-Delphi study

Jocelyn Schroeder, Barbara Pesut, Lise Olsen, Nelly D. Oelke & Helen Sharp

BMC Nursing, volume 23, Article number: 326 (2024)


Medical Assistance in Dying (MAiD) was legalized in Canada in 2016. Canada's legislation is the first to permit Nurse Practitioners (NPs) to serve as independent MAiD assessors and providers. Registered Nurses (RNs) also have important roles in MAiD that include MAiD care coordination; client and family teaching and support; MAiD procedural quality; healthcare provider and public education; and bereavement care for family. Nurses have a right under the law to conscientious objection to participating in MAiD. Therefore, it is essential to prepare nurses in their entry-level education for the practice implications and moral complexities inherent in this practice. Knowing what nursing students think about MAiD is a critical first step. The purpose of this study, therefore, was to develop a survey to measure nursing students' knowledge, attitudes and beliefs, influences, and willingness to be involved in MAiD in the Canadian context.

The design was a mixed-method, modified e-Delphi method that entailed item generation from the literature, item refinement through a 2 round survey of an expert faculty panel, and item validation through a cognitive focus group interview with nursing students. The settings were a University located in an urban area and a College located in a rural area in Western Canada.

During phase 1, a 56-item survey was developed from existing literature that included demographic items and items designed to measure experience with death and dying (including MAiD), education and preparation, attitudes and beliefs, influences on those beliefs, and anticipated future involvement. During phase 2, an expert faculty panel reviewed, modified, and prioritized the items yielding 51 items. During phase 3, a sample of nursing students further evaluated and modified the language in the survey to aid readability and comprehension. The final survey consists of 45 items including 4 case studies.

Systematic evaluation of knowledge-to-date coupled with stakeholder perspectives supports robust survey design. This study yielded a survey to assess nursing students’ attitudes toward MAiD in a Canadian context.

The survey is appropriate for use in education and research to measure knowledge and attitudes about MAiD among nurse trainees and can be a helpful step in preparing nursing students for entry-level practice.


Medical Assistance in Dying (MAiD) is permitted under an amendment to Canada’s Criminal Code which was passed in 2016 [ 1 ]. MAiD is defined in the legislation as both self-administered and clinician-administered medication for the purpose of causing death. In the 2016 Bill C-14 legislation one of the eligibility criteria was that an applicant for MAiD must have a reasonably foreseeable natural death although this term was not defined. It was left to the clinical judgement of MAiD assessors and providers to determine the time frame that constitutes reasonably foreseeable [ 2 ]. However, in 2021 under Bill C-7, the eligibility criteria for MAiD were changed to allow individuals with irreversible medical conditions, declining health, and suffering, but whose natural death was not reasonably foreseeable, to receive MAiD [ 3 ]. This population of MAiD applicants are referred to as Track 2 MAiD (those whose natural death is foreseeable are referred to as Track 1). Track 2 applicants are subject to additional safeguards under the 2021 C-7 legislation.

Three additional proposed changes to the legislation have been extensively studied by Canadian Expert Panels (Council of Canadian Academies [CCA]) [ 4 , 5 , 6 ]. First, under the legislation that defines Track 2, individuals with mental disease as their sole underlying medical condition may apply for MAiD, but implementation of this practice is embargoed until March 2027 [ 4 ]. Second, there is consideration of allowing MAiD to be implemented through advanced consent. This would make it possible for persons living with dementia to receive MAiD after they have lost the capacity to consent to the procedure [ 5 ]. Third, there is consideration of extending MAiD to mature minors. A mature minor is defined as “a person under the age of majority…and who has the capacity to understand and appreciate the nature and consequences of a decision” ([ 6 ] p. 5). In summary, since the legalization of MAiD in 2016, the eligibility criteria and safeguards have evolved significantly, with consequent implications for nurses and nursing care. Further, the number of Canadians who access MAiD has increased steadily since 2016 [ 7 ], and these increases are expected to continue in the foreseeable future.

Nurses have been integral to MAiD care in the Canadian context. While other countries such as Belgium and the Netherlands also permit euthanasia, Canada is the first country to allow Nurse Practitioners (Registered Nurses with additional preparation typically achieved at the graduate level) to act independently as assessors and providers of MAiD [ 1 ]. Although the role of Registered Nurses (RNs) in MAiD is not defined in federal legislation, it has been addressed at the provincial/territorial-level with variability in scope of practice by region [ 8 , 9 ]. For example, there are differences with respect to the obligation of the nurse to provide information to patients about MAiD, and to the degree that nurses are expected to ensure that patient eligibility criteria and safeguards are met prior to their participation [ 10 ]. Studies conducted in the Canadian context indicate that RNs perform essential roles in MAiD care coordination; client and family teaching and support; MAiD procedural quality; healthcare provider and public education; and bereavement care for family [ 9 , 11 ]. Nurse practitioners and RNs are integral to a robust MAiD care system in Canada and hence need to be well-prepared for their role [ 12 ].

Previous studies have found that end of life care, and MAiD specifically, raise complex moral and ethical issues for nurses [ 13 , 14 , 15 , 16 ]. The knowledge, attitudes, and beliefs of nurses are important across practice settings because nurses have consistent, ongoing, and direct contact with patients who experience chronic or life-limiting health conditions. Canadian studies exploring nurses’ moral and ethical decision-making in relation to MAiD reveal that although some nurses are clear in their support for, or opposition to, MAiD, others are unclear on what they believe to be good and right [ 14 ]. Empirical findings suggest that nurses go through a period of moral sense-making that is often informed by their family, peers, and initial experiences with MAiD [ 17 , 18 ]. Canadian legislation and policy specify that nurses are not required to participate in MAiD and may recuse themselves as conscientious objectors with appropriate steps to ensure ongoing and safe care of patients [ 1 , 19 ]. However, with so many nurses having to reflect on and make sense of their moral position, it is essential that they are given adequate time and preparation to make an informed and thoughtful decision before they participate in a MAiD death [ 20 , 21 ].

It is well established that nursing students receive inconsistent exposure to end of life care issues [ 22 ] and little or no training related to MAiD [ 23 ]. Without such education and reflection time in pre-entry nursing preparation, nurses are at significant risk for moral harm. An important first step in providing this preparation is to be able to assess the knowledge, values, and beliefs of nursing students regarding MAiD and end of life care. As demand for MAiD increases, along with its complexities, it is critical to understand the knowledge, attitudes, and likelihood of engagement with MAiD among nursing students as a baseline upon which to build curriculum and as a means to track these variables over time.

Aim, design, and setting

The aim of this study was to develop a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in MAiD in the Canadian context. We sought to explore both their willingness to be involved in the registered nursing role and in the nurse practitioner role, should they choose to prepare themselves to that level of education. The design was a mixed-method, modified e-Delphi method that entailed item generation, item refinement through an expert faculty panel [ 24 , 25 , 26 ], and initial item validation through a cognitive focus group interview with nursing students [ 27 ]. The settings were a University located in an urban area and a College located in a rural area in Western Canada.

Participants

A panel of 10 faculty from the two nursing education programs were recruited for Phase 2 of the e-Delphi. To be included, faculty were required to have a minimum of three years of experience in nurse education, be employed as nursing faculty, and self-identify as having experience with MAiD. A convenience sample of 5 fourth-year nursing students were recruited to participate in Phase 3. Students had to be in good standing in the nursing program and be willing to share their experiences of the survey in an online group interview format.

The modified e-Delphi was conducted in 3 phases: Phase 1 entailed item generation through literature and existing survey review. Phase 2 entailed item refinement through a faculty expert panel review with focus on content validity, prioritization, and revision of item wording [ 25 ]. Phase 3 entailed an assessment of face validity through focus group-based cognitive interview with nursing students.

Phase I. Item generation through literature review

The goal of phase 1 was to develop a bank of survey items that would represent the variables of interest and which could be provided to expert faculty in Phase 2. Initial survey items were generated through a literature review of similar surveys designed to assess knowledge and attitudes toward MAiD/euthanasia in healthcare providers; Canadian empirical studies on nurses’ roles and/or experiences with MAiD; and legislative and expert panel documents that outlined proposed changes to the legislative eligibility criteria and safeguards. The literature review was conducted in three online databases: CINAHL, PsycINFO, and Medline. Key words for the search included nurses, nursing students, medical students, NPs, MAiD, euthanasia, assisted death, and end-of-life care. Only articles written in English were reviewed. Because the legalization and legislation of MAiD is new in many countries, studies more than twenty years old were excluded; no further exclusion criteria were set for country.

Items from surveys designed to measure similar variables in other health care providers and geographic contexts were placed in a table and similar items were collated and revised into a single item. Then key variables were identified from the empirical literature on nurses and MAiD in Canada and checked against the items derived from the surveys to ensure that each of the key variables were represented. For example, conscientious objection has figured prominently in the Canadian literature, but there were few items that assessed knowledge of conscientious objection in other surveys and so items were added [ 15 , 21 , 28 , 29 ]. Finally, four case studies were added to the survey to address the anticipated changes to the Canadian legislation. The case studies were based upon the inclusion of mature minors, advanced consent, and mental disorder as the sole underlying medical condition. The intention was to assess nurses’ beliefs and comfort with these potential legislative changes.

Phase 2. Item refinement through expert panel review

The goal of phase 2 was to refine and prioritize the proposed survey items identified in phase 1 using a modified e-Delphi approach to achieve consensus among an expert panel [ 26 ]. Items from phase 1 were presented to an expert faculty panel using a Qualtrics (Provo, UT) online survey. Panel members were asked to review each item to determine whether it should be included, excluded, or adapted for the survey. When adapted was selected, faculty experts were asked to provide rationale and suggestions for adaptation through the use of an open text box. Items that reached a level of 75% consensus for either inclusion or adaptation were retained [ 25 , 26 ]. New items were categorized and added, and a revised survey was presented to the panel of experts in round 2. Panel members were again asked to review items, including new items, to determine whether each should be included, excluded, or adapted for the survey. Round 2 of the modified e-Delphi approach also included an item prioritization activity, where participants were asked to rate the importance of each item on a 5-point Likert scale (low to high importance), which De Vaus [ 30 ] states is helpful for increasing the reliability of responses. Items that reached a 75% consensus on inclusion were then considered in relation to the importance they were given by the expert panel. Quantitative data were managed using SPSS (IBM Corp).
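The study managed its quantitative data in SPSS; purely to illustrate the consensus and prioritization arithmetic described above, the Python sketch below uses invented votes and importance ratings from a hypothetical 10-member panel and retains items on which at least 75% of panelists chose to include or adapt.

```python
import pandas as pd

# Invented panel data for illustration only (not the study's dataset).
# "Consensus" here means the share of panelists choosing include or adapt.
votes = pd.DataFrame({
    "item": ["Q1"] * 10 + ["Q2"] * 10,
    "vote": (["include"] * 7 + ["adapt"] * 1 + ["exclude"] * 2     # Q1: 80%
             + ["include"] * 5 + ["exclude"] * 5),                 # Q2: 50%
})

consensus = (
    votes.assign(keep=votes["vote"].isin(["include", "adapt"]))
         .groupby("item")["keep"].mean()
)
retained = consensus[consensus >= 0.75].index.tolist()
print(consensus)                  # Q1 0.8, Q2 0.5
print("Retained items:", retained)

# Round 2 also rated importance on a 1-5 Likert scale (low to high).
importance = pd.DataFrame({
    "item":   ["Q1"] * 10,
    "rating": [5, 4, 4, 5, 3, 4, 5, 4, 5, 4],
})
print(importance.groupby("item")["rating"].mean())  # mean importance per item
```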

Phase 3. Face validity through cognitive interviews with nursing students

The goal of phase 3 was to obtain initial face validity of the proposed survey using a sample of nursing student informants. More specifically, student participants were asked to discuss how items were interpreted, to identify confusing wording or other problematic construction of items, and to provide feedback about the survey as a whole, including readability and organization [ 31 , 32 , 33 ]. The focus group was held online and audio recorded. A semi-structured interview guide developed for this study focused on clarity, meaning, order, and wording of questions; emotions evoked by the questions; and overall survey cohesion and length, and it was used to obtain data (see Supplementary Material 2 for the interview guide). A prompt to “think aloud” was used to limit interviewer-imposed bias and encourage participants to describe their thoughts and response to a given item as they reviewed survey items [ 27 ]. Where needed, verbal probes such as “could you expand on that” were used to encourage participants to expand on their responses [ 27 ]. Student participants’ feedback was collated verbatim and presented to the research team, where potential survey modifications were negotiated and finalized among team members. Conventional content analysis [ 34 ] of focus group data was conducted to identify key themes that emerged through discussion with students. Themes were derived from the data by grouping common responses and then using those common responses to modify survey items.

Ten nursing faculty participated in the expert panel. Eight of the 10 faculty self-identified as female. No faculty panel members reported conscientious objector status, and ninety percent reported general agreement with MAiD, with one respondent indicating their view as “unsure.” Six of the 10 faculty experts had 16 years of experience or more working as a nurse educator.

Five nursing students participated in the cognitive interview focus group. The duration of the focus group was 2.5 h. All participants identified that they were born in Canada, self-identified as female (one preferred not to say) and reported having received some instruction about MAiD as part of their nursing curriculum. See Tables  1 and 2 for the demographic descriptors of the study sample. Study results will be reported in accordance with the study phases. See Fig.  1 for an overview of the results from each phase.

Fig. 1 Overview of survey development findings

Phase 1: survey item generation

Review of the literature identified that no existing survey was available for use with nursing students in the Canadian context. However, an analysis of themes across qualitative and quantitative studies of physicians, medical students, nurses, and nursing students provided sufficient data to develop a preliminary set of items suitable for adaptation to a population of nursing students.

Four major themes and factors that influence knowledge, attitudes, and beliefs about MAiD were evident from the literature: (i) endogenous or individual factors such as age, gender, personally held values, religion, religiosity, and/or spirituality [ 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ], (ii) experience with death and dying in personal and/or professional life [ 35 , 40 , 41 , 43 , 44 , 45 ], (iii) training including curricular instruction about clinical role, scope of practice, or the law [ 23 , 36 , 39 ], and (iv) exogenous or social factors such as the influence of key leaders, colleagues, friends and/or family, professional and licensure organizations, support within professional settings, and/or engagement in MAiD in an interdisciplinary team context [ 9 , 35 , 46 ].

Studies of nursing students also suggest overlap across these categories. For example, value for patient autonomy [ 23 ] and the moral complexity of decision-making [ 37 ] are important factors that contribute to attitudes about MAiD and may stem from a blend of personally held values coupled with curricular content, professional training and norms, and clinical exposure. For example, students report that participation in end of life care allows for personal growth, shifts in perception, and opportunities to build therapeutic relationships with their clients [ 44 , 47 , 48 ].

Preliminary items generated from the literature resulted in 56 questions from 11 published sources (see Table 3). These items were constructed across four main categories: (i) socio-demographic questions; (ii) end of life care questions; (iii) knowledge about MAiD; and (iv) comfort and willingness to participate in MAiD. Knowledge questions were refined to reflect current MAiD legislation, policies, and regulatory frameworks. The Falconer [ 39 ] and Freeman [ 45 ] studies were foundational sources for item selection. Additionally, four case studies were written to reflect the most recent anticipated changes to MAiD legislation; all used the same open-ended core questions to address respondents’ perspectives about the patient’s right to make the decision, comfort in assisting a physician or NP to administer MAiD in that scenario, and hypothesized comfort about serving as a primary provider if qualified as an NP in future. Response options for the survey were also constructed during this stage and included open text, categorical, yes/no, and Likert scales.

Phase 2: faculty expert panel review

Of the 56 items presented to the faculty panel, 54 questions reached 75% consensus. However, based upon the qualitative responses, 9 items were removed, largely because they were felt to be repetitive. Items that generated the most controversy were related to measuring religion and spirituality in the Canadian context, defining end of life care when there are no agreed-upon time frames (e.g., last days, months, or years), and predicting willingness to be involved in future events – thus predicting their future selves. Phase 2, round 1 resulted in an initial set of 47 items, which were then presented back to the faculty panel in round 2.

Of the 47 initial questions presented to the panel in round 2, 45 reached a level of consensus of 75% or greater, and 34 of these questions reached a level of 100% consensus [ 27 ] (all participants chose to include them without any adaptations). For each question, level of importance was determined based on a 5-point Likert scale (1 = very unimportant, 2 = somewhat unimportant, 3 = neutral, 4 = somewhat important, and 5 = very important). Figure 2 provides an overview of the level of importance assigned to each item.

Fig. 2 Ranking level of importance for survey items

After round 2, a careful analysis of participant comments and level of importance was completed by the research team. While the main method of survey item development came from participants’ response to the first round of Delphi consensus ratings, level of importance was used to assist in the decision of whether to keep or modify questions that created controversy, or that rated lower in the include/exclude/adapt portion of the Delphi. Survey items that rated low in level of importance included questions about future roles, sex and gender, and religion/spirituality. After deliberation by the research committee, these questions were retained in the survey based upon the importance of these variables in the scientific literature.

Of the 47 questions remaining from Phase 2, round 2, four were revised. In addition, the two questions that did not meet the 75% cut-off level for consensus were reviewed by the research team. The first question reviewed was “What is your comfort level with providing a MAiD death in the future if you were a qualified NP?” Based on a review of participant comments, it was decided to retain this question for the cognitive interviews with students in the final phase of testing. The second question asked about impacts on respondents’ views of MAiD and was changed from one item with 4 subcategories into 4 separate items, resulting in a final total of 51 items for phase 3. The revised survey was then brought forward to the cognitive interviews with student participants in Phase 3 (see Supplementary Material 1 for a complete description of item modification during round 2).

Phase 3. Outcomes of cognitive interview focus group

Of the 51 items reviewed by student participants, 29 were identified as clear with little or no discussion. Participant comments for the remaining 22 questions were noted and verified against the audio recording. Following content analysis of the comments, four key themes emerged through the student discussion: unclear or ambiguous wording; difficult to answer questions; need for additional response options; and emotional response evoked by questions. An example of unclear or ambiguous wording was a request for clarity in the use of the word “sufficient” in the context of assessing an item that read “My nursing education has provided sufficient content about the nursing role in MAiD.” “Sufficient” was viewed as subjective and “laden with…complexity that distracted me from the question.” The group recommended rewording the item to read “My nursing education has provided enough content for me to care for a patient considering or requesting MAiD.”

An example of difficulty answering questions related to limited knowledge of terms used in the legislation, such as safeguards, mature minor, eligibility criteria, and conscientious objection. Students were unclear about what these words meant relative to the legislation and indicated that this lack of clarity would hamper appropriate responses to the survey. To ensure that respondents are able to answer relevant questions, student participants recommended that the final survey include an explanation of key terms such as mature minor and conscientious objection and an overview of current legislation.

Response options were also a point of discussion. Participants noted a lack of distinction between response options of unsure and unable to say. Additionally, scaling of attitudes was noted as important, since perspectives about MAiD are dynamic and not dichotomous “agree or disagree” responses. Although the faculty expert panel recommended that the demographic variables of religion and/or spirituality remain as a single item, the student group stated a preference to have religion and spirituality appear as separate items. The student focus group also took issue with separate items for the variables of sex and gender, specifically that non-binary respondents might feel othered or “outed,” particularly when asked to identify their sex. These variables had been created based upon best practices in health research, but students did not feel they were appropriate in this context [ 49 ]. Finally, students agreed with the faculty expert panel in terms of the complexity of projecting their future involvement as a Nurse Practitioner. One participant stated: “I certainly had to like, whoa, whoa, whoa. Now let me finish this degree first, please.” Another stated, “I'm still imagining myself, my future career as an RN.”

Finally, student participants acknowledged the array of emotions that some of the items produced for them. For example, one student described positive feelings when interacting with the survey: “Brought me a little bit of feeling of joy. Like it reminded me that this is the last piece of independence that people grab on to.” Another participant described the freedom that the idea of an advance request gave her: “The advance request gives the most comfort for me, just with early onset Alzheimer’s and knowing what it can do.” But other participants described less positive feelings. For example, the mature minor case study yielded a comment: “This whole scenario just made my heart hurt with the idea of a child requesting that.”

Based on the data gathered from the cognitive interview focus group of nursing students, revisions were made to 11 closed-ended questions (see Table 4) and 3 items were excluded. In the four case studies, the open-ended question related to a respondent’s hypothesized actions in a future role as NP was removed. The final survey consists of 45 items including 4 case studies (see Supplementary Material 3).

The aim of this study was to develop and validate a survey that can be used to track the growth of knowledge about MAiD among nursing students over time, inform training programs about curricular needs, and evaluate attitudes and willingness to participate in MAiD at time-points during training or across nursing programs over time.

The faculty expert panel and student participants in the cognitive interview focus group identified a need to establish core knowledge of the terminology and legislative rules related to MAiD. For example, within the cognitive interview group of student participants, several acknowledged lack of clear understanding of specific terms such as “conscientious objector” and “safeguards.” Participants acknowledged discomfort with the uncertainty of not knowing and their inclination to look up these terms to assist with answering the questions. This survey can be administered to nursing or pre-nursing students at any phase of their training within a program or across training programs. However, in doing so it is important to acknowledge that their baseline knowledge of MAiD will vary. A response option of “not sure” is important and provides a means for respondents to convey uncertainty. If this survey is used to inform curricular needs, respondents should be given explicit instructions not to conduct online searches to inform their responses, but rather to provide an honest appraisal of their current knowledge and these instructions are included in the survey (see Supplementary Material 3 ).

Some provincial regulatory bodies have established core competencies for entry-level nurses that include MAiD. For example, the BC College of Nurses and Midwives (BCCNM) requires “knowledge about ethical, legal, and regulatory implications of medical assistance in dying (MAiD) when providing nursing care.” (10 p. 6) However, across Canada curricular content and coverage related to end of life care and MAiD is variable [ 23 ]. Given the dynamic nature of the legislation that includes portions of the law that are embargoed until 2024, it is important to ensure that respondents are guided by current and accurate information. As the law changes, nursing curricula, and public attitudes continue to evolve, inclusion of core knowledge and content is essential and relevant for investigators to be able to interpret the portions of the survey focused on attitudes and beliefs about MAiD. Content knowledge portions of the survey may need to be modified over time as legislation and training change and to meet the specific purposes of the investigator.

Given the sensitive nature of the topic, it is strongly recommended that surveys be conducted anonymously and that students be provided with an opportunity to discuss their responses to the survey. A majority of feedback from both the expert panel of faculty and from student participants related to the wording and inclusion of demographic variables, in particular religion, religiosity, gender identity, and sex assigned at birth. These and other demographic variables have the potential to be highly identifying in small samples. In any instance in which the survey could be expected to yield demographic group sizes less than 5, users should eliminate the demographic variables from the survey. For example, the profession of nursing is highly dominated by females with over 90% of nurses who identify as female [ 50 ]. Thus, a survey within a single class of students or even across classes in a single institution is likely to yield a small number of male respondents and/or respondents who report a difference between sex assigned at birth and gender identity. When variables that serve to identify respondents are included, respondents are less likely to complete or submit the survey, to obscure their responses so as not to be identifiable, or to be influenced by social desirability bias in their responses rather than to convey their attitudes accurately [ 51 ]. Further, small samples do not allow for conclusive analyses or interpretation of apparent group differences. Although these variables are often included in surveys, such demographics should be included only when anonymity can be sustained. In small and/or known samples, highly identifying variables should be omitted.

There are several limitations associated with the development of this survey. The expert panel was comprised of faculty who teach nursing students and are knowledgeable about MAiD and curricular content, however none identified as a conscientious objector to MAiD. Ideally, our expert panel would have included one or more conscientious objectors to MAiD to provide a broader perspective. Review by practitioners who participate in MAiD, those who are neutral or undecided, and practitioners who are conscientious objectors would ensure broad applicability of the survey. This study included one student cognitive interview focus group with 5 self-selected participants. All student participants had held discussions about end of life care with at least one patient, 4 of 5 participants had worked with a patient who requested MAiD, and one had been present for a MAiD death. It is not clear that these participants are representative of nursing students demographically or by experience with end of life care. It is possible that the students who elected to participate hold perspectives and reflections on patient care and MAiD that differ from students with little or no exposure to end of life care and/or MAiD. However, previous studies find that most nursing students have been involved with end of life care including meaningful discussions about patients’ preferences and care needs during their education [ 40 , 44 , 47 , 48 , 52 ]. Data collection with additional student focus groups with students early in their training and drawn from other training contexts would contribute to further validation of survey items.

Future studies should incorporate pilot testing with a small sample of nursing students, followed by a larger cross-program sample, to allow evaluation of the psychometric properties of specific items and further refinement of the survey tool. Consistent with literature about the importance of leadership in the context of MAiD [ 12 , 53 , 54 ], a study of faculty knowledge, beliefs, and attitudes toward MAiD would provide context for understanding student perspectives within and across programs. Additional research is also needed to understand the timing and content coverage of MAiD across Canadian nurse training programs’ curricula.

The implementation of MAiD is complex and requires understanding of the perspectives of multiple stakeholders. Within the field of nursing this includes clinical providers, educators, and students who will deliver clinical care. A survey to assess nursing students’ attitudes toward and willingness to participate in MAiD in the Canadian context is timely, due to the legislation enacted in 2016 and subsequent modifications to the law in 2021 with portions of the law to be enacted in 2027. Further development of this survey could be undertaken to allow for use in settings with practicing nurses or to allow longitudinal follow up with students as they enter practice. As the Canadian landscape changes, ongoing assessment of the perspectives and needs of health professionals and students in the health professions is needed to inform policy makers, leaders in practice, curricular needs, and to monitor changes in attitudes and practice patterns over time.

Availability of data and materials

The datasets used and/or analysed during the current study are not publicly available due to small sample sizes, but are available from the corresponding author on reasonable request.

Abbreviations

BCCNM: British Columbia College of Nurses and Midwives

MAiD: Medical assistance in dying

NP: Nurse practitioner

RN: Registered nurse

UBCO: University of British Columbia Okanagan

Nicol J, Tiedemann M. Legislative Summary: Bill C-14: An Act to amend the Criminal Code and to make related amendments to other Acts (medical assistance in dying). Available from: https://lop.parl.ca/staticfiles/PublicWebsite/Home/ResearchPublications/LegislativeSummaries/PDF/42-1/c14-e.pdf .

Downie J, Scallion K. Foreseeably unclear. The meaning of the “reasonably foreseeable” criterion for access to medical assistance in dying in Canada. Dalhousie Law J. 2018;41(1):23–57.

Nicol J, Tiedeman M. Legislative summary of Bill C-7: an act to amend the criminal code (medical assistance in dying). Ottawa: Government of Canada; 2021.

Council of Canadian Academies. The state of knowledge on medical assistance in dying where a mental disorder is the sole underlying medical condition. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2018/12/The-State-of-Knowledge-on-Medical-Assistance-in-Dying-Where-a-Mental-Disorder-is-the-Sole-Underlying-Medical-Condition.pdf .

Council of Canadian Academies. The state of knowledge on advance requests for medical assistance in dying. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2019/02/The-State-of-Knowledge-on-Advance-Requests-for-Medical-Assistance-in-Dying.pdf .

Council of Canadian Academies. The state of knowledge on medical assistance in dying for mature minors. Ottawa; 2018. Available from: https://cca-reports.ca/wp-content/uploads/2018/12/The-State-of-Knowledge-on-Medical-Assistance-in-Dying-for-Mature-Minors.pdf .

Health Canada. Third annual report on medical assistance in dying in Canada 2021. Ottawa; 2022. [cited 2023 Oct 23]. Available from: https://www.canada.ca/en/health-canada/services/medical-assistance-dying/annual-report-2021.html .

Banner D, Schiller CJ, Freeman S. Medical assistance in dying: a political issue for nurses and nursing in Canada. Nurs Philos. 2019;20(4): e12281.

Pesut B, Thorne S, Stager ML, Schiller CJ, Penney C, Hoffman C, et al. Medical assistance in dying: a review of Canadian nursing regulatory documents. Policy Polit Nurs Pract. 2019;20(3):113–30.

College of Registered Nurses of British Columbia. Scope of practice for registered nurses [Internet]. Vancouver; 2018. Available from: https://www.bccnm.ca/Documents/standards_practice/rn/RN_ScopeofPractice.pdf .

Pesut B, Thorne S, Schiller C, Greig M, Roussel J, Tishelman C. Constructing good nursing practice for medical assistance in dying in Canada: an interpretive descriptive study. Global Qual Nurs Res. 2020;7:2333393620938686. https://doi.org/10.1177/2333393620938686 .

Pesut B, Thorne S, Schiller CJ, Greig M, Roussel J. The rocks and hard places of MAiD: a qualitative study of nursing practice in the context of legislated assisted death. BMC Nurs. 2020;19:12. https://doi.org/10.1186/s12912-020-0404-5 .

Pesut B, Greig M, Thorne S, Burgess M, Storch JL, Tishelman C, et al. Nursing and euthanasia: a narrative review of the nursing ethics literature. Nurs Ethics. 2020;27(1):152–67.

Pesut B, Thorne S, Storch J, Chambaere K, Greig M, Burgess M. Riding an elephant: a qualitative study of nurses’ moral journeys in the context of Medical Assistance in Dying (MAiD). Journal Clin Nurs. 2020;29(19–20):3870–81.

Lamb C, Babenko-Mould Y, Evans M, Wong CA, Kirkwood KW. Conscientious objection and nurses: results of an interpretive phenomenological study. Nurs Ethics. 2018;26(5):1337–49.

Wright DK, Chan LS, Fishman JR, Macdonald ME. “Reflection and soul searching:” Negotiating nursing identity at the fault lines of palliative care and medical assistance in dying. Social Sci & Med. 2021;289: 114366.

Beuthin R, Bruce A, Scaia M. Medical assistance in dying (MAiD): Canadian nurses’ experiences. Nurs Forum. 2018;54(4):511–20.

Bruce A, Beuthin R. Medically assisted dying in Canada: "Beautiful Death" is transforming nurses' experiences of suffering. The Canadian J Nurs Res | Revue Canadienne de Recherche en Sci Infirmieres. 2020;52(4):268–77. https://doi.org/10.1177/0844562119856234 .

Canadian Nurses Association. Code of ethics for registered nurses. Ottawa; 2017. Available from: https://www.cna-aiic.ca/en/nursing/regulated-nursing-in-canada/nursing-ethics .

Canadian Nurses Association. National nursing framework on Medical Assistance in Dying in Canada. Ottawa: 2017. Available from: https://www.virtualhospice.ca/Assets/cna-national-nursing-framework-on-maidEng_20170216155827.pdf .

Pesut B, Thorne S, Greig M. Shades of gray: conscientious objection in medical assistance in dying. Nursing Inq. 2020;27(1): e12308.

Durojaiye A, Ryan R, Doody O. Student nurse education and preparation for palliative care: a scoping review. PLoS ONE. 2023. https://doi.org/10.1371/journal.pone.0286678 .

McMechan C, Bruce A, Beuthin R. Canadian nursing students’ experiences with medical assistance in dying | Les expériences d’étudiantes en sciences infirmières au regard de l’aide médicale à mourir. Qual Adv Nurs Educ - Avancées en Formation Infirmière. 2019;5(1). https://doi.org/10.17483/2368-6669.1179 .

Adler M, Ziglio E. Gazing into the oracle. The Delphi method and its application to social policy and public health. London: Jessica Kingsley Publishers; 1996

Keeney S, Hasson F, McKenna H. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J Adv Nurs. 2006;53(2):205–12.

Keeney S, Hasson F, McKenna H. The Delphi technique in nursing and health research. 1st ed. City: Wiley; 2011.

Willis GB. Cognitive interviewing: a tool for improving questionnaire design. 1st ed. Thousand Oaks, Calif: Sage; 2005. ISBN: 9780761928041

Lamb C, Evans M, Babenko-Mould Y, Wong CA, Kirkwood EW. Conscience, conscientious objection, and nursing: a concept analysis. Nurs Ethics. 2017;26(1):37–49.

Lamb C, Evans M, Babenko-Mould Y, Wong CA, Kirkwood K. Nurses’ use of conscientious objection and the implications of conscience. J Adv Nurs. 2018;75(3):594–602.

de Vaus D. Surveys in social research. 6th ed. Abingdon, Oxon: Routledge; 2014.

Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: A primer. Front Public Health. 2018;6:149. https://doi.org/10.3389/fpubh.2018.00149 .

Puchta C, Potter J. Focus group practice. 1st ed. London: Sage; 2004.

Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.

Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Adesina O, DeBellis A, Zannettino L. Third-year Australian nursing students’ attitudes, experiences, knowledge, and education concerning end-of-life care. Int J of Palliative Nurs. 2014;20(8):395–401.

Bator EX, Philpott B, Costa AP. This moral coil: a cross-sectional survey of Canadian medical student attitudes toward medical assistance in dying. BMC Med Ethics. 2017;18(1):58.

Beuthin R, Bruce A, Scaia M. Medical assistance in dying (MAiD): Canadian nurses’ experiences. Nurs Forum. 2018;53(4):511–20.

Brown J, Goodridge D, Thorpe L, Crizzle A. What is right for me, is not necessarily right for you: the endogenous factors influencing nonparticipation in medical assistance in dying. Qual Health Res. 2021;31(10):1786–1800.

Falconer J, Couture F, Demir KK, Lang M, Shefman Z, Woo M. Perceptions and intentions toward medical assistance in dying among Canadian medical students. BMC Med Ethics. 2019;20(1):22.

Green G, Reicher S, Herman M, Raspaolo A, Spero T, Blau A. Attitudes toward euthanasia—dual view: Nursing students and nurses. Death Stud. 2022;46(1):124–31.

Hosseinzadeh K, Rafiei H. Nursing student attitudes toward euthanasia: a cross-sectional study. Nurs Ethics. 2019;26(2):496–503.

Ozcelik H, Tekir O, Samancioglu S, Fadiloglu C, Ozkara E. Nursing students’ approaches toward euthanasia. Omega (Westport). 2014;69(1):93–103.

Canning SE, Drew C. Canadian nursing students’ understanding, and comfort levels related to medical assistance in dying. Qual Adv Nurs Educ - Avancées en Formation Infirmière. 2022;8(2). https://doi.org/10.17483/2368-6669.1326 .

Edo-Gual M, Tomás-Sábado J, Bardallo-Porras D, Monforte-Royo C. The impact of death and dying on nursing students: an explanatory model. J Clin Nurs. 2014;23(23–24):3501–12.

Freeman LA, Pfaff KA, Kopchek L, Liebman J. Investigating palliative care nurse attitudes towards medical assistance in dying: an exploratory cross-sectional study. J Adv Nurs. 2020;76(2):535–45.

Brown J, Goodridge D, Thorpe L, Crizzle A. “I am okay with it, but I am not going to do it:” the exogenous factors influencing non-participation in medical assistance in dying. Qual Health Res. 2021;31(12):2274–89.

Dimoula M, Kotronoulas G, Katsaragakis S, Christou M, Sgourou S, Patiraki E. Undergraduate nursing students’ knowledge about palliative care and attitudes towards end-of-life care: A three-cohort, cross-sectional survey. Nurs Educ Today. 2019;74:7–14.

Matchim Y, Raetong P. Thai nursing students’ experiences of caring for patients at the end of life: a phenomenological study. Int J Palliative Nurs. 2018;24(5):220–9.

Canadian Institute for Health Research. Sex and gender in health research [Internet]. Ottawa: CIHR; 2021 [cited 2023 Oct 23]. Available from: https://cihr-irsc.gc.ca/e/50833.html .

Canadian Nurses’ Association. Nursing statistics. Ottawa: CNA; 2023 [cited 2023 Oct 23]. Available from: https://www.cna-aiic.ca/en/nursing/regulated-nursing-in-canada/nursing-statistics .

Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013;47(4):2025–47. https://doi.org/10.1007/s11135-011-9640-9 .

Ferri P, Di Lorenzo R, Stifani S, Morotti E, Vagnini M, Jiménez Herrera MF, et al. Nursing student attitudes toward dying patient care: a European multicenter cross-sectional study. Acta Bio Medica Atenei Parmensis. 2021;92(S2): e2021018.

Beuthin R, Bruce A. Medical assistance in dying (MAiD): Ten things leaders need to know. Nurs Leadership. 2018;31(4):74–81.

Thiele T, Dunsford J. Nurse leaders’ role in medical assistance in dying: a relational ethics approach. Nurs Ethics. 2019;26(4):993–9.

Acknowledgements

We would like to acknowledge the faculty and students who generously contributed their time to this work.

JS received a student traineeship through the Principal Research Chairs program at the University of British Columbia Okanagan.

Author information

Authors and Affiliations

School of Health and Human Services, Selkirk College, Castlegar, BC, Canada

Jocelyn Schroeder & Barbara Pesut

School of Nursing, University of British Columbia Okanagan, Kelowna, BC, Canada

Barbara Pesut, Lise Olsen, Nelly D. Oelke & Helen Sharp

Contributions

JS made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and drafting and substantively revising the work. JS has approved the submitted version and agreed to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. BP made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and drafting and substantively revising the work. BP has approved the submitted version and agreed to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. LO made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and substantively revising the work. LO has approved the submitted version and agreed to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. NDO made substantial contributions to the conception of the work; data acquisition, analysis, and interpretation; and substantively revising the work. NDO has approved the submitted version and agreed to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. HS made substantial contributions to drafting and substantively revising the work. HS has approved the submitted version and agreed to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Authors’ information

JS conducted this study as part of their graduate requirements in the School of Nursing, University of British Columbia Okanagan.

Corresponding author

Correspondence to Barbara Pesut .

Ethics declarations

Ethics approval and consent to participate

The research was approved by the Selkirk College Research Ethics Board (REB) ID # 2021–011 and the University of British Columbia Behavioral Research Ethics Board ID # H21-01181.

All participants provided written and informed consent through approved consent processes. Research was conducted in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Schroeder, J., Pesut, B., Olsen, L. et al. Developing a survey to measure nursing students’ knowledge, attitudes and beliefs, influences, and willingness to be involved in Medical Assistance in Dying (MAiD): a mixed method modified e-Delphi study. BMC Nurs 23 , 326 (2024). https://doi.org/10.1186/s12912-024-01984-z

Received : 24 October 2023

Accepted : 28 April 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s12912-024-01984-z

  • Medical assistance in dying (MAiD)
  • End of life care
  • Student nurses
  • Nursing education

Methods for the National Diabetes Statistics Report

  • Learn about the methods used in the National Diabetes Statistics Report.

Data collection

The estimates (unless otherwise noted) were derived from various data systems of the Centers for Disease Control and Prevention (CDC), Indian Health Service (IHS), Agency for Healthcare Research and Quality (AHRQ), and U.S. Census Bureau and from published research studies. Estimated percentages and total number of people with diabetes and prediabetes were derived from the National Health and Nutrition Examination Survey (NHANES), National Health Interview Survey (NHIS), IHS National Data Warehouse (NDW), Behavioral Risk Factor Surveillance System (BRFSS), United States Diabetes Surveillance System (USDSS), and U.S. resident population estimates.

Diagnosed diabetes status was determined from self-reported information provided by survey respondents. Undiagnosed diabetes was determined by measured fasting plasma glucose or A1C levels among people without self-reported diagnosed diabetes. Numbers and rates for acute and long-term complications of diabetes were derived from the National Inpatient Sample (NIS) and National Emergency Department Sample (NEDS), as well as NHIS.
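
As a rough illustration of how self-reported and measured data can be combined into a single diabetes status variable, the sketch below applies the standard clinical cut points of fasting plasma glucose ≥ 126 mg/dL or A1C ≥ 6.5%; these thresholds are an assumption here, since the report's exact operational definitions are documented in its detailed methods rather than in this summary.

```python
from typing import Optional


def classify_diabetes(self_reported_dx: bool,
                      fpg_mg_dl: Optional[float],
                      a1c_pct: Optional[float]) -> str:
    """Classify a survey participant's diabetes status.

    Assumed cut points (standard clinical thresholds, not necessarily the
    report's exact operational definition): FPG >= 126 mg/dL or A1C >= 6.5%.
    """
    if self_reported_dx:
        return "diagnosed"
    meets_fpg = fpg_mg_dl is not None and fpg_mg_dl >= 126
    meets_a1c = a1c_pct is not None and a1c_pct >= 6.5
    if meets_fpg or meets_a1c:
        return "undiagnosed"
    return "no diabetes"


# Example: no self-reported diagnosis, but an A1C of 6.8% -> counted as undiagnosed.
print(classify_diabetes(False, fpg_mg_dl=110, a1c_pct=6.8))
```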

For some measures, estimates were not available for certain racial and ethnic subgroups due to small sample sizes.

Diabetes estimates

An alpha level of 0.05 was used when determining statistically significant differences between groups. Age-adjusted estimates were calculated among adults aged 18 years or older by the direct method to the 2000 U.S. Census standard population, using age groups 18–44, 45–64, and 65 years or older. Most estimates of diabetes in this report do not differentiate between type 1 and type 2 diabetes. However, as type 2 diabetes accounts for 90% to 95% of all diabetes cases, the data presented here are more likely to be characteristic of type 2 diabetes, except as noted.
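
Direct age adjustment amounts to a weighted average of age-group-specific rates using standard-population weights. The sketch below shows the arithmetic with placeholder rates and weights; the official 2000 U.S. Census standard weights are not reproduced here.

```python
# Minimal sketch of direct age adjustment, using illustrative (not official)
# standard-population weights for the three age groups named in the text.
crude_rates = {"18-44": 0.03, "45-64": 0.12, "65+": 0.22}   # hypothetical group-specific rates
std_weights = {"18-44": 0.51, "45-64": 0.33, "65+": 0.16}   # placeholder weights summing to 1

# The age-adjusted rate is the weighted average of the group-specific rates.
age_adjusted = sum(crude_rates[g] * std_weights[g] for g in crude_rates)
print(f"Age-adjusted rate: {age_adjusted:.3f}")
```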

More information about the data sources, methods, and references is available in Appendix B: Detailed Methods and Data Sources .

  • Open access
  • Published: 10 May 2024

Challenges and opportunities of English as the medium of instruction in diploma midwifery programs in Bangladesh: a mixed-methods study

  • Anna Williams 1 ,
  • Jennifer R. Stevens 2 ,
  • Rondi Anderson 3 &
  • Malin Bogren 4  

BMC Medical Education, volume 24, Article number: 523 (2024)

English is generally recognized as the international language of science, and most research on evidence-based medicine is produced in English. While Bangla is the dominant language in Bangladesh, public midwifery degree programs use English as the medium of instruction (EMI). This gives faculty and students access to the latest evidence-based midwifery content, which is essential for the later provision of quality care. Yet it also poses a barrier, as limited English mastery among students and faculty constrains both teaching and learning.

This mixed-methods study investigates the challenges and opportunities associated with the implementation of EMI in the context of diploma midwifery education in Bangladesh. Surveys were sent to principals at 38 public midwifery education institutions and to 14 English instructors at those schools. Additionally, ten key informant interviews were held with selected knowledgeable stakeholders, and key themes were identified.

Surveys found that English instructors are primarily guest lecturers, trained in general or business English, without a standardized curriculum or functional English language laboratories. Three themes were identified in the key informant interviews. First, in addition to students’ challenges with English, faculty mastery of English presented challenges as well. Second, language labs were poorly maintained, often non-functional, and lacked faculty. Third, an alternative education model, such as the English for Specific Purposes (ESP) curriculum,  has potential to strengthen English competencies within midwifery schools.

Conclusions

ESP, which teaches English for application in a specific discipline, is one option available in Bangladesh for midwifery education. Native language instruction and the middle ground of multilingualism are also useful options. Although a major undertaking, investing in an ESP model and in translation of technical midwifery content into relevant mother tongues may provide faster and more complete learning. In addition, a tiered system of requirements for English competencies, tied to higher levels of midwifery education, could build bridges for students to access global evidence-based care resources. Higher levels might emphasize English more heavily, while the diploma level would follow a multilingualism approach, teach using an ESP curriculum, and place complementary emphasis on the mother tongue.

Peer Review reports

Introduction

As the international language of science, English holds an important position in the education of healthcare professionals. Globally, most scientific papers are published in English. In many non-native English-speaking countries, English is used as the language of instruction in higher education [ 1 ]. The dominant status held by the English language in the sciences is largely considered to increase global access to scientific information by unifying the scientific community under a single lingua franca [ 2 ].

In Bangladesh, where the mother tongue is Bangla and midwifery diploma programs are taught in English, knowledge of English facilitates student and instructor access to global, continuously updated evidence-based practice guidance. This includes basic and scientific texts, media-based instructional materials (including on life-saving skills), professional journals, and proceedings of medical conferences. Many of these resources are available for free online, which can be particularly useful in healthcare settings that have not integrated evidence-based practice.

In addition to opportunity though, English instruction also creates several challenges. Weak student and faculty English competency may impede midwifery education quality in Bangladesh. Globally, literature has linked limited instructor competency in the language of instruction with reduced depth, nuance, and accuracy in conveying subject matter content [ 3 ]. This can lead to the perpetuation of patterns of care in misalignment with global evidence. In addition, students’ native language proficiency in their topic of study can decline when instruction is in English, limiting native language communication between colleagues on the job later on [ 4 , 5 ].

In this paper, we examine the current status of English language instruction within public diploma midwifery programs in Bangladesh. Midwifery students are not required to demonstrate a certain skill level in English to enter the program. However, they are provided with English classes in the program. Midwifery course materials are in English, while—for ease and practicality—teaching aids and verbal classroom instruction are provided in Bangla. Following graduation, midwifery students must pass a national licensing exam given in English to practice. Upon passing, some new midwives are deployed as public employees and are posted to sub-district health facilities where English is not used by either providers or clients. Others will seek employment as part of non-governmental organization (NGO) projects where English competency can be of value for interacting with global communities, and for participating in NGO-specific on-the-job learning opportunities. The mix of both challenge and opportunity in this context is complex.

Our analysis examines the reasons for the identified English competency gaps within midwifery programs, and potential solutions. We synthesize the findings and discuss solutions in the context of the global literature. Finally, we present a set of viable options for strengthening English competencies among midwifery faculty and students to enable better quality teaching and greater learning comprehension among students.

Study design

We employed a mixed-methods study design [ 6 ] in order to assess the quality of English instruction within education programs, and options for its improvement. Data collection consisted of two surveys of education institutes, a web-search of available English programs in Bangladesh, and key informant interviews. Both surveys followed a structured questionnaire with a combination of open- and closed-ended questions and were designed by the authors. One survey targeted the 38 institute principals and the other targeted 14 of the institutes’ 38 English instructors (those for whom contact information was shared). The web-search focused on generating a list of available English programs in Bangladesh that had viable models that could be tapped into to strengthen English competencies among midwifery faculty and students. Key informant interviews were unstructured and intended to substantiate and deepen understanding of the survey and web-search findings.

No minimum requirements exist for students’ English competencies upon entry into midwifery diploma programs. Students enter directly from higher secondary school (12th standard) and complete the midwifery program over a period of three years. Most students come from modest economic backgrounds having completed their primary and secondary education in Bangla. While English instruction is part of students’ secondary education, skill attainment is low, and assessment standards are not in place to ensure student mastery. To join the program, midwifery students are required to pass a multi-subject entrance exam that includes a component on English competency. However, as no minimum English standard must be met, the exam does not screen out potential midwifery students. Scoring, for instance, is not broken down by subject. This makes it possible to answer zero questions correctly in up to three of the subjects, including English, and pass the exam.

Processes/data collection

Prior to the first survey, principals were contacted by UNFPA with information about the survey and all provided verbal consent to participate. The survey of principals collected general information about the resources available for English instruction at the institutes. It was a nine-item questionnaire with a mix of Yes/No, multiple choice and write-in questions. Specific measures of interest were whether and how many English instructors the institutes had, instructors’ hiring criteria, whether institutes had language labs and if they were in use, and principals’ views on the need for English courses and their ideal mode of delivery (e.g., in-person, online, or a combination). This survey also gathered contact information of institute English instructors. These measures were chosen as they were intended to provide a high-level picture of institutes’ English resources such as faculty availability and qualifications, and use of language labs. To ensure questions were appropriately framed, a pilot test was conducted with two institute principals and small adjustments were subsequently made. Responses were shared via an electronic form sent by email and were used to inform the second survey as well as the key informant interviews. Of the 38 principals, 36 completed the survey.

The second survey, targeting English instructors, gathered information on instructors’ type of employment (e.g., institute faculty or adjunct lecturers); length of employment; student academic focus (e.g., midwifery or nursing); hours of English instruction provided as part of the midwifery diploma program; whether a standard English curriculum was used and if it was tailored toward the healthcare profession; use of digital content in teaching; education and experience in English teaching; and their views on student barriers to learning English. These measures were chosen to provide a basic criterion for assessing quality of English instruction, materials and resources available to students. For instance, instructors’ status as faculty would indicate a stronger degree of integration and belonging to the institute midwifery program than a guest lecturer status which allows for part time instruction with little job security. In addition, use of a standard, professionally developed English curriculum and integration of digital content into classroom learning would be indicative of higher quality than learning materials developed informally by instructors themselves without use of listening content by native speakers in classrooms. The survey was piloted with two English instructors. Based on their feedback, minor adjustments were made to one question, and it was determined that responses were best gathered by phone due to instructors’ limited internet access. Of the 14 instructors contacted, 11 were reached and provided survey responses by phone.

The web-search gathered information on available English language instruction programs for adults in Bangladesh, and the viability of tapping into any of them to improve English competency among midwifery students and faculty. Keywords Bangladesh  +  English courses , English training , English classes , study English and learn English were typed into Google’s search platform. Eleven English language instruction programs were identified. Following this, each program was contacted either by phone or email and further detail about the program’s offerings was collected.

Unstructured key informant interviews were carried out with select knowledgeable individuals to substantiate and enhance the credibility of the survey and web-search findings. Three in-country expert English language instructors and four managers of English language teaching programs were interviewed. In addition, interviews were held with three national-level stakeholders knowledgeable about work to make functional technologically advanced English language laboratories that had been installed at many of the training institutes. Question prompts included queries such as, ‘In your experience, what are the major barriers to Bangla-medium educated students studying in English at the university level?’, ‘What effective methods or curricula are you aware of for improving student English to an appropriate competency level for successful learning in English?’, and, ‘What options do you see for the language lab/s being used, either in their originally intended capacity or otherwise?’

Data analysis

All data were analyzed by the lead researcher. Survey data were entered into a master Excel file and grouped descriptively to highlight trends and outliers, and ultimately enable a clear description of the structure and basic quality attributes (e.g., instructors’ education, hours of English instruction, and curriculum development resources used). Web-search findings were compiled in a second Excel file with columns distinguishing whether they taught general English (often aimed at preparing students for international standard exams), Business English, or English for Specific Purposes (ESP). This enabled separation of standalone English courses taught by individual instructors as part of vocational or academic programs of study in other fields, and programs with an exclusive focus on English language acquisition. Key informant interviews were summarized in a standard notes format using Word. An inductive process of content analysis was carried out, in which content categories were identified and structured to create coherent meaning [ 7 ]. From this, the key overall findings and larger themes that grew from the initial survey and web-search results were drawn out.
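
For readers who prefer a scripted workflow over a spreadsheet, the kind of descriptive grouping described above could be done in a few lines of pandas; the records and column names below are hypothetical and only mirror the measures listed earlier.

```python
import pandas as pd

# Hypothetical instructor-survey records mirroring the measures described above.
instructors = pd.DataFrame({
    "employment_type": ["guest lecturer", "guest lecturer", "faculty", "guest lecturer"],
    "uses_standard_curriculum": [False, False, True, False],
    "trained_in_teaching_english": [True, False, True, True],
})

# Descriptive grouping of the kind done in the master Excel file:
summary = (instructors
           .groupby("employment_type")
           .agg(n=("employment_type", "size"),
                standard_curriculum=("uses_standard_curriculum", "sum"),
                trained=("trained_in_teaching_english", "sum")))
print(summary)
```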

The surveys (Tables  1 and 2 ) found that English instructors are primarily long-term male guest lecturers employed at each institute for more than two years. All principal respondents indicated that there is a need for English instruction—18 of the 19 reported that this is best done through a combination of in-person and computer-based instruction. Ten institutes reported that they have an English language lab, but none were used as such. The other institutes did not have language labs. The reported reasons for the labs not being in use were a lack of trained staff to operate them and some components of the technology not being installed or working properly. The findings from the instructors’ survey indicated that English instructors typically develop their own learning materials and teach general English without tailoring content to healthcare contexts. Only two mentioned using a standard textbook to guide their instruction and one described consulting a range of English textbooks to develop learning content. None reported using online or other digital tools for language instruction in their classrooms. Most instructors had an advanced degree (i.e., master’s degree) in English, and seven had received training in teaching English. Interviews with instructors also revealed that they themselves did not have mastery of English, as communication barriers in speaking over the phone appeared consistently across 10 of the 11 instructor respondents.

The web-search and related follow up interviews found that most English instruction programs (10 out of the 11) were designed for teaching general English and/or business English. The majority were offered through private entities aiming to reach individuals intending to study abroad, access employment that required English, or improve their ability to navigate business endeavors in English. One program, developed by the British Council, had flexibility to tailor its structure and some of its content to the needs of midwifery students. However, this was limited in that a significant portion of the content that would be used was developed for global audiences and thus not tailored to a Bangladeshi audience or to any specific discipline. One of the university English programs offered a promising ESP model tailored to midwifery students. It was designed by BRAC University’s Institute of Language for the university’s private midwifery training program.

Three themes emerged from the other key informant interviews (Table  3 ). The first was that, in addition to students’ challenges with English, faculty mastery of English presented challenges as well. Of the 34 faculty members intending to participate in the 2019–2020 cohort for the Dalarna master’s degree, half did not pass the prerequisite English exam. Ultimately, simultaneous English-Bangla translation was necessary for close to half of the faculty to enable their participation in the master’s program. English language limitations also precluded one faculty member from participating in an international PhD program in midwifery.

The second theme highlighted the language labs’ lack of usability. The language labs consisted of computers, an interactive whiteboard, audio-visual equipment, and associated software to allow for individualized direct interactions between teacher and student. However, due to the lack of appropriately trained staff to manage, care for and use the language lab equipment, the investment required to make the labs functional appeared to outweigh the learning advantages doing so would provide. Interviews revealed that work was being done, supported by a donor agency, on just one language lab, to explore whether it could be made functional. The work was described as costly and challenging, and required purchasing a software license from abroad, thus likely being impractical to apply to the other labs and sustain over multiple years.

The third theme was around the ESP curriculum model. The program developers had employed evidence-informed thinking to develop the ESP learning content and consulted student midwives on their learning preferences. Due to the student input, at least 80% of the content was designed to directly relate to the practice of midwifery in Bangladesh, while the remaining 10–20% references globally relevant content. This balance was struck based on students’ expressed interest in having some exposure to English usage outside of Bangladesh for their personal interest. For conversation practice, the modules integrated realistic scenarios of midwives interacting with doctors, nurses and patients. Also built into written activities were exercises where students were prompted to describe relevant health topics they are concurrently studying in their health, science or clinical classes. Given the midwifery students’ educational backgrounds and intended placements in rural parts of Bangladesh, an ESP curriculum model appeared to be the most beneficial existing program to pursue tapping into to strengthen English competencies within midwifery programs. This was because the content would likely be more accessible to students than a general English course by having vocabulary, activities and examples directly relevant to the midwifery profession.

The study findings demonstrate key weaknesses in the current model of English instruction taught in public midwifery programs. Notably, the quantitative findings revealed that some English instructors do not have training in teaching English, and none used standard curricula or online resources to structure and enhance their classroom content. In addition, weak mastery of English among midwifery faculty was identified in the qualitative data, which calls into question faculty’s ability to fully understand and accurately convey content from English learning materials. Global literature indicates that this is not a unique situation. Many healthcare faculty and students in low-resource settings, in fact, are faced with delivering and acquiring knowledge in a language they have not sufficiently mastered [ 8 ]. As a significant barrier to knowledge and skill acquisition for evidence-based care, this requires more attention from global midwifery educators [ 9 ].

Also holding back students' English development is the finding, from both the quantitative and qualitative data, that none of the high-tech language labs were being used as intended. This indicates a misalignment between the investment and the institutes' actual capacity to operate the labs. While setting up the costly language labs appears to have been a large investment with little to no return, it does demonstrate that strengthening English language instruction in post-secondary public education settings is a priority that the Bangladesh government is willing to invest in. However, scaling up access to an ESP curriculum model tailored to future midwifery practitioners in Bangladesh may be a more worthwhile investment than language labs [ 10 ].

The ESP approach teaches English for application in a specific discipline. It does this by using vocabulary, examples, demonstrations, scenarios and practice activities that are directly related to the context and professions those studying English live and work (or are preparing to work) in. One way ESP has been described, attributed to Hutchinson and Waters (1987), is, “ESP should properly be seen not as any particular language product but as an approach to language teaching in which all decisions as to content and method are based on the learner’s reason for learning” [ 11 ]. It is proposed by linguistic education researchers as a viable model for strengthening language mastery and subject matter comprehension in EMI university contexts [ 12 ].

Though it did not arise as a study finding, the literature review highlighted that Bangla language instruction may be an additional, potentially viable option. Linguistic research has long shown that students learn more thoroughly and efficiently in their mother tongue [ 12 ]. Another, perhaps more desirable, option may be multilingualism, which entails recognizing native languages as complementary in EMI classrooms and using them through verbal instruction and supplemental course materials. Kirkpatrick, a leading scholar of EMI in Asia, suggests that multilingualism be formally integrated into EMI university settings [ 13 ]. This approach is supported by evidence showing that the amount of native language support students need for optimal learning is inversely proportional to their degree of English proficiency [ 14 ].

Ultimately, despite the language-related learning limitations identified in this study, and the opportunities presented by native language and multilingualism approaches, there remains a fundamental need for members of the midwifery profession in Bangladesh to use up-to-date guidance on evidence-based midwifery care [ 11 ]. Doing so currently requires English language competence. A tiered system of English competency requirements tied to diploma, Bachelor's, Master's, and PhD midwifery programs could build bridges for more advanced students to access global resources. Higher academic levels might emphasize English more heavily, while the diploma level could follow a multilingualism approach, teaching with an ESP curriculum and integrating Bangla strategically to support optimal knowledge acquisition for future practice in rural facilities. Ideally, scores on a standard English competency exam would be used to assess students' language competencies prior to entrance into English-based programs, which would require more stringent English skill development before entering a midwifery program.

Methodological considerations

One of the limitations of this study is that it relied on self-reports and observation, rather than tested language and subject matter competencies. Its strengths though are in the relatively large number of education institutes that participated in the study, and the breadth of knowledge about faculty and student subject matter expertise among study co-authors. It was recognized that the lead researcher might be biased toward pre-determined perceptions of English competencies being a barrier to teaching and learning held by the lead institution (UNFPA). It was also recognized that due to the inherent power imbalance between researcher and participants, the manner of gathering data and engaging with stakeholders may contribute to confirmation bias, with respondents primarily sharing what they anticipated the researcher wished to hear (e.g., that English needed strengthening and the lead agency should take action to support the strengthening). The researcher thus engaged with participants independently of UNFPA and employed reflexivity by designing and carrying out the surveys to remotely collect standard data from institutes, as well as casting a wide net across institutes to increase broad representation. In addition, while institutes were informed that the surveys were gathering information about the English instruction within the institutes, no information was shared about potential new support to institutes. Finally, the researcher validated and gathered further details on the relevant information identified in the surveys through key informant interviews, which were held with stakeholders independent of UNFPA.

Adapting and scaling up the existing ESP modules found in this study, and integrating Bangla where it can enhance subject-matter learning, may be a useful way to help midwifery students and faculty improve their knowledge, skills, and critical thinking related to the field of midwifery. Given the educational backgrounds and likely work locations of most midwives in Bangladesh and many other LMICs, practitioners may want to consider investing in more opportunities for local midwives to teach and learn in their mother tongue. This type of investment would ideally be paired with a tiered system in which more advanced English competencies are required at higher-levels of education to ensure integration of global, evidence-based approaches into local standards of care.

Declarations

Data availability

The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

BRAC: Bangladesh Rehabilitation Assistance Committee

EMI: English medium instruction

ESP: English for Specific Purposes

LMICs: Low- and Middle-Income Countries

MOHFW: Ministry of Health and Family Welfare

UNFPA: United Nations Population Fund

Macaro E. English medium instruction: global views and countries in focus. Lang Teach. 2019;52(2):231–48.

Montgomery S. Does science need a global language? English and the future of research. University of Chicago Press; 2013.

Doiz A, Lasagabaster D, Pavón V. The integration of language and content in English-medium instruction courses: lecturers’ beliefs and practices. Ibérica. 2019;38:151–76.

Gallo F, Bermudez-Margareto B, et al. First language attrition: what it is, what it isn’t, and what it can be. National Research University Higher School of Economics; 2019.

Yilmaz G, Schmidt M. First language attrition and bilingualism, adult speakers. Bilingual cognition and language, the state of the science across its sub-fields (Ch. 11). John Benjamin’s Publishing Company.

Polit DF, Beck CT. (2021). Nursing research: generating and assessing evidence for nursing practice. Eleventh edition. Philadelphia, Wolters Kluwer.

Scheufele, B. (2008). Content Analysis, Qualitative. The international encyclopedia of communication John Wiley & Sons.

Pelicioni PHS, Michell A, Rocha dos Santos PC, Schulz JS. Facilitating Access to Current, evidence-based Health Information for Non-english speakers. Healthcare. 2023;11(13):1932.

Pakenham-Walsh N. Improving the availability of health research in languages other than English. Lancet. 2018;8. http://dx.doi.org/10.1016/S2214-109X(18)30384-X .

Islam M. The differences and similarities between English for Specific purposes(ESP) and English for General purposes(EGP) teachers. Journal of Research in Humanities; 2015.

Lamri C, Dr et al. (2016-2017). English for Specific Purposes (1st Semester) Third Year ‘License’ Level. Department of English Language, Faculty of Arts and Language, University of Tlemcen

Jiang L, Zhang LJ, May S. (2016). Implementing English-medium instruction (EMI) in China: teachers’ practices and perceptions, and students’ learning motivation and needs. Int J Bilingual Educ Bilinguaism 22(2).

Kirkpatrick A. The rise of EMI: challenges for Asia. In, English medium instruction: global views and countries in focus. Lang Teach. 2015;52(2):231–48.

Kavaliauskiene G. Role of the mother tongue in learning English for specific purposes. ESP World. 2009;1(22):8.

Acknowledgements

The authors acknowledge Farida Begum, Rabeya Basri, and Pronita Raha for their contributions to data collection for this assessment.

The project under which this study was carried out was funded by the Foreign, Commonwealth and Development Office.

Open access funding provided by University of Gothenburg.

Author information

Authors and Affiliations

Data, Design + Writing, Portland, OR, USA

Anna Williams

Goodbirth Network, North Adams, MA, USA

Jennifer R. Stevens

Project HOPE, Washington DC, USA

Rondi Anderson

University of Gothenburg, Gothenburg, Sweden

Malin Bogren

Contributions

Author contributions in the development of this paper were as follows: AW: concept, acquisition, drafting, revision, analysis, interpretation. JRS: concept, revision. RA: concept, analysis. MB: revision, analysis, interpretation. All authors read and approved the final manuscript.

Ethics declarations

Ethics approval

This study was part of a larger project in Bangladesh approved by the Ministry of Health and Family Welfare (MOHFW) with project ID UZJ31. The MOHFW project approval allows data collection of this type, that is carried out as part of routine program monitoring and improvement, including informed verbal consent for surveys and key informant interviews.

Consent for publication

Not applicable.

Competing interests

The authors of this study have no competing interests and no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Williams, A., Stevens, J., Anderson, R. et al. Challenges and opportunities of English as the medium of instruction in diploma midwifery programs in Bangladesh: a mixed-methods study. BMC Med Educ 24 , 523 (2024). https://doi.org/10.1186/s12909-024-05499-8

Received : 31 July 2023

Accepted : 02 May 2024

Published : 10 May 2024

DOI : https://doi.org/10.1186/s12909-024-05499-8

  • “English for special purposes”
  • “English medium instruction”

  • Open access
  • Published: 14 May 2024

Assessing public health service capability of primary healthcare personnel: a large-scale survey in Henan Province, China

  • Rongmei Liu 1 ,
  • Qiuping Zhao 1 ,
  • Wenyong Dong 2 ,
  • Dan Guo 3 , 4 ,
  • Zhanlei Shen 3 ,
  • Wanliang Zhang 3 ,
  • Dongfang Zhu 3 ,
  • Jingbao Zhang 3 ,
  • Junwen Bai 3 ,
  • Ruizhe Ren 3 ,
  • Mingyue Zhen 3 ,
  • Jiajia Zhang 3 ,
  • Jinxin Cui 3 ,
  • Xinran Li 3 &
  • Yudong Miao 3  

BMC Health Services Research, volume 24, Article number: 627 (2024)

The public health service capability of primary healthcare personnel directly affects the utilization and delivery of health services and is influenced by various factors. This study aimed to examine the status, influencing factors, and urban-rural differences of public health service capability among primary healthcare personnel, and to provide suggestions for improvement.

We used cluster sampling to survey 11,925 primary healthcare personnel in 18 regions of Henan Province from March 20 to March 31, 2023. The data encompassed demographics and public health service capabilities, including health lifestyle guidance, chronic disease management, health management of special populations, and vaccination services. Multivariable regression analysis was employed to investigate influencing factors, and Propensity Score Matching (PSM) was used to quantify urban-rural differences.

The total score of public health service capability was 80.17 points. Chronic disease management capability scored the lowest, only 19.60. Gender, education level, average monthly salary, professional title, health status, employment form, work unit type, category of practicing (assistant) physician significantly influenced the public health service capability (all P  < 0.05). PSM analysis revealed rural primary healthcare personnel had higher public health service capability scores than urban ones.
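
For readers unfamiliar with the matching step reported above, the sketch below shows one common way to implement 1:1 nearest-neighbor propensity score matching on simulated data; the covariates and outcome are illustrative and are not the study's actual variable set or procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: urban = 1, rural = 0; covariates and outcome are illustrative.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "urban": rng.integers(0, 2, n),
    "age": rng.normal(38, 9, n),
    "years_worked": rng.normal(12, 6, n),
    "capability_score": rng.normal(80, 10, n),
})

# 1. Estimate propensity scores: probability of being urban given covariates.
X = df[["age", "years_worked"]]
ps_model = LogisticRegression().fit(X, df["urban"])
df["pscore"] = ps_model.predict_proba(X)[:, 1]

# 2. 1:1 nearest-neighbor matching of urban to rural personnel on the propensity score.
urban = df[df["urban"] == 1]
rural = df[df["urban"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(rural[["pscore"]])
_, idx = nn.kneighbors(urban[["pscore"]])
matched_rural = rural.iloc[idx.ravel()]

# 3. Compare mean capability scores in the matched sample.
diff = urban["capability_score"].mean() - matched_rural["capability_score"].mean()
print(f"Urban minus rural difference after matching: {diff:.2f}")
```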

Conclusions

The public health service capability of primary healthcare personnel in Henan Province was relatively high, but chronic disease management required improvement. Implementing effective training methods for different subgroups and improving the service capability of primary medical and health institutions would be positive measures.

Peer Review reports

Public health services play a pivotal role in safeguarding people’s health and shaping their future. In this domain, primary healthcare personnel are the direct providers of essential public health services and the backbone of China’s primary health care [ 1 ]. They deliver various public health services, such as health lifestyle guidance, chronic disease management, health management of special populations, and vaccination services. Studies have shown that the implementation of the national basic public health service project [ 2 , 3 ], which is one of the important components of China’s basic medical and health service system [ 4 ], has successfully achieved its intended outcome [ 5 ]. This initiative not only effectively manages the risk factors affecting the population, leading to a decrease in the occurrence of common and chronic diseases, but also improves the overall health quality of the population [ 6 , 7 ].

However, the public health service capability of primary healthcare personnel may be insufficient to cope with the growing demands and challenges [ 8 , 9 ] of public health service projects. As public health service projects gradually advance, China's primary healthcare personnel face more and more requirements [ 10 ]. Hence, there is a pressing need to evaluate their current public health service capability [ 11 ]. However, current research on primary healthcare personnel mainly focuses on personnel training itself [ 12 ], and few studies examine capability in relation to basic public health service projects. In addition, existing studies assess public health service capability at the level of medical and health institutions [ 13 ], and comprehensive evaluations of the service capability of individual primary healthcare personnel are lacking. Such an assessment is critical for accelerating the progressive equalization of basic public health service projects, strengthening the development of a more comprehensive public health service system [ 14 ], and ultimately elevating the overall health status of the population.

The public health service capability of primary healthcare personnel may be influenced by various factors, such as gender, age, profession, and region [ 15 ]. Studies have reported that primary health institutions face many problems [ 16 ], including excessive work pressure among primary healthcare personnel [ 17 , 18 ], uneven distribution of medical and health resources [ 19 ], and shortcomings in service quality [ 20 ]. Furthermore, some studies have suggested that primary healthcare personnel possess limited theoretical knowledge and practical skills [ 21 ] related to public health services. These limitations hinder the effective implementation and widespread adoption of basic public health services [ 22 ]. Moreover, primary healthcare personnel may experience different levels of job burnout [ 23 , 24 ] due to heavy workloads and excessive temporary assignments. There may also be regional differences in the delivery of public health services between urban and rural areas [ 25 , 26 ], which may affect the public health service capability of primary healthcare personnel.

Henan Province, an important province in the Central Plains of China with a large primary-care population and substantial medical needs, is a pioneer of primary public health service reform. This study therefore chose Henan Province as the research site, which helps to reflect the current status, influencing factors, and formation mechanism of the public health service capability of primary healthcare personnel in China. It also analyzed the differences in basic public health services between urban and rural areas, explored ways to improve the public health service capability of primary healthcare personnel, and provided empirical support for the sustainable development of basic public health services.

Participants and study procedure

We conducted a cross-sectional survey of primary healthcare personnel in 18 cities of Henan Province, China, one of the most populous and important provinces in central China. The survey ran from March 20 to March 31, 2023. A self-designed questionnaire was used to collect data, mainly through the municipal centers for disease prevention and control, community health service centers, and township health centers in Henan Province. A total of 14,604 questionnaires were collected. Questionnaires were excluded if they had incomplete basic information or irregular responses, if respondents were under 18 years old, if the reported age was implausible, if the response time was too short, or if age minus years of working experience was less than 18. After screening, 11,925 valid questionnaires remained, for an effective recovery rate of 81.66%. The inclusion and exclusion process is shown in Fig.  1 .
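The screening rules above translate naturally into a data-cleaning filter. The sketch below is a hypothetical pandas version with placeholder column names (age, working_years, response_seconds, complete); the 120-second threshold is an illustrative assumption, since the paper does not state its cut-off for a “short response time”.

```python
import pandas as pd

def screen_questionnaires(raw: pd.DataFrame, min_response_seconds: int = 120) -> pd.DataFrame:
    """Apply the exclusion criteria described in the text.

    Column names and the response-time threshold are illustrative assumptions.
    """
    valid = raw[
        raw["complete"]                                        # complete, regular basic information
        & (raw["age"] >= 18)                                   # exclude respondents under 18
        & (raw["age"] - raw["working_years"] >= 18)            # age minus working experience >= 18
        & (raw["response_seconds"] >= min_response_seconds)    # exclude very short response times
    ]
    return valid.reset_index(drop=True)

# Toy data, purely for illustration
raw = pd.DataFrame({
    "age": [35, 17, 40],
    "working_years": [10, 1, 30],
    "response_seconds": [300, 200, 90],
    "complete": [True, True, True],
})
print(screen_questionnaires(raw))   # keeps only the first respondent
```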

Figure 1. Inclusion and exclusion of participants.

Based on the World Organization of Family Doctors (WONCA) tree model [ 25 ] and the European Quality and Cost of Primary Care (QUALICOPC) framework [ 26 ], both of which are widely recognized internationally and have well-established scientific reliability, we designed a questionnaire for primary healthcare personnel. Indicators were then added, deleted, or adjusted according to the structure of the national basic public health service project [ 27 , 28 ] and the regional characteristics, population structure, and medical and health service needs of Henan Province (Supplementary file 1 ). After expert consultation and review, a preliminary survey was conducted in two communities and two townships within Zhengzhou to assess the reliability and validity of the questionnaire. The scale consists of two parts: basic information of primary healthcare personnel and a public health service capability rating scale. The Cronbach’s α coefficient of the scale was 0.980, and the KMO test value was 0.973, indicating good reliability and validity.
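For context, the Cronbach’s α reported above can be computed directly from the item-level responses. The following is a minimal sketch assuming the responses are stored as a respondents-by-items NumPy array; the toy data are illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses for 4 respondents and 3 items
example = np.array([[5, 4, 5],
                    [3, 3, 4],
                    [4, 4, 4],
                    [2, 3, 2]])
print(round(cronbach_alpha(example), 3))
```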

Basic information

The first part of the questionnaire collected basic information about the participants, including gender, age, marital status, education level, health status, working area, working years, form of employment, type of work unit, category of practicing (assistant) physician, professional title, and average monthly salary.

Dependent variable

The second part of the questionnaire measured the public health service capability of the participants across four dimensions: (1) healthy lifestyle guidance capability, including guidance on diet, exercise, stress reduction, weight, smoking cessation, and alcohol limits; (2) chronic disease management capability, including chronic disease screening, risk prediction, integrated chronic disease management, and management effect evaluation; (3) health management capability for special populations (children aged 0–6 years, pregnant women, the elderly, and patients with hypertension or type 2 diabetes, among others), including professional and technical training, information mastery, health education, and health records capability; and (4) vaccination service capability, including vaccination training received, understanding of the immunization programme, vaccination procedures, vaccine alternatives, and safety precautions.

Each item was rated on a five-point Likert scale with options “strongly disagree”, “disagree”, “neutral”, “agree”, and “strongly agree”, scored from 1 to 5 respectively. Each dimension score ranged from 5 to 25, and the overall score ranged from 20 to 100. A higher score indicated a higher level of public health service capability.
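As a concrete illustration of this scoring scheme (five items per dimension, each scored 1 to 5), the sketch below computes the four dimension scores and the overall score for a single respondent. The data layout is an assumption for illustration; the study’s actual item order is defined in Supplementary file 1.

```python
# Each dimension has five items scored 1-5, so a dimension ranges 5-25
# and the overall score ranges 20-100.
DIMENSIONS = ["healthy lifestyle guidance",
              "chronic disease management",
              "health management of special populations",
              "vaccination service"]

def score_respondent(responses):
    """responses: list of 20 Likert answers (1-5), ordered by dimension (assumed layout)."""
    assert len(responses) == 20 and all(1 <= r <= 5 for r in responses)
    dim_scores = {name: sum(responses[i * 5:(i + 1) * 5])
                  for i, name in enumerate(DIMENSIONS)}
    return dim_scores, sum(dim_scores.values())

dims, total = score_respondent([4] * 20)   # a hypothetical respondent answering "agree" throughout
print(dims, total)                         # each dimension 20, total 80
```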

Statistical analysis

Means with standard deviations were used to describe continuous variables, and percentages were used to present categorical variables. Analysis of variance (ANOVA) and Student’s t-test were used to test differences in mean scores among groups. Collinearity testing was employed to examine relationships between independent variables; all variance inflation factors (VIF) were below 5, indicating no significant collinearity. Multivariate linear regression was used to analyze the factors influencing primary public health service capability. Propensity score matching (PSM) was used to analyze differences in the public health service capability of primary healthcare personnel between urban and rural areas.
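As an illustration of this analysis pipeline, the sketch below checks variance inflation factors and fits a multivariate linear model with the total capability score as the outcome, using statsmodels. The toy data and column names (score, male, bachelor, rural) are placeholders, not the study’s variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Toy data: one row per respondent, dummy-coded predictors (illustrative only)
df = pd.DataFrame({
    "score":    [82, 75, 90, 68, 80, 77, 85, 73],
    "male":     [1, 0, 1, 0, 1, 0, 0, 1],
    "bachelor": [1, 0, 1, 0, 0, 1, 1, 0],
    "rural":    [1, 1, 0, 0, 1, 0, 1, 1],
})

# Collinearity check: VIF below 5 for every predictor
X = sm.add_constant(df[["male", "bachelor", "rural"]])
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)

# Multivariate linear regression on the total capability score
model = smf.ols("score ~ male + bachelor + rural", data=df).fit()
print(model.summary())
```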

Baseline characteristics of primary healthcare personnel

The study included 11,925 primary healthcare personnel. Of these, 296 (2.48%) did not meet the required criteria, while 4,175 individuals (35.02%) demonstrated excellent public health service capability. The overall public health service capability score of primary healthcare personnel in Henan Province was 80.17 (Table  1 ). The mean score for males was 81.21 points, 1.8 points higher than that for females. Personnel aged 51–60 had a mean score of 80.71, and those with 21 years of work experience had a mean score of 80.83 points. Those with high school education or below scored 80.42 points. Those with an average monthly salary of 3001–4500 yuan scored 80.80. The score for intermediate professional titles was 80.80, and the score for healthy personnel was 80.55. Regular employees scored 80.65, and village clinic staff scored 81.19. Within the category of practicing (assistant) physicians, the public health and clinical categories scored higher (Table  1 ).

In addition, there were statistically significant differences in public health service capability scores among groups defined by gender, age, marital status, working years, average monthly salary, professional title, health status, form of employment, type of work unit, and category of practicing (assistant) physician (all P  < 0.05).

The mean scores for all four dimensions clustered around 20 points: 20.16 for healthy lifestyle guidance, 19.60 for chronic disease management, 20.14 for health management of special populations, and 20.27 for vaccination service. Chronic disease management had the lowest score, and the sub-health and oral-category subgroups scored significantly lower than other subgroups across all four dimensions (Fig.  2 ). There were significant differences in the scores for the four dimensions by gender, health status, employment form, and type of work unit (all P  < 0.05) (Supplementary file 2 ).

Figure 2. Heatmap of service capability scores across the four dimensions. Note: HLG, healthy lifestyle guidance; CDM, chronic disease management; HMC, health management of special populations; VSC, vaccination service capability; a, age; b, working years; c, average monthly salary.
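Figure 2 presents subgroup means for the four dimensions as a heatmap. For readers who want to reproduce a similar figure from their own tabulated means, the following is a minimal matplotlib sketch; the subgroup labels and the numbers are invented placeholders, not the study’s data.

```python
import matplotlib.pyplot as plt
import numpy as np

dims = ["HLG", "CDM", "HMC", "VSC"]             # dimension abbreviations from the figure note
groups = ["group A", "group B", "group C"]      # placeholder subgroup labels
means = np.array([[20.4, 19.8, 20.3, 20.5],     # invented placeholder means, not study data
                  [20.0, 19.5, 20.0, 20.2],
                  [20.2, 19.7, 20.2, 20.3]])

fig, ax = plt.subplots()
im = ax.imshow(means, cmap="viridis")           # one row per subgroup, one column per dimension
ax.set_xticks(range(len(dims)))
ax.set_xticklabels(dims)
ax.set_yticks(range(len(groups)))
ax.set_yticklabels(groups)
fig.colorbar(im, ax=ax, label="mean score")
plt.tight_layout()
plt.show()
```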

Factors influencing public health service capability of primary healthcare personnel

The multiple linear regression analysis showed that males scored higher than females on public health service capability (P < 0.001, 95% CI: 0.82–1.96). Participants with a bachelor’s degree scored higher than those with high school education or below (P = 0.019, 95% CI: 0.18–2.01). Compared with an average monthly salary of less than 3000 yuan, those earning 3001–4500 yuan or above had higher public health service capability scores (P < 0.001, 95% CI: 0.79–2.05; P = 0.003, 95% CI: 0.72–3.44). Those with intermediate and senior professional titles had higher scores than those without professional titles (P = 0.005, 95% CI: 0.35–1.97; P = 0.010, 95% CI: 0.47–3.48). The sub-health population had lower scores than the healthy population (P < 0.001, 95% CI: -4.38 to -2.70). Regular employees had higher scores than other public health personnel (P = 0.004, 95% CI: -2.59 to 0.08). Those working in village clinics had higher scores than their counterparts (P = 0.002, 95% CI: -5.63 to -1.30). Those in the public health, clinical, and traditional Chinese medicine categories scored higher than those in oral medicine (P = 0.006, 95% CI: 0.91–5.55; P = 0.007, 95% CI: 0.87–5.53; P = 0.034, 95% CI: 0.20–5.01). Age, marital status, and working years had no significant effect on public health service capability (P = 0.914, 95% CI: -1.17 to 1.05; P = 0.225, 95% CI: -2.01 to 0.47; P = 0.143; P = 0.402, 95% CI: -2.49 to 0.99; P = 0.705, 95% CI: -0.70 to 1.04; P = 0.863, 95% CI: -0.84 to 1.00; P = 0.572, 95% CI: -0.76 to 1.37; P = 0.543, 95% CI: -1.43 to 0.75; P = 0.590, 95% CI: -0.72 to 1.25). See Table  2 for details.

Differences in public health service capability between urban and rural primary healthcare personnel

We selected rural primary healthcare personnel as the matched group and urban primary healthcare personnel as the control group, and employed 1:1 nearest-neighbour matching with a caliper of 0.03, using the previously mentioned control variables as covariates. There were 1,825 healthcare personnel (15.3%) from community health service centers and 10,100 (84.7%) from township hospitals or village clinics. Before PSM, there were statistically significant differences in gender, marital status, education level, working years, average monthly salary, professional title, employment form, and type of work unit between urban and rural healthcare personnel (all P  < 0.05). The public health service capability score was 79.50 ± 0.33 for urban primary healthcare personnel and 80.30 ± 0.14 for rural personnel, a statistically significant difference (P = 0.042), with rural personnel scoring higher than their urban counterparts.
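The matching procedure described here (1:1 nearest-neighbour matching on the propensity score with a 0.03 caliper, without replacement) can be sketched as follows. This is a simplified illustration rather than the authors’ code: propensity scores come from a logistic regression of the rural indicator on the covariates, and the column names and toy data are placeholders.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def psm_match(df, treat_col, covariates, caliper=0.03):
    """Greedy 1:1 nearest-neighbour propensity-score matching without replacement.

    Returns (treated_index, control_index) pairs whose propensity scores
    differ by no more than `caliper`.
    """
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    ps = pd.Series(model.predict_proba(df[covariates])[:, 1], index=df.index)

    controls = set(df.index[df[treat_col] == 0])
    pairs = []
    for i in df.index[df[treat_col] == 1]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[i] - ps[c]))  # nearest remaining control
        if abs(ps[i] - ps[j]) <= caliper:                    # enforce the caliper
            pairs.append((i, j))
            controls.remove(j)                               # match without replacement
    return pairs

# Toy data with placeholder column names, purely for illustration
df = pd.DataFrame({
    "rural": [1, 1, 1, 1, 0, 0, 0, 0],
    "male":  [1, 0, 1, 0, 1, 0, 1, 0],
    "age":   [45, 38, 52, 29, 44, 36, 50, 31],
    "score": [81, 79, 83, 78, 80, 77, 82, 76],
})
print(psm_match(df, treat_col="rural", covariates=["male", "age"], caliper=0.03))
```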

Using PSM, we matched urban and rural healthcare personnel on their basic information at a 1:1 ratio and successfully matched 1,684 pairs, for a total of 3,368 participants. After PSM, there were no statistically significant differences in basic characteristics (all P  > 0.05; Table  3 ), indicating that the sample achieved good balance and effectively mitigating the influence of covariates that could have contributed to the disparity in public health service capability between urban and rural primary healthcare personnel. The public health service capability scores of urban and rural primary healthcare personnel were 78.98 ± 0.36 and 79.32 ± 0.34, respectively. After PSM, the difference remained statistically significant (P = 0.025), with rural areas continuing to exhibit higher public health service capability than urban areas (Fig.  3 ).
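Post-matching balance and the remaining outcome gap are commonly assessed with standardized mean differences (|SMD| < 0.1 is a typical rule of thumb) and a t-test. The paper reports P-values but does not specify the exact post-matching test, so the sketch below is a hedged illustration with placeholder arrays.

```python
import numpy as np
from scipy import stats

def standardized_mean_difference(x_treated, x_control):
    """SMD between two groups; |SMD| < 0.1 is a common rule of thumb for balance."""
    x_treated = np.asarray(x_treated, dtype=float)
    x_control = np.asarray(x_control, dtype=float)
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return (x_treated.mean() - x_control.mean()) / pooled_sd

# Placeholder outcome values in a matched sample (not the study's data)
rural_scores = np.array([81, 79, 83, 78])
urban_scores = np.array([80, 77, 82, 76])
print(standardized_mean_difference(rural_scores, urban_scores))
print(stats.ttest_ind(rural_scores, urban_scores))  # two-sample t-test on the matched groups
```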

Figure 3. Public health service capability scores in urban and rural areas before and after PSM.

The welfare benefits of primary public healthcare personnel are limited

Older personnel account for over 50% of those working in public health services, nearly half of employees have accumulated more than twenty years of work, and approximately one-third are regular employees. Most primary healthcare personnel earned less than 3,000 yuan a month. In addition, less than one-fifth held a bachelor’s degree or above, and few had senior professional titles. These findings indicate that the majority of primary healthcare personnel face challenges associated with an ageing workforce [ 29 , 30 ], limited authorized staffing, and low income and education levels. Studies have shown that, because senior professional title posts are limited, promotion opportunities for healthcare personnel in primary medical institutions are scarce, which seriously weakens the willingness of primary health personnel to serve [ 31 ]. Moreover, studies have shown that primary healthcare personnel have high turnover due to heavy workloads, low income, and a low sense of achievement [ 32 ]. This has a great impact on the quantity and quality of basic public health services in primary areas.

Hence, it is imperative to rejuvenate the primary healthcare workforce by introducing fresh talent and enhancing welfare benefits [ 33 , 34 ] to mitigate the attrition of skilled professionals. Primary health institutions should optimize their talent recruitment mechanisms, expand the number of professional healthcare personnel, and attract more outstanding undergraduate and postgraduate graduates to serve primary areas and public health services. Offering attractive welfare packages to highly educated talent can help maintain the stability of the primary healthcare workforce and ultimately curb the loss of talent.

The public health service capability of primary healthcare personnel is not balanced

The average scores for healthy lifestyle guidance, chronic disease management, health management of special populations, and vaccination service capability were all around 20 points, showing that most primary healthcare personnel had a good grasp of these public health services. However, chronic disease management was the weakest of the four capabilities [ 35 ], followed by health management of special populations, indicating that primary healthcare personnel still need strengthened training in these areas. Research has indicated that the effective management of chronic diseases and of special populations, such as pregnant women and individuals with severe mental disorders, requires a high degree of professional expertise, and it takes years to cultivate professionals in these areas [ 35 , 36 ].

Given the scarcity of health resources and the large number of patients with various chronic diseases in primary areas, primary prevention depends on improving the capability for chronic disease management and for health management of special populations [ 37 , 38 ], so as to protect the life and health of the population. Therefore, implementing scientifically sound and effective training is a crucial step in enhancing the knowledge and capability of primary healthcare personnel. Primary health institutions should consider providing targeted training in the weak areas of primary healthcare personnel, such as chronic disease management and health management of special populations. Strengthening links between primary medical institutions and higher-level hospitals may also increase the clinical knowledge and skills of healthcare personnel, which are relatively insufficient.

The public health service capability of primary healthcare personnel is affected by many factors

The educational attainment of primary healthcare personnel in Henan Province is generally low: less than one-fifth hold a bachelor’s degree or higher, and even fewer hold a graduate degree. This underscores a significant deficiency of highly educated professionals in the extensive primary regions [ 39 ]. In addition, this study found that those with higher professional titles and those working in village clinics had better public health service capability. This could be attributed to the fact that these primary healthcare personnel often handle a substantial workload [ 40 ]; as a result, they gain more experience and attain a higher level of proficiency in public health services. This study also found that primary healthcare personnel who were hired as regular employees and whose practice categories were clinical medicine, traditional Chinese medicine, or public health had stronger public health capability. Such personnel are also more likely to expand their knowledge and theoretical reserves [ 41 ]. As a reward and incentive mechanism, the average monthly salary can directly affect the work enthusiasm of primary healthcare personnel [ 17 ].

An appropriate assessment mechanism for primary health institutions could be established to break the “ceiling effect” and overcome the limitation of the performance-based salary cap. A mechanism could also be established to adjust salaries in step with increases in the workload of primary healthcare personnel.

There are differences in public health service capability between urban and rural primary healthcare personnel

The outcomes of the PSM analysis indicated that the service capability of rural primary healthcare personnel was slightly higher than that of their urban counterparts. This underscores the unequal distribution of public health service resources between urban and rural areas, potentially stemming from deficiencies in financial and human resources within rural public health services, as well as the absence of modern equipment and technology [ 42 ]. Urban primary services are better resourced in terms of capital, human resources, and technology, which leads residents to prefer higher-level hospitals; hence, urban primary healthcare personnel receive less training in public health services. In contrast, rural primary public health institutions are the most important channel for accessing health services in rural areas.

Considering region-specific influencing factors, suitable strategies and measures should be adopted to foster equilibrium in public health services between urban and rural areas. Attention should also be paid to building a professional workforce to enhance the service capability of primary healthcare personnel. The government may also consider facilitating collaboration [ 43 ] between urban and rural public health services through the adoption of internet technology [ 44 ] and the establishment of urban–rural health service consortia.

Strengths and limitations of the study

This study has the following strengths. First, it marks a pioneering effort by focusing on primary healthcare personnel, breaking away from the previous convention of investigating public health service capability mainly at the institutional level. Second, it is the first large-scale survey in Henan Province and can therefore reflect the current public health service capability of primary healthcare personnel across the province. Third, PSM was deployed to control for confounding variables and to reveal the net difference in public health service capability between urban and rural areas. However, the study also has limitations. First, it relied on self-reported survey data, which may have introduced bias. Second, the PSM used fewer covariates than were available in the full dataset. Third, owing to its cross-sectional design, the study could not examine causality.

Primary healthcare personnel in Henan have above-average public health service capability. Building on this, efforts should be expedited to achieve universal coverage of public health services and to foster their equalization between urban and rural areas. A systematic, rational, and precisely targeted training approach should be implemented for personnel with lower education levels, lower mean monthly income, lower professional titles, and contracted employment. Moreover, the number of primary health professionals should be increased, and the professional title evaluation and promotion system should be improved to enhance the stability of the primary public health workforce. These measures can greatly reduce the loss of professionals, improve the public health service capability of primary healthcare personnel, and guard the first line of defense for population health.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Li X, Lu J, Hu S, Cheng K, De Maeseneer J, Meng Q, et al. The primary health-care system in China. Lancet. 2017;390:2584–94.


Fang G, Yang D, Wang L, Wang Z, Liang Y, Yang J. Experiences and Challenges of Implementing Universal Health Coverage with China’s National Basic Public Health Service Program: Literature Review, Regression Analysis, and Insider interviews. JMIR Public Health Surveill. 2022;8:e31289.


Wan YC, Wan YI. Achievement of equity and universal access in China’s health service: a commentary on the historical reform perspective from the UK National Health Service. Glob Public Health. 2010;5:15–27.


Cao F, Xi Y, Zheng C, Bai T, Sun Q. How efficient are Basic Public Health Services between Urban and Rural in Shandong Province, China? A Data Envelopment Analysis and Panel Tobit Regression Approach. RMHP. 2022;15:727–38.


Yuan B, Balabanova D, Gao J, Tang S, Guo Y. Strengthening public health services to achieve universal health coverage in China. BMJ. 2019:l2358.

Das P. Kevin Fenton: pursuing equity and equality in public health. Lancet. 2020;396:1388.

Yang L, Sun L, Wen L, Zhang H, Li C, Hanson K, et al. Financing strategies to improve essential public health equalization and its effects in China. Int J Equity Health. 2016;15:194.

Beaglehole R, Dal Poz MR. Public health workforce: challenges and policy issues. Hum Resour Health. 2003;1:4.

Leider JP, Yeager VA, Kirkland C, Krasna H, Hare Bork R, Resnick B. The state of the US Public Health Workforce: Ongoing challenges and future directions. Annu Rev Public Health. 2023;44:323–41.

Hu S. Universal coverage and health financing from China’s perspective. Bull World Health Org. 2008;86:819–819.

Honoré PA. Aligning Public Health Workforce Competencies with Population Health Improvement goals. Am J Prev Med. 2014;47:S344–5.

Witter S, Hamza MM, Alazemi N, Alluhidan M, Alghaith T, Herbst CH. Human resources for health interventions in high- and middle-income countries: findings of an evidence review. Hum Resour Health. 2020;18:43.

Liu S, Lin J, He Y, Xu J. The Service Capability of Primary Health Institutions under the Hierarchical Medical System. Healthcare. 2022;10:335.

A Strategy for Strengthening Implementation of the Capabilities Opportunities Assessment Tool for the Public Health Workforce. http://www.iyxy.cn:3017/pubmed/37498540 . Accessed 18 Oct 2023.

Ding Y, Smith HJ, Fei Y, Xu B, Nie S, Yan W, et al. Factors influencing the provision of public health services by village doctors in Hubei and Jiangxi provinces, China. Bull World Health Organ. 2013;91:64–9.

Abate M, Mulissa Z, Magge H, Bitewulign B, Kiflie A, Biadgo A, et al. Key factors influencing motivation among health extension workers and health care professionals in four regions of Ethiopia: a cross-sectional study. PLoS ONE. 2022;17:e0272551.


Wright T, Mughal F, Babatunde O, Dikomitis L, Mallen C, Helliwell T. Burnout among primary health-care professionals in low- and middle-income countries: systematic review and meta-analysis. Bull World Health Organ. 2022;100:385–A401.

A scoping analysis of the aspects of primary healthcare physician job satisfaction: facets relevant to the Indonesian system. Human Resources for Health. https://human-resources-health.biomedcentral.com/articles/10.1186/s12960-019-0375-3 . Accessed 12 Nov 2023.

Pu L. Fairness of the Distribution of Public Medical and Health Resources. Front Public Health. 2021;9:768728.

Zhang T, Xu Y, Ren J, Sun L, Liu C. Inequality in the distribution of health resources and health services in China: hospitals versus primary care institutions. Int J Equity Health. 2017;16:42.

Asamani JA, Christmals CD, Nyoni CN, Nabyonga-Orem J, Nyoni J, Okoroafor SC, et al. Exploring the availability of specialist health workforce education in East and Southern Africa: a document analysis. BMJ Glob Health. 2022;7(Suppl 1):e009555.

Han X, Ku L. Enhancing staffing in Rural Community Health Centers Can Help Improve Behavioral Health Care. Health Aff. 2019;38:2061–8.

Chen G, Sang L, Rong J, Yan H, Liu H, Cheng J, et al. Current status and related factors of turnover intention of primary medical staff in Anhui Province, China: a cross-sectional study. Hum Resour Health. 2021;19:23.

Sun Y, Luo Z, Fang P. Factors influencing the Turnover Intention of Chinese Community Health Service workers based on the Investigation results of five provinces. J Community Health. 2013;38:1058–66.

Li T, Lei T, Xie Z, Zhang T. Determinants of basic public health services provision by village doctors in China: using non-communicable diseases management as an example. BMC Health Serv Res. 2015;16:42.

Kett PM, et al. Competencies, training needs, and turnover among rural compared with urban local public health practitioners: 2021 Public Health Workforce Interests and Needs Survey.

Zhao P, Diao Y, You L, Wu S, Yang L, Liu Y. The influence of basic public health service project on maternal health services: an interrupted time series study. BMC Public Health. 2019;19:824.

Zhao P, Han X, You L, Zhao Y, Yang L, Liu Y. Effect of basic public health service project on neonatal health services and neonatal mortality in China: a longitudinal time-series study. BMJ Open. 2020;10:e034427.

Beier ME, Torres WJ, Fisher GG, Wallace LE. Age and job fit: the relationship between demands–ability fit and retirement and health. J Occup Health Psychol. 2020;25:227–43.

Su B, Li D, Xie J, Wang Y, Wu X, Li J, et al. Chronic disease in China: Geographic and socioeconomic determinants among persons aged 60 and older. J Am Med Dir Assoc. 2023;24:206–e2125.

Evaluation of global health capacity building initiatives in low- and middle-income countries: a systematic review. http://www.iyxy.cn:3017/pubmed/33110574 . Accessed 18 Oct 2023.

Ning L, Jia H, Gao S, Liu M, Xu J, Ge S, et al. The mediating role of job satisfaction and presenteeism on the relationship between job stress and turnover intention among primary health care workers. Int J Equity Health. 2023;22:155.

Cometto G, Witter S. Tackling health workforce challenges to universal health coverage: setting targets and measuring progress. Bull World Health Organ. 2013;91:881–5.

Hunter MB, Ogunlayi F, Middleton J, Squires N. Strengthening capacity through competency-based education and training to deliver the essential public health functions: reflection on roadmap to build public health workforce. BMJ Glob Health. 2023;8:e011310.

The effects of chronic disease management in primary health care: evidence from rural China. http://www.iyxy.cn:3017/pubmed/34740053 . Accessed 19 Oct 2023.

Li X, Jiang M, Peng Y, Shen X, Jia E, Xiong J. Community residents’ preferences for chronic disease management in Primary Care facilities in China: a stated preference survey. Arch Public Health. 2021;79:211.

Wang Y, Wu Y, Chu H, Xu Z, Sun X, Fang H. Association between Health-Related Quality of Life and Access to Chronic Disease Management by Primary Care Facilities in Mainland China: a cross-sectional study. IJERPH. 2023;20:4288.

Salyers MP, Bonfils KA, Luther L, Firmin RL, White DA, Adams EL, et al. The relationship between Professional Burnout and Quality and Safety in Healthcare: a Meta-analysis. J GEN INTERN MED. 2017;32:475–82.

Intersection of living in a rural versus urban area and race/ethnicity in explaining access to health care in the United States. http://www.iyxy.cn:3017/pubmed/27310341 . Accessed 18 Oct 2023.

Shao S, Wu T, Guo A, Jin G, Chen R, Zhao Y, et al. The training contents, problems and needs of doctors in urban community health service institutions in China. BMC Fam Pract. 2018;19:182.

Eltorki Y, Abdallah O, Omar N, Zolezzi M. Perceptions and expectations of health care providers towards clinical pharmacy services in a mental health hospital in Qatar. Asian J Psychiatry. 2019;42:62–6.

Hypothesis: improving literacy about health workforce will improve rural health workforce recruitment, retention and capability. Human Resources for Health. https://doi.org/10.1186/s12960-019-0442-9 . Accessed 12 Nov 2023.

Li Y, Marquez R. Can government subsidies and public mechanisms alleviate the physical and mental health vulnerability of China’s urban and rural residents? Int J Equity Health. 2023;22:59.

Visualizing the drivers of an effective health workforce: a detailed, interactive logic model. Human Resources for Health. https://human-resources-health.biomedcentral.com/articles/10.1186/s12960-021-00570-7 . Accessed 12 Nov 2023.


Acknowledgements

All authors thank participants involved in the study.

The study was funded by the National Social Science Fund of China project “Study on Transaction Cost Measurement and Solutions of Integrated Delivery System” (21BGL222).

Author information

Authors and Affiliations

Henan Key Laboratory for Health Management of Chronic Diseases, Central China Fuwai Hospital, Central China Fuwai Hospital of Zhengzhou University, Zhengzhou, Henan, China

Rongmei Liu & Qiuping Zhao

Department of Hypertension, Henan Provincial People’s Hospital, People’s Hospital of Zhengzhou University, Henan, China

Wenyong Dong

Department of Health Management, College of Public Health, Zhengzhou University, No.100 Kexue Road, Zhongyuan District, Zhengzhou, Henan, 450001, China

Dan Guo, Zhanlei Shen, Yi Li, Wanliang Zhang, Dongfang Zhu, Jingbao Zhang, Junwen Bai, Ruizhe Ren, Mingyue Zhen, Jiajia Zhang, Jinxin Cui, Xinran Li & Yudong Miao

Department of Neurology, Henan Provincial People’s Hospital, People’s Hospital of Zhengzhou University, Henan, China


Contributions

Conceptualization: Yudong Miao, Rongmei Liu, Qiuping Zhao, Wenyong Dong. Data curation: Yudong Miao, Qiuping Zhao, Wenyong Dong, Dan Guo, Zhanlei Shen. Methodology: Zhanlei Shen, Wanliang Zhang, Yi Li, Junwen Bai, Ruizhe Ren, Dongfang Zhu, Jinxin Cui, Dan Guo. Project administration: Rongmei Liu. Resources: Rongmei Liu, Qiuping Zhao. Software: Zhanlei Shen, Dongfang Zhu, Wanliang Zhang. Writing, original draft: Yudong Miao, Zhanlei Shen, Rongmei Liu, Mingyue Zhen. Writing, review & editing: Yudong Miao, Jingbao Zhang, Jiajia Zhang, Xinran Li.

Corresponding author

Correspondence to Yudong Miao .

Ethics declarations

Ethical approval and consent to participate

This study was approved by the Life Science Ethics Review Committee of Zhengzhou University (2021-01-12). Informed consent was obtained from all participants. All methods were performed in accordance with the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Liu, R., Zhao, Q., Dong, W. et al. Assessing public health service capability of primary healthcare personnel: a large-scale survey in Henan Province, China. BMC Health Serv Res 24 , 627 (2024). https://doi.org/10.1186/s12913-024-11070-4


Received : 05 December 2023

Accepted : 02 May 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s12913-024-11070-4


  • Primary healthcare personnel
  • Public health service capability
  • Urban and rural areas

BMC Health Services Research

ISSN: 1472-6963
