Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has long been one of the most widely used methodologies in the industry because of the many benefits it offers when collecting and analyzing data.

In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give businesses access to a vast bank of information. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and its analysis. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods like polls, or qualitative research methods like focus groups and on-call interviews, can then follow. There are many situations where researchers conduct research using a blend of both qualitative and quantitative strategies.

Survey Research Methods

Survey research methods can be classified based on two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium used to conduct the survey:

  • Online/Email:  Online survey research is one of the most popular survey research methods today. The cost involved in online survey research is minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population. However, phone surveys tend to require more money and more time than other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting surveys over a continuum of time, spread across years or even decades. The data collected using this method from one time period to another may be qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research is followed up with a longitudinal study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular point in time. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can be either descriptive or analytical. The method is quick and helps researchers collect information in a brief period. Researchers rely on it in situations where a descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling method used to form samples: probability and non-probability sampling. In probability sampling, every individual in the population has an equal (or at least known) chance of being part of the sample, and the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form the sample.

The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
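
The difference between the two sampling families can be sketched in code. The snippet below is illustrative only (the `age_band` strata and the helper names are invented, not part of any survey tool): simple random sampling gives every individual an equal chance of selection, while stratified sampling allocates the sample in proportion to known population strata.

```python
import random

def simple_random_sample(population, n, seed=None):
    # Every individual has an equal probability of being selected.
    return random.Random(seed).sample(population, n)

def stratified_sample(population, stratum_of, n, seed=None):
    # Allocate the sample across strata in proportion to their population share.
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(stratum_of(person), []).append(person)
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, min(share, len(members))))
    return sample

# Hypothetical population: 200 respondents aged 18-34, 100 aged 35+.
people = [{"id": i, "age_band": "35+" if i % 3 == 0 else "18-34"} for i in range(300)]
print(len(simple_random_sample(people, 30, seed=1)))   # 30
strat = stratified_sample(people, lambda p: p["age_band"], 30, seed=1)
# The stratified sample mirrors the 2:1 population split: 20 vs 10 respondents.
```

Non-probability methods such as convenience or snowball sampling have no such selection mechanism, which is why their results generalize less safely to the whole population.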

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where the details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. Whereas, if researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, surveys should include a smart balance of open-ended and closed-ended questions. Use question types like the Likert scale, semantic differential scale, Net Promoter Score question, etc., to avoid fence-sitting.

  • Finalize a target audience:  Send out relevant surveys as per the target audience, and filter out irrelevant questions as per the requirement. Survey research is most instrumental when the sample is drawn from a well-defined target population. This way, results correspond to the desired market and can be generalized to the entire population.

  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real time and identify patterns in the responses that might lead to a much-needed breakthrough for your organization. GAP, TURF analysis, conjoint analysis, cross-tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures that improve customer/employee satisfaction.
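
Of the analysis methods listed above, cross-tabulation is the simplest to illustrate: it counts responses for every combination of two answer variables. The sketch below uses only the Python standard library, and the field names are hypothetical.

```python
from collections import Counter

def cross_tab(responses, row_key, col_key):
    # Count responses for every (row value, column value) combination.
    counts = Counter((r[row_key], r[col_key]) for r in responses)
    rows = sorted({r[row_key] for r in responses})
    cols = sorted({r[col_key] for r in responses})
    return {row: {col: counts[(row, col)] for col in cols} for row in rows}

responses = [
    {"age_band": "18-34", "satisfied": "yes"},
    {"age_band": "18-34", "satisfied": "no"},
    {"age_band": "35+",   "satisfied": "yes"},
    {"age_band": "35+",   "satisfied": "yes"},
]
table = cross_tab(responses, "age_band", "satisfied")
print(table)  # {'18-34': {'no': 1, 'yes': 1}, '35+': {'no': 0, 'yes': 2}}
```

Reading the table row by row immediately shows how satisfaction differs across demographic groups, which is exactly the kind of pattern the analysis step is meant to surface.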

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers to specific, essential questions. You can ask these questions in multiple survey formats, depending on the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying it out, so that the study can be structured, planned, and executed to perfection.

Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to make based on the collected data?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest in their feedback, opinions, and comments. Online and mobile surveys have proven to protect respondent privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service can be put on the table for discussion. One way to do this is to include open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey with what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
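
The practical consequence of these four scales is which arithmetic is meaningful on the recorded values. A small illustrative sketch (the variables are invented for the example):

```python
# Nominal: numbers are mere labels; only equality checks make sense.
industry_codes = {"retail": 1, "education": 2}
assert industry_codes["retail"] != industry_codes["education"]

# Ordinal: order is meaningful, but differences between ranks are not.
education_rank = {"high school": 1, "bachelor's": 2, "postgraduate": 3}
assert education_rank["postgraduate"] > education_rank["high school"]

# Interval: differences are meaningful, ratios are not (no true zero).
# 20 degrees C is 10 degrees warmer than 10 degrees C, but not "twice as hot".
temperature_difference = 20 - 10   # a meaningful quantity: 10 degrees

# Ratio: a true zero point makes ratios meaningful.
tenures = [2, 4]                   # years in a firm
assert tenures[1] / tenures[0] == 2.0   # genuinely "twice the tenure"
```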

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the ROI of the organization.

Other benefits of survey research are:

  • Minimum investment:  Mobile and online surveys involve minimal cost per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Thanks to offline survey response collection, researchers can also conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide the aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but a clear purpose must be decided upfront. This is the primary stage of survey research, as it molds the entire path of the survey and impacts its results.
  • Filter the sample from the target population:  ‘Whom to target?’ is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of the sample are and how useful their opinions are. The quality of respondents in a sample matters more than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides which questions to include in the study, they can send it across to the selected sample. Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART goals. What is it that you want to achieve with the survey? How will you measure it, and what results are you expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the 15-20 most relevant questions. Frame each question as a different question type based on the kind of answer you would like to gather. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share it with the right audience. You can distribute handouts, or share the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a dedicated document or an Excel sheet with all the necessary categories mentioned, so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior, because, as a researcher, you must know where your responses are coming from. This will help you analyze, predict decisions, and write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used/preferred? Do respondents prefer one product over another? Any recommendations?

Having a tool that helps you carry out all the necessary steps to carry out this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!


University of Southern Queensland

9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930–40s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
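
The aggregation of item responses into a composite scale mentioned above can be sketched as a simple average, with negatively worded items reverse-coded first so that all items point the same way. The function name and the three-item example are hypothetical:

```python
def composite_score(item_responses, reverse_items=(), scale_max=5):
    # Reverse-code negatively worded items (1 becomes scale_max, etc.),
    # then average all items into a single index.
    adjusted = [
        (scale_max + 1 - value) if i in reverse_items else value
        for i, value in enumerate(item_responses)
    ]
    return sum(adjusted) / len(adjusted)

# Three 5-point Likert items measuring self-esteem; item 2 is negatively worded.
score = composite_score([4, 5, 2], reverse_items={2})
print(round(score, 2))  # (4 + 5 + 4) / 3 -> 4.33
```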

Most questionnaire surveys tend to be self-administered postal surveys, where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low, since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well suited for issues that require clarification on the part of the respondent or that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire . A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.
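
One common safeguard against the multiple-submission problem mentioned above is to issue each invited respondent a one-time token in their emailed survey link and reject repeats. This is a minimal sketch of the idea, not any particular survey platform's mechanism; the class name and tokens are invented:

```python
import hashlib

class SubmissionGuard:
    """Accept each respondent token only once, storing hashes rather than raw tokens."""

    def __init__(self):
        self._seen = set()

    def accept(self, token: str) -> bool:
        digest = hashlib.sha256(token.encode("utf-8")).hexdigest()
        if digest in self._seen:
            return False   # duplicate submission: reject
        self._seen.add(digest)
        return True

guard = SubmissionGuard()
print(guard.accept("invite-7f3a"))  # True  (first submission recorded)
print(guard.accept("invite-7f3a"))  # False (second attempt rejected)
```

Hashing the token means the stored record cannot be traced back to the emailed link, which also helps preserve respondent anonymity.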

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
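
These response formats map naturally onto a small data structure with per-format validation. The sketch below is illustrative only (the class and its rules are invented for this example; a real instrument would need richer checks):

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    fmt: str                  # "dichotomous", "nominal", "ordinal", "interval", "continuous"
    choices: list = field(default_factory=list)

    def validate(self, answer):
        if self.fmt == "continuous":
            return float(answer) >= 0       # ratio-scaled: true zero point
        if self.fmt == "interval":
            return answer in range(1, 6)    # e.g. a 5-point Likert item
        return answer in self.choices       # dichotomous / nominal / ordinal

q = Question("Do you think that the death penalty is justified under some circumstances?",
             "dichotomous", ["yes", "no"])
print(q.validate("yes"), q.validate("maybe"))  # True False
```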

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable?: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner?: Negatively worded questions, such as ‘Should your local government not raise taxes?’, tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages alone or also to dividend, rental, and other income, and whether you mean personal income, family income (including a spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms had the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled?: Double-barrelled questions are those that ask about more than one issue in a single question, and can therefore have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware, but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ‘Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general?: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book on a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what would that mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others, or do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of detail than generality.

Is the question presumptuous?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as taking a tour around the world, buying a restaurant or bar, spending it on education, saving for retirement, helping parents or children, or having a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3’.
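The filter-question rule above amounts to simple branching logic, which online survey tools implement as ‘skip logic’. A minimal sketch of that routing (the question IDs and section names here are hypothetical, not from any particular survey tool):

```python
# Hypothetical routing table: (question id, answer) -> next section.
routing = {
    ("q5", "yes"): "section_2",
    ("q5", "no"): "section_3",
}

def next_section(question_id, answer):
    """Return the section a respondent should be routed to,
    based on their answer to a filter question."""
    return routing[(question_id, answer.lower())]

print(next_section("q5", "Yes"))  # section_2
print(next_section("q5", "no"))   # section_3
```

In a paper questionnaire the respondent follows this routing themselves, which is why the instruction must be stated explicitly in the question text.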

Other golden rules . Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview , where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group . In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by one or two strong personalities, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview . In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI). This is increasingly being used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
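The random digit dialling technique mentioned above can be sketched in a few lines: one common variant appends random last-four digits to known area codes and exchanges, which reaches listed and unlisted numbers alike. The area code and exchange below are illustrative placeholders:

```python
import random

def random_digit_dial(area_code, exchange, n, seed=None):
    """Generate n candidate phone numbers by appending random
    last-four digits to a known area code and exchange.
    Seeding makes the sample reproducible for auditing."""
    rng = random.Random(seed)
    return [f"{area_code}-{exchange}-{rng.randint(0, 9999):04d}"
            for _ in range(n)]

for number in random_digit_dial("415", "555", 5, seed=42):
    print(number)
```

In practice, CATI systems combine such number generation with checks against business-number and non-working-number databases before dialling.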

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themself, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without moving on to the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater detail. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives : Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias . Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe their utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
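As a rough sketch of the logic behind Harman’s single-factor test: load all survey items into an unrotated factor analysis and check whether a single factor accounts for the majority of the variance. The version below approximates this with the first eigenvalue of the item correlation matrix; the simulated data and the conventional 0.50 warning threshold are illustrative assumptions, not a substitute for the full procedure described by Podsakoff et al.:

```python
import numpy as np

def harman_single_factor(items):
    """Fraction of total variance captured by the first principal
    component of the item correlation matrix. A share above ~0.50
    is commonly read as a warning sign for common method bias."""
    corr = np.corrcoef(np.asarray(items), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)  # ascending order
    return eigenvalues[-1] / eigenvalues.sum()

rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))
# Items dominated by one shared factor (method-driven covariance):
biased = common + 0.3 * rng.normal(size=(200, 6))
# Items with no shared factor:
clean = rng.normal(size=(200, 6))

print(round(harman_single_factor(biased), 2))  # well above 0.5
print(round(harman_single_factor(clean), 2))   # close to 1/6
```

Note that failing this test does not prove the absence of common method bias; it is a coarse diagnostic, which is why the longitudinal and multi-method designs mentioned above are preferred remedies.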

  • Dillman, D. (1978). Mail and telephone surveys: The total design method . New York: Wiley. ↵
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly , 53(3), 388–394. ↵
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology , 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879. ↵
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology , 86(1), 114–121. ↵

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.

Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is, as well as on the margin of error and confidence level you are willing to accept. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
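One common way such sample size calculators work is Cochran’s formula with a finite population correction, sketched below. The 95% confidence level (z = 1.96), 5% margin of error, and conservative proportion p = 0.5 are illustrative defaults, not fixed requirements:

```python
import math

def sample_size(population, margin_error=0.05, z=1.96, p=0.5):
    """Cochran's formula with finite population correction.

    z : z-score for the desired confidence level (1.96 ~ 95%)
    p : expected proportion (0.5 is the most conservative choice)
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_error ** 2)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)               # correct for finite N
    return math.ceil(n)

print(sample_size(10_000))  # 370
print(sample_size(500))     # 218
```

Notice that the required sample grows much more slowly than the population: a population twenty times larger does not need twenty times the responses.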

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.
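The ‘exhaustive options plus an other field’ rule above can be sketched as a small validation check; the question, option list, and helper function here are hypothetical illustrations, not any survey tool’s API:

```python
# Hypothetical closed-ended question with an 'other' free-text escape.
question = {
    "text": "How did you hear about us?",
    "options": ["Search engine", "Social media", "Friend or colleague"],
    "allow_other": True,
}

def is_valid(answer, q):
    """Accept an answer if it matches a listed option, or if the
    question allows 'other' and the answer is non-empty free text."""
    return answer in q["options"] or (q["allow_other"] and bool(answer.strip()))

print(is_valid("Social media", question))  # True
print(is_valid("Podcast", question))       # True, captured as 'other'
print(is_valid("   ", question))           # False, blank is not an answer
```

Without the ‘other’ escape, respondents whose true answer is missing from the list are forced into an inaccurate choice, which undermines validity.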

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
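In practice this processing step is usually done with a spreadsheet or statistical package. As an illustrative sketch (the column names and data below are hypothetical), removing incomplete responses might look like this in Python with pandas:

```python
import pandas as pd

# Hypothetical raw survey export; None marks an unanswered question.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "q1_satisfaction": [4, 5, None, 3],
    "q2_frequency": [2, None, None, 5],
})

# Cleanse the data: drop any response with a missing answer.
clean = raw.dropna(subset=["q1_satisfaction", "q2_frequency"])
# Respondents 2 and 3 are removed; respondents 1 and 4 remain.
```

Whether to drop partially completed responses entirely or retain them is a judgement call that depends on how much data is missing and why.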

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
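As a rough sketch of what coding can look like once a codebook has been agreed (the categories and keywords here are invented for illustration; real coding is normally done by trained coders or qualitative-analysis software):

```python
# Hypothetical codebook mapping categories to indicative keywords.
CODEBOOK = {
    "cost": ["price", "expensive", "afford"],
    "service": ["staff", "helpful", "rude"],
}

def code_response(text):
    """Return every category whose keywords appear in the response."""
    text = text.lower()
    labels = [cat for cat, words in CODEBOOK.items()
              if any(w in text for w in words)]
    return labels or ["uncoded"]

responses = [
    "The staff were very helpful",
    "Too expensive for what you get",
    "I liked the colour scheme",
]
coded = [code_response(r) for r in responses]
# → [['service'], ['cost'], ['uncoded']]
```

Responses left "uncoded" by a first pass are typically reviewed by hand, and the codebook is refined iteratively.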

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
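For instance, the same set of rating responses can yield both a frequency distribution and descriptive statistics. A small sketch using Python's standard library (hypothetical data) rather than SPSS or Stata:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical cleaned responses to a single 1-5 rating item.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Analysis 1: a frequency distribution of the responses.
freq = Counter(ratings)

# Analysis 2: descriptive statistics on the same data.
avg = mean(ratings)    # 3.9
mid = median(ratings)  # 4.0
```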

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
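As a sketch of how this plays out in practice (hypothetical data; SciPy used for the tests): a rank-based test suits a single ordinal Likert-type item, while a parametric test suits summed scale scores treated as interval data:

```python
from scipy import stats

# Hypothetical responses to one Likert-type item (1-5) from two groups.
item_a = [2, 3, 3, 4, 2, 3]
item_b = [4, 4, 5, 3, 4, 5]

# Ordinal data: a rank-based (non-parametric) test, e.g. Mann-Whitney U.
u_stat, u_p = stats.mannwhitneyu(item_a, item_b)

# Hypothetical summed scores across a multi-item scale, treated as interval.
scale_a = [11, 13, 12, 15, 10, 12]
scale_b = [17, 16, 18, 14, 17, 19]

# Interval data: a parametric test such as the independent-samples t-test.
t_stat, t_p = stats.ttest_ind(scale_a, scale_b)
```

Whether summed Likert scores may be treated as interval data is debated; reporting both kinds of analysis is a common compromise.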

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 14 May 2024, from https://www.scribbr.co.uk/research-methods/surveys/

Shona McCombes


Chapter 9: Survey Research

Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research  is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents  in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.  Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies, which have measured the opinions of Canadian voters around federal elections since 1965. Anyone can access the data and read about the results of the experiments in these studies.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in  Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States. In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.
  • The lifetime prevalence of a disorder is the percentage of people in the population that develop that disorder at any time in their lives.

A quantitative approach in which variables are measured using self-reports from a sample of the population.

Participants in a survey.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.




5 Approaching Survey Research

What Is Survey Research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research, as long as there is manipulation of an independent variable (e.g., anger vs. fear) to assess its effect on a dependent variable (e.g., risk judgments).

Chapter 5: Learning Objectives

If your research question(s) center on the experience or perception of a particular phenomenon, process, or practice, utilizing a survey method may help glean useful data. After reading this chapter, you will

  • Identify the purpose of survey research
  • Describe the cognitive processes involved in responding to questions
  • Discuss the importance of context in drafting survey items
  • Contrast the utility of open-ended and closed-ended questions
  • Describe the BRUSO method of drafting survey questions
  • Describe the format for survey questionnaires

The heart of any survey research project is the survey itself. Although it is easy to think of interesting questions to ask people, constructing a good survey is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing surveys to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.

Cognitive Processes of Responses

To best understand how to write a ‘good’ survey question, it is important to frame the act of responding to a survey question as a cognitive process. That is, there are involuntary mechanisms that take place when someone is asked a question. Sudman, Bradburn, & Schwarz (1996, as cited in Jhangiani et al., 2012) illustrate this cognitive process as follows.

Figure: Progression of a cognitive response. First the respondent must understand the question, then retrieve relevant information from memory and formulate a tentative response based on a judgment formed from that information. Finally, the respondent must edit the response, depending on the response options provided by the survey.

Framing the formulation of survey questions in this way is extremely helpful to ensure that the questions posed on your survey glean accurate information.

Example of a Poorly Worded Survey Question

How many alcoholic drinks do you consume in a typical day?

  • A lot more than average
  • Somewhat more than average
  • Average number
  • Somewhat fewer than average
  • A lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, typical weekend day, or both. Even though Chang and Krosnick (2003, as cited in Jhangiani et al., 2012) found that asking about “typical” behavior is more valid than asking about “past” behavior, their study compared a “typical week” to the “past week,” and the results may differ when considering typical weekdays versus weekend days. Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that for fear of looking bad in the eyes of the researcher, so instead they may opt to select the “somewhat more than average” response option.

From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.

Context Effects on Survey Responses

Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990, as cited in Jhangiani et al., 2012). For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988, as cited in Jhangiani et al., 2012). When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory, so that they were more likely to base their life satisfaction rating on it.

The response options provided can also have unintended effects on people’s responses (Schwarz, 1999, as cited in Jhangiani et al., 2012). For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate against order effects, rotate questions and response items when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects, such as the finding that among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first.
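A minimal sketch of this per-respondent randomization (the question wording is hypothetical; most survey platforms offer this as a built-in setting):

```python
import random

# Hypothetical question bank with no natural order.
questions = [
    "How satisfied are you with the checkout process?",
    "How satisfied are you with the product selection?",
    "How satisfied are you with the delivery speed?",
]

def randomized_order(items, seed=None):
    """Return a shuffled copy so each respondent sees a different order."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

# Each respondent gets an independently randomized presentation order.
respondent_1 = randomized_order(questions, seed=1)
respondent_2 = randomized_order(questions, seed=2)
```

Across many respondents, randomization averages out any advantage a question or response option gains purely from its position.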

Writing Survey Items

Types of Items

Questionnaire items can be either open-ended or closed-ended. Open-ended  items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.

  • “What is the most important thing to teach children to prepare them for life?”
  • “Please describe a time when you were discriminated against because of your age.”
  • “Is there anything else you would like to tell us about?”

Open-ended items are useful when researchers do not know how participants might respond or when they want to avoid influencing their responses. Open-ended items are more qualitative in nature, so they tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. Another disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the likely answers are unknown, or for quantities that can easily be converted to categories later in the analysis.

Closed-ended items ask a question and provide a set of response options for participants to choose from.

Examples of  Closed-Ended Questions

How old are you?

On a scale of 0 (no pain at all) to 10 (the worst pain ever experienced), how much pain are you in right now?

Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are more quantitative in nature, so they are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.

All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) to which they belong. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.

Figure: A Likert scale with scaled responses from 1 (strongly disagree) to 5 (strongly agree).

The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993, as cited in Jhangiani et al., 2012). Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and overly long or specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics.
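The branching procedure can be sketched as a simple two-step flow. The labels come from the liking example above, while the function itself is only an illustration of the logic a survey platform would implement:

```python
# The seven-point bipolar liking scale, split around its midpoint.
POSITIVE = ["Like very much", "Like somewhat", "Like slightly"]
NEGATIVE = ["Dislike slightly", "Dislike somewhat", "Dislike very much"]
NEUTRAL = ["Neither like nor dislike"]

def branch_options(first_answer):
    """Return the refined response options after the branching question."""
    if first_answer == "like":
        return POSITIVE
    if first_answer == "dislike":
        return NEGATIVE
    return NEUTRAL

# A respondent who generally likes ice cream only sees the 'like' options.
options = branch_options("like")
```

The two answers combined still map onto a single seven-point score, but each respondent only ever chooses among a short, relevant list.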

Writing Effective Items

We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000, as cited in Jhangiani et al., 2012). An acronym, BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific, so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double-barreled.” They ask about two conceptually separate issues but allow only one response.

Example of a “Double-Barreled” Question

Please rate the extent to which you have been feeling anxious and depressed.

Note: Anxiety and depression are two conceptually separate constructs, so they should be asked about in two separate items.

Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. The best way to know how people interpret the wording of the question is to conduct a pilot test and ask a few people to explain how they interpreted the question. 

A description of the BRUSO methodology of writing questions wherein items are brief, relevant, unambiguous, specific, and objective

For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are mutually exclusive. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an ‘Other’ category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.
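One practical way to check whether a closed-ended item’s categories are exhaustive is to compare pilot-test responses against the planned options. A small illustrative sketch (the function and data are hypothetical):

```python
# Hypothetical pilot-test check: which responses fall outside the listed options?
def check_response_options(options, responses):
    """Return pilot-test responses not covered by the planned categories."""
    return [r for r in responses if r not in options]

missing = check_response_options(
    ["Protestant", "Catholic", "Other"],
    ["Catholic", "Jewish", "Protestant"],
)
# "Jewish" is flagged, a hint that either an explicit category or an
# 'Other' write-in is needed for the list to be exhaustive
```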

For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint.

Example of an unbalanced versus balanced rating scale

Unbalanced rating scale measuring perceived likelihood

Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely

Balanced rating scale measuring perceived likelihood

Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely

Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. However, including a middle alternative on a bipolar dimension allows respondents to choose an option that is genuinely neither one side nor the other.

Formatting the Survey

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000, as cited by Jhangiani et al., 2012). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail, and the researcher must make a good case for why they should agree to participate. This means that the researcher has only a moment to capture the attention of the respondent and must make it as easy as possible for the respondent to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not always used in survey research; when the research is minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate. It is therefore important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Coding your survey responses

Once you’ve closed your survey, you’ll need to identify how to quantify the data you’ve collected. Much of this can be done in ways similar to methods described in the previous two chapters. Although there are several ways by which to do this, here are some general tips:

  • Transfer data: Transfer your data to a program that will allow you to organize and ‘clean’ the data. If you’ve used an online tool to gather data, you should be able to download the survey results in a format appropriate for working with the data. If you’ve collected responses by hand, you’ll need to input the data manually.
  • Save: ALWAYS save a copy of your original data. Save changes you make to the data under a different name or version in case you need to refer back to the original data.
  • De-identify: This step will depend on the overall approach that you’ve taken to answer your research question and may not be appropriate for your project.
  • Name the variables: Again, there is no ‘right’ way to do this; however, as you move forward, you will want to be sure you can easily identify what data you are extracting. Many times, when you transfer your data, the program will automatically associate the data collected with the question asked. It is a good idea to name the variable something associated with the data, rather than the question.
  • Code the attributes: Each variable will likely have several different attributes, or layers. You’ll need to come up with a coding method to distinguish the different responses. As discussed in previous chapters, each attribute should have a numeric code associated with it so that you can quantify the data and use descriptive and/or inferential statistical methods to either describe or explore relationships within the dataset.

Most online survey tools will download data into a spreadsheet-type program and organize that data in association with the question asked. Naming the variables so that you can easily identify the information will be helpful as you proceed to analysis.

This is relatively simple to accomplish with closed-ended questions. Because you’ve ‘forced’ the respondent to pick a concrete answer, you can create a code that is associated with each answer. In the picture above, respondents were asked to identify their region, given a list of geographical regions, and instructed to pick one. The researcher then created a code for the regions. In this case, 1 = West; 2 = Midwest; 3 = Northeast; 4 = Southeast; and 5 = Southwest. If you’re working to quantify data that is somewhat qualitative in nature (i.e., open-ended questions), the process is a little more complicated. You’ll need to either create themes or categories, classify types of similar responses, and then assign codes to those themes or categories.
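The region coding just described, and the theme-based coding for open-ended responses, can be sketched in a few lines. The theme names and lookup table below are invented for illustration:

```python
# Closed-ended: one fixed code per answer option, as in the region example above.
REGION_CODES = {"West": 1, "Midwest": 2, "Northeast": 3,
                "Southeast": 4, "Southwest": 5}

# Open-ended: responses are first grouped into themes, then the themes are
# coded. These themes are made up for illustration.
THEME_CODES = {"cost": 1, "scheduling": 2, "staff": 3}

def code_region(response):
    return REGION_CODES[response]

def code_theme(response, theme_lookup):
    """theme_lookup maps each raw open-ended response to its assigned theme."""
    return THEME_CODES[theme_lookup[response]]

code_region("Midwest")  # → 2
code_theme("Too expensive for me", {"Too expensive for me": "cost"})  # → 1
```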

  • Create a codebook: This is essential. Once you begin to code the data, you will have somewhat disconnected yourself from the data by translating the data from a language that we understand to a language which a computer understands. After you run your statistical methods, you’ll translate it back to the native language and share findings. To stay organized and accurate, it is important that you keep a record of how the data has been translated.
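A codebook can be as simple as a small structure that records, for each variable, the original question wording and the meaning of every numeric code. A hypothetical sketch (variable names and questions are invented):

```python
# Hypothetical codebook: one entry per variable, recording the question
# text and what each numeric code stands for.
CODEBOOK = {
    "region": {
        "question": "Which region do you live in?",
        "codes": {1: "West", 2: "Midwest", 3: "Northeast",
                  4: "Southeast", 5: "Southwest"},
    },
    "satisfaction": {
        "question": "How satisfied are you with the program?",
        "codes": {1: "Very dissatisfied", 2: "Dissatisfied", 3: "Neutral",
                  4: "Satisfied", 5: "Very satisfied"},
    },
}

def decode(variable, code):
    """Translate a numeric code back into its verbal label for reporting."""
    return CODEBOOK[variable]["codes"][code]
```

Keeping the codebook alongside the dataset makes it possible to translate findings back into the respondents’ language when sharing results.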

  • Analyze: Once you have the data inputted, cleaned, and coded, you should be ready to analyze your data using either descriptive or inferential methods, depending on your approach and overarching goal.
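For descriptive analysis, a handful of summary statistics often suffices. A minimal stdlib-only sketch using made-up coded responses:

```python
import statistics
from collections import Counter

coded = [2, 4, 4, 1, 3, 4, 2]  # made-up coded responses to one 5-point item

summary = {
    "n": len(coded),
    "mean": statistics.mean(coded),    # central tendency
    "median": statistics.median(coded),
    "counts": Counter(coded),          # frequency of each code
}
```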

Key Takeaways

  • Surveys are a great method to identify information about perceptions and experiences
  • Question items must be carefully crafted to elicit an appropriate response
  • Surveys are often a mixed-methods approach to research
  • Both descriptive and inferential statistical approaches can be applied to the data gleaned through survey responses
  • Surveys utilize both open and closed ended questions; identifying which types of questions will yield specific data will be helpful as you plan your approach to analysis
  • Most surveys will need to include a method of informed consent, and an introduction. The introduction should clearly delineate the purpose of the survey and how the results will be utilized
  • Pilot tests of your survey can save you a lot of time and heartache. Pilot testing helps to catch issues in the development of item, accessibility, and type of information derived prior to initiating the survey on a larger scale
  • Survey data can be analyzed much like other types of data; following a systematic approach to coding will help ensure you get the answers you’re looking for
  • This section is attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
  • The majority of content in these sections can be attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

A mixed methods approach using self-reports of respondents who are sampled using stringent methods

A type of survey question that allows the respondent to insert their own response; typically qualitative in nature

A type of survey question which forces a respondent to select from a set of predefined responses, leaving little room for subjectivity.

Practical Research: A Basic Guide to Planning, Doing, and Writing Copyright © by megankoster. All Rights Reserved.



What is a survey? Benefits, tips & free tools

It’s a simple question. But as with many things, the answer is more complex than you’d think. Surveys can take many forms but are most common as a questionnaire, either written or online.

A survey is a method of  gathering information  using relevant questions from a sample of people with the aim of understanding populations as a whole. Surveys provide a critical source of data and insights for everyone engaged in the information economy, from businesses to media, to government and academics.

What types of surveys are there?

There are four modes of surveys that are commonly used.

  • Face-to-face surveys
  • Telephone surveys
  • Self-administered paper and pencil surveys
  • Self-administered computer surveys (typically online)

While surveys vary widely in how they’re conducted, there are a number of common components. Many of these features have been studied in extensive detail by survey methodologists, psychologists, statisticians, and in many other fields of research.

In this article, we’ll go through the fundamentals — from the benefits of online surveys to how to use your results.

Start creating surveys today with our free software

The benefits of online surveys

Chobani product research question - Select the option that is more appealing to you

Millions of surveys are sent out each year.  Online survey software  has been the most popular way of conducting survey research for over a decade, and because getting faster insights is imperative to business success, more companies are migrating to  digital solutions .

There are several benefits to online surveys over more traditional methods, such as paper surveys, and not just because of the money you can save.

Here are a few of the key benefits you should consider:

More responses, faster

Unlike paper surveys that require you to wait for responses to be posted back, answers from online surveys can be gathered automatically. Respondents are  likely to respond faster  to your online survey too because they can be completed with a few clicks.

They’re cheaper to run

Online questionnaires are significantly cheaper than traditional surveys. You don’t have to spend money on postage. And you don’t need the same resource and time to input responses on paper questionnaires into your database.

Online surveys are more accurate

Paper surveys have a higher margin for error because answers given on paper forms need to be input manually into your system to be analysed. With an online survey, answers are entered directly into your system.

Results on tap

Survey data collection is much faster online because you don’t have to spend time manually re-entering responses from paper forms into your system, meaning you  get real-time analysis of your data  that can be broken down and interpreted much quicker.

Reach new audiences

Online surveys can be accessed by anyone, anywhere. This removes the limitations associated with paper surveys that need to be distributed by hand or post. Suddenly, you can conduct more comprehensive research at scale, getting a better understanding of your chosen topic or study globally.

Why create surveys using a survey tool?

It’s easy to create a follow-up or new survey with digital software. Qualtrics offers  free survey templates  you can download and use immediately. Digital software can save your organization time and money thanks to lower setup and administrative costs. It’s more convenient for the customer or respondent because they can take the survey on whichever digital device is most convenient for them (tablet, computer, mobile device, etc). It’s also more convenient for you, as you just need to send the survey link via email and you’ll have the data in your survey management software as soon as responses come in.

Digital surveys scale, too. You can  send a survey  to thousands of people and even translate it into multiple languages so a survey respondent can reply with ease (increasing the chance of gaining a reply).

How to create a survey

Survey Designer Goals

A good survey is only as good as its design, so it’s important to be thorough at every stage from  your initial idea  to final data analysis. By taking the time to plan out your research question and distribution model, you’ll be in the best position to get quality data.

Define your research question and goals

Defining the research question first is important to the success of a survey research project. What are you trying to find out? Do you want to understand what customers think of your latest product or your brand overall? Are you looking at what  benefits your employees want , or if your employees are engaged at work through  employee engagement surveys ?

Without establishing a research question and the metrics you want to measure, a survey will only provide data, not the insights you need to make changes to your processes, product, or services.

Identify who you’ll be surveying

Who should answer your questions? Customers, employees, consumers who aren’t using your product? You must identify this and understand the best way to reach them (social media, email, your website, etc).

Design and pre-test surveys

Designing the questionnaire and pre-testing it is crucial to getting valid and reliable data. For example, careful survey design and pre-testing can help bring clarity and reduce the chance that respondents may interpret the meaning of questions differently.

Select a sample to survey

Selecting a sample is important for collecting valid and reliable data about the population as a whole.

If you’re sampling a large database of customer email addresses and only want one response per household, you should cross-check email addresses against mailing addresses and remove duplicates. Then you should draw a random sample from the remaining email addresses. Use our  sample size calculator  to determine how many responses you need to be confident in your data.
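The deduplicate-then-sample procedure described above might look like this in outline. The field names, data, and fixed seed are illustrative; a real draw would use the whole cleaned contact list:

```python
import random

def draw_sample(contacts, k, seed=42):
    """Deduplicate by mailing address (one response per household),
    then draw a simple random sample of k email addresses."""
    seen, unique = set(), []
    for email, address in contacts:
        if address not in seen:        # keep only the first email per address
            seen.add(address)
            unique.append(email)
    return random.Random(seed).sample(unique, k)

contacts = [
    ("ana@example.com", "12 Oak St"),
    ("ben@example.com", "12 Oak St"),   # same household as ana -> dropped
    ("cal@example.com", "7 Elm Ave"),
    ("dee@example.com", "3 Pine Rd"),
]
sample = draw_sample(contacts, k=2)
```

Fixing the seed makes the draw reproducible, which is useful when documenting how the sample was selected.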

Another consideration in sampling is that your customers will not be an unbiased population for all types of research. If you’re launching a product for a new segment, your existing market may not be representative of potential buyers so any data gathered from them won’t be beneficial.

Send out your survey

When you send out your survey, it’s important to get responses from everyone in the sample, as this will determine the response rate of the survey. Create a plan to get a great response rate for your survey before you begin distribution so you can hit your target.

You can distribute your survey online using an email survey tool or an email management system (for example, using mail merge with Microsoft Outlook or Google forms).

Analyse the data

It’s often necessary to code and adjust the data before analysing, particularly if open-ended questions  were asked.  Qualtrics Text IQ easily analyses data from open-text responses, giving you actionable insights. Your whole data set can then be analysed, and you can make a plan for improvements. There are a variety of statistical analysis types that are well suited for survey data. We’ll cover them later in this article.

For more information on how to get started on your survey creation visit our  complete guide on creating a survey.

Produce results using the right survey questions

Types of Survey Questions

There are more than 100 ways to ask a question, and question types have a direct impact on the survey results. For instance, text-entry questions are most reliable but also lead to respondent fatigue faster, so you should limit their number.

Simple questions are best, but you can pick and choose questions based on what you want to achieve. Here is a list of the most common question types, with example question text:

  • Multiple choice: Multiple choice questions form the basis of most research. They can be displayed as a traditional list of choices or as a dropdown menu, select box, etc.

Uruguay survey example

  • Multi-select: Multi-select is used when you want participants to select more than one answer from a list.

Example of multi-select

  • Text entry: Text entry, also known as open-field, is used to gather open-ended responses. These survey responses can be lengthy essays, standard form information such as name, email address, or anything in between.

Example of a text-entry survey

  • Rank order: Rank order is used to determine the order of preference for a list of items. This question type is best used when you want to measure your respondents’ attitudes toward something.

Example of ranking order survey

  • Rating scale:  Rating questions  ask respondents to indicate their personal levels for aspects such as agreement, satisfaction, or frequency. An example is a Likert scale.

Rank scale survey question

  • Matrix table: Matrix questions are used to collect multiple pieces of information in one question. This type provides an effective way to condense your survey or to group similar items into one question.

Matrix table example

  • Slider: Sliders let respondents indicate their level of preference with a draggable bar rather than a traditional button or checkbox.

Slider survey example

  • Side-by-side: Side-by-side questions let you ask multiple questions in one condensed table and provide an effective way of shortening your survey while gathering the same amount of data.

Example of side-by-side survey

The conventional wisdom—which has been supported by most empirical research on the topic over the years—suggests that, in general, questions should be worded to:

  • Be simple, direct, comprehensible
  • Not use jargon
  • Be specific and concrete (rather than general and abstract)
  • Avoid ambiguous words
  • Avoid double-barreled questions
  • Avoid negations
  • Avoid leading questions
  • Include filter questions
  • Read smoothly out loud
  • Avoid emotionally charged words
  • Allow for all possible responses

If you’re curious about which question types can go together, you can review our sample survey templates that you can use for free to help you get started:

  • Employee satisfaction  survey template
  • Employee exit  survey template
  • Customer satisfaction (CSAT)  survey template
  • Ad testing  survey template
  • Brand awareness  survey template
  • Product pricing  survey template
  • Product research  survey template
  • Employee engagement  survey template
  • Customer service  survey template
  • NPS (Net Promoter Score)  survey template
  • Product package testing  survey template
  • Product features prioritization  survey template

For more information on how to  create a survey  with the best mixture of questions, view the  Qualtrics Handbook of Question Design .

Ways to improve your survey creation

At first glance, a good survey looks easy to construct. You just ask the questions you want answered, right? In reality, the way you ask a question can determine the survey responses, so your questions should be unbiased, direct, and mutually exclusive. There are best practices to ensure you get the best results, all of which we’ll outline in this section.

Design the survey for your brand

Regardless of who sends your online survey, ensure it’s consistent with your company’s branding.

When you  create a survey , it’s still important that the customer recognises it’s from you so they know it’s legitimate and not fraud. This also preserves overall brand consistency .

Quality check your survey data

All data should be checked and double-checked to ensure accuracy.

Before you send the online survey, take the survey yourself and make sure there are no errors. Complete the survey on multiple devices and make sure everything functions as it should. After you’ve gathered your data, review the results output and make sure everything is accurate.

Avoid asking for personal info you don’t need or already have

Respondents want to give their opinions, not personal details. If you already have a respondent’s information, don’t ask for it again. If you’re collecting sensitive data, make sure you’re complying with internal policies, local laws, and  GDPR  (if applicable). You’ll also want to ensure your servers that house the data are secure and carry out periodic penetration checks to patch any vulnerabilities.

Distribute your survey through the right channels

Your survey can be  distributed through multiple channels  – web, email, social media, etc. To make sure the maximum number of people complete it, distribute it through the channel your target audience is most likely to be on. This might even mean distributing it through social media for millennials and email for Generation X.

How to use your survey results

Knowing how to implement a survey is only half the story. If you don’t then make changes based on what you’ve learned, the time spent creating the survey will be wasted. Create an internal and external distribution plan before you even get the data. You can even create a mock presentation with fake data and a mock distribution plan, so you know exactly what you need to do once you’ve analysed the data.

Trigger automatic actions

Automatic triggers save you time and help you scale your experience management programs. Use the findings from your surveys to trigger data integration with other tools like CRMs  Salesforce ,  Hubspot , or  Microsoft Dynamics . Create service tickets directly with a  closed-loop ticketing follow-up  add-on or with integrations to your service desk software such as  Zendesk ,  ServiceNow , or  FreshDesk . If a customer is at risk of churning, have an alert pop-up when that customer interacts with your customer service agent. There are endless possibilities to use automatic triggers once you have your data analysed.

Format insights into a user-friendly format

There are many things you can do with your survey results, but it’s most important to get them into the hands of the decision-makers in an  easily understandable format . From creating interactive PDF reports to exporting to Excel, SPSS, or Google sheets, you can present the information in multiple ways to help make sure it’s well-received by your stakeholders.

Presenting your survey analysis in a user-friendly format can help take it from this (which provides little overall insight when you have to view every question)

to this (where insights are immediate and conclusions can be drawn at a glance):

Use advanced statistical analysis

Advanced statistical analysis can help you answer research questions in more depth. These are methods that are typically used by a trained research professional or  high-tech statistical software like Qualtrics Stats iQ .

  • Regression analysis – Measures the degree of influence of independent variables on a dependent variable (the relationship between two variables).
  • Analysis of Variance (ANOVA) test  – Commonly used with a regression study to find out what effect independent variables have on the dependent variable. It can compare multiple groups simultaneously to see if there is a relationship between them.
  • Conjoint analysis  – Asks people to make trade-offs when making decisions then analyses the results to give the most popular outcome. Helps you understand why people make the complex choices they do.
  • T-Test  – Helps you compare whether two data groups have different mean values and allows the user to interpret whether differences are meaningful or merely coincidental.
  • Crosstab analysis – Used in quantitative market research to analyse categorical data – that is, variables that are different and mutually exclusive, and allows you to compare the relationship between two variables in contingency tables.
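To make one of these concrete: the t-test compares two group means. Below is a minimal stdlib-only sketch of Welch's t-statistic, which does not assume equal variances; a full test would also compute degrees of freedom and a p-value, and the data here are made up:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples:
    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

# Two made-up groups of coded ratings
group_a = [4, 5, 3, 4, 5, 4]
group_b = [2, 3, 2, 4, 3, 2]
t_stat = welch_t(group_a, group_b)  # large positive t suggests group_a rates higher
```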

Use a survey tool that does it all with Qualtrics

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in an organisation to gather insights and take action. No coding required.

Through the Qualtrics survey tool, you can get answers to your most important marketing, branding, customer, and product questions. This online survey software can handle everything from simple customer feedback questionnaires to detailed research projects.

You can also leverage listening survey tools to capture feedback from more than 125 data sources, including online reviews and social media sites. Then, manage your data from the survey platform to break down silos, understand customers, and run targeted research.

Furthermore, with drag-and-drop simplicity — even for the most advanced surveys — you can create questionnaires that fit your exact purpose, and utilise built-in intelligence (powered by iQ) to get higher-quality data.

Lastly, for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

Qualtrics’ survey software puts all the information at your fingertips, making it easy for you to access, measure, and examine data that are critical to your organisation — and its success — without you having to jump between systems.



Survey Research Design: Definition, How to Conduct a Survey & Examples


Survey research is a quantitative research method that involves collecting data from a sample of individuals using standardized questionnaires or surveys. The goal of survey research is to measure the attitudes, opinions, behaviors, and characteristics of a target population. Surveys can be conducted through various means, including phone, mail, online, or in-person.

If your project involves live interaction with numerous people in order to obtain important data, you should know the basic rules of survey research beforehand. Today we’ll talk about this research type, review a step-by-step guide on how to conduct survey research, and look at its main advantages and potential pitfalls. The following important questions will be discussed below:

  • Purpose and techniques of information collection.
  • Kinds of responses.
  • Analysis techniques, assumptions, and conclusions.

Do you wish to learn the best practices of conducting surveys? Stay with our research paper service and get ready for some serious reading!

What Is Survey Research: Definition

Let’s define the notion of survey research first. It revolves around surveys you conduct to retrieve certain data from your respondents. Respondents are carefully selected from a population that, for particular reasons, possesses the data your research needs; for example, they may be witnesses of an event you are investigating. Surveys contain a set of predefined questions, closed- or open-ended, that are sent to participants, whose answers provide the data for your research. There are many methods for organizing surveys and processing the obtained information.

Purpose of Survey Research Design

The purpose of survey research is to collect proper data and thus gain insights for your study. Pick participants with relevant experience so that you can get useful information from them. Questions should be formulated in a way that yields as much useful data as possible, and the survey format should be adjusted to the situation so that respondents are ready to give their answers. It can be a questionnaire sent over email or questions asked during a phone call.

Survey Research Methods

Which survey research method should you choose? Let’s review the most popular approaches and when to use them. Two critical factors define how a survey will be conducted:

  • Tool used to send questions:
  • online: using web forms or email questionnaires.
  • phone: reaching out to respondents individually, sometimes using an automated service.
  • face-to-face: interviewing respondents in the real world, which makes room for more in-depth questions.
  • Time frame of the research:
  • short-term periods.
  • long-term periods.

Let’s explore the time-related methods in detail.

Cross-Sectional Survey Design Research

The first type is cross-sectional survey research. This design collects insights from an audience within a specific, short time period and is used for descriptive analysis of a subject. The purpose is to provide quick conclusions or assumptions, which is why this approach relies on fast data-gathering and processing techniques. Such surveys are typically implemented in sectors such as retail, education, and healthcare, where the situation tends to change fast, so it is important to obtain operational results as soon as possible.

Longitudinal Survey Research

Longitudinal survey research, by contrast, collects data from the same sample repeatedly over an extended period, which makes it possible to track how opinions or behavior change over time. Whichever time frame you choose, planning your survey research design beforehand is crucial, especially if you are pressed for time or have a limited budget. Collecting information using a properly designed survey is typically more effective and productive than a casually conducted study. Preparation of a survey design includes the following major steps:

  • Understand the aim of your research so that you can plan the entire path of the survey and avoid obvious issues.
  • Pick a good sample from the population. Ensure precision of the results by selecting members who can provide useful insights and opinions.
  • Review available research methods and decide which one is most suitable for your specific case.
  • Prepare a questionnaire. The selection of questions directly affects the quality of your longitudinal analysis , so pick good questions and avoid unnecessary ones to save time and reduce possible errors.
  • Analyze the results and draw conclusions.

Advantages of Survey Research

As a rule, survey research involves getting data from people with first-hand knowledge about the research subject. Therefore, when formulated properly, survey questions should provide some unique insights and thus describe the subject better. Other benefits of this approach include:

  • Minimum investment. Online and automated call services require very low investment per respondent.
  • Versatile sources. Data can be collected by numerous means, allowing more flexibility.
  • Safe for respondents. Anonymous surveys are secure, and respondents are more likely to answer honestly if they understand their answers will be confidential.

Types of Survey Research

Let’s review the main types of surveys. Knowing the most popular templates means you won’t have to develop your own from scratch for each specific case. Such studies are usually categorized by the following aspects:

  • Objectives.
  • Data source.
  • Methodology.

We’ll examine each of these aspects below, focusing on areas where certain types are used. 

Types of Survey Research Depending on Objective

Depending on your objective and the specifics of the subject’s context, the following survey research types can be used:

  • Predictive This approach involves questions that suggest the most likely response options based on how they are formulated. As a result, it is often easier for respondents to answer because they already have helpful suggestions.
  • Exploratory This approach focuses on discovering new ideas and insights rather than collecting statistically accurate information. The results can be difficult to categorize and analyze, but this approach is very useful for finding a general direction for further research.
  • Descriptive This approach helps define and describe your respondents' opinions or behavior more precisely. By predefining categories and designing survey questions around them, you obtain statistical data. This descriptive research approach is often used at later research stages, in order to better understand the meaning of insights obtained at the beginning.

Types of Survey Research Depending on Data Source

The following research survey types can be defined based on which sources you obtain the data from:

  • Primary In this case, you collect information directly from the original source, e.g., learning about a natural disaster from a survivor. You aren’t using any intermediaries, so no information gets twisted or lost on its way. This is the way to obtain the most valid and trustworthy results, but such sources are often not easy to access.
  • Secondary This involves collecting data from existing published research on the same subject. Such information is easier to access, but it is usually too general and not tailored to your specific needs.

Types of Survey Research Depending on Methodology

Finally, let’s review survey research methodologies based on the format of retrieved and processed data. They can be:

  • Quantitative An approach that focuses on gathering numeric or measurable data from respondents. This provides enough material for statistical analysis, which in turn leads to meaningful conclusions. Collecting such data requires properly designed surveys with numeric options, and it is important to take precautions to ensure the data you’ve gathered is valid.
  • Qualitative Such surveys rely on opinions, impressions, reflections, and typical reactions of target groups. They should include open-ended questions that allow respondents to give detailed answers and share the information they consider most relevant. Qualitative research is used to understand, explain, or evaluate ideas or tendencies.

It is essential to differentiate these two kinds of research. That's why we prepared a special blog about quantitative vs qualitative research .

How to Conduct a Survey Research: Main Steps

Now let’s find out how to conduct a survey step by step. Regardless of the methods you use to design and conduct your survey, there are general guidelines that should be followed. The path is quite straightforward:

  • Assess your goals and options for accessing necessary groups.
  • Formulate each question in a way that helps you obtain the most valuable data.
  • Plan and execute the distribution of the questions.
  • Process the results.

Let’s take a closer look at all these stages.

Step 1. Create a Clear Survey Research Question

Each survey research question should add potential value to your expected results. Before formulating your questionnaire, invest some time in analyzing your target population. This will allow you to form proper samples of respondents: big enough to yield insights, but not too big. A good way to prepare questions is by constructing case studies for your subject; analyzing case study examples in detail will help you understand which information about them is necessary.

Step 2. Choose a Type of Survey Research

As we’ve already learned, there are several different types of survey research. Start with a close analysis of your subject, goals, and available sources; this will help you understand which kinds of questions should be distributed. As a researcher, you’ll also need to analyze the features of the selected group of respondents and pick a type that makes it easier to reach them. For example, if you need to question a group of elderly people, online forms would be less efficient than interviews.

Step 3. Distribute the Questionnaire for Your Survey Research

The next step of survey research is the most decisive one: execute the plan you created earlier and question the entire selected group. If this is a group assignment, ask your colleagues or peers for help, especially if you are dealing with a large group of respondents. It is important to stick to the initial scenario but leave some room for improvisation in case there are difficulties reaching respondents. After you collect all the necessary responses, the data can be processed and analyzed.

Step 4. Analyze the Results of Your Research Survey

The data obtained during survey research should be processed so that you can use it to make assumptions and conclusions. If it is qualitative, conduct a thematic analysis to find important ideas and insights that could confirm your theories or expand your knowledge of the subject. Quantitative data can be analyzed manually or with the help of software; the purpose is to extract dependencies and trends that confirm or refute existing assumptions.
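As a minimal illustration of the quantitative side, the sketch below tabulates responses to a single hypothetical 5-point Likert question before any deeper analysis; the data and variable names are invented for the example:

```python
# Summarizing responses to one hypothetical 5-point Likert question
# (1 = strongly disagree ... 5 = strongly agree).
from collections import Counter
from statistics import mean, stdev

responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]  # invented sample data

summary = {
    "n": len(responses),                                       # sample size
    "mean": round(mean(responses), 2),                         # central tendency
    "stdev": round(stdev(responses), 2),                       # spread of opinions
    "distribution": dict(sorted(Counter(responses).items())),  # answer counts
}
print(summary)
# {'n': 10, 'mean': 3.9, 'stdev': 0.99, 'distribution': {2: 1, 3: 2, 4: 4, 5: 3}}
```

Real datasets would arrive as spreadsheet exports rather than hard-coded lists, but summary statistics like these are usually the first step before looking for dependencies and trends.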

Step 5. Save the Results of Your Survey Research

The final step is to compose a survey research paper that organizes your results so that none of them are lost, especially if you keep copies of the paper. Depending on your assignment and the stage you are at, it can be a dissertation, a thesis, or even an illustrative essay where you explain the subject to your audience. Each survey you’ve conducted must get a dedicated section in your paper where you explain your methods and describe your results.

Survey Research Example

Here are a few research survey examples in case you need real-world cases to illustrate the guidelines and tips provided above. Below is a sample research case with the population and the researchers’ purposes defined.

Example of survey research design The Newtown Youth Initiative will conduct a qualitative survey to develop a program to mitigate alcohol consumption by adolescent citizens of Newtown. Previously, cultural anthropology research was performed to study mental constructs and understand young people's expectations from alcohol and their views on specific cultural values. Based on its results, a survey was designed to measure expectancies and cultural orientation among the adolescent population. A secure web page has been developed to conduct this survey and ensure the anonymity of respondents. The Newtown Youth Initiative will partner with schools to share the link to this page with students and engage them to participate. Statistical analysis of differences in expectancies and cultural orientation between drinkers and non-drinkers will be performed using the data from this survey.

Survey Research: Key Takeaways

Today, we have explored the notion of a research survey, reviewed the main features of this research activity and its usage in social sciences topics , covered important techniques and tips, and provided a step-by-step guide for conducting such studies.


Found it difficult to reach out to your target group? Or are you just pressed with deadlines? We've got your back! Check out our writing services and leave a ‘ write paper for me ’ request. We are a team of skilled authors with vast experience in various academic fields.

Frequently Asked Questions About Survey Research

1. What is a market research survey?

A market research survey can help a company understand several aspects of its target market. It typically involves picking focus groups of customers and asking them questions in order to learn about demand for specific products or services and whether that demand is growing. Such feedback is crucial for a company’s development and helps it plan its further strategic steps.

2. How does survey research differ from experimental research methods?

The main difference between experimental and survey research is that the latter is field research, while experiments are typically performed in laboratory conditions. When conducting surveys, researchers don’t have full control over the process and must adapt to the specific traits of their target groups in order to obtain answers from them. Besides, the results of a survey study might be harder to quantify and turn into statistical values.

3. What is the difference between survey research and descriptive research?

The purpose of descriptive studies is to explain what the subject is and which features it has. Survey research may include descriptive information but is not limited to that: typically it goes beyond descriptive statistics and includes qualitative research or advanced statistical methods used to draw inferences, find dependencies, or identify trends. Descriptive methods, on the other hand, don’t necessarily involve questioning respondents and may obtain information from other sources.

4. What is a good sample size for a survey?

It always depends on the specific case and the researcher’s goals. However, there are some general guidelines and best practices. A good maximum sample size is usually around 10% of the population, as long as this does not exceed 1000 people. In any case, be mindful of your time and budget limitations when planning; if you have a team to help you, it may be possible to process more data.
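The 10%-capped-at-1000 heuristic above is easy to encode; the helper below is only a sketch of that rule of thumb (the function name and defaults are ours, and a rigorous study would instead compute sample size from a margin of error and confidence level):

```python
def suggested_sample_size(population: int, fraction: float = 0.10, cap: int = 1000) -> int:
    """Rule-of-thumb sample size: about 10% of the population, capped at 1000."""
    return min(max(1, round(population * fraction)), cap)

print(suggested_sample_size(500))     # -> 50 (10% of a small population)
print(suggested_sample_size(50_000))  # -> 1000 (10% would be 5000, so the cap applies)
```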


Joe Eckel is an expert on dissertation writing. He makes sure that each student gets valuable insights on composing A-grade academic writing.


Leveraging collective action and environmental literacy to address complex sustainability challenges

  • Perspective
  • Open access
  • Published: 09 August 2022
  • Volume 52, pages 30–44 (2023)


  • Nicole M. Ardoin   ORCID: orcid.org/0000-0002-3290-8211 1 ,
  • Alison W. Bowers 2 &
  • Mele Wheaton 3  


Developing and enhancing societal capacity to understand, debate elements of, and take actionable steps toward a sustainable future at a scale beyond the individual are critical when addressing sustainability challenges such as climate change, resource scarcity, biodiversity loss, and zoonotic disease. Although mounting evidence exists for how to facilitate individual action to address sustainability challenges, there is less understanding of how to foster collective action in this realm. To support research and practice promoting collective action to address sustainability issues, we define the term “collective environmental literacy” by delineating four key potent aspects: scale, dynamic processes, shared resources, and synergy. Building on existing collective constructs and thought, we highlight areas where researchers, practitioners, and policymakers can support individuals and communities as they come together to identify, develop, and implement solutions to wicked problems. We close by discussing limitations of this work and future directions in studying collective environmental literacy.


Introduction

For socio-ecologically intertwined issues—such as climate change, land conversion, biodiversity loss, resource scarcity, and zoonotic diseases—and their associated multi-decadal timeframes, individual action is necessary, yet not sufficient, for systemic, sustained change (Amel et al. 2017 ; Bodin 2017 ; Niemiec et al. 2020 ; Spitzer and Fraser 2020 ). Instead, collective action, or individuals working together toward a common good, is essential for achieving the scope and scale of solutions to current sustainability challenges. To support communities as they engage in policy and action for socio-environmental change, communicators, land managers, policymakers, and other practitioners need an understanding of how communities coalesce and leverage their shared knowledge, skills, connections, and experiences.

Engagement efforts, such as those grounded in behavior-change approaches or community-based social marketing initiatives, that address socio-environmental issues have often emphasized individuals as the pathway to change. Such efforts address a range of domains including, but not limited to, residential energy use, personal transportation choices, and workplace recycling efforts, often doing so in a stepwise fashion, envisioning each setting or suite of behaviors as discrete spheres of action and influence (Heimlich and Ardoin 2008 ; McKenzie-Mohr 2011 ). In this way, specific actions are treated incrementally and linearly, considering first the individual barriers to be removed and then the motivations to be activated (and, sometimes, sustained; Monroe 2003 ; Gifford et al. 2011 ). Once each behavior is successfully instantiated, the next barrier is then addressed. Proceeding methodically from one action to the next, such initiatives often quite successfully alter a series of actions or group of related behaviors (at least initially) by addressing them incrementally, one at a time (Byerly et al. 2018 ). Following this aspirational logic chain, many resources have been channeled into such programs under the assumption that, by raising awareness and knowledge, such information, communication, and educational outreach efforts will shift attitudes and behaviors to an extent that, ultimately, mass-scale change will follow. (See discussion in Wals et al. 2014 .)

Numerous studies have demonstrated, however, that challenges arise with these stepwise approaches, particularly with regard to their ability to address complex issues and persist over time (Heimlich and Ardoin 2008 ; Wals et al. 2014 ). Such approaches place a tremendous—and unrealistic—burden on individuals, ignoring key aspects not only of behavioral science but also of social science more broadly, including the view that humans exist nested within socio-ecological systems and, thus, are most successful at achieving lasting change when it is meaningful, relevant, and undertaken within a supportive context (Swim et al. 2011 ; Feola 2015 ). Individualized approaches often require multiple steps or nudges (Byerly et al. 2018 ), or ongoing reminders to retain their salience (Stern et al. 2008 ). Because of the emphasis on decontextualized action, such approaches can miss, ignore, obfuscate, or minimize the importance of the bigger picture, which includes the sociocultural, biophysical, and political economic contexts (Ardoin 2006 ; Amel et al. 2017 ). Although the tightly trained focus on small, actionable steps and reliance on individual willpower may help in initially achieving success with initial habit formation (Carden and Wood 2018 ), it becomes questionable in terms of bringing about a wave of transformation on larger scales in the longer term. For those decontextualized actions to persist, they require continued prompting, constancy, and support in the social and biophysical context (Schultz 2014 ; Manfredo et al. 2016 ; Wood and Rünger 2016 ).

Less common in practice are theoretically based initiatives that embrace the holistic nature of the human experience, which occurs within complex systems spanning time and space in a multidimensional, weblike fashion (Bronfenbrenner 1979 ; Rogoff 2003 ; Barron 2006 ; DeCaro and Stokes 2008 ; Gould et al. 2019 ; Hovardas 2020 ). These systems-thinking approaches, while varying across disciplines and epistemological perspectives, envision human experiences, including learning and behavior, as occurring within a milieu that include the social, political, cultural, and historical contexts (Rogoff 2003 ; Roth and Lee 2007 ; Swim et al. 2011 ; Gordon 2019 ). In such a view, people’s everyday practices continuously reflect and grow out of past learning and experiences, not only at the individual, but also at the collective level (Lave 1991 ; Gutiérrez and Rogoff 2003 ; Nasir et al. 2020 ; Ardoin and Heimlich 2021 ). The multidimensional context in which we exist—including the broader temporal and spatial ecosystem—both facilitates and constrains our actions.

Scholars across diverse areas of study discuss the need for and power of collective thought and action, using various conceptual frames, models, and terms, such as collective action, behavior, impact, and intelligence; collaborative governance; communities of practice; crowdsourcing; and social movement theory; among many others (Table 1 ). These scholars acknowledge and explore the influence of our multidimensional context on collective thought and action. In this paper, we explore the elements and processes that constitute collective environmental literacy . We draw on the vast, relevant literature and, in so doing, we attempt to invoke the power of the collective: by reviewing and synthesizing ideas from a variety of fields, we strive to leverage existing constructs and perspectives that explore notions of the “collective” (see Table 1 for a summary of constructs and theories reviewed to develop our working definition of collective environmental literacy). A primary goal of this paper is to dialogue with other researchers and practitioners working in this arena who are eager to uncover and further explore related avenues.

First, we present a formal definition of collective environmental literacy. Next, we briefly review the dominant view of environmental literacy at the individual level and, in support of a collective take on environmental literacy, we examine various collective constructs. We then delve more deeply into the definition of collective environmental literacy by outlining four key aspects: scale, dynamic processes, shared resources, and synergy. We conclude by providing suggestions for future directions in studying collective environmental literacy.

Defining collective environmental literacy

Decades of research in political science, economics, anthropology, sociology, psychology, and the learning sciences, among other fields (Chawla and Cushing 2007 ; Ostrom 2009 ; Sawyer 2014 ; Bamberg et al. 2015 ; Chan 2016 ; Jost et al. 2017 ) repeatedly demonstrates the effectiveness, and indeed necessity of, collective action when addressing problems that are inherently social in nature. Yet theoretical frameworks and empirical documentation emphasize that such collective activities rarely arise spontaneously and, when they do, are a result of preconditions that have sown fertile ground (van Zomeren et al. 2008 ; Duncan 2018 ). Persistent and effective collective action then requires scaffolding in the form of institutional, sociocultural, and political economic structure that provides ongoing support. To facilitate discussions of how to effectively support collective action around sustainability issues, we suggest the concept of “collective environmental literacy.” We conceptualize collective environmental literacy as more than collective action; rather, we suggest that the term encapsulates action along with its various supporting structures and resources. Additionally, we employ the word “literacy” as it connotes learning, intention, and the idea that knowledge, skills, attitudes, and behaviors can be enhanced iteratively over time. By using “literacy,” we strive to highlight the efforts, often unseen, that lead to effective collective action in communities. We draw on scholarship in science and health education, areas that have begun over the past two decades to theorize about related areas of collective science literacy (Roth and Lee 2002 , 2004 ; Lee and Roth 2003 ; Feinstein 2018 ) and health literacy (Freedman et al. 2009 ; Papen 2009 ; Chinn 2011 ; Guzys et al. 2015 ). 
Although these evolving constructs lack consensus definitions, they illuminate affordances and constraints that exist when conceptualizing collective environmental literacy (National Academies of Sciences, Engineering, and Medicine [NASEM] 2016 ).

Some of the key necessary—but not sufficient—conditions that facilitate aligned, collective actions include a common body of decision-making information; shared attitudes, values, and beliefs toward a motivating issue or concern; and efficacy skills that facilitate change-making (Sturmer and Simon 2004 ; van Zomeren et al. 2008 ; Jagers et al. 2020 ). In addition, other contextual factors are essential, such as trust, reciprocity, collective efficacy, and communication among group members and societal-level facilitators, such as social norms, institutions, and technology (Bandura 2000 ; Ostrom 2010 ; McAdam and Boudet 2012 ; Jagers et al. 2020 ). Taken together, we term this body of knowledge, dispositions, skills, and the context in which they flourish collective environmental literacy . More formally, we define collective environmental literacy as: a dynamic, synergistic process that occurs as group members develop and leverage shared resources to undertake individual and aggregate actions over time to address sustainability issues within the multi-scalar context of a socio-environmental system (Fig.  1 ).

Fig. 1 Key elements of collective environmental literacy

Environmental literacy: Historically individual, increasingly collective

Over the past five decades, the term “environmental literacy” has come into increasingly frequent use. Breaking from the traditional association of “literacy” with reading and writing in formal school contexts, environmental literacy emphasizes associations with character and behavior, often in the form of responsible environmental stewardship (Roth 1992 ). Such perspectives define the concept as including affective (attitudinal), cognitive (knowledge-based), and behavioral domains, emphasizing that environmental literacy is both a process and outcome that develops, builds, and morphs over time (Hollweg et al. 2011 ; Wheaton et al. 2018 ; Clark et al. 2020 ).

The emphasis on defining, measuring, and developing interventions to bring about environmental literacy has primarily remained at the individual scale, as evidenced by frequent descriptions of an environmentally literate person (Roth 1992 ; Hollweg et al. 2011 among others) rather than community or community member. In most understandings, discussions, and manifestations of environmental literacy, the implicit assumption remains that the unit of action, intervention, and therefore analysis occurs at the individual level. Yet instinctively and perhaps by nature, community members often seek information and, as a result, take action collectively, sharing what some scholars call “the hive mind” or “group mind,” relying on each other for distributed knowledge, expertise, motivation, and support (Surowiecki 2005 ; Sunstein 2008 ; Sloman and Fernbach 2017 ; Paul 2021 ).

As with the proverbial elephant (Saxe, n.d.), each person, household, or neighborhood group may understand or “see” a different part of an issue or challenge, bring a novel understanding to the table, and have a certain perspective or skill to contribute. Although some environmental literacy discussions allude to a collective lens (e.g., Hollweg et al. 2011 ; Ardoin et al. 2013 ; Wheaton et al. 2018 ; Bey et al. 2020 ), defining, developing frameworks, and creating measures to assess the efficacy of such collective-scale sustainability-related endeavors has remained elusive. Looking to related fields and disciplines—such as ecosystem theory, epidemiology and public health, sociology, network theory, and urban planning, among others—can provide insight, theoretical frames, and empirical examples to assist in such conceptualizations (McAdam and Boudet 2012 ; National Research Council 2015 ) (See Table 1 for an overview of some of the many areas of study that informed our conceptualization of collective environmental literacy).

Seeking the essence of the collective: Looking to and learning from others

The social sciences have long focused on “the kinds of activities engaged in by sizable but loosely organized groups of people” (Turner et al. 2020 , para. 1) and addressed various collective constructs, such as collective behavior, action, intelligence, and memory (Table 1 ). Although related constructs in both the social and natural sciences—such as communities of practice (Wenger and Snyder 2000 ), collaborative governance (Ansell and Gash 2008 ; Emerson et al. 2012 ), and the collaboration–coordination continuum (Sadoff and Grey 2005 ; Prager 2015 ), as well as those from social movement theory and related areas (McAdam and Boudet 2012 ; de Moor and Wahlström 2019 )—lack the word “collective” in name, they too leverage the benefits of collectivity. A central tenet connects all of these areas: powerful processes, actions, and outcomes can arise when individuals coalesce around a common purpose or cause. This notion of a dynamic, potent force transcending the individual to enhance the efficacy of outcomes motivates the application of a collective lens to the environmental literacy concept.

Dating to the 1800s, discussions of collective behavior have explored connections to social order, structures, and norms (Park 1927 ; Smelser 2011 /1962; Turner and Killian 1987 ). Initially, the focus emphasized spontaneous, often violent crowd behaviors, such as riots, mobs, and rebellions. More contemporarily, sociologists, political scientists, and others who study social movements and collective behaviors acknowledge that such phenomena may take many forms, including those occurring in natural ecosystems, such as ant colonies, bird flocks, and even the human brain (Gordon 2019 ). In sociology, collective action represents a paradigm shift highlighting coordinated, purposeful pro-social movements, while de-emphasizing aroused emotions and crowd behavior (Miller 2014 ). In political science, Ostrom’s ( 1990 , 2000 , 2010 ) theory of collective action in the context of the management of shared resources extends the concept’s reach to economics and other fields. In education and the learning sciences, social learning and sociocultural theories tap into the idea of learning as a social-cognitive-cultural endeavor (Vygotsky 1980 ; Lave and Wenger 1991 ; Tudge and Winterhoff 1993 ; Rogoff 2003 ; Reed et al. 2010 ).

Collective action, specifically, and collective constructs, generally, have found their way into the research and practice in the fields of conservation, natural resources, and environmental management. Collective action theory has been applied in a range of settings and scenarios, including agriculture (Mills et al. 2011), invasive species management (Marshall et al. 2016; Sullivan et al. 2017; Lubeck et al. 2019; Clarke et al. 2021), fire management (Canadas et al. 2016; Charnley et al. 2020), habitat conservation (Raymond 2006; Niemiec et al. 2020), and water governance (Lopez-Gunn 2003; Baldwin et al. 2018), among others. Frameworks and methods that emphasize other collective-related ideas—like collaboration, co-production, and group learning—are also ubiquitous in natural resource and environmental management. These constructs include community-based conservation (DeCaro and Stokes 2008; Niemiec et al. 2016), community natural resource management (Kellert et al. 2000; Dale et al. 2020), collaboration/coordination (Sadoff and Grey 2005; Prager 2015), polycentricity (Galaz et al. 2012; Heikkila et al. 2018), knowledge co-production (Armitage et al. 2011; Singh et al. 2021), and social learning (Reed et al. 2010; Hovardas 2020). Many writings on collective efforts in the social sciences broadly, and applied in the area of environment specifically, provide insights into collective action’s necessary preconditions, which prove invaluable to further defining and later operationalizing collective environmental literacy.

Unpacking the definition of collective environmental literacy: Anchoring principles

As described, we propose the following working definition of collective environmental literacy, drawing on our analysis of related literatures and informed by scholarly and professional experience in the sustainability and conservation fields: a dynamic, synergistic process that occurs as group members develop and leverage shared resources to undertake individual and aggregate actions over time to address sustainability issues within the multi-scalar context of a socio-environmental system (Fig. 1). This definition centers on four core, intertwined ideas: the scale of the group involved; the dynamic nature of the process; shared resources brought by, available to, and needed by the group; and the synergy that arises from group interaction.

Multi-scalar

When transitioning from the focus on individual to collective actions—and, herein, principles of environmental literacy—the most obvious and primary requisite shift is one of scale. Yet, moving to a collective scale does not mean abandoning action at the individual scale; rather, success at the collective level is intrinsically tied to what occurs at an individual level. Such collective-scale impacts leverage the power of the hive, harnessing people’s willingness, ability, and motivation to take action alongside others, share their ideas and resources to build collective ideas and resources, contribute to making a difference in an impactful way, and participate communally in pro-social activities.

Collective environmental literacy is likely dynamic in its orientation to scale, incorporating place-based notions, such as ecoregional or community-level environmental literacy (with an emphasis on geographic boundaries). Alternatively, it may encapsulate the environmental literacy of a group or organization united by a common identity (e.g., organizational membership) or cause (e.g., old-growth forests, coastal protection), rather than solely or even primarily by geography. Although shifting scales can make measuring collective environmental literacy more difficult, dynamic levels may be a benefit when addressing planetary boundary issues such as climate change, biodiversity loss, and ocean acidification (Galaz et al. 2012). Some scholars have called for a polycentric approach to these large-scale issues in response to a perceived failure of global, top-down solutions (Ostrom 2010, 2012; Jordan et al. 2018). Conceptualizing and consequently supporting collective environmental literacy at multiple scales can facilitate such desired polycentricity.

Dynamic

Rather than representing a static outcome, environmental literacy is a dynamic process, fluctuating and complex, shaped by iterative interactions among community members whose discussions and negotiations respond to the changing context of sustainability issues (see Footnote 3). Such open-minded processes allow for, and indeed welcome, adaptation in a way that builds social-ecological resilience (Berkes and Jolly 2002; Adger et al. 2005; Berkes 2007). Additionally, this dynamism allows for collective development and maturation, supporting community growth in collective knowledge, attitudes, skills, and actions via new experiences, interactions, and efforts (Berkman et al. 2010). With this mindset, and within a sociocultural perspective, collective environmental literacy evolves through drawing on and contributing to the community’s funds of knowledge (González et al. 2006). Movement and actions within and among groups impact collective literacy, as members share knowledge and other resources, shifting individuals and the group in the course of their shared practices (Samerski 2019).

Shared resources

In a collective mode, effectiveness is heightened as shared resources are streamlined, waste is minimized, and innovation is maximized. Rather than each group member developing individual expertise in every matter of concern, shared knowledge, skills, and behaviors can be distributed, pursued, and amplified among group members efficiently and effectively, with collective literacy emerging from the process of pooling diverse forms of capital and aggregating resources. This perspective builds on ideas of social capital as a collective good (Ostrom 1990; Putnam 2020), wherein relationships of trust and reciprocity are both inputs and outcomes (Pretty and Ward 2001). The shared resources then catalyze and sustain action as they are reassembled and coalesced at the group level for collective impact.

The pooled resources—likely vast—may include, but are not limited to, physical and human resources, funding, time, energy, and space and place (physical or digital). Shared resources may also include forms of theorized capital, such as intellectual and social capital (Putnam 2020). Also of note is the recognition that these resources extend far beyond information and knowledge. Of particular interest when building collective environmental literacy are resources previously ignored or overlooked by those in power in prior sustainability efforts. For example, collective environmental literacy can draw strength from shared resources unique to the community or even subgroups within the larger community. Discussions of Indigenous knowledge (Gadgil et al. 1993) and funds of knowledge (González et al. 2006; Cruz et al. 2018) suggest critical, shared resources that highlight strengths of an individual community and its members. Another dimension of shared resources relates to the strength of institutional connections, such as the benefits that accrue from leveraging the collective knowledge, expertise, and resources of organizational collaborators working in adjacent areas to further and amplify each other’s impact (Wojcik et al. 2021).

Synergistic

Finally, given the inherent complexities related to defining, deploying, implementing, and measuring these dynamic, at-times ephemeral processes, resources, and outcomes at a collective scale, working in such a manner must be clearly advantageous for addressing the pressing sustainability issues at hand. Numerous related constructs and approaches from a range of fields emphasize the benefits of diverse collaboration to collective thought and action, including improved solutions, more effective and fair processes, and more socioculturally just outcomes (Klein 1990; Jörg 2011; Wenger and Snyder 2000; Djenontin and Meadow 2018). These benefits go beyond efficient aggregation and distribution of resources, invoking the almost magical quality that defines synergy, resulting in robust processes and outcomes that are more than the sum of their parts.

This synergy relies on the diversity of a group across various dimensions, bringing power, strength, and insight to a decision-making process (Bear and Woolley 2011; Curşeu and Pluut 2013; Freeman and Huang 2015; Lu et al. 2017; Bendor and Page 2019). Individuals are limited not only to singular knowledge-perspectives and skillsets, but also to their own experiences, which influence their self-affirming viewpoints and tendencies to seek out confirmatory information for existing beliefs (Kahan et al. 2011). Although the coming together of those from different racial, cultural, social, and economic backgrounds facilitates a collective literacy process that draws on a wider range of resources and yields a gestalt, it also sets up the need to consider issues of power, privilege, voice, and representation (Bäckstrand 2006) and the role of social capital, leading to questions related to trust and reciprocity in effective collectives (Pretty and Ward 2001; Folke et al. 2005).

Leveraging the ‘Hive’: Proceeding with collective environmental literacy

This paper presents one conceptualization of collective environmental literacy, with the understanding that numerous ways exist to envision its definition, formation, deployment, and measurement. Characterized by a collective effort, such literacies at scale offer a way to imagine, measure, and support the synergy that occurs when the emphasis moves from an individual to a larger whole. By expanding the scale and focusing on shared responsibility among actors at the systems level, opportunities arise for inspiring and enabling a broader contribution to a sustainable future. These evolving notions serve to invite ongoing conversation, both in research and practice, about how to enact our collective responsibility toward, as well as vision of, a thriving future.

Emerging from the many discussions of shared and collaborative efforts to address socio-environmental issues, our conceptualization of collective environmental literacy is a first step toward supporting communities as they work to identify, address, and solve sustainability problems. We urge continued discussions on this topic, with the goal of understanding the concept of collective environmental literacy, how to measure it, and the implications of this work for practitioners. The conceptual roots of collective environmental literacy reach into countless fields of study and, as such, a transdisciplinary approach, which includes an eye toward practice, is necessary to fully capture and maximize the tremendous amount of knowledge, wisdom, and experience around this topic. Specifically, next steps to evolve the concept include engaging sustainability researchers and practitioners in discussions of the saliency of the presented definition of collective environmental literacy. These discussions include verifying the completeness of the definition and ensuring a thorough review of relevant research: Are parts of the definition missing or unclear? What are the “blank, blind, bald, and bright spots” in the literature (Reid 2019, p. 158)? Additionally, recognizing and leveraging literacy at a collective scale most certainly is not unique to environmental work, nor is adopting literacy-related language to conceptualize and measure process outcomes, although the former has consistently proven more challenging. Moreover, although we (the authors) appreciate the connotations and structures gained by using a literacy framework, we struggle with whether “environmental literacy” is the most appropriate and useful term for the conceptualizations as described herein; we, thus, welcome lively discussions about the need for new terminology.

Even at this early stage of conceptualization, this work has implications for practitioners. For scientists, communicators, policymakers, land managers, and other professionals desiring to work with communities to address sustainability issues, a primary take-away message concerns the holistic nature of what is needed for effective collective action in the environmental realm. Many previous efforts have focused on conveying information and, while a lack of knowledge and awareness may be a barrier to action in some cases, the need for a more holistic lens is increasingly clear. This move beyond an individually focused, information-deficit model is essential for effective impact (Bolderdijk et al. 2013 ; van der Linden 2014 ; Geiger et al. 2019 ). The concept of collective environmental literacy suggests a role for developing shared resources that can foster effective collective action. When working with communities, a critical early step includes some form of needs assessment—a systematic, in-depth process that allows for meaningfully gauging gaps in shared resources required to tackle sustainability issues (Braus 2011). Following this initial, evaluative step, an understanding of the components of collective environmental literacy, as outlined in this paper, can be used to guide the development of interventions to support communities in their efforts to address those issues.

Growing discussion of collective literacy constructs and related areas suggests that researchers, practitioners, and policymakers working in pro-social areas recognize and value collective efforts, despite the need for clearer definitions and effective measures. This definitional and measurement work, in both research and practice, is not easy. The ever-changing, dynamic contexts in which collective environmental literacy exists make defining the concept a moving target, compounded by the need to draw upon work in countless, often distinct academic fields of study. Furthermore, the hard-to-see inner workings of collective constructs make measurement difficult. Yet the “power of the hive” is intriguing: the synergism that arises from communities working in an aligned manner toward a unified vision suggests a potency and wave of motivated action essential to coalescing and leveraging individual goodwill, harnessing its power and potential toward effective sustainability solutions.

Footnotes

Footnote 1: See Stables and Bishop’s (2001) idea of defining environmental literacy by viewing the environment as “text.”

Footnote 2: The climate change education literature also includes a nascent, but growing, discussion of collective-lens thinking and literacy. See, for example, Waldron et al. (2019), Mochizuki and Bryan (2015), and Kopnina (2016).

Footnote 3: This conceptualization is similar to how some scholars describe collective health literacy (Berkman et al. 2010; Mårtensson and Hensing 2012).

References

Adger, W.N. 2003. Social capital, collective action, and adaptation to climate change. Economic Geography 79: 387–404.

Adger, W.N., T.P. Hughes, C. Folke, S.R. Carpenter, and J. Rockström. 2005. Social-ecological resilience to coastal disasters. Science 309: 1036–1039. https://doi.org/10.1126/science.1112122 .

Adler, P.S., and S.-W. Kwon. 2002. Social capital: Prospects for a new concept. Academy of Management Review 27: 17–40. https://doi.org/10.5465/amr.2002.5922314 .

Agrawal, A. 1995. Dismantling the divide between Indigenous and scientific knowledge. Development and Change 26: 413–439. https://doi.org/10.1111/j.1467-7660.1995.tb00560.x .

Aguilar, O.M. 2018. Examining the literature to reveal the nature of community EE/ESD programs and research. Environmental Education Research 24: 26–49. https://doi.org/10.1080/13504622.2016.1244658 .

Aguilar, O., A. Price, and M. Krasny. 2015. Perspectives on community environmental education. In Across the spectrum: Resources for environmental educators, 3rd ed., ed. M.C. Monroe and M.E. Krasny, 235–249. North American Association for Environmental Education.

Aldrich, D.P., and M.A. Meyer. 2015. Social capital and community resilience. American Behavioral Scientist 59: 254–269. https://doi.org/10.1177/0002764214550299 .

Amel, E., C. Manning, B. Scott, and S. Koger. 2017. Beyond the roots of human inaction: Fostering collective effort toward ecosystem conservation. Science 356: 275–279. https://doi.org/10.1126/science.aal1931 .

Ansell, C., and A. Gash. 2008. Collaborative governance in theory and practice. Journal of Public Administration Research and Theory 18: 543–571. https://doi.org/10.1093/jopart/mum032 .

Ardoin, N.M. 2006. Toward an interdisciplinary understanding of place: Lessons for environmental education. Canadian Journal of Environmental Education 11: 112–126.

Ardoin, N.M., and J.E. Heimlich. 2021. Environmental learning in everyday life: Foundations of meaning and a context for change. Environmental Education Research 27: 1681–1699. https://doi.org/10.1080/13504622.2021.1992354 .

Ardoin, N.M., C. Clark, and E. Kelsey. 2013. An exploration of future trends in environmental education research. Environmental Education Research 19: 499–520. https://doi.org/10.1080/13504622.2012.709823 .

Armitage, D., F. Berkes, A. Dale, E. Kocho-Schellenberg, and E. Patton. 2011. Co-management and the co-production of knowledge: Learning to adapt in Canada’s Arctic. Global Environmental Change 21: 995–1004. https://doi.org/10.1016/j.gloenvcha.2011.04.006 .

Assis Neto, F.R., and C.A.S. Santos. 2018. Understanding crowdsourcing projects: A systematic review of tendencies, workflow, and quality management. Information Processing & Management 54: 490–506. https://doi.org/10.1016/j.ipm.2018.03.006 .

Bäckstrand, K. 2006. Multi-stakeholder partnerships for sustainable development: Rethinking legitimacy, accountability and effectiveness. European Environment 16: 290–306. https://doi.org/10.1002/eet.425 .

Baldwin, E., P. McCord, J. Dell’Angelo, and T. Evans. 2018. Collective action in a polycentric water governance system. Environmental Policy and Governance 28: 212–222. https://doi.org/10.1002/eet.1810 .

Bamberg, S., J. Rees, and S. Seebauer. 2015. Collective climate action: Determinants of participation intention in community-based pro-environmental initiatives. Journal of Environmental Psychology 43: 155–165. https://doi.org/10.1016/j.jenvp.2015.06.006 .

Bandura, A. 1977. Social learning theory . Englewood Cliffs: Prentice Hall.

Bandura, A. 2000. Exercise of human agency through collective efficacy. Current Directions in Psychological Science 9: 75–78. https://doi.org/10.1111/1467-8721.00064 .

Barron, B. 2006. Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development 49: 193–224. https://doi.org/10.1159/000094368 .

Barry, M.M., M. D’Eath, and J. Sixsmith. 2013. Interventions for improving population health literacy: Insights from a rapid review of the evidence. Journal of Health Communication 18: 1507–1522. https://doi.org/10.1080/10810730.2013.840699 .

Barton, A.C., and E. Tan. 2009. Funds of knowledge and discourses and hybrid space. Journal of Research in Science Teaching 46: 50–73. https://doi.org/10.1002/tea.20269 .

Bear, J.B., and A.W. Woolley. 2011. The role of gender in team collaboration and performance. Interdisciplinary Science Reviews 36: 146–153. https://doi.org/10.1179/030801811X13013181961473 .

Bendor, J., and S.E. Page. 2019. Optimal team composition for tool-based problem solving. Journal of Economics & Management Strategy 28: 734–764. https://doi.org/10.1111/jems.12295 .

Berkes, F. 2007. Understanding uncertainty and reducing vulnerability: Lessons from resilience thinking. Natural Hazards 41: 283–295. https://doi.org/10.1007/s11069-006-9036-7 .

Berkes, F., and D. Jolly. 2002. Adapting to climate change: Social-ecological resilience in a Canadian western Arctic community. Conservation Ecology 5: 45.

Berkes, F., and H. Ross. 2013. Community resilience: Toward an integrated approach. Society & Natural Resources 26: 5–20. https://doi.org/10.1080/08941920.2012.736605 .

Berkes, F., M.K. Berkes, and H. Fast. 2007. Collaborative integrated management in Canada’s north: The role of local and traditional knowledge and community-based monitoring. Coastal Management 35: 143–162.

Berkman, N.D., T.C. Davis, and L. McCormack. 2010. Health literacy: What is it? Journal of Health Communication 15: 9–19. https://doi.org/10.1080/10810730.2010.499985 .

Bey, G., C. McDougall, and S. Schoedinger. 2020. Report on the NOAA office of education environmental literacy program community resilience education theory of change. National Oceanic and Atmospheric Administration . https://doi.org/10.25923/mh0g-5q69 .

Blumer, H. 1971. Social problems as collective behavior. Social Problems 18: 298–306.

Bodin, Ö. 2017. Collaborative environmental governance: Achieving collective action in social-ecological systems. Science . https://doi.org/10.1126/science.aan1114 .

Bolderdijk, J.W., M. Gorsira, K. Keizer, and L. Steg. 2013. Values determine the (in)effectiveness of informational interventions in promoting pro-environmental behavior. PLoS ONE 8: e83911. https://doi.org/10.1371/journal.pone.0083911 .

Brabham, D.C. 2013. Crowdsourcing . Cambridge: MIT Press.

Braus, J., ed. 2011. Tools of engagement: A toolkit for engaging people in conservation. NAAEE/Audubon. https://cdn.naaee.org/sites/default/files/eepro/resource/files/toolsofengagement.pdf.

Brieger, S.A. 2019. Social identity and environmental concern: The importance of contextual effects. Environment and Behavior 51: 828–855. https://doi.org/10.1177/0013916518756988 .

Briggs, J. 2005. The use of Indigenous knowledge in development: Problems and challenges. Progress in Development Studies 5: 99–114. https://doi.org/10.1191/1464993405ps105oa .

Briggs, J., and J. Sharp. 2004. Indigenous knowledges and development: A postcolonial caution. Third World Quarterly 25: 661–676. https://doi.org/10.1080/01436590410001678915 .

Bronfenbrenner, U. 1979. The ecology of human development: Experiments by nature and design . Cambridge: Harvard University Press.

Bruce, C., and P. Chesterton. 2002. Constituting collective consciousness: Information literacy in university curricula. International Journal for Academic Development 7: 31–40. https://doi.org/10.1080/13601440210156457 .

Byerly, H., A. Balmford, P.J. Ferraro, C.H. Wagner, E. Palchak, S. Polasky, T.H. Ricketts, A.J. Schwartz, et al. 2018. Nudging pro-environmental behavior: Evidence and opportunities. Frontiers in Ecology and the Environment 16: 159–168. https://doi.org/10.1002/fee.1777 .

Canadas, M.J., A. Novais, and M. Marques. 2016. Wildfires, forest management and landowners’ collective action: A comparative approach at the local level. Land Use Policy 56: 179–188. https://doi.org/10.1016/j.landusepol.2016.04.035 .

Carden, L., and W. Wood. 2018. Habit formation and change. Current Opinion in Behavioral Sciences 20: 117–122. https://doi.org/10.1016/j.cobeha.2017.12.009 .

Chan, M. 2016. Psychological antecedents and motivational models of collective action: Examining the role of perceived effectiveness in political protest participation. Social Movement Studies 15: 305–321. https://doi.org/10.1080/14742837.2015.1096192 .

Charnley, S., E.C. Kelly, and A.P. Fischer. 2020. Fostering collective action to reduce wildfire risk across property boundaries in the American West. Environmental Research Letters 15: 025007. https://doi.org/10.1088/1748-9326/ab639a .

Chawla, L., and D.F. Cushing. 2007. Education for strategic environmental behavior. Environmental Education Research 13: 437–452. https://doi.org/10.1080/13504620701581539 .

Chinn, D. 2011. Critical health literacy: A review and critical analysis. Social Science & Medicine 73: 60–67. https://doi.org/10.1016/j.socscimed.2011.04.004 .

Clark, C.R., J.E. Heimlich, N.M. Ardoin, and J. Braus. 2020. Using a Delphi study to clarify the landscape and core outcomes in environmental education. Environmental Education Research 26: 381–399. https://doi.org/10.1080/13504622.2020.1727859 .

Clarke, M., Z. Ma, S.A. Snyder, and K. Floress. 2021. Factors influencing family forest owners’ interest in community-led collective invasive plant management. Environmental Management 67: 1088–1099. https://doi.org/10.1007/s00267-021-01454-1 .

Cruz, A.R., S.T. Selby, and W.H. Durham. 2018. Place-based education for environmental behavior: A ‘funds of knowledge’ and social capital approach. Environmental Education Research 24: 627–647. https://doi.org/10.1080/13504622.2017.1311842 .

Curşeu, P.L., and H. Pluut. 2013. Student groups as learning entities: The effect of group diversity and teamwork quality on groups’ cognitive complexity. Studies in Higher Education 38: 87–103. https://doi.org/10.1080/03075079.2011.565122 .

Cutter, S.L., L. Barnes, M. Berry, C. Burton, E. Evans, E. Tate, and J. Webb. 2008. A place-based model for understanding community resilience to natural disasters. Global Environmental Change 18: 598–606. https://doi.org/10.1016/j.gloenvcha.2008.07.013 .

Dale, A., K. Vella, S. Ryan, K. Broderick, R. Hill, R. Potts, and T. Brewer. 2020. Governing community-based natural resource management in Australia: International implications. Land 9: 234. https://doi.org/10.3390/land9070234 .

de Moor, J., and M. Wahlström. 2019. Narrating political opportunities: Explaining strategic adaptation in the climate movement. Theory and Society 48: 419–451. https://doi.org/10.1007/s11186-019-09347-3 .

DeCaro, D., and M. Stokes. 2008. Social-psychological principles of community-based conservation and conservancy motivation: Attaining goals within an autonomy-supportive environment. Conservation Biology 22: 1443–1451.

Djenontin, I.N.S., and A.M. Meadow. 2018. The art of co-production of knowledge in environmental sciences and management: Lessons from international practice. Environmental Management 61: 885–903. https://doi.org/10.1007/s00267-018-1028-3 .

Duncan, L.E. 2018. The psychology of collective action. In The Oxford handbook of personality and social psychology, ed. K. Deaux and M. Snyder. Oxford: Oxford University Press.

Edwards, M., F. Wood, M. Davies, and A. Edwards. 2015. ‘Distributed health literacy’: Longitudinal qualitative analysis of the roles of health literacy mediators and social networks of people living with a long-term health condition. Health Expectations 18: 1180–1193. https://doi.org/10.1111/hex.12093 .

Emerson, K., T. Nabatchi, and S. Balogh. 2012. An integrative framework for collaborative governance. Journal of Public Administration Research and Theory 22: 1–29.

Engeström, Y. 2001. Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work 14: 133–156. https://doi.org/10.1080/13639080020028747 .

Ensor, J., and B. Harvey. 2015. Social learning and climate change adaptation: Evidence for international development practice. Wires Climate Change 6: 509–522. https://doi.org/10.1002/wcc.348 .

Fanta, V., M. Šálek, and P. Sklenicka. 2019. How long do floods throughout the millennium remain in the collective memory? Nature Communications 10: 1105. https://doi.org/10.1038/s41467-019-09102-3 .

Feinstein, N.W. 2018. Collective science literacy: A key to community science capacity [Conference session]. American Association for the Advancement of Science Annual Meeting, Austin, TX, USA. https://d32ogoqmya1dw8.cloudfront.net/files/earthconnections/collective_science_literacy_key.pdf.

Feola, G. 2015. Societal transformation in response to global environmental change: A review of emerging concepts. Ambio 44: 376–390. https://doi.org/10.2139/ssrn.2689741 .

Fernandez-Gimenez, M.E., H.L. Ballard, and V.E. Sturtevant. 2008. Adaptive management and social learning in collaborative and community-based monitoring: A study of five community-based forestry organizations in the western USA. Ecology and Society 13: 15.

Folke, C., T. Hahn, P. Olsson, and J. Norberg. 2005. Adaptive governance of social-ecological systems. Annual Review of Environment and Resources 30: 441–473. https://doi.org/10.1146/annurev.energy.30.050504.144511 .

Freedman, D.A., K.D. Bess, H.A. Tucker, D.L. Boyd, A.M. Tuchman, and K.A. Wallston. 2009. Public health literacy defined. American Journal of Preventive Medicine 36: 446–451. https://doi.org/10.1016/j.amepre.2009.02.001 .

Freeman, R.B., and W. Huang. 2015. Collaborating with people like me: Ethnic coauthorship within the United States. Journal of Labor Economics 33: S289–S318.

Gadgil, M., F. Berkes, and C. Folke. 1993. Indigenous knowledge for biodiversity conservation. Ambio 22: 151–156.

Galaz, V., B. Crona, H. Österblom, P. Olsson, and C. Folke. 2012. Polycentric systems and interacting planetary boundaries—Emerging governance of climate change–ocean acidification–marine biodiversity. Ecological Economics 81: 21–32. https://doi.org/10.1016/j.ecolecon.2011.11.012 .

Geiger, S.M., M. Geiger, and O. Wilhelm. 2019. Environment-specific vs general knowledge and their role in pro-environmental behavior. Frontiers in Psychology 10: 718. https://doi.org/10.3389/fpsyg.2019.00718 .

Gifford, R., C. Kormos, and A. McIntyre. 2011. Behavioral dimensions of climate change: Drivers, responses, barriers, and interventions. Wires Climate Change 2: 801–827. https://doi.org/10.1002/wcc.143 .

González, N., L.C. Moll, and C. Amanti. 2006. Funds of knowledge: Theorizing practices in households, communities, and classrooms . New York: Routledge.

Gordon, D.M. 2019. Measuring collective behavior: An ecological approach. Theory in Biosciences . https://doi.org/10.1007/s12064-019-00302-5 .

Gould, R.K., N.M. Ardoin, J.M. Thomsen, and N. Wyman Roth. 2019. Exploring connections between environmental learning and behavior through four everyday-life case studies. Environmental Education Research 25: 314–340.

Graham, S., A.L. Metcalf, N. Gill, R. Niemiec, C. Moreno, T. Bach, V. Ikutegbe, L. Hallstrom, et al. 2019. Opportunities for better use of collective action theory in research and governance for invasive species management. Conservation Biology 33: 275–287. https://doi.org/10.1111/cobi.13266 .

Granovetter, M. 1978. Threshold models of collective behavior. American Journal of Sociology 83: 1420–1443.

Groulx, M., M.C. Brisbois, C.J. Lemieux, A. Winegardner, and L. Fishback. 2017. A role for nature-based citizen science in promoting individual and collective climate change action? A systematic review of learning outcomes. Science Communication 39: 45–76. https://doi.org/10.1177/1075547016688324 .

Gutiérrez, K.D., and B. Rogoff. 2003. Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher 32: 19–25. https://doi.org/10.3102/0013189X032005019 .

Guzys, D., A. Kenny, V. Dickson-Swift, and G. Threlkeld. 2015. A critical review of population health literacy assessment. BMC Public Health 15: 1–7. https://doi.org/10.1186/s12889-015-1551-6 .

Halbwachs, M. 1992. On collective memory. Ed. and trans. L.A. Coser. Chicago: University of Chicago Press. (Original works published 1941 and 1952).

Heikkila, T., S. Villamayor-Tomas, and D. Garrick. 2018. Bringing polycentric systems into focus for environmental governance. Environmental Policy and Governance 28: 207–211. https://doi.org/10.1002/eet.1809 .

Heimlich, J.E., and N.M. Ardoin. 2008. Understanding behavior to understand behavior change: A literature review. Environmental Education Research 14: 215–237. https://doi.org/10.1080/13504620802148881 .

Hill, R., F.J. Walsh, J. Davies, A. Sparrow, M. Mooney, R.M. Wise, and M. Tengö. 2020. Knowledge co-production for Indigenous adaptation pathways: Transform post-colonial articulation complexes to empower local decision-making. Global Environmental Change 65: 102161. https://doi.org/10.1016/j.gloenvcha.2020.102161 .

Hollweg, K.S., J. Taylor, R.W. Bybee, T.J. Marcinkowski, W.C. McBeth, and P. Zoido. 2011. Developing a framework for assessing environmental literacy: Executive summary . North American Association for Environmental Education. https://cdn.naaee.org/sites/default/files/envliteracyexesummary.pdf .

Hovardas, T. 2020. A social learning approach for stakeholder engagement in large carnivore conservation and management. Frontiers in Ecology and Evolution 8: 436. https://doi.org/10.3389/fevo.2020.525278 .

Jagers, S.C., N. Harring, Å. Löfgren, M. Sjöstedt, F. Alpizar, B. Brülde, D. Langlet, A. Nilsson, et al. 2020. On the preconditions for large-scale collective action. Ambio 49: 1282–1296. https://doi.org/10.1007/s13280-019-01284-w .

Jordan, A., D. Huitema, H. van Asselt, and J. Forster. 2018. Governing climate change: Polycentricity in action? Cambridge: Cambridge University Press.

Jörg, T. 2011. New thinking in complexity for the social sciences and humanities: A generative, transdisciplinary approach . New York: Springer Science & Business Media.

Jost, J.T., J. Becker, D. Osborne, and V. Badaan. 2017. Missing in (collective) action: Ideology, system justification, and the motivational antecedents of two types of protest behavior. Current Directions in Psychological Science 26: 99–108. https://doi.org/10.1177/0963721417690633 .

Jull, J., A. Giles, and I.D. Graham. 2017. Community-based participatory research and integrated knowledge translation: Advancing the co-creation of knowledge. Implementation Science 12: 150. https://doi.org/10.1186/s13012-017-0696-3 .

Kahan, D.M., H. Jenkins-Smith, and D. Braman. 2011. Cultural cognition of scientific consensus. Journal of Risk Research 14: 147–174. https://doi.org/10.1080/13669877.2010.511246 .

Kania, J., and M. Kramer. 2011. Collective impact. Stanford Social Innovation Review 9: 36–41.

Karachiwalla, R., and F. Pinkow. 2021. Understanding crowdsourcing projects: A review on the key design elements of a crowdsourcing initiative. Creativity and Innovation Management 30: 563–584. https://doi.org/10.1111/caim.12454 .

Kellert, S.R., J.N. Mehta, S.A. Ebbin, and L.L. Lichtenfeld. 2000. Community natural resource management: Promise, rhetoric, and reality. Society & Natural Resources 13: 705–715.

Klein, J.T. 1990. Interdisciplinarity: History, theory, and practice . Detroit: Wayne State University Press.

Knapp, C.N., R.S. Reid, M.E. Fernández-Giménez, J.A. Klein, and K.A. Galvin. 2019. Placing transdisciplinarity in context: A review of approaches to connect scholars, society and action. Sustainability 11: 4899. https://doi.org/10.3390/su11184899 .

Koliou, M., J.W. van de Lindt, T.P. McAllister, B.R. Ellingwood, M. Dillard, and H. Cutler. 2020. State of the research in community resilience: Progress and challenges. Sustainable and Resilient Infrastructure 5: 131–151. https://doi.org/10.1080/23789689.2017.1418547 .

Kopnina, H. 2016. Of big hegemonies and little tigers: Ecocentrism and environmental justice. The Journal of Environmental Education 47: 139–150. https://doi.org/10.1080/00958964.2015.1048502 .

Krasny, M.E., M. Mukute, O. Aguilar, M.P. Masilela, and L. Olvitt. 2017. Community environmental education. In Urban environmental education review , ed. A. Russ and M.E. Krasny, 124–132. Ithaca: Cornell University Press.

Chapter   Google Scholar  

Lave, J. 1991. Situating learning in communities of practice.

Lave, J., and E. Wenger. 1991. Situated learning: Legitimate peripheral participation . Cambridge: Cambridge University Press.

Lee, S., and W.-M. Roth. 2003. Science and the “good citizen”: Community-based scientific literacy. Science, Technology, & Human Values 28: 403–424. https://doi.org/10.1177/0162243903028003003 .

Lévy, P., and R. Bononno. 1997. Collective intelligence: Mankind’s emerging world in cyberspace . New York: Perseus Books.

Lloyd, A. 2005. No man (or woman) is an island: Information literacy, affordances and communities of practice. The Australian Library Journal 54: 230–237. https://doi.org/10.1080/00049670.2005.10721760 .

Lopez-Gunn, E. 2003. The role of collective action in water governance: A comparative study of groundwater user associations in La Mancha aquifers in Spain. Water International 28: 367–378. https://doi.org/10.1080/02508060308691711 .

Lu, J.G., A.C. Hafenbrack, P.W. Eastwick, D.J. Wang, W.W. Maddux, and A.D. Galinsky. 2017. “Going out” of the box: Close intercultural friendships and romantic relationships spark creativity, workplace innovation, and entrepreneurship. Journal of Applied Psychology 102: 1091–1108. https://doi.org/10.1037/apl0000212 .

Lubeck, A., A. Metcalf, C. Beckman, L. Yung, and J. Angle. 2019. Collective factors drive individual invasive species control behaviors: Evidence from private lands in Montana, USA. Ecology and Society . https://doi.org/10.5751/ES-10897-240232 .

Mackay, C.M.L., M.T. Schmitt, A.E. Lutz, and J. Mendel. 2021. Recent developments in the social identity approach to the psychology of climate change. Current Opinion in Psychology 42: 95–101. https://doi.org/10.1016/j.copsyc.2021.04.009 .

Magis, K. 2010. Community resilience: An indicator of social sustainability. Society & Natural Resources 23: 401–416. https://doi.org/10.1080/08941920903305674 .

Manfredo, M.J., T.L. Teel, and A.M. Dietsch. 2016. Implications of human value shift and persistence for biodiversity conservation. Conservation Biology 30: 287–296. https://doi.org/10.1111/cobi.12619 .

Marshall, G.R., M.J. Coleman, B.M. Sindel, I.J. Reeve, and P.J. Berney. 2016. Collective action in invasive species control, and prospects for community-based governance: The case of serrated tussock ( Nassella trichotoma ) in New South Wales, Australia. Land Use Policy 56: 100–111. https://doi.org/10.1016/j.landusepol.2016.04.028 .

Mårtensson, L., and G. Hensing. 2012. Health literacy: A heterogeneous phenomenon: A literature review. Scandinavian Journal of Caring Sciences 26: 151–160. https://doi.org/10.1111/j.1471-6712.2011.00900.x .

Martin, C., and C. Steinkuehler. 2010. Collective information literacy in massively multiplayer online games. E-Learning and Digital Media 7: 355–365. https://doi.org/10.2304/elea.2010.7.4.355 .

Masson, T., and I. Fritsche. 2021. We need climate change mitigation and climate change mitigation needs the ‘We’: A state-of-the-art review of social identity effects motivating climate change action. Current Opinion in Behavioral Sciences 42: 89–96. https://doi.org/10.1016/j.cobeha.2021.04.006 .

Massung, E., D. Coyle, K.F. Cater, M. Jay, and C. Preist. 2013. Using crowdsourcing to support pro-environmental community activism. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems . https://doi.org/10.1145/2470654.2470708 .

McAdam, D. 2017. Social movement theory and the prospects for climate change activism in the United States. Annual Review of Political Science 20: 189–208. https://doi.org/10.1146/annurev-polisci-052615-025801 .

McAdam, D., and H. Boudet. 2012. Putting social movements in their place: Explaining opposition to energy projects in the United States, 2000–2005 . Cambridge University Press.

McKenzie-Mohr, D. 2011. Fostering sustainable behavior: An introduction to community-based social marketing (3rd edn.). New Society Publishers.

McKinley, D.C., A.J. Miller-Rushing, H.L. Ballard, R. Bonney, H. Brown, S.C. Cook-Patton, D.M. Evans, R.A. French, et al. 2017. Citizen science can improve conservation science, natural resource management, and environmental protection. Biological Conservation 208: 15–28.

Miller, D.L. 2014. Introduction to collective behavior and collective action (3rd ed.). Waveland Press.

Mills, J., D. Gibbon, J. Ingram, M. Reed, C. Short, and J. Dwyer. 2011. Organising collective action for effective environmental management and social learning in Wales. The Journal of Agricultural Education and Extension 17: 69–83. https://doi.org/10.1080/1389224X.2011.536356 .

Mistry, J., and A. Berardi. 2016. Bridging Indigenous and scientific knowledge. Science 352: 1274–1275. https://doi.org/10.1126/science.aaf1160 .

Mochizuki, Y., and A. Bryan. 2015. Climate change education in the context of education for sustainable development: Rationale and principles. Journal of Education for Sustainable Development 9: 4–26. https://doi.org/10.1177/0973408215569109 .

Monroe, M.C. 2003. Two avenues for encouraging conservation behaviors. Human Ecology Review 10: 113–125.

Nasir, N.S., M.M. de Royston, B. Barron, P. Bell, R. Pea, R. Stevens, and S. Goldman. 2020. Learning pathways: How learning is culturally organized. In Handbook of the cultural foundations of learning , ed. N.S. Nasir, C.D. Lee, R. Pea, and M.M. de Royston, 195–211. Routledge.

National Academies of Sciences, Engineering, and Medicine. 2016. Science literacy: Concepts, contexts, and consequences . https://doi.org/10.17226/23595

National Research Council. 2015. Collective behavior: From cells to societies: Interdisciplinary research team summaries . National Academies Press. https://doi.org/10.17226/21737

Niemiec, R.M., N.M. Ardoin, C.B. Wharton, and G.P. Asner G.P. 2016. Motivating residents to combat invasive species on private lands: Social norms and community reciprocity. Ecology and Society , 21. https://doi.org/10.5751/ES-08362-210230

Niemiec, R.M., S. McCaffrey, and M.S. Jones. 2020. Clarifying the degree and type of public good collective action problem posed by natural resource management challenges. Ecology and Society 25: 30. https://doi.org/10.5751/ES-11483-250130 .

Norström, A.V., C. Cvitanovic, M.F. Löf, S. West, C. Wyborn, P. Balvanera, A.T. Bednarek, E.M. Bennett, et al. 2020. Principles for knowledge co-production in sustainability research. Nature Sustainability 3: 182–190. https://doi.org/10.1038/s41893-019-0448-2 .

Olick, J.K. 1999. Collective memory: The two cultures. Sociological Theory 17: 333–348. https://doi.org/10.1111/0735-2751.00083 .

Ostrom, E. 1990. Governing the commons: The evolution of institutions for collective action . Cambridge University Press.

Ostrom, E. 2000. Collective action and the evolution of social norms. Journal of Economic Perspectives 14: 137–158. https://doi.org/10.1257/jep.14.3.137 .

Ostrom, E. 2009. A general framework for analyzing sustainability of social-ecological systems. Science 325: 419–422. https://doi.org/10.1126/science.1172133 .

Ostrom, E. 2010. Polycentric systems for coping with collective action and global environmental change. Global Environmental Change 20: 550–557. https://doi.org/10.1016/j.gloenvcha.2010.07.004 .

Ostrom, E. 2012. Nested externalities and polycentric institutions: Must we wait for global solutions to climate change before taking actions at other scales? Economic Theory 49: 353–369. https://doi.org/10.1007/s00199-010-0558-6 .

Ostrom, E., and T.K. Ahn. 2009. The meaning of social capital and its link to collective action. In Handbook of social capital: The troika of sociology, political science and economics , ed. G.T. Svendsen and G.L.H. Svendsen, 17–35. Edward Elgar Publishing.

Papen, U. 2009. Literacy, learning and health: A social practices view of health literacy. Literacy and Numeracy Studies . https://doi.org/10.5130/lns.v0i0.1275 .

Park, R.E. 1927. Human nature and collective behavior. American Journal of Sociology 32: 733–741.

Paul, A.M. 2021. The extended mind: The power of thinking outside the brain . Boston: Mariner Books.

Pawilen, G.T. 2021. Integrating Indigenous knowledge in the Philippine elementary science curriculum: Integrating Indigenous knowledge. International Journal of Curriculum and Instruction 13: 1148–1160.

Prager, K. 2015. Agri-environmental collaboratives for landscape management in Europe. Current Opinion in Environmental Sustainability 12: 59–66. https://doi.org/10.1016/j.cosust.2014.10.009 .

Pretty, J., and H. Ward. 2001. Social capital and the environment. World Development 29: 209–227. https://doi.org/10.1016/S0305-750X(00)00098-X .

Putnam, R.D. 2020. Bowling alone: Revised and updated: The collapse and revival of American community . Anniversary. New York: Simon & Schuster.

Raymond, L. 2006. Cooperation without trust: Overcoming collective action barriers to endangered species protection. Policy Studies Journal 34: 37–57. https://doi.org/10.1111/j.1541-0072.2006.00144.x .

Reed, M.S., A.C. Evely, G. Cundill, I. Fazey, J. Glass, A. Laing, J. Newig, B. Parrish, et al. 2010. What is social learning? Ecology and Society 15: 12.

Reicher, S., R. Spears, and S.A. Haslam. 2010. The social identity approach in social psychology. In The SAGE handbook of identities (pp. 45–62). SAGE. https://doi.org/10.4135/9781446200889

Reid, A. 2019. Blank, blind, bald and bright spots in environmental education research. Environmental Education Research 25: 157–171. https://doi.org/10.1080/13504622.2019.1615735 .

Rogoff, B. 2003. The cultural nature of human development (Reprint edition) . Oxford: Oxford University Press.

Roth, C.E. 1992. Environmental literacy: Its roots, evolution and directions in the 1990s . http://eric.ed.gov/?id=ED348235

Roth, W.-M. 2003. Scientific literacy as an emergent feature of collective human praxis. Journal of Curriculum Studies 35: 9–23. https://doi.org/10.1080/00220270210134600 .

Roth, W.-M., and A.C. Barton. 2004. Rethinking scientific literacy . London: Psychology Press.

Roth, W.-M., and S. Lee. 2002. Scientific literacy as collective praxis. Public Understanding of Science 11: 33–56. https://doi.org/10.1088/0963-6625/11/1/302 .

Roth, W.-M., and S. Lee. 2004. Science education as/for participation in the community. Science Education 88: 263–291.

Roth, W.-M., and Y.-J. Lee. 2007. “Vygotsky’s neglected legacy”: Cultural-historical activity theory. Review of Educational Research 77: 186–232.

Sadoff, C.W., and D. Grey. 2005. Cooperation on international rivers: A continuum for securing and sharing benefits. Water International 30: 420–427.

Samerski, S. 2019. Health literacy as a social practice: Social and empirical dimensions of knowledge on health and healthcare. Social Science & Medicine 226: 1–8. https://doi.org/10.1016/j.socscimed.2019.02.024 .

Sawyer, R.K. 2014. The future of learning: Grounding educational innovation in the learning sciences. In The Cambridge handbook of the learning sciences , ed. R.K. Sawyer, 726–746. Cambridge: Cambridge University Press.

Saxe, J.G. n.d.. The blind man and the elephant . All Poetry. Retrieved October 6, 2020, from https://allpoetry.com/The-Blind-Man-And-The-Elephant .

Scheepers, D., and N. Ellemers. 2019. Social identity theory. In Social psychology in action: Evidence-based interventions from theory to practice , ed. K. Sassenberg and M.L.W. Vliek, 129–143. New York: Springer International Publishing.

Schipper, E.L.F., N.K. Dubash, and Y. Mulugetta. 2021. Climate change research and the search for solutions: Rethinking interdisciplinarity. Climatic Change 168: 18. https://doi.org/10.1007/s10584-021-03237-3 .

Schoerning, E. 2018. A no-conflict approach to informal science education increases community science literacy and engagement. Journal of Science Communication, Doi 10: 17030205.

Schultz, P.W. 2014. Strategies for promoting proenvironmental behavior: Lots of tools but few instructions. European Psychologist 19: 107–117. https://doi.org/10.1027/1016-9040/a000163 .

Sharifi, A. 2016. A critical review of selected tools for assessing community resilience. Ecological Indicators 69: 629–647. https://doi.org/10.1016/j.ecolind.2016.05.023 .

Sherrieb, K., F.H. Norris, and S. Galea. 2010. Measuring capacities for community resilience. Social Indicators Research 99: 227–247. https://doi.org/10.1007/s11205-010-9576-9 .

Singh, R.K., A. Singh, K.K. Zander, S. Mathew, and A. Kumar. 2021. Measuring successful processes of knowledge co-production for managing climate change and associated environmental stressors: Adaptation policies and practices to support Indian farmers. Journal of Environmental Management 282: 111679. https://doi.org/10.1016/j.jenvman.2020.111679 .

Sloman, S., and P. Fernbach. 2017. The knowledge illusion: Why we never think alone . New York: Riverhead Books.

Smelser, N.J. 2011. Theory of collective behavior . Quid Pro Books. (Original work published 1962).

Sørensen, K., S. Van den Broucke, J. Fullam, G. Doyle, J. Pelikan, Z. Slonska, H. Brand, and (HLS-EU) Consortium Health Literacy Project European. 2012. Health literacy and public health: A systematic review and integration of definitions and models. BMC Public Health 12: 80. https://doi.org/10.1186/1471-2458-12-80 .

Spitzer, W., and J. Fraser. 2020. Advancing community science literacy. Journal of Museum Education 45: 5–15. https://doi.org/10.1080/10598650.2020.1720403 .

Stables, A., and K. Bishop. 2001. Weak and strong conceptions of environmental literacy: Implications for environmental education. Environmental Education Research 7: 89. https://doi.org/10.1080/13504620125643 .

Stern, M.J., R.B. Powell, and N.M. Ardoin. 2008. What difference does it make? Assessing outcomes from participation in a residential environmental education program. The Journal of Environmental Education 39: 31–43. https://doi.org/10.3200/JOEE.39.4.31-43 .

Stets, J.E., and P.J. Burke. 2000. Identity theory and social identity theory. Social Psychology Quarterly 63: 224–237. https://doi.org/10.2307/2695870 .

Sturmer, S., and B. Simon. 2004. Collective action: Towards a dual-pathway model. European Review of Social Psychology 15: 59–99. https://doi.org/10.1080/10463280340000117 .

Sullivan, A., A. York, D. White, S. Hall, and S. Yabiku. 2017. De jure versus de facto institutions: Trust, information, and collective efforts to manage the invasive mile-a-minute weed (Mikania micrantha). International Journal of the Commons 11: 171–199. https://doi.org/10.18352/ijc.676 .

Sunstein, C.R. 2008. Infotopia: How many minds produce knowledge . Oxford: Oxford University Press.

Surowiecki, J. 2005. The wisdom of crowds . New York: Anchor.

Swim, J.K., S. Clayton, and G.S. Howard. 2011. Human behavioral contributions to climate change: Psychological and contextual drivers. American Psychologist 66: 251–264.

Thaker, J., P. Howe, A. Leiserowitz, and E. Maibach. 2019. Perceived collective efficacy and trust in government influence public engagement with climate change-related water conservation policies. Environmental Communication 13: 681–699. https://doi.org/10.1080/17524032.2018.1438302 .

Tudge, J.R.H., and P.A. Winterhoff. 1993. Vygotsky, Piaget, and Bandura: Perspectives on the relations between the social world and cognitive development. Human Development 36: 61–81. https://doi.org/10.1159/000277297 .

Turner, R.H., and L.M. Killian. 1987. Collective behavior , 3rd ed. Englewood Cliffs: Prentice Hall.

Turner, R.H., N.J. Smelser, and L.M. Killian. 2020. Collective behaviour. In Encyclopedia Britannica . Encyclopedia Britannica, Inc. https://www.britannica.com/science/collective-behaviour .

van der Linden, S. 2014. Towards a new model for communicating climate change. In Understanding and governing sustainable tourism mobility , ed. S. Cohen, J. Higham, P. Peeters, and S. Gössling, 263–295. Milton Park: Routledge.

van Zomeren, M., T. Postmes, and R. Spears. 2008. Toward an integrative social identity model of collective action: A quantitative research synthesis of three socio-psychological perspectives. Psychological Bulletin 134: 504–535. https://doi.org/10.1037/0033-2909.134.4.504 .

Vygotsky, L.S. 1980. Mind in society: The development of higher psychological processes . Cambridge: Harvard University Press.

Waldron, F., B. Ruane, R. Oberman, and S. Morris. 2019. Geographical process or global injustice? Contrasting educational perspectives on climate change. Environmental Education Research 25: 895–911. https://doi.org/10.1080/13504622.2016.1255876 .

Wals, A.E.J., M. Brody, J. Dillon, and R.B. Stevenson. 2014. Convergence between science and environmental education. Science 344: 583–584.

Wenger, E.C., and W.M. Snyder. 2000. Communities of practice: The organizational frontier. Harvard Business Review 78: 139–146.

Weschsler, D. 1971. Concept of collective intelligence. American Psychologist 26: 904–907. https://doi.org/10.1037/h0032223 .

Wheaton, M., A. Kannan, and N.M. Ardoin. 2018. Environmental literacy: Setting the stage (Environmental Literacy Brief, Vol. 1). Social Ecology Lab, Stanford University. https://ed.stanford.edu/sites/default/files/news/images/stanfordsocialecologylab-brief-1.pdf .

Wojcik, D.J., N.M. Ardoin, and R.K. Gould. 2021. Using social network analysis to explore and expand our understanding of a robust environmental learning landscape. Environmental Education Research 27: 1263–1283.

Wood, W., and D. Rünger. 2016. Psychology of habit. Annual Review of Psychology 67: 289–314. https://doi.org/10.1146/annurev-psych-122414-033417 .

Woolley, A.W., C.F. Chabris, A. Pentland, N. Hashmi, and T.W. Malone. 2010. Evidence for a collective intelligence factor in the performance of human groups. Science 330: 686–688. https://doi.org/10.1126/science.1193147 .

Download references

Acknowledgements

We are grateful to Maria DiGiano, Anna Lee, and Becca Shareff for their feedback and contributions to early drafts of this paper. We appreciate the research and writing assistance supporting this paper provided by various members of the Stanford Social Ecology Lab, especially: Brennecke Gale, Pari Ghorbani, Regina Kong, Naomi Ray, and Austin Stack.

This work was supported by a grant from the Pisces Foundation.

Author information

Authors and Affiliations

Emmett Interdisciplinary Program in Environment and Resources, Graduate School of Education, and Woods Institute for the Environment, Stanford University, 233 Littlefield Hall, Stanford, CA, 94305, USA

Nicole M. Ardoin

Social Ecology Lab, Graduate School of Education and Woods Institute for the Environment, Stanford University, 233 Littlefield Hall, Stanford, CA, 94305, USA

Alison W. Bowers

Emmett Interdisciplinary Program in Environment and Resources, School of Earth, Energy and Environmental Sciences, Stanford University, 473 Via Ortega, Suite 226, Stanford, CA, 94305, USA

Mele Wheaton


Corresponding author

Correspondence to Nicole M. Ardoin .

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Ardoin, N.M., Bowers, A.W. & Wheaton, M. Leveraging collective action and environmental literacy to address complex sustainability challenges. Ambio 52 , 30–44 (2023). https://doi.org/10.1007/s13280-022-01764-6

Download citation

Received : 11 July 2021

Revised : 11 January 2022

Accepted : 22 June 2022

Published : 09 August 2022

Issue Date : January 2023

DOI : https://doi.org/10.1007/s13280-022-01764-6


Keywords

  • Collective action
  • Environmental literacy
  • Social movements
  • Sustainability

Female labor force participation

Across the globe, women face inferior income opportunities compared with men. Women are less likely to work for income or actively seek work. The global labor force participation rate for women is just over 50% compared to 80% for men. Women are less likely to work in formal employment and have fewer opportunities for business expansion or career progression. When women do work, they earn less. Emerging evidence from recent household survey data suggests that these gender gaps are heightened due to the COVID-19 pandemic.

Women’s work and GDP

Women’s work is posited to be related to development through the process of economic transformation.

Levels of female labor force participation are high for the poorest economies generally, where agriculture is the dominant sector and women often participate in small-holder agricultural work. Women’s participation in the workforce is lower in middle-income economies which have much smaller shares of agricultural activities. Finally, among high-income economies, female labor force participation is again higher, accompanied by a shift towards a service sector-based economy and higher education levels among women.

This describes the posited  U-shaped relationship  between development (proxied by GDP per capita) and female labor force participation: women's participation is high in the poorest economies, lower in middle-income economies, and rises again in high-income economies.

The U-shape appears when comparing economies of different income levels in a global cross-section. But this global picture may be misleading: as more recent studies have found, the pattern does not hold within regions, nor within a specific economy over time as its income level rises.
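The U-shape hypothesis described above is typically tested by regressing female labor force participation (FLFP) on log GDP per capita and its square; a positive coefficient on the squared term implies a U shape, and the minimum marks the turning-point income level. The sketch below illustrates the mechanics on synthetic data (not World Bank data; all values are invented for illustration).

```python
import numpy as np

# Synthetic, illustrative data only: FLFP (%) generated with a known U shape
# centered at log GDP per capita = 8.5, plus measurement noise.
log_gdp = np.linspace(6, 11, 50)               # log GDP per capita, ~$400-$60k
flfp = 30 + 2.0 * (log_gdp - 8.5) ** 2         # true U-shaped participation (%)
rng = np.random.default_rng(0)
flfp = flfp + rng.normal(0, 1.0, size=flfp.shape)

# Fit FLFP = b0 + b1*log_gdp + b2*log_gdp^2 (polyfit returns highest degree first)
b2, b1, b0 = np.polyfit(log_gdp, flfp, deg=2)

print(f"squared term b2 = {b2:.2f}")                        # > 0 implies a U shape
print(f"turning point at log GDP = {-b1 / (2 * b2):.2f}")   # minimum of the U
```

The same regression run within one region, or on one economy's time series, is how the more recent studies cited above find the U-shape failing to hold outside the global cross-section.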

In no region do we observe a U-shape pattern in female participation and GDP per capita over the past three decades.

Structural transformation, declining fertility, and increasing female education in many parts of the world have not resulted in significant increases in women’s participation as was theorized. Rather, rigid historic, economic, and social structures and norms factor into stagnant female labor force participation.

Historical view of women’s participation and GDP

Taking a historical view of female participation and GDP, we ask another question: Do lower income economies today have levels of participation that mirror levels that high-income economies had decades earlier?

The answer is no.

This suggests that the relationship of female labor force participation to GDP for lower-income economies today is different than it was decades past. This could be driven by numerous factors: changing social norms, demographics, technology, and urbanization, to name a few possible drivers.

Gendered patterns in type of employment

Gender equality is not just about equal access to jobs but also equal access for men and women to good jobs. The type of work that women do can be very different from the type of work that men do. Here we divide work into two broad categories: vulnerable work and wage work.

The gender gap in vulnerable and wage work by GDP per capita

Vulnerable employment is closely related to GDP per capita. Economies with high rates of vulnerable employment are low-income contexts with a large agricultural sector. In these economies, women tend to make up the higher share of the vulnerably employed. As income levels rise, the gender gap flips: in higher-income economies, employed men are more likely than employed women to be in vulnerable work.

From COVID-19 crisis to recovery

The COVID-19 crisis has exacerbated these gender gaps in employment. Although comprehensive official statistics from labor force surveys are not yet available for all economies,  emerging studies  have consistently documented that working women have taken a harder hit from the crisis. This result is not driven by the sectors in which women work or by their higher rates of vulnerable work: within specific work categories, women fared worse than men in terms of COVID-19 impacts on jobs.

Among other explanations is that women have borne the brunt of the increase in the demand for care work (especially for children). A strong and inclusive recovery will require efforts which address this and other underlying drivers of gender gaps in employment opportunities.


BRIEF RESEARCH REPORT article

This article is part of the research topic Infections in the Intensive Care Unit - Volume II.

Management of spontaneous septic hypothermia in intensive care: a national survey of French intensive care units (Provisionally Accepted)

  • 1 Centre Hospitalier Universitaire (CHU) de Rennes, France

The final, formatted version of the article will be published soon.

The benefit of temperature control in sepsis or septic shock is still debated in the literature. We developed a national survey to assess the current state of knowledge and the practical management of spontaneous septic hypothermia in French intensive care units. Of the 764 intensivists contacted, 436 responded to the survey. A majority of physicians (52.4%) considered spontaneous septic hypothermia a frequently encountered situation in intensive care, and 62.1% were interested in this problem. There was no consensus among French intensivists on the definition of spontaneous septic hypothermia. More than half of the physicians surveyed (57.1%) stated that they did not actively rewarm patients with spontaneous septic hypothermia.
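The abstract reports only percentages alongside the respondent count. As a back-of-envelope check, one can derive the response rate and the approximate respondent counts implied by each reported percentage; these counts are our reconstruction, not figures stated in the paper.

```python
# Figures reported in the survey abstract
contacted = 764
respondents = 436

# Response rate implied by the two counts
response_rate = 100 * respondents / contacted
print(f"response rate: {response_rate:.1f}%")

# Approximate respondent counts implied by the reported percentages
# (our reconstruction; the paper reports only percentages)
frequent = round(0.524 * respondents)    # consider hypothermia frequent
interested = round(0.621 * respondents)  # interested in the problem
no_rewarm = round(0.571 * respondents)   # do not actively rewarm
print(frequent, interested, no_rewarm)
```

Note that the 57.1% of respondents who do not rewarm coincidentally matches the overall response rate (436/764 ≈ 57.1%); the two figures are unrelated.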

Keywords: spontaneous septic hypothermia, Sepsis, septic shock, Temperature Control, Hypothermia

Received: 29 Feb 2024; Accepted: 14 May 2024.

Copyright: © 2024 Eustache, Le Balc'h and Launey. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Gabriel Eustache, Centre Hospitalier Universitaire (CHU) de Rennes, Rennes, France


Cultural Relativity and Acceptance of Embryonic Stem Cell Research


There is a debate about the ethical implications of using human embryos in stem cell research, which can be influenced by cultural, moral, and social values. This paper argues for an adaptable framework to accommodate diverse cultural and religious perspectives. By using an adaptive ethics model, research protections can reflect various populations and foster growth in stem cell research possibilities.

INTRODUCTION

Stem cell research combines biology, medicine, and technology, promising to alter health care and the understanding of human development. Yet, ethical contention exists because of individuals’ perceptions of using human embryos based on their various cultural, moral, and social values. While these disagreements concerning policy, use, and general acceptance have prompted the development of an international ethics policy, such a uniform approach can overlook the nuanced ethical landscapes between cultures. With diverse viewpoints in public health, a single global policy, especially one reflecting Western ethics or the ethics prevalent in high-income countries, is impractical. This paper argues for a culturally sensitive, adaptable framework for the use of embryonic stem cells. Stem cell policy should accommodate varying ethical viewpoints and promote an effective global dialogue. With an extension of an ethics model that can adapt to various cultures, we recommend localized guidelines that reflect the moral views of the people those guidelines serve.

Stem cells, characterized by their unique ability to differentiate into various cell types, enable the repair or replacement of damaged tissues. Two primary types of stem cells are somatic stem cells (adult stem cells) and embryonic stem cells. Adult stem cells exist in developed tissues and maintain the body’s repair processes. [1] Embryonic stem cells (ESC) are remarkably pluripotent or versatile, making them valuable in research. [2] However, the use of ESCs has sparked ethics debates. Considering the potential of embryonic stem cells, research guidelines are essential. The International Society for Stem Cell Research (ISSCR) provides international stem cell research guidelines. They call for “public conversations touching on the scientific significance as well as the societal and ethical issues raised by ESC research.” [3] The ISSCR also publishes updates about culturing human embryos 14 days post fertilization, suggesting local policies and regulations should continue to evolve as ESC research develops. [4]  Like the ISSCR, which calls for local law and policy to adapt to developing stem cell research given cultural acceptance, this paper highlights the importance of local social factors such as religion and culture.

I.     Global Cultural Perspective of Embryonic Stem Cells

Views on ESCs vary throughout the world. Some countries readily embrace stem cell research and therapies, while others have stricter regulations due to ethical concerns surrounding embryonic stem cells and when an embryo becomes entitled to moral consideration. The philosophical issue of when the “someone” begins to be a human after fertilization, in the morally relevant sense, [5] impacts when an embryo becomes not just worthy of protection but morally entitled to it. The process of creating embryonic stem cell lines involves the destruction of the embryos for research. [6] Consequently, global engagement in ESC research depends on social-cultural acceptability.

a.     US and Rights-Based Cultures

In the United States, attitudes toward stem cell therapies are diverse. The ethics and social approaches, which value individualism, [7] trigger debates regarding the destruction of human embryos, creating a complex regulatory environment. For example, the 1996 Dickey-Wicker Amendment prohibited federal funding for the creation of embryos for research and for research in which embryos are destroyed or put at risk beyond what is “allowed for research on fetuses in utero.” [8] Following suit, in 2001, the Bush Administration heavily restricted stem cell lines for research. However, the Stem Cell Research Enhancement Act of 2005 was proposed to help develop ESC research but was ultimately vetoed. [9] Under the Obama administration, in 2009, an executive order lifted restrictions, allowing for more development in this field. [10] The flux of research capacity and funding parallels the different cultural perceptions of the human dignity of the embryo and how it is socially presented within the country’s research culture. [11]

b.     Ubuntu and Collective Cultures

African bioethics differs from Western individualism because of the different traditions and values. African traditions, as described by individuals from South Africa and supported by some studies in other African countries, including Ghana and Kenya, follow the African moral philosophies of Ubuntu or Botho and Ukama , which “advocates for a form of wholeness that comes through one’s relationship and connectedness with other people in the society,” [12] making autonomy a socially collective concept. In this context, for the community to act autonomously, individuals would come together to decide what is best for the collective. Thus, stem cell research would require examining the value of the research to society as a whole and the use of the embryos as a collective societal resource. If society views the source as part of the collective whole, and opposes using stem cells, compromising the cultural values to pursue research may cause social detachment and stunt research growth. [13] Based on local culture and moral philosophy, the permissibility of stem cell research depends on how embryo, stem cell, and cell line therapies relate to the community as a whole. Ubuntu is the expression of humanness, with the person’s identity drawn from the “’I am because we are’” value. [14] The decision in a collectivistic culture becomes one born of cultural context, and individual decisions give deference to others in the society.

Consent differs in cultures where thought and moral philosophy are based on a collective paradigm, so applying Western bioethical concepts is unrealistic. For one, Africa is a diverse continent with many countries with different belief systems, access to health care, and reliance on traditional or Western medicines. Where traditional medicine is the primary treatment, a “restrictive focus on biomedically-related bioethics” is “problematic in African contexts because it neglects bioethical issues raised by traditional systems.” [15] No single approach applies in all areas or contexts. Rather than evaluating the permissibility of ESC research according to Western concepts such as the four principles approach, different ethics approaches should prevail.

Another consideration is the socio-economic standing of countries. In parts of South Africa, researchers have not focused heavily on contributing to the stem cell discourse, either because it is not considered a health care or health science priority or because resources are unavailable. [16] Each country’s priorities differ given different social, political, and economic factors. In South Africa, for instance, areas such as maternal mortality, non-communicable diseases, telemedicine, and the strength of health systems need improvement and require more focus. [17] Stem cell research could benefit the population, but it also could divert resources from basic medical care. Researchers in South Africa adhere to the National Health Act, the Medicines Control Act, and international guidelines; however, the Act is not strictly enforced, and there is no clear legislation for research conduct or ethical guidelines. [18]

Some parts of Africa condemn stem cell research. For example, 98.2 percent of the Tunisian population is Muslim. [19] Tunisia does not permit stem cell research because of moral conflict with a Fatwa. Religion heavily saturates the regulation and direction of research. [20] Stem cell use became permissible for reproductive purposes only recently, with tight restrictions preventing cells from being used in any research other than procedures concerning ART/IVF. Their use is conditioned on consent and is available only to married couples. [21] The community’s receptiveness to stem cell research depends on including communitarian African ethics.

c.     Asia

Some Asian countries also have a collective model of ethics and decision making. [22] In China, the ethics model promotes a sincere respect for life or human dignity, [23] based on protective medicine. This model, influenced by Traditional Chinese Medicine (TCM), [24] recognizes Qi as the vital energy delivered via the meridians of the body; it connects illness to body systems, the body’s entire constitution, and the universe for a holistic bond of nature, health, and quality of life. [25] Following a protective ethics model and traditional customs of wholeness, investment in stem cell research is heavily desired for its applications in regenerative therapies, disease modeling, and protective medicines. In a survey of medical students and healthcare practitioners, 30.8 percent considered stem cell research morally unacceptable, while 63.5 percent accepted medical research using human embryonic stem cells. Of these individuals, 89.9 percent supported increased funding for stem cell research. [26] The scientific community might not reflect the overall population. From 1997 to 2019, China spent a total of $576 million (USD) on stem cell research at 8,050 stem cell programs, increased its published presence from 0.6 percent to 14.01 percent of total global stem cell publications as of 2014, and made significant strides in cell-based therapies for various medical conditions. [27] However, while China has made substantial investments in stem cell research and achieved notable progress in clinical applications, concerns linger regarding ethical oversight and transparency. [28] For example, the China Biosecurity Law, promoted by the National Health Commission and China Hospital Association, attempted to mitigate risks by introducing an institutional review board (IRB) in the regulatory bodies. Since 2021, 5,800 IRBs have registered with the Chinese Clinical Trial Registry. [29] However, issues still need to be addressed in implementing effective IRB review and approval procedures.

The substantial government funding and focus on scientific advancement have sometimes overshadowed considerations of regional cultures, ethnic minorities, and individual perspectives, particularly evident during the one-child policy era. As government policy adapts to promote public stability, such as the change from the one-child to the two-child policy, [30] research ethics should also adapt to ensure respect for the values of its represented peoples.

Japan is also relatively supportive of stem cell research and therapies. Japan has a more transparent regulatory framework, allowing for faster approval of regenerative medicine products, which has led to several advanced clinical trials and therapies. [31] South Korea is also actively engaged in stem cell research and has a history of breakthroughs in cloning and embryonic stem cells. [32] However, the field is controversial, and there are issues of scientific integrity. For example, the Korean FDA fast-tracked products for approval, [33] and in another instance, the oocyte source was unclear and possibly violated ethical standards. [34] Trust is important in research: it builds collaborative foundations between colleagues, supports trial participants’ comfort, fosters open-mindedness for complicated and sensitive discussions, and underpins regulatory procedures for stakeholders. There is a need to respect each culture’s interest and engagement, and for research and clinical trials to be transparent and have ethical oversight, in order to promote global research discourse and trust.

d.     Middle East

Countries in the Middle East have varying degrees of acceptance of or restrictions to policies related to using embryonic stem cells due to cultural and religious influences. Saudi Arabia has made significant contributions to stem cell research, and conducts research based on international guidelines for ethical conduct and under strict adherence to guidelines in accordance with Islamic principles. Specifically, the Saudi government and people require ESC research to adhere to Sharia law. In addition to umbilical and placental stem cells, [35] Saudi Arabia permits the use of embryonic stem cells as long as they come from miscarriages, therapeutic abortions permissible by Sharia law, or are left over from in vitro fertilization and donated to research. [36] Laws and ethical guidelines for stem cell research allow the development of research institutions such as the King Abdullah International Medical Research Center, which has a cord blood bank and a stem cell registry with nearly 10,000 donors. [37] Such volume and acceptance are due to the ethical ‘permissibility’ of the donor sources, which do not conflict with religious pillars. However, some researchers err on the side of caution, choosing not to use embryos or fetal tissue as they feel it is unethical to do so. [38]

Jordan has a positive research ethics culture. [39] However, there is a significant issue of lack of trust in researchers: 45.23 percent of Jordanians (38.66 percent agreeing and 6.57 percent strongly agreeing) hold a low level of trust in researchers, even though 81.34 percent of Jordanians agree that they feel safe participating in a research trial. [40] Feeling safe reflects confidence that adequate measures are in place to protect participants from harm, whereas trust in researchers represents confidence that researchers will act in participants’ best interests, adhere to ethical guidelines, provide accurate information, and respect participants’ rights and dignity. One method to improve trust would be to address communication issues relevant to ESC. Legislation surrounding stem cell research has adopted specific language, especially concerning the clarification “between ‘stem cells’ and ‘embryonic stem cells’” in translation. [41] Furthermore, legislation “mandates the creation of a national committee… laying out specific regulations for stem-cell banking in accordance with international standards.” [42] This broad regulation opens the door for future global engagement and maintains transparency. However, these regulations may also constrain the influence of research direction, pace, and accessibility of research outcomes.

e.     Europe

In the European Union (EU), ethics is also principle-based, but the principles of autonomy, dignity, integrity, and vulnerability are interconnected. [43] As such, the opportunity for cohesion and concessions between individuals’ thoughts and ideals allows for a more adaptable ethics model due to the flexible principles that relate to the human experience. The EU has put forth a framework in its Convention for the Protection of Human Rights and Dignity of the Human Being allowing member states to take different approaches. Each European state applies these principles to its specific conventions, leading to or reflecting different acceptance levels of stem cell research. [44]

For example, in Germany, Lebenszusammenhang , or the coherence of life, references integrity in the unity of human culture. Namely, the personal sphere “should not be subject to external intervention.” [45] Stem cell interventions could affect this concept of bodily completeness, leading to heavy restrictions. Under the Grundgesetz, human dignity and the right to life with physical integrity are paramount. [46] The Embryo Protection Act of 1991 made producing cell lines illegal. Cell lines may be imported, subject to approval by the Central Ethics Commission for Stem Cell Research, only if they were derived before May 2007. [47] Stem cell research respects the integrity of life for the embryo with heavy specifications and intense oversight. This is vastly different in Finland, where the regulatory bodies find research on excess IVF embryos more permissible, but only up to 14 days after fertilization. [48] Spain’s approach differs still, with a comprehensive regulatory framework. [49] Thus, research regulation can be culture-specific due to variations in applied principles. Diverse cultures call for various approaches to ethical permissibility. [50] Only an adaptive-deliberative model can address the cultural constructions of self and achieve positive, culturally sensitive stem cell research practices. [51]

II.     Religious Perspectives on ESC

Embryonic stem cell sources are the main consideration within religious contexts. While individuals may not regard their own religious texts as authoritative or factual, religion can shape their foundations or perspectives.

The Qur'an states:

“And indeed We created man from a quintessence of clay. Then We placed within him a small quantity of nutfa (sperm to fertilize) in a safe place. Then We have fashioned the nutfa into an ‘alaqa (clinging clot or cell cluster), then We developed the ‘alaqa into mudgha (a lump of flesh), and We made mudgha into bones, and clothed the bones with flesh, then We brought it into being as a new creation. So Blessed is Allah, the Best of Creators.” [52]

Many scholars of Islam estimate the time of soul installment, marked by the angel breathing in the soul to bring the individual into creation, as 120 days from conception. [53] Personhood begins at this point, and the value of life would prohibit research or experimentation that could harm the individual. Once the fetus is more than 120 days old, the point at which ensoulment is interpreted to occur under Islamic law, abortion is no longer permissible. [54] There are a few opposing opinions about early embryos in Islamic traditions. According to some Islamic theologians, there is no ensoulment of the early embryo, which is the source of stem cells for ESC research. [55]

In Buddhism, the stance on stem cell research is not settled. The main tenets, the prohibition against harming or destroying others (ahimsa) and the pursuit of knowledge (prajña) and compassion (karuna), leave Buddhist scholars and communities divided. [56] Some scholars argue stem cell research is in accordance with the Buddhist tenet of seeking knowledge and ending human suffering. Others feel it violates the principle of not harming others. Finding the balance between these two points relies on the karmic burden of Buddhist morality. In trying to prevent ahimsa towards the embryo, Buddhist scholars suggest that to comply with Buddhist tenets, research cannot be done as the embryo has personhood at the moment of conception and would reincarnate immediately, harming the individual's ability to build their karmic burden. [57] On the other hand, the Bodhisattvas, those considered to be on the path to enlightenment or Nirvana, have given organs and flesh to others to help alleviate grieving and to benefit all. [58] Acceptance varies on applied beliefs and interpretations.

Catholicism does not support embryonic stem cell research, as it entails the creation or destruction of human embryos. This destruction conflicts with the belief in the sanctity of life. For example, in the Old Testament, Genesis describes humanity as being created in God’s image and multiplying on the Earth, referencing the sacred rights to human conception and the purpose of development and life. In the Ten Commandments, the tenet that one should not kill has numerous interpretations where killing could mean murder or shedding of the sanctity of life, demonstrating the high value of human personhood. In other books, the theological conception of when life begins is interpreted as in utero, [59] highlighting the inviolability of life and its formation in vivo, which supports a religious case for accepting such research only to a relatively limited extent, if at all. [60] The Vatican has released ethical directives to help apply a theological basis to modern-day conflicts. The Magisterium of the Church states that “unless there is a moral certainty of not causing harm,” experimentation on fetuses, fertilized cells, stem cells, or embryos constitutes a crime. [61] Such procedures would not respect the human person who exists at these stages, according to Catholicism. Damages to the embryo are considered gravely immoral and illicit. [62] Although the Catholic Church officially opposes abortion, surveys demonstrate that many Catholic people hold pro-choice views, whether due to the context of conception, stage of pregnancy, threat to the mother’s life, or other reasons, demonstrating that practicing members can also accept some but not all tenets. [63]

Some major Jewish denominations, such as the Reform, Conservative, and Reconstructionist movements, are open to supporting ESC use or research as long as it is for saving a life. [64] Within Judaism, the Talmud, or study, gives personhood to the child at birth and emphasizes that life does not begin at conception: [65]

“If she is found pregnant, until the fortieth day it is mere fluid,” [66]

Whereas most religions prioritize the status of human embryos, the Halakah (Jewish religious law) states that to save one life, most other religious laws can be ignored because it is in pursuit of preservation. [67] Stem cell research is accepted due to application of these religious laws.

We recognize that all religions contain subsets and sects. The variety of environmental and cultural differences within religious groups requires further analysis to respect the flexibility of religious thoughts and practices. We make no presumptions that all cultures require notions of autonomy or morality as under the common morality theory, which asserts that a set of universal moral norms shared by all individuals provides moral reasoning and guides ethical decisions. [68] We only wish to show that the interaction with morality varies between cultures and countries.

III.     A Flexible Ethical Approach

The plurality of different moral approaches described above demonstrates that there can be no universally acceptable uniform law for ESC on a global scale. Instead of developing one standard, flexible ethical applications must be continued. We recommend local guidelines that incorporate important cultural and ethical priorities.

While the Declaration of Helsinki is more relevant to people in clinical trials receiving ESC products, in keeping with the tradition of protections for research subjects, consent of the donor is an ethical requirement for ESC donation in many jurisdictions including the US, Canada, and Europe. [69] The Declaration of Helsinki provides a reference point for regulatory standards and could potentially be used as a universal baseline for obtaining consent prior to gamete or embryo donation.

For instance, in Columbia University’s egg donor program for stem cell research, donors followed standard screening protocols and “underwent counseling sessions that included information as to the purpose of oocyte donation for research, what the oocytes would be used for, the risks and benefits of donation, and process of oocyte stimulation” to ensure transparency for consent. [70] The program helped advance stem cell research and provided clear and safe research methods with paid participants. Though paid participation or covering costs of incidental expenses may not be socially acceptable in every culture or context, [71] and creating embryos for ESC research is illegal in many jurisdictions, Columbia’s program was effective because of the clear and honest communications with donors, IRBs, and related stakeholders.  This example demonstrates that cultural acceptance of scientific research and of the idea that an egg or embryo does not have personhood is likely behind societal acceptance of donating eggs for ESC research. As noted, many countries do not permit the creation of embryos for research.

Proper communication and education regarding the process and purpose of stem cell research may bolster comprehension and garner more acceptance. Given the sensitive subject material, a complete consent process can support voluntary participation through trust, understanding, and ethical norms from the cultures and morals participants value. This can be hard for researchers entering countries of different socioeconomic stability, with different languages and different societal values. [72]

An adequate moral foundation in medical ethics is derived from the cultural and religious basis that informs knowledge and actions. [73] Understanding local cultural and religious values and their impact on research could help researchers develop humility and promote inclusion.

IV.     Concerns

Some may argue that if researchers all adhere to one ethics standard, protection will be satisfied across all borders, and the global public will trust researchers. However, defining what needs to be protected and how to define such research standards is very specific to the people to which standards are applied. We suggest that applying one uniform guide cannot accurately protect each individual because we all possess our own perceptions and interpretations of social values. [74] Therefore, the issue of not adjusting to the moral pluralism between peoples in applying one standard of ethics can be resolved by building out ethics models that can be adapted to different cultures and religions.

Other concerns include medical tourism, which may promote health inequities. [75] Some countries may develop and approve products derived from ESC research before others, compromising research ethics or drug approval processes. There are also concerns about the sale of unauthorized stem cell treatments, for example, those without FDA approval in the United States. Countries with robust research infrastructures may be tempted to attract medical tourists, and some customers will have false hopes based on aggressive publicity of unproven treatments. [76]

For example, in China, stem cell clinics can market to foreign clients who are not protected under the regulatory regimes. Companies employ a marketing strategy of “ethically friendly” therapies. Specifically, in the case of Beike, China’s leading stem cell tourism company and sprouting network, ethical oversight of administrators or health bureaus at one site has “the unintended consequence of shifting questionable activities to another node in Beike's diffuse network.” [77] In contrast, Jordan is aware of stem cell research’s potential abuse and its own status as a “health-care hub.” Jordan’s expanded regulations include preserving the interests of individuals in clinical trials and banning private companies from ESC research to preserve transparency and the integrity of research practices. [78]

The social priorities of the community are also a concern. The ISSCR explicitly states that guidelines “should be periodically revised to accommodate scientific advances, new challenges, and evolving social priorities.” [79] The adaptable ethics model extends this consideration further by addressing whether research is warranted given the varying degrees of socioeconomic conditions, political stability, and healthcare accessibilities and limitations. An ethical approach would require discussion about resource allocation and appropriate distribution of funds. [80]

While some religions emphasize the sanctity of life from conception, which may lead to public opposition to ESC research, others encourage ESC research due to its potential for healing and alleviating human pain. Many countries have special regulations that balance local views on embryonic personhood, the benefits of research as individual or societal goods, and the protection of human research subjects. To foster understanding and constructive dialogue, global policy frameworks should prioritize the protection of universal human rights, transparency, and informed consent. In addition to these foundational global policies, we recommend tailoring local guidelines to reflect the diverse cultural and religious perspectives of the populations they govern. Ethics models should be adapted to local populations to effectively establish research protections, growth, and possibilities of stem cell research.

For example, in countries with strong beliefs in the moral sanctity of embryos or heavy religious restrictions, an adaptive model can allow for discussion instead of immediate rejection. In countries with limited individual rights and voice in science policy, an adaptive model ensures cultural, moral, and religious views are taken into consideration, thereby building social inclusion. While this ethical consideration by the government may not give a complete voice to every individual, it will help balance policies and maintain the diverse perspectives of those it affects. Embracing an adaptive ethics model of ESC research promotes open-minded dialogue and respect for the importance of human belief and tradition. By actively engaging with cultural and religious values, researchers can better handle disagreements and promote ethical research practices that benefit each society.

This brief exploration of the religious and cultural differences that impact ESC research reveals the nuances of relative ethics and highlights a need for local policymakers to apply a more intense adaptive model.

[1] Poliwoda, S., Noor, N., Downs, E., Schaaf, A., Cantwell, A., Ganti, L., Kaye, A. D., Mosel, L. I., Carroll, C. B., Viswanath, O., & Urits, I. (2022). Stem cells: a comprehensive review of origins and emerging clinical roles in medical practice.  Orthopedic reviews ,  14 (3), 37498. https://doi.org/10.52965/001c.37498

[2] Poliwoda, S., Noor, N., Downs, E., Schaaf, A., Cantwell, A., Ganti, L., Kaye, A. D., Mosel, L. I., Carroll, C. B., Viswanath, O., & Urits, I. (2022). Stem cells: a comprehensive review of origins and emerging clinical roles in medical practice.  Orthopedic reviews ,  14 (3), 37498. https://doi.org/10.52965/001c.37498

[3] International Society for Stem Cell Research. (2023). Laboratory-based human embryonic stem cell research, embryo research, and related research activities . International Society for Stem Cell Research. https://www.isscr.org/guidelines/blog-post-title-one-ed2td-6fcdk ; Kimmelman, J., Hyun, I., Benvenisty, N.  et al.  Policy: Global standards for stem-cell research.  Nature   533 , 311–313 (2016). https://doi.org/10.1038/533311a

[4] International Society for Stem Cell Research. (2023). Laboratory-based human embryonic stem cell research, embryo research, and related research activities . International Society for Stem Cell Research. https://www.isscr.org/guidelines/blog-post-title-one-ed2td-6fcdk

[5] Concerning the moral philosophies of stem cell research, our paper does not posit a personal moral stance nor delve into the “when” of human life begins. To read further about the philosophical debate, consider the following sources:

Sandel M. J. (2004). Embryo ethics--the moral logic of stem-cell research.  The New England journal of medicine ,  351 (3), 207–209. https://doi.org/10.1056/NEJMp048145 ; George, R. P., & Lee, P. (2020, September 26). Acorns and Embryos . The New Atlantis. https://www.thenewatlantis.com/publications/acorns-and-embryos ; Sagan, A., & Singer, P. (2007). The moral status of stem cells. Metaphilosophy , 38 (2/3), 264–284. http://www.jstor.org/stable/24439776 ; McHugh P. R. (2004). Zygote and "clonote"--the ethical use of embryonic stem cells.  The New England journal of medicine ,  351 (3), 209–211. https://doi.org/10.1056/NEJMp048147 ; Kurjak, A., & Tripalo, A. (2004). The facts and doubts about beginning of the human life and personality.  Bosnian journal of basic medical sciences ,  4 (1), 5–14. https://doi.org/10.17305/bjbms.2004.3453

[6] Vazin, T., & Freed, W. J. (2010). Human embryonic stem cells: derivation, culture, and differentiation: a review.  Restorative neurology and neuroscience ,  28 (4), 589–603. https://doi.org/10.3233/RNN-2010-0543

[7] Socially, at its core, the Western approach to ethics is widely principle-based, autonomy being one of the key factors to ensure a fundamental respect for persons within research. For information regarding autonomy in research, see: Department of Health, Education, and Welfare, & National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1978). The Belmont Report. Ethical principles and guidelines for the protection of human subjects of research.; For a more in-depth review of autonomy within the US, see: Beauchamp, T. L., & Childress, J. F. (1994). Principles of Biomedical Ethics . Oxford University Press.

[8] Sherley v. Sebelius , 644 F.3d 388 (D.C. Cir. 2011), citing 45 C.F.R. 46.204(b) and [42 U.S.C. § 289g(b)]. https://www.cadc.uscourts.gov/internet/opinions.nsf/6c690438a9b43dd685257a64004ebf99/$file/11-5241-1391178.pdf

[9] Stem Cell Research Enhancement Act of 2005, H. R. 810, 109 th Cong. (2001). https://www.govtrack.us/congress/bills/109/hr810/text ; Bush, G. W. (2006, July 19). Message to the House of Representatives . National Archives and Records Administration. https://georgewbush-whitehouse.archives.gov/news/releases/2006/07/20060719-5.html

[10] National Archives and Records Administration. (2009, March 9). Executive order 13505 -- removing barriers to responsible scientific research involving human stem cells . National Archives and Records Administration. https://obamawhitehouse.archives.gov/the-press-office/removing-barriers-responsible-scientific-research-involving-human-stem-cells

[11] Hurlbut, W. B. (2006). Science, Religion, and the Politics of Stem Cells.  Social Research ,  73 (3), 819–834. http://www.jstor.org/stable/40971854

[12] Akpa-Inyang, F., & Chima, S. (2021). South African traditional values and beliefs regarding informed consent and limitations of the principle of respect for autonomy in African communities: a cross-cultural qualitative study. BMC Medical Ethics, 22. https://doi.org/10.1186/s12910-021-00678-4

[13] Source for further reading: Tangwa G. B. (2007). Moral status of embryonic stem cells: perspective of an African villager. Bioethics , 21(8), 449–457. https://doi.org/10.1111/j.1467-8519.2007.00582.x , see also Mnisi, F. M. (2020). An African analysis based on ethics of Ubuntu - are human embryonic stem cell patents morally justifiable? African Insight , 49 (4).

[14] Jecker, N. S., & Atuire, C. (2021). Bioethics in Africa: A contextually enlightened analysis of three cases. Developing World Bioethics , 22 (2), 112–122. https://doi.org/10.1111/dewb.12324

[15] Jecker, N. S., & Atuire, C. (2021). Bioethics in Africa: A contextually enlightened analysis of three cases. Developing World Bioethics, 22(2), 112–122. https://doi.org/10.1111/dewb.12324

[16] Jackson, C. S., & Pepper, M. S. (2013). Opportunities and barriers to establishing a cell therapy programme in South Africa. Stem Cell Research & Therapy, 4, 54. https://doi.org/10.1186/scrt204; Pew Research Center. (2014, May 1). Public health a major priority in African nations. Pew Research Center’s Global Attitudes Project. https://www.pewresearch.org/global/2014/05/01/public-health-a-major-priority-in-african-nations/

[17] Department of Health Republic of South Africa. (2021). Health Research Priorities (revised) for South Africa 2021-2024 . National Health Research Strategy. https://www.health.gov.za/wp-content/uploads/2022/05/National-Health-Research-Priorities-2021-2024.pdf

[18] Oosthuizen, H. (2013). Legal and Ethical Issues in Stem Cell Research in South Africa. In R. Beran (Ed.), Legal and Forensic Medicine. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32338-6_80; see also: Gaobotse, G. (2018). Stem Cell Research in Africa: Legislation and Challenges. Journal of Regenerative Medicine, 7(1). https://doi.org/10.4172/2325-9620.1000142

[19] United States Bureau of Citizenship and Immigration Services. (1998). Tunisia: Information on the status of Christian conversions in Tunisia . UNHCR Web Archive. https://webarchive.archive.unhcr.org/20230522142618/https://www.refworld.org/docid/3df0be9a2.html

[20] Gaobotse, G. (2018). Stem Cell Research in Africa: Legislation and Challenges. Journal of Regenerative Medicine, 7(1). https://doi.org/10.4172/2325-9620.1000142

[21] Kooli, C. (2020). Review of assisted reproduction techniques, laws, and regulations in Muslim countries. Middle East Fertility Society Journal, 24, 8. https://doi.org/10.1186/s43043-019-0011-0; Gaobotse, G. (2018). Stem Cell Research in Africa: Legislation and Challenges. Journal of Regenerative Medicine, 7(1). https://doi.org/10.4172/2325-9620.1000142

[22] Pang M. C. (1999). Protective truthfulness: the Chinese way of safeguarding patients in informed treatment decisions. Journal of medical ethics , 25(3), 247–253. https://doi.org/10.1136/jme.25.3.247

[23] Wang, L., Wang, F., & Zhang, W. (2021). Bioethics in China’s biosecurity law: Forms, effects, and unsettled issues. Journal of Law and the Biosciences, 8(1). https://doi.org/10.1093/jlb/lsab019

[24] Wang, Y., Xue, Y., & Guo, H. D. (2022). Intervention effects of traditional Chinese medicine on stem cell therapy of myocardial infarction.  Frontiers in pharmacology ,  13 , 1013740. https://doi.org/10.3389/fphar.2022.1013740

[25] Li, X.-T., & Zhao, J. (2012). Chapter 4: An Approach to the Nature of Qi in TCM- Qi and Bioenergy. In Recent Advances in Theories and Practice of Chinese Medicine (p. 79). InTech.

[26] Luo, D., Xu, Z., Wang, Z., & Ran, W. (2021). China's Stem Cell Research and Knowledge Levels of Medical Practitioners and Students.  Stem cells international ,  2021 , 6667743. https://doi.org/10.1155/2021/6667743

[27] Luo, D., Xu, Z., Wang, Z., & Ran, W. (2021). China's Stem Cell Research and Knowledge Levels of Medical Practitioners and Students.  Stem cells international ,  2021 , 6667743. https://doi.org/10.1155/2021/6667743

[28] Zhang, J. Y. (2017). Lost in translation? accountability and governance of Clinical Stem Cell Research in China. Regenerative Medicine , 12 (6), 647–656. https://doi.org/10.2217/rme-2017-0035

[29] Wang, L., Wang, F., & Zhang, W. (2021). Bioethics in China’s biosecurity law: Forms, effects, and unsettled issues. Journal of Law and the Biosciences, 8(1). https://doi.org/10.1093/jlb/lsab019

[30] Chen, H., Wei, T., Wang, H.  et al.  Association of China’s two-child policy with changes in number of births and birth defects rate, 2008–2017.  BMC Public Health   22 , 434 (2022). https://doi.org/10.1186/s12889-022-12839-0

[31] Azuma, K. Regulatory Landscape of Regenerative Medicine in Japan.  Curr Stem Cell Rep   1 , 118–128 (2015). https://doi.org/10.1007/s40778-015-0012-6

[32] Harris, R. (2005, May 19). Researchers Report Advance in Stem Cell Production . NPR. https://www.npr.org/2005/05/19/4658967/researchers-report-advance-in-stem-cell-production

[33] Park, S. (2012). South Korea steps up stem-cell work.  Nature . https://doi.org/10.1038/nature.2012.10565

[34] Resnik, D. B., Shamoo, A. E., & Krimsky, S. (2006). Fraudulent human embryonic stem cell research in South Korea: lessons learned. Accountability in Research, 13(1), 101–109. https://doi.org/10.1080/08989620600634193

[35] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: interviews with researchers from Saudi Arabia. BMC medical ethics, 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6

[36] Association for the Advancement of Blood and Biotherapies.  https://www.aabb.org/regulatory-and-advocacy/regulatory-affairs/regulatory-for-cellular-therapies/international-competent-authorities/saudi-arabia

[37] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: Interviews with researchers from Saudi Arabia.  BMC medical ethics ,  21 (1), 35. https://doi.org/10.1186/s12910-020-00482-6

[38] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: Interviews with researchers from Saudi Arabia. BMC medical ethics , 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6

Culturally, autonomy practices follow a relational autonomy approach based on a paternalistic deontological health care model. The adherence to strict international research policies and religious pillars within the regulatory environment is a strong foundation for research ethics. However, there is a need to develop locally targeted ethics approaches for research, as called for in Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: interviews with researchers from Saudi Arabia. BMC Medical Ethics, 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6; such a decision-making approach may help inform a research decision model. For more on clinical cultural autonomy approaches, see: Alabdullah, Y. Y., Alzaid, E., Alsaad, S., Alamri, T., Alolayan, S. W., Bah, S., & Aljoudi, A. S. (2022). Autonomy and paternalism in shared decision-making in a Saudi Arabian tertiary hospital: A cross-sectional study. Developing World Bioethics, 23(3), 260–268. https://doi.org/10.1111/dewb.12355; Bukhari, A. A. (2017). Universal Principles of Bioethics and Patient Rights in Saudi Arabia (Doctoral dissertation, Duquesne University). https://dsc.duq.edu/etd/124; Ladha, S., Nakshawani, S. A., Alzaidy, A., & Tarab, B. (2023, October 26). Islam and Bioethics: What We All Need to Know. Columbia University School of Professional Studies. https://sps.columbia.edu/events/islam-and-bioethics-what-we-all-need-know

[39] Ababneh, M. A., Al-Azzam, S. I., Alzoubi, K., Rababa’h, A., & Al Demour, S. (2021). Understanding and attitudes of the Jordanian public about clinical research ethics.  Research Ethics ,  17 (2), 228-241.  https://doi.org/10.1177/1747016120966779

[40] Ababneh, M. A., Al-Azzam, S. I., Alzoubi, K., Rababa’h, A., & Al Demour, S. (2021). Understanding and attitudes of the Jordanian public about clinical research ethics.  Research Ethics ,  17 (2), 228-241.  https://doi.org/10.1177/1747016120966779

[41] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East.  Nature  510, 189. https://doi.org/10.1038/510189a

[42] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East.  Nature  510, 189. https://doi.org/10.1038/510189a

[43] The EU’s definition of autonomy relates to the capacity for creating ideas, moral insight, decisions, and actions without constraint, personal responsibility, and informed consent. However, the EU views autonomy as not completely able to protect individuals and depends on other principles, such as dignity, which “expresses the intrinsic worth and fundamental equality of all human beings.” Rendtorff, J.D., Kemp, P. (2019). Four Ethical Principles in European Bioethics and Biolaw: Autonomy, Dignity, Integrity and Vulnerability. In: Valdés, E., Lecaros, J. (eds) Biolaw and Policy in the Twenty-First Century. International Library of Ethics, Law, and the New Medicine, vol 78. Springer, Cham. https://doi.org/10.1007/978-3-030-05903-3_3

[44] Council of Europe. Convention for the protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (ETS No. 164) https://www.coe.int/en/web/conventions/full-list?module=treaty-detail&treatynum=164 (forbidding the creation of embryos for research purposes only, and suggests embryos in vitro have protections.); Also see Drabiak-Syed B. K. (2013). New President, New Human Embryonic Stem Cell Research Policy: Comparative International Perspectives and Embryonic Stem Cell Research Laws in France.  Biotechnology Law Report ,  32 (6), 349–356. https://doi.org/10.1089/blr.2013.9865

[45] Rendtorff, J.D., Kemp, P. (2019). Four Ethical Principles in European Bioethics and Biolaw: Autonomy, Dignity, Integrity and Vulnerability. In: Valdés, E., Lecaros, J. (eds) Biolaw and Policy in the Twenty-First Century. International Library of Ethics, Law, and the New Medicine, vol 78. Springer, Cham. https://doi.org/10.1007/978-3-030-05903-3_3

[46] Tomuschat, C., Currie, D. P., Kommers, D. P., & Kerr, R. (Trans.). (1949, May 23). Basic law for the Federal Republic of Germany. https://www.btg-bestellservice.de/pdf/80201000.pdf

[47] Regulation of Stem Cell Research in Germany . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-germany

[48] Regulation of Stem Cell Research in Finland . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-finland

[49] Regulation of Stem Cell Research in Spain . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-spain

[50] Some sources to consider regarding ethics models or regulatory oversights of other cultures not covered:

Kara, M. A. (2007). Applicability of the principle of respect for autonomy: the perspective of Turkey. Journal of Medical Ethics, 33(11), 627–630. https://doi.org/10.1136/jme.2006.017400

Ugarte, O. N., & Acioly, M. A. (2014). The principle of autonomy in Brazil: one needs to discuss it ...  Revista do Colegio Brasileiro de Cirurgioes ,  41 (5), 374–377. https://doi.org/10.1590/0100-69912014005013

Bharadwaj, A., & Glasner, P. E. (2012). Local cells, global science: The rise of embryonic stem cell research in India . Routledge.

For further research on specific European countries regarding ethical and regulatory framework, we recommend this database: Regulation of Stem Cell Research in Europe . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-europe   

[51] Klitzman, R. (2006). Complications of culture in obtaining informed consent. The American Journal of Bioethics, 6(1), 20–21. https://doi.org/10.1080/15265160500394671 see also: Ekmekci, P. E., & Arda, B. (2017). Interculturalism and Informed Consent: Respecting Cultural Differences without Breaching Human Rights.  Cultura (Iasi, Romania) ,  14 (2), 159–172.; For why trust is important in research, see also: Gray, B., Hilder, J., Macdonald, L., Tester, R., Dowell, A., & Stubbe, M. (2017). Are research ethics guidelines culturally competent?  Research Ethics ,  13 (1), 23-41.  https://doi.org/10.1177/1747016116650235

[52] The Qur'an  (M. Khattab, Trans.). (1965). Al-Mu’minun, 23: 12-14. https://quran.com/23

[53] Lenfest, Y. (2017, December 8). Islam and the beginning of human life . Bill of Health. https://blog.petrieflom.law.harvard.edu/2017/12/08/islam-and-the-beginning-of-human-life/

[54] Aksoy, S. (2005). Making regulations and drawing up legislation in Islamic countries under conditions of uncertainty, with special reference to embryonic stem cell research. Journal of Medical Ethics, 31, 399–403; see also: Mahmoud, A. (2022). Islamic Bioethics: National Regulations and Guidelines of Human Stem Cell Research in the Muslim World (Master's thesis, Chapman University). https://doi.org/10.36837/chapman.000386

[55] Rashid, R. (2022). When does Ensoulment occur in the Human Foetus. Journal of the British Islamic Medical Association , 12 (4). ISSN 2634 8071. https://www.jbima.com/wp-content/uploads/2023/01/2-Ethics-3_-Ensoulment_Rafaqat.pdf.

[56] Sivaraman, M. & Noor, S. (2017). Ethics of embryonic stem cell research according to Buddhist, Hindu, Catholic, and Islamic religions: perspective from Malaysia. Asian Biomedicine,8(1) 43-52.  https://doi.org/10.5372/1905-7415.0801.260

[57] Jafari, M., Elahi, F., Ozyurt, S. & Wrigley, T. (2007). 4. Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller & J. Tobis (Ed.),  Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues  (pp. 79-94). Berkeley: University of California Press.  https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[58] Lecso, P. A. (1991). The Bodhisattva Ideal and Organ Transplantation.  Journal of Religion and Health ,  30 (1), 35–41. http://www.jstor.org/stable/27510629 ; Bodhisattva, S. (n.d.). The Key of Becoming a Bodhisattva . A Guide to the Bodhisattva Way of Life. http://www.buddhism.org/Sutras/2/BodhisattvaWay.htm

[59] There is no explicit religious reference to when life begins or how to conduct research that interacts with the concept of life. However, these are relevant verses pertaining to how the fetus is viewed. (King James Bible. (1999). Oxford University Press. (Original work published 1769))

Jeremiah 1:5 “Before I formed thee in the belly I knew thee; and before thou camest forth out of the womb I sanctified thee…”

In prophet Jeremiah’s insight, God set him apart as a person known before childbirth, a theme carried within the Psalm of David.

Psalm 139: 13-14 “…Thou hast covered me in my mother's womb. I will praise thee; for I am fearfully and wonderfully made…”

These verses demonstrate David’s respect for God as an entity that would know of all man’s thoughts and doings even before birth.

[60] It should be noted that abortion is likewise not supported.

[61] The Vatican. (1987, February 22). Instruction on Respect for Human Life in Its Origin and on the Dignity of Procreation Replies to Certain Questions of the Day . Congregation For the Doctrine of the Faith. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_19870222_respect-for-human-life_en.html

[62] The Vatican. (2000, August 25). Declaration On the Production and the Scientific and Therapeutic Use of Human Embryonic Stem Cells . Pontifical Academy for Life. https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pa_acdlife_doc_20000824_cellule-staminali_en.html ; Ohara, N. (2003). Ethical Consideration of Experimentation Using Living Human Embryos: The Catholic Church’s Position on Human Embryonic Stem Cell Research and Human Cloning. Department of Obstetrics and Gynecology . Retrieved from https://article.imrpress.com/journal/CEOG/30/2-3/pii/2003018/77-81.pdf.

[63] Smith, G. A. (2022, May 23). Like Americans overall, Catholics vary in their abortion views, with regular mass attenders most opposed . Pew Research Center. https://www.pewresearch.org/short-reads/2022/05/23/like-americans-overall-catholics-vary-in-their-abortion-views-with-regular-mass-attenders-most-opposed/

[64] Rosner, F., & Reichman, E. (2002). Embryonic stem cell research in Jewish law. Journal of halacha and contemporary society , (43), 49–68.; Jafari, M., Elahi, F., Ozyurt, S. & Wrigley, T. (2007). 4. Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller & J. Tobis (Ed.),  Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues  (pp. 79-94). Berkeley: University of California Press.  https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[65] Schenker J. G. (2008). The beginning of human life: status of embryo. Perspectives in Halakha (Jewish Religious Law).  Journal of assisted reproduction and genetics ,  25 (6), 271–276. https://doi.org/10.1007/s10815-008-9221-6

[66] Ruttenberg, D. (2020, May 5). The Torah of Abortion Justice (annotated source sheet) . Sefaria. https://www.sefaria.org/sheets/234926.7?lang=bi&with=all&lang2=en

[67] Jafari, M., Elahi, F., Ozyurt, S. & Wrigley, T. (2007). 4. Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller & J. Tobis (Ed.),  Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues  (pp. 79-94). Berkeley: University of California Press.  https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[68] Gert, B. (2007). Common morality: Deciding what to do . Oxford Univ. Press.

[69] World Medical Association (2013). World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA , 310(20), 2191–2194. https://doi.org/10.1001/jama.2013.281053 Declaration of Helsinki – WMA – The World Medical Association .; see also: National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979).  The Belmont report: Ethical principles and guidelines for the protection of human subjects of research . U.S. Department of Health and Human Services.  https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

[70] Zakarin Safier, L., Gumer, A., Kline, M., Egli, D., & Sauer, M. V. (2018). Compensating human subjects providing oocytes for stem cell research: 9-year experience and outcomes. Journal of Assisted Reproduction and Genetics, 35(7), 1219–1225. https://doi.org/10.1007/s10815-018-1171-z; see also: Riordan, N. H., & Paz Rodríguez, J. (2021). Addressing concerns regarding associated costs, transparency, and integrity of research in recent stem cell trial. Stem Cells Translational Medicine, 10(12), 1715–1716. https://doi.org/10.1002/sctm.21-0234

[71] Klitzman, R., & Sauer, M. V. (2009). Payment of egg donors in stem cell research in the USA.  Reproductive biomedicine online ,  18 (5), 603–608. https://doi.org/10.1016/s1472-6483(10)60002-8

[72] Krosin, M. T., Klitzman, R., Levin, B., Cheng, J., & Ranney, M. L. (2006). Problems in comprehension of informed consent in rural and peri-urban Mali, West Africa.  Clinical trials (London, England) ,  3 (3), 306–313. https://doi.org/10.1191/1740774506cn150oa

[73] Veatch, Robert M.  Hippocratic, Religious, and Secular Medical Ethics: The Points of Conflict . Georgetown University Press, 2012.

[74] Msoroka, M. S., & Amundsen, D. (2018). One size fits not quite all: Universal research ethics with diversity.  Research Ethics ,  14 (3), 1-17.  https://doi.org/10.1177/1747016117739939

[75] Pirzada, N. (2022). The Expansion of Turkey’s Medical Tourism Industry.  Voices in Bioethics ,  8 . https://doi.org/10.52214/vib.v8i.9894

[76] Stem Cell Tourism: False Hope for Real Money . Harvard Stem Cell Institute (HSCI). (2023). https://hsci.harvard.edu/stem-cell-tourism , See also: Bissassar, M. (2017). Transnational Stem Cell Tourism: An ethical analysis.  Voices in Bioethics ,  3 . https://doi.org/10.7916/vib.v3i.6027

[77] Song, P. (2011). The proliferation of stem cell therapies in post-Mao China: problematizing ethical regulation. New Genetics and Society, 30(2), 141–153. https://doi.org/10.1080/14636778.2011.574375

[78] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East.  Nature  510, 189. https://doi.org/10.1038/510189a

[79] International Society for Stem Cell Research. (2024). Standards in stem cell research . International Society for Stem Cell Research. https://www.isscr.org/guidelines/5-standards-in-stem-cell-research

[80] Benjamin, R. (2013). People’s science bodies and rights on the Stem Cell Frontier . Stanford University Press.

Mifrah Hayath

SM Candidate, Harvard Medical School; MS Biotechnology, Johns Hopkins University

Olivia Bowers

MS Bioethics, Columbia University (Disclosure: affiliated with Voices in Bioethics)

Article Details

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.
