2. An email database makes reminders convenient.
3. An enthusiastic target demographic obviates the need for incentives.
4. Supports a larger sample size.
5. Non-respondents and respondents must be matched.
Approval from the Institutional Review Board should be obtained where required, as recommended in the CHERRIES checklist. However, the rules for approval differ from country to country, so local regulations must be checked and followed. For instance, in India, the Indian Council of Medical Research released an article in 2017 stating that the concept of broad consent has been updated, defined as “consent for an unspecified range of future research subject to a few contents and/or process restrictions.” It also discusses “the flexibility of Indian ethics committees to review a multicentric study proposal for research involving low or minimal risk, survey or studies using anonymized samples or data or low or minimal risk public health research.” The reporting of approvals applied for and received, and of the procedure of written, informed consent followed, must be clear and transparent. 10, 19
The use of incentives in surveys is also an ethical concern. 20 Incentives can be monetary or non-monetary. Monetary incentives are usually discouraged, as they may attract the wrong population tempted by the monetary benefit. However, monetary incentives have been observed to give surveys greater traction, although this is yet to be proven. Monetary incentives are provided not only as cash or cheques but also in the form of free articles, discount coupons, phone cards, e-money or cashback value. 21 These methods, though tempting, should be used sparingly; if used, their use must be disclosed and justified in the report. Non-monetary incentives, such as a meeting with a famous personality or access to restricted, authorized areas, can also help pique the interest of the respondents.
As mentioned earlier, the design of a survey reflects the skill of the investigator curating it. 22 Survey builders can be used to design an efficient survey. These offer the majority of the basic features needed to construct a survey, free of charge. Surveys can therefore be designed from scratch, using pre-designed templates, or by using previous survey designs as inspiration. Taking surveys can be made convenient by using the various aids available ( Table 1 ). Moreover, the investigator should be mindful of the unintended response effects of the ordering and context of survey questions. 23
Surveys using clear, unambiguous, simple and well-articulated language record precise answers. 24 A well-designed survey accounts for the culture, language and convenience of the target demographic. The age, region, country and occupation of the target population are also considered before constructing a survey. Consistency is maintained in the terms used in the survey, and abbreviations are avoided so that respondents have a clear understanding of the question being answered. Universal or previously indexed abbreviations maintain the unambiguity of the survey.
Surveys that begin with broad, easy and non-specific questions, rather than sensitive, tedious and highly specific ones, receive more accurate and complete answers. 25 Placing the relatively tedious and long questions, which require the respondent to do some nit-picking, at the end improves the response rate: the respondent is not discouraged at the outset and is motivated to finish the survey. All questions should provide a non-response option, and all questions should be made mandatory to increase the completeness of the survey. Questions can be framed in a close-ended or open-ended fashion. However, close-ended questions are easier to analyze and less tedious for the respondent to answer, and should therefore form the main component of a survey. Open-ended questions have minimal use, as they are tedious, take time to answer and require fine articulation of one's thoughts. Their minimal use is also advocated because interpreting such answers demands considerable time and energy, owing to the diverse nature of the responses, which is difficult to guarantee with large sample sizes. 26 However, whenever the closed choices do not cover all possibilities, an open answer choice must be added. 27, 28
Screening questions, which require respondents to meet certain criteria to gain access to the survey, can be used where inclusion criteria need to be established to maintain the authenticity of the target demographic. Similarly, a logic function can be used to apply an exclusion. This allows a clean and clear record of responses and makes the investigator's job easier. Respondents may or may not be given the option to return to a previous page or question to alter their answers, as per the investigator's preference.
The range of responses received to questions directed towards the feelings or opinions of people can be reduced by using slider scales or a Likert scale. 29, 30 For questions with multiple answers, check boxes are efficient. When a large number of answers are possible, dropdown menus reduce the arduousness. 31 Matrix scales can be used for questions requiring grading or having a similar range of answers for multiple conditions. Maximum respondent participation and complete survey responses can be ensured by reducing the survey time. Quiz or weighted modes allow the respondent to shuffle between questions, allow scoring of quizzes, and can be used to complement other weighted scoring systems. 32 A flowchart depicting a survey construct is presented as Fig. 1 .
Validation testing, though tedious and meticulous, is a worthy effort, as the accuracy of a survey is determined by its validity. Validity is indicative of the appropriateness of the survey sample and the specificity of the questions, such that the data acquired are streamlined to answer the questions being posed or to test a hypothesis. 33, 34 Face validation determines whether the questions are constructed in a manner that collects the necessary data. Content validation determines the relation of the topic being addressed, and its related areas, with the questions being asked. Internal validation makes sure that the questions being posed are directed towards the outcome of the survey. Finally, test–retest validation determines the stability of questions over a period of time by testing the questionnaire twice with a time interval between the two tests. For surveys assessing respondents' knowledge of a certain subject, it is advised to have a panel of experts undertake the validation process. 2, 35
If the questions in the survey are posed in a manner that elicits the same or similar responses from respondents irrespective of the language or construction of the question, the survey is said to be reliable. Reliability is thereby a marker of the consistency of the survey. This is of considerable importance in knowledge-based research where recall ability is tested by making the survey available to the same participants at regular intervals. It can also be used to maintain the authenticity of the survey by varying the construction of the questions.
A cover letter is the primary means of communication with the respondent, with the intent of introducing the respondent to the survey. A cover letter should include the purpose of the survey and details of those conducting it, including contact details in case clarifications are desired. It should also clearly depict the action required of the respondent. Data anonymization may be crucial to many respondents and is their right; this should be respected, with a clear description of the data handling process provided while disseminating the survey. A good cover letter is the key to building trust with the respondent population and can be the forerunner to better response rates. Imparting a sense of purpose is vital to ideationally incentivize the respondent population. 36, 37 Adding the credentials of the team conducting the survey may further aid the process. Advance intimation of the survey prepares the respondents and improves their compliance.
The design of a cover letter needs much attention. It should be captivating, clear and precise, and should use a vocabulary and language specific to the target population of the survey. Active voice should be used to make a greater impact. Crowding of details must be avoided. Italics, bold fonts or underlining may be used to highlight critical information. The tone ought to be polite, respectful, and grateful in advance. The use of capital letters is best avoided, as it is a surrogate for shouting in verbal speech and may impart a bad taste.
The dates of the survey may be intimated, so that the respondents may prepare themselves to take it at a time conducive to them. While emailing a closed group in a convenience-sampled survey, using the name of the addressee may impart a customized experience, enhance trust building and possibly improve compliance. Appropriate use of salutations like Mr./Ms./Mrs. may be considered. Various portals such as SurveyMonkey allow researchers to save an address list on the website. These addresses may then be reached using an embedded survey link from a verified email address to minimize bounced emails.
The body of the cover letter must be short and crisp, and should not exceed 2–3 paragraphs under ideal circumstances. Earnest efforts to protect confidentiality may go a long way in enhancing response rates. 38 While it is enticing to provide incentives to enhance response, these are best avoided. 38, 39 In cases where indirect incentives are offered, such as provision of the results of the survey, these should be clearly stated in the cover letter. Lastly, a formal closing note with the signature of the lead investigator is welcome. 38, 40
Well-constructed questionnaires are essentially the backbone of successful survey-based studies. With this type of research, the primary concern is the adequate promotion and dissemination of the questionnaire to the target population. The selection of the sample population therefore needs to be done carefully, with minimal flaws. The method of conducting the survey is an essential determinant of the response rate observed. 41 Broadly, surveys are of two types: closed and open. The method of conducting the survey must be determined according to the sample population.
Many doctors use their own patients as the target demographic, as this improves compliance. However, this is effective only for surveys targeting a geographically specific, fairly common disease, as the sample size needs to be adequate. Response bias can be identified from the data collected from respondent and non-respondent groups. 42, 43 It is therefore more efficacious to choose a target population whose database of baseline characteristics is already known. For surveys focused on patients with a rare group of diseases, online surveys or e-surveys can be conducted. Data can also be gathered from multiple national organizations and societies all over the world. 44, 45 Computer-generated random selection can be applied to these data to choose participants, who can then be reached using emails or social media platforms like WhatsApp and LinkedIn. In both these scenarios, closed questionnaires can be used. These have restricted access, either through a URL link or through e-mail.
For surveys targeting an issue faced by a larger demographic (e.g. pandemics like COVID-19, flu vaccines and socio-political scenarios), open surveys seem the more viable option, as they can be easily accessed by the majority of the public and ensure a large number of responses, thereby increasing the accuracy of the study. Survey length should be optimal to avoid poor response rates. 25, 46
Uniform distribution of the survey ensures an equitable opportunity for the entire target population to access the questionnaire and participate in it. While deciding on the target demographic, communities should be studied, and the process of “lurking” is sometimes practiced. Multiple sampling methods are available ( Fig. 1 ). 47
Distribution of the survey to the target demographic can be done using emails. Even though e-mails reach a large proportion of the target population, an unknown sender could be blocked, making a personal or previously used email address preferable for correspondence. Adding a cover letter along with the invite adds a personal touch and is hence advisable. Some platforms allow the sender to link the survey portal with the sender's email after verifying it. Noteworthily, despite repeated email reminders, personal communication over the phone or instant messaging improved responses in the authors' experience. 48, 49
Distribution of the survey over other social media platforms (SMPs, namely WhatsApp, Facebook, Instagram, Twitter, LinkedIn etc.) is also practiced. 50, 51, 52 Distributing the survey on every available platform ensures maximal outreach. 53 Other smartphone apps can also be used for wider survey dissemination. 50, 54 It is important to be mindful of the target population while choosing the platform for dissemination, as some SMPs such as WhatsApp are more popular in India, while others like WeChat are used more widely in China, and similarly Facebook among the European population. Professional accounts or popular social accounts can be used to promote and increase the outreach of a survey. 55 Incentives such as internet giveaways or meet-and-greets with a favorite social media influencer have been used to motivate people to participate.
However, social media platforms do not allow calculation of the denominator of the target population, making it impossible to determine an accurate response rate. Moreover, this method of collecting data may result in a respondent bias inherent to a community that has a greater online presence. 43 The inability to gather the demographics of the non-respondents (in a bid to identify and prove that they were no different from respondents) can be another challenge in convenience sampling, unlike in cohort-based studies.
Lastly, manual filling of surveys over the telephone, by narrating the questions and answer choices to the respondents, is used as a last-ditch resort to achieve the desired response rate. 56 Studies reveal that surveys released on Mondays, Fridays, and Sundays receive more traction. Also, reminders sent at regular intervals help receive more responses. Data collection can be improved in collaborative research by syncing surveys to fill out electronic case record forms. 57, 58, 59
Data anonymity refers to the protection of data received as part of the survey. These data must be stored and handled in accordance with patient privacy rights and privacy protection laws applicable to surveys. Ethically, the data must be received in a single source file handled by one individual. Sharing or publishing these data on any public platform is considered a breach of the patient's privacy. 11 In convenience-sampled surveys conducted by e-mailing a predesignated group, the email addresses must remain confidential, as inadvertently sharing them as supplementary data in the manuscript may amount to a violation of ethical standards. 60 A completely anonymized e-survey discourages collection of Internet protocol addresses in addition to other patient details such as names and emails.
Data anonymity gives the respondent the confidence to be candid and answer the survey without inhibitions. This is especially apparent in minority groups or communities facing societal bias (sex workers, transgender people, lower-caste communities, women). Data anonymity gives the respondents/participants respite regarding their privacy. As the respondents play a primary role in data collection, data anonymity plays a vital role in survey-based research.
The data collected from the survey responses are compiled in a .xls, .csv or .xlsx format by the survey tool itself. The data can be viewed during the survey duration or after its completion. To ensure data anonymity, a minimal number of people should have access to these results. The data should then be sifted through to invalidate false, incorrect or incomplete entries. The relevant and complete data should then be analyzed qualitatively and quantitatively, as per the aim of the study. Statistical aids like pie charts, graphs and data tables can be used to report the relevant data.
Analysis of the responses is done after the time made available to answer the survey has elapsed. This ensures that statistical and hypothetical conclusions are established after careful study of the entire database. Complete and incomplete answers can be analyzed separately or together, depending on the study. Survey-based studies require careful consideration of various aspects of the survey, such as the time required to complete it. 61 Cut-off points in the completion time allow authentic answers to be recorded and analyzed, as opposed to disingenuously completed questionnaires. Methods of handling incomplete questionnaires and atypical timestamps must be pre-decided to maintain consistency. Since surveys are sometimes the only way to reach people, especially during the COVID-19 pandemic, disingenuous survey practices must not be followed, as the results may later be used to form a preliminary hypothesis.
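As a minimal, purely illustrative sketch of the sifting and timestamp screening described above, the step can be scripted with pandas; the file name, column names, consent field and 60-second cut-off below are hypothetical assumptions, not part of the methodology above, and any real cut-off should be pre-decided in the study protocol.

```python
# Minimal sketch of cleaning a survey export; column names and the 60-second
# completion cut-off are hypothetical and must be pre-decided in the protocol.
import pandas as pd

responses = pd.read_csv("survey_export.csv", parse_dates=["start_time", "end_time"])

# Keep only consenting respondents with all mandatory questions answered.
mandatory = ["q1", "q2", "q3"]
clean = responses[responses["consent"] == "yes"].dropna(subset=mandatory)

# Flag atypical timestamps: completions under 60 seconds are dropped here.
clean = clean.assign(duration_s=(clean["end_time"] - clean["start_time"]).dt.total_seconds())
clean = clean[clean["duration_s"] >= 60]

print(f"{len(clean)} of {len(responses)} responses retained for analysis")
```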
Reporting the survey-based research is by far the most challenging part of this method. A well-reported survey-based study is a comprehensive report covering all aspects of conducting survey-based research.
The design of the survey, mentioning the target demographic, sample size, language, type and methodology of the survey and the inclusion-exclusion criteria followed, comprises the descriptive report of a survey-based study. Details regarding the conduct of pilot testing, validation testing, reliability testing and user-interface testing add value to the report and support the data and analysis. Measures taken to prevent bias and ensure consistency and precision are key inclusions in a report. The report usually mentions approvals received, if any, along with the written, informed consent taken from the participants to use the data received for research purposes. It also gives a detailed account of the different distribution and promotional methods followed.
A detailed account of the data input and collection methods, the tools used to maintain the anonymity of the participants and the steps taken to ensure singular participation from individual respondents indicates a well-structured report. Descriptive information on the website used, the visitors received and the external factors influencing the survey is included. Detailed reporting of the post-survey analysis, including the number of analysts involved, any data cleaning required, the statistical analysis done and the probable hypothesis concluded, is a key feature of well-reported survey-based research. Methods used for statistical corrections, if any, should be included in the report. The EQUATOR network has two checklists, the “Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist, that can be utilized to construct a well-framed report. 62, 63 Importantly, self-reporting of biases and errors avoids carrying forward false hypotheses as the basis of more advanced research. References should be cited using standard recommendations, guided by the journal specifications. 64
Surveys can be published as original articles, brief reports or letters to the editor. Interestingly, most modern journals do not actively mention surveys in their instructions to authors. Thus, depending on the study design (cohort, case-control, interview or survey-based study), the authors may choose the article category. It is prudent to mention the type of study in the title. Titles, albeit not too long, should not exceed 10–12 words, and may feature the type of study design after a semicolon, for clarity and greater citation potential.
While the choice of journal is largely based on the study subject and left to the authors' discretion, it may be worthwhile to explore trends in a journal's archive before proceeding with submission. 65 Although the article format is similar across most journals, specific rules relevant to the target journal should be followed when drafting the article structure before submission.
Articles that are removed from publication after being released are retracted articles. These are usually retracted when new discrepancies come to light regarding the methodology followed, plagiarism, incorrect statistical analysis, inappropriate authorship, fake peer review, fake reporting and the like. 66 A substantial increase in such papers has been noticed. 67
We carried out a search for “surveys” on Retraction Watch on 31st August 2020 and received 81 search results published between November 2006 and June 2020, of which 3 were duplicates. Of the 78 remaining results, 37 (47.4%) articles were surveys, 23 (29.4%) were of unknown type and 18 (23.2%) reported other types of research ( Supplementary Table 1 ). Fig. 2 gives a detailed description of the causes of retraction of the surveys we found and their geographic distribution.
A good survey ought to be designed with a clear objective, its design being precise and focused, with close-ended questions and all possibilities included. Use of rating scales, multiple-choice questions and checkboxes, and maintaining a logical question sequence, engages the respondent while simplifying data entry and analysis for the investigator. Conducting pilot testing is vital to identify and rectify deficiencies in the survey design and answer choices. The target demographic should be well defined, and invitations sent accordingly, with periodic reminders as appropriate. While reporting the survey, transparency must be maintained in the methods employed, and shortcomings and biases must be clearly stated, to prevent advocating an invalid hypothesis.
Disclosure: The authors have no potential conflicts of interest to disclose.
Author Contributions:
Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:
1. Define the population and sample.
2. Decide on the type of survey.
3. Design the survey questions.
4. Distribute the survey and collect responses.
5. Analyse the survey results.
6. Write up the survey results.
Surveys are a flexible method of data collection that can be used in many different types of research.
Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.
Common uses of survey research include social research, market research, health research, politics, and psychology.
Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.
Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.
The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow: for example, the entire population of Brazil, or every university student in the UK.
Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.
It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.
The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
There are two main types of survey: questionnaires and interviews.
Which type you choose depends on the sample size and location, as well as the focus of the research.
Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).
Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.
If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.
Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.
Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees' full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.
Next, you need to decide which questions you will ask and how you will ask them. It's important to consider the type and content of the questions, how they are phrased, and the order in which they appear.
There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.
Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include multiple-choice options, rating or Likert-scale items, drop-down menus, checkboxes, or rank-order questions.
Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.
Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.
Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.
To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.
When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.
In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.
Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.
The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.
If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.
If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.
Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.
When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.
There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
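For readers working in Python rather than SPSS or Stata, a minimal sketch of the same kind of descriptive and inferential step might look like the following; the file and column names are hypothetical placeholders, not part of the guide above.

```python
# Illustrative sketch: frequencies for one closed-ended question and a
# chi-square test of independence between two of them.
import pandas as pd
from scipy.stats import chi2_contingency

data = pd.read_csv("survey_clean.csv")

# Descriptive statistics: counts and percentages for a single question.
counts = data["satisfaction"].value_counts()
print((counts / counts.sum() * 100).round(1))

# Inferential statistics: is satisfaction independent of respondent group?
table = pd.crosstab(data["group"], data["satisfaction"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```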
Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.
In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.
Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.
A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.
To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.
Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don't have an even distribution.
Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
The type of data determines what statistical tests you should use to analyse your data.
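As a small sketch only (the item names, the 1–5 coding and the choice of which item is reverse-coded are hypothetical assumptions), combining Likert items into an overall scale score might look like this:

```python
# Sketch of scoring a Likert scale: five hypothetical items coded 1-5, with
# item_3 assumed to be negatively worded and therefore reverse-coded.
import pandas as pd

df = pd.read_csv("likert_responses.csv")
items = ["item_1", "item_2", "item_3", "item_4", "item_5"]

df["item_3"] = 6 - df["item_3"]            # reverse-code the negative item
df["scale_score"] = df[items].sum(axis=1)  # overall score, often treated as interval data

print(df["scale_score"].describe())
```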
A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.
Surveys offer companies a ton of quantitative and qualitative feedback on their customer experience.
Those presenting the survey findings need to do so in a readable, succinct way. This is where a survey report becomes handy. A survey report can be shared with a company's stakeholders, leaders, other integral departments (marketing, PR, advertising, and sales), and various teammates.
A survey report pulls any key data and important findings to create a structured story around various issues. The report also offers actionable steps to resolve the issues at hand and helps companies proactively prepare for the future.
The guide below will discuss the five necessary steps to creating a condensed and thorough survey report that audience members will find interesting and useful. For a quick example of a survey results report, see our visual below.
Like a customer journey map, a survey report needs to have a structured plan. Readers want a formatted structure they can easily follow, jumping from slide to slide or page to page without feeling lost. Below is the ideal survey report structure:
1. Title Page
The title page should include the following: A short and engaging title, the publication and/or release date, names of those responsible for the report, and a one or two-sentence description.
2. Table of Contents
A table of contents gives the reader a quick overview of the report and allows them to quickly locate sections.
3. Executive Summary
The executive summary is one of the most important sections of the report. It summarizes the report’s main findings and proposes the next steps. Many people only read the executive summary.
4. Background
The background explains the impetus and story behind the survey. It states the hypothesis, question, or issues at hand for why the research was conducted and how the results plan to be used.
5. Survey Method
This section reviews who the target audience was and who the survey actually included. It also reviews how surveyors contacted respondents and the process of data collection. This is often a more technical and detailed section.
6. Survey Results
The survey results section is the meat and potatoes of the report. It provides an overarching theme of the report and underscores any statistical findings or significant takeaways.
7. Appendices
Similar to the methodology section, the appendices section will contain technical data and information about the data collection and analysis process.
A ton of numbers and statistics aren’t appealing or friendly to the average person. It’s best to present the data in a visually appealing way via graphs and charts.
Interactive dashboards are also a nice option if the report is being sent digitally. These dashboards allow readers to quickly glance over findings and to play with variables to see changes in the data.
Our interactive dashboards are great for organizing data and sending customized reports to tell the right story.
Reports can show data in several ways. However, different visualizations work best for different data sets. The suggestions below go over the benefits of each data visualization. Note, these suggestions aren’t rigid. There’s usually flexibility for how to display data.
1. Bar Graph
A bar graph is best used when one data point is an outlier compared to the other data clusters. It is also useful to display negative numbers.
2. Line Graph
A line graph shows progress or trends over time. Reports should use it when there’s a continuous data set or multiple categories of data to portray.
3. Pie Chart
Pie charts contain static numbers that are a percentage of a whole. They’re best for displaying comparisons. Note, the total sum of all segments should equal 100%.
4. Map
A map displays geographically related data. A survey report should show the proportion of data in each region, for instance, the specific location that the majority of a company's users are coming from.
5. Gauge
A gauge usually depicts a single value, such as a company's average Net Promoter Score (NPS). Use gauges to highlight extremely relevant figures.
6. Scatter Plot
A scatter plot, also known as a scattergram chart, portrays the relationship between two variables. It’s best when lots of data points exist. A scatter plot helps to reveal distribution trends and outliers.
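As a quick sketch of two of the chart types above, matplotlib can produce a bar graph and a pie chart in a few lines; the answer labels and counts below are invented purely for illustration.

```python
# Illustrative only: a bar graph and a pie chart of one survey question, using
# invented counts. Real reports would load counts from the survey export.
import matplotlib.pyplot as plt

answers = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]
counts = [48, 102, 35, 15]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: useful when one answer stands out from the rest.
ax1.bar(answers, counts)
ax1.set_ylabel("Responses")
ax1.set_title("Overall satisfaction")

# Pie chart: parts of a whole, with segments summing to 100%.
ax2.pie(counts, labels=answers, autopct="%1.0f%%")
ax2.set_title("Share of responses")

plt.tight_layout()
plt.show()
```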
A survey report can quickly become too detailed. A good way to avoid overwhelming the audience is to keep the copy brief and simple. Each sentence within the survey report should give the reader new knowledge.
Tabular copy (text in graphs, charts, and tables) needs to be extremely short. The main purpose of the text in the data visualizations and dashboards is to label the data or to serve as a title, subheader, or axis label.
Survey report writers do have more leeway when it comes to headlines. Report headlines should be short but catchy. Think of headlines as tweets that grab consumers' attention.
Example 1: Company X increased its Net Promoter Score by 15 points last quarter, causing their Customer Lifetime Value to jump by an average of $281.
Example 2: The CX department lowered its Customer Effort Score (CES) by an average of 36% after the group training.
Lastly, before submitting or presenting a report, make sure it’s proofread and correctly punctuated. Ideally, at least two or more individuals will have reviewed and edited the piece before the final draft is submitted or shared with others.
The executive summary combined with the background section should create an engaging story. Remember, the survey report addresses an issue, takes in feedback, and then suggests solutions.
Here’s an example of how HelloFresh used surveys and survey reports to create an appealing story:
Part of conducting a thorough survey and creating an engrossing report means selecting the best type of survey report. Companies must choose a report type that makes the best use of customer feedback, based on the problems or issues they're facing.
The list below will help companies determine which type of report will give them the most reliable results.
A real-time summary report is typically an interactive dashboard that gives live feedback via charts, statistics, and graphs from the collected survey data.
Best for: Both qualitative and quantitative surveys as they report in real-time.
Open-ended text reports use text analytics and sentiment analysis tools to identify patterns or themes from customers’ survey responses.
Best for: Qualitative surveys that contain open-ended responses.
A report scheduler automatically sends a survey result or report to other users or departments within an organization. Think of them as check-ins. Typically, a CX department uses schedulers to prompt reports for long-running surveys that require check-ins and monitoring.
Best for: Quantitative surveys. A report scheduler can generate overarching quick reports from the collected data.
A gap analysis is best used to analyze two scale-type questions. For instance, if a customer had two interactions with different customer representatives, a customer satisfaction survey could ask them to rate the satisfaction of each. The gap analysis report would then compare satisfaction levels between the two of them.
Best for: Both qualitative and quantitative surveys. A gap analysis report will almost always be a quantitative survey since it should ask the respondent to give a numbered rating. A gap analysis report can become qualitative if a second open-ended question is asked, such as: what made you rate (Employee Name) higher or lower?
A spotlight report homes in on either one respondent or a target group of respondents. These responses can then be compared to the overall survey responses.
Best for: Both qualitative and quantitative surveys can use spotlight reports, since they highlight the responses of a small group or individual.
A trend analysis report will show significant data currents or tendencies from the past few weeks, months, or even years. A trend analysis report can help businesses modify their surveys by reviewing response rates. Or they can be used to make large overarching themes based on customer feedback.
Best for: Both qualitative and quantitative surveys. Typically, trend analysis reports will offer more insight into quantitative surveys since they’re best for reviewing a large amount of data.
However, with a text analytics tool, qualitative data trends can also be reported on, since the tool automatically creates lots of sentiment data points.
The visual below depicts ways companies best visualize their data in a survey results report.
Both quantitative and qualitative surveys bring in a plethora of insight and data about the customers, specifically, the customer experience. Survey results reports are vital for companies who want to succinctly share findings with both internal and external stakeholders.
Are your survey results sitting in a file on your computer waiting to be analyzed? Or maybe there’s a stack of filled out forms somewhere in your office?
It’s time to get that survey data ready to present to stakeholders or members of your team.
Visme has all the tools you need to visualize your survey data in a report, infographic, printable document or an online, interactive design.
In this article, we’ll help you understand what a survey is and how to conduct one. We’ll also show you how to analyze survey data and present it with visuals.
Let’s get started.
A survey is a study that involves asking a group of people all the same questions. It’s a research activity that aims to collect data about a particular topic.
A survey usually consists of at least one question and can be as long as tens of questions. The length of your survey depends on the nature of the research.
Surveys can be categorized into three main types.
When it comes to survey results, your data can be either qualitative or quantitative.
The survey results infographic below is from a quantitative survey where participants simply chose their favorites from a list. Customize it to use for your own data.
Surveys are conducted in different ways, depending on the needs of the surveyor and proximity of participants. While some surveys are conducted face-to-face, others are carried out via telephone, or self-administered digitally or on paper.
Surveys can be conducted for lots of different reasons.
To conduct a successful survey, you need the right tools. For face-to-face surveys, you’ll need a group of people who will visit participants, enough printed survey copies or a way to record spoken answers.
For telephone surveys, you’ll need a group of people who can call participants over the phone. You’ll also need a computer program or printed survey question forms where the surveyor can record the data.
For online surveys, you can use a number of different tools. Below are our favorites:
With Visme, not only can you present your survey results, you can also create a survey! With our Typeform integration, creating a survey in a Visme project is as easy as inserting a new chart.
When you present survey data results with visuals, the trends and conclusions are easier and faster to understand.
But before you can do that, you’ll first need to analyze your results. The analysis process depends on the type of survey conducted and how the data was collected.
For example, simple online quantitative surveys can be fed directly to a spreadsheet, while qualitative surveys conducted face-to-face will need considerably more data entry work.
According to Thematic, these are the 5 steps you need to follow for best analysis results.
As you can see, the analysis starts even before creating the survey. This helps make sure that you are asking the right questions.
The data must then be organized into a filterable spreadsheet or table. The most common survey software available for analysis work is Microsoft Excel or Google Sheets.
To analyze more complex data, another great tool is Tableau, a powerful analysis and visualization platform. In fact, for large survey data, we suggest you use a mix of Tableau visualizations embedded into your Visme project along with our signature data widgets.
Now that we’ve looked at all the steps involved in conducting a survey, collecting data and analyzing it, let’s find out how to present your survey results with visuals.
Presenting survey results visually makes it easier to spot trends, arrive at conclusions and put the data to practical use. It’s essential to know how to present data to share insights with stakeholders and team members to get your message across.
You can easily make your survey data look beautiful with the help of Visme’s graph maker, data widgets and powerful integrations.
Check out the video below to learn more about how you can customize data and present it using data visualization tools in Visme.
Aside from data visualization, Visme lets you create interactive reports, presentations, infographics and other designs to help you better present survey results.
To give you more ideas, here are 9 unique ways to present survey results in Visme.
While many times you’ll put together a document, one-pager or infographic to visualize survey results, sometimes a presentation is the perfect format.
Create a survey presentation like the one below to share your findings with your team.
A multi-page report is a great way to print out a hard copy of your survey results, and formally share it with your team, management or stakeholders.
Here’s a survey report template in Visme you can customize.
You can also share interactive versions of your report online using Visme. After you finish designing your survey results report, simply hit publish to generate a shareable URL.
The best way to present survey results is with a chart or graph. The type of chart you choose depends on the nature of your data. Below, we’ll take a look at two common types of charts you can use to visualize and present your survey data.
If you had a smaller survey and really want to visualize one main result, this bar graph survey results template is the perfect solution.
Insert your own information so you can quickly visualize the largest bars, giving you more insight into your audience.
To visualize parts of a whole, a pie chart can really help to differentiate the answers that your audience gave. Look to see which responses were most popular to help you make more informed choices for your brand.
Incorporating some of your survey questions into your report helps your audience understand your results better. Take it a step further by adding relevant icons to help visualize those questions.
Customize this template with your survey information before presenting it to your team.
Another great way to use icons in your survey results report is with pictographs, or icon arrays. Pictographs use symbols like icons and shapes to convey meaning.
Use icon arrays to visualize sections of a whole. For example, you can use icons of people to visualize population data. Need to visualize the difference between cat lovers and dog lovers? Use an array with cat icons in different colors.
Here’s an example of a survey results report that uses pictographs to visualize psychographic data among a population.
One more way to present survey results is with maps. This is a great solution for visualizing geographic data. In Visme, you have several options to help you create interactive maps.
Get creative showcasing your results by adding graphics and illustrations that help represent your data. In the template below, we’ve used a human body to help visualize the survey results.
If you want to show your survey results data in a snackable format, try using data widgets. These are perfect for showing percentages and quantitative comparisons in many different styles.
The best way to use them is to visualize one question of the survey at a time. For example, use one widget for the percentage of yes answers and another for the no answers.
In this template, you can easily customize multiple widgets to visualize different kinds of results and responses.
Last but not least, you have the third-party embed option. With this tool, you can embed any Tableau visualization into a Visme project.
This is a great option if your data is more complex, or if you are a Tableau user who just wants to create better presentations with Visme.
To embed a Tableau into Visme, open the Media tab on the left-hand sidebar, then click on Embed Online Content. From the drop-down, select HTML.
Copy the HTML from your Tableau visualization and paste it into Visme. Now your Tableau is part of a complete survey results report made with Visme!
To get started with visualizing your survey results, log in to your Visme account and choose one of the survey results templates.
If you don’t have a Visme account, creating one is easy and free . Simply register with your email and you’re good to go. Leave a comment below if you have any questions!
Just started using a new survey tool? Collected all of your survey data? Great. Confused about what to do next and how to achieve the optimal survey analysis? Don't be.
If you’ve ever stared at an Excel sheet filled with thousands of rows of survey data and not known what to do, you’re not alone. Use this post as a guide to lead the way to execute best practice survey analysis.
Customer surveys can have a huge impact on your organization. Whether that impact is positive or negative depends on how good your survey is (no pressure). Has your survey been designed soundly? Does your survey analysis deliver clear, actionable insights? And do you present your results to the right decision makers? Only if the answer to all those questions is yes can new opportunities and innovative strategies be created.
Survey analysis refers to the process of analyzing your results from customer (and other) surveys. This can, for example, be Net Promoter Score surveys that you send a few times a year to your customers.
Data on its own means nothing without proper analysis. Thus, you need to make sure your survey analysis produces meaningful results that help make decisions that ultimately improve your business.
There are multiple ways of doing this, both manual and through software, which we’ll get to later.
Survey data exists as numerical and text data, but for the purpose of this post, we will focus on text responses.
Closed-ended questions can be answered by a simple one-word answer, such as “yes” or “no”. They often consist of pre-populated answers for the respondent to choose from; while an open-ended question asks the respondent to provide feedback in their own words.
Closed-ended questions come in many forms such as multiple choice, drop down and ranking questions.
In this case, they don’t allow the respondent to provide original or spontaneous answers but only choose from a list of pre-selected options. Closed-ended questions are the equivalent of being offered milk or orange juice to drink instead of being asked: “What would you like to drink?”
These types of questions are designed to create data that are easily quantifiable, and easy to code, so they’re final in their nature. They also allow researchers to categorize respondents into groups based on the options they have selected.
An open-ended question is the opposite of a closed-ended question. It’s designed to produce a meaningful answer and create rich, qualitative data using the subject’s own knowledge and feelings.
Open-ended questions often begin with words such as “Why” and “How”, or sentences such as “Tell me about…”. Open-ended questions also tend to be more objective and less leading than closed-ended questions.
How do you find meaningful answers and insights in survey responses?
Go back to your main research questions which you outlined before you started your survey. Don’t have any? You should have set some out when you set a goal for your survey. (More on survey planning below).
A top research question for a business conference could be: “How did the attendees rate the conference overall?”.
The percentages in this example show how many respondents answered a particular way, or rather, how many people gave each answer as a proportion of the number of people who answered the question.
Thus, 60% of your respondents (1098 of those surveyed) are planning to return. This is the majority of people, even though almost a third are not planning to come back. Maybe there's something you can do to convince the 11% who are not sure yet!
At the start of your survey, you will have set up goals for what you wanted to achieve and exactly which subgroups you wanted to analyze and compare against each other.
This is the time to go back to those and check how they (for example the subgroups; enterprises, small businesses, self-employed) answered, with regards to attending again next year.
For this, you can cross-tabulate, and show the answers per question for each subgroup.
Here, you can see that most of the enterprises and the self-employed must have liked the conference, as they want to come back, but you might have missed the mark with the small businesses.
By looking at other questions and interrogating the data further, you can hopefully figure out why and address this, so you have more of the small businesses coming back next year.
You can also filter your results based on specific types of respondents, or subgroups. So just look at how one subgroup (women, men) answered the question without comparing.
Then you apply the cross-tab to look at different attendee types, for example female enterprise attendees, female self-employed attendees, and so on. Just remember that your sample size will be smaller every time you slice the data this way, so check that you still have a valid enough sample size.
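As an illustrative sketch of the cross-tabulation and subgroup filtering just described (the file and column names "business_type", "gender" and "will_return" are hypothetical), the same steps can be done with pandas:

```python
# Sketch of cross-tabulation and subgroup filtering on hypothetical columns.
import pandas as pd

df = pd.read_csv("conference_survey.csv")

# Answers per subgroup, shown as row percentages.
print((pd.crosstab(df["business_type"], df["will_return"], normalize="index") * 100).round(1))

# Filter to one subgroup and cross-tabulate within it; note the shrinking n.
women = df[df["gender"] == "female"]
print(f"n = {len(women)} female respondents")
print((pd.crosstab(women["business_type"], women["will_return"], normalize="index") * 100).round(1))
```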
Look at your survey questions and really interrogate them. The following are some questions we use for this:
For example, look at questions 1 and 2. The difference between the two is that the first one returns the volume, whereas the second looks at the volume relating to a particular satisfaction score. If something is very common, it may not affect the score. But if, for example, your Detractors in an NPS survey mention something a lot, that particular theme will be affecting the score in a negative way. These two questions should therefore be considered hand in hand.
You can also compare different slices of the data, such as two different time periods, or two groups of respondents. Or, look at a particular issue or a theme, and ask questions such as “have customers noticed our efforts in solving a particular issue?”, if you’re conducting a continuous survey over multiple months or years.
Analyzing results is a whole topic in itself, and below are our best tips. For best practice on how to draw conclusions, see our post How to get meaningful, actionable insights from customer feedback.
Make sure you incorporate these tips in your analysis, to ensure your survey results are successful.
To always make sure you have a sufficient sample size, consider how many people you need to survey in order to get an accurate result.
You most often will not be able to, and shouldn't for practicality reasons, collect data from all of the people you want to speak to. So you'd take a sample (or subset) of the people of interest and learn what you can from that sample.
Clearly, if you are working with a larger sample size, your results will be more reliable as they will often be more precise. A larger sample size does often equate to needing a bigger budget though.
The way to get around this issue is to perform a sample size calculation before starting a survey. Then, you can have a large enough sample size to draw meaningful conclusions, without wasting time and money on sampling more than you really need.
Consider first how much margin of error you're comfortable working with, as results from a sample are always an estimate of how the overall population thinks and behaves.
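As a rough sketch of such a calculation (assuming the common defaults of a 95% confidence level, a 5% margin of error and maximum variability, none of which are prescribed above), Cochran's sample-size formula with a finite-population correction can be coded in a few lines:

```python
# Sketch of Cochran's sample-size formula with a finite-population correction.
# 95% confidence (z = 1.96), 5% margin of error and p = 0.5 are assumptions.
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite-population correction

print(sample_size(10_000))  # about 370 respondents needed
print(sample_size(500))     # about 218 for a small population
```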
How do you know you can “trust” your survey analysis, i.e. that you can use the answers with confidence as a basis for your decision making? In this regard, the “significant” in statistical significance refers to how accurate your data is. Or rather, that your results are not based on pure chance but are, in fact, representative. If your data has statistical significance, it means that to a large extent, the survey results are meaningful.
It also shows that your respondents “look like” the total population of people about whom you want to draw conclusions.
When presenting to your stakeholders, it’s imperative to highlight the insights derived from your data, rather than the data itself.
You'll do yourself a disservice if you only present the raw information from the data. Don't wait for your team to create insights out of the data; you'll get a better response and better feedback if you are the one who demonstrates the insights from the start, going beyond percentages and data breakouts.
Don't stop at the survey data alone. When presenting your insights to your stakeholders or board, it's always helpful to draw on different data points, which might even include personal experiences. If you have personal experience with the topic, use it! If you have qualitative research that supports the data, use it!
So, if you can overlap qualitative research findings with your quantitative data, do so.
Just be sure to let your audience know when you are showing them findings from statistically significant research and when it comes from a different source.
When you analyze open-ended responses, you need to code them. There are three broad approaches to coding open-ended questions; here's a taster:
Whichever way you code text, the goal is to determine which category each comment falls under. In the example below, any comment about friends or family falls into the second category. Then, you can easily visualize the results as a bar chart.
Code frames can also be combined with sentiment.
Below, we insert a positive and a negative layer under the customer service theme.
So, next, you apply this code frame. Below are snippets from a manual coding job commissioned from an agency.
In the first snippet, there's a code frame: code 1 is "Applied courses" and code 2 is "Degree in English". In the second snippet, you can see the actual coded data, where each comment has up to 5 codes from that code frame. Data presented this way is quite difficult to analyze in Excel, but much easier to work with in dedicated software.
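As a toy illustration of applying a code frame programmatically, here is a short Python sketch; the code numbers, keywords, and comments are invented for the example, and a real coding job would need a much richer frame (ideally handled by software that understands synonyms and context).

```python
# Hypothetical code frame: code number -> (theme, trigger keywords)
CODE_FRAME = {
    1: ("Applied courses", ["applied", "practical", "hands-on"]),
    2: ("Degree in English", ["english", "degree"]),
    3: ("Customer service - positive", ["helpful", "friendly"]),
    4: ("Customer service - negative", ["rude", "unhelpful"]),
}

def apply_code_frame(comment, max_codes=5):
    """Return up to max_codes code numbers whose keywords appear in the comment."""
    text = comment.lower()
    codes = [code for code, (_, keywords) in CODE_FRAME.items()
             if any(kw in text for kw in keywords)]
    return codes[:max_codes]

comments = [
    "The applied courses were great and the staff were friendly",
    "I wanted a degree in English but the advisor was rude",
]
for c in comments:
    print(apply_code_frame(c), "-", c)
```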
Traditional survey analysis is highly manual, error-prone, and subject to human bias. You may think of it as the most economical option, but in the long run it often ends up costing more (due to the time it takes to set up and run the analysis, the human resources required, and the errors or bias that lead to inaccurate analysis and faulty interpretation of the data). So, the question is: should you analyze your results manually or with software?
When you're dealing with large amounts of data, it is often impractical to manage it all manually, whether because there's simply too much of it, because you want to avoid bias, or because it's a long-term study. In those cases, there is no real option but to use software.
On a large scale, software is ideal for analyzing survey results as you can automate the process by analyzing large amounts of data simultaneously. Plus, software has the added benefit of additional tools that add value.
Below we give just a few examples of types of software you could use to analyze survey data. Of course, these are just a few examples to illustrate the types of functions you could employ.
As an example, with Thematic's software solution you can identify trends in sentiment and particular themes. Because it is a software tool, it also helps avoid bias: it doesn't over-emphasize or ignore specific comments to reach unquantified conclusions.
Below is an example we’ve taken from the tool, to visualize some of Thematic’s features.
Our visualization tools show far more detail than word clouds, which are more typically used.
You can see two different slices of the data: the blue bars are United Airlines' 1- and 2-star reviews, and the orange bars are the 4- and 5-star reviews. The biggest issue mentioned most frequently in the 1-2 star reviews is immediately visible: flight delays. The 4- and 5-star reviews, by contrast, frequently praise the friendliness of the airline.
You can find more features, such as Thematic’s Impact tool, Comparison, Dashboard and Themes Editor here.
If you're a DIY analyzer, there's quite a bit you can do in Excel. Clearly, you do not have the sophisticated features of an online software tool, but for simple tasks it does the trick. You can count the different types of feedback (responses) in the survey, calculate percentages of the different responses, and generate a survey report with the calculated results. For a technical overview, see this article.
You can also build your own text analytics solution, and rather quickly.
The following is an excerpt from a blog written by Alyona Medelyan, PhD in Natural Language Processing & Machine Learning.
As she mentions, you can type in a formula, like this one, in Excel to categorize comments into “Billing”, “Pricing” and “Ease of use”:
It can take less than 10 minutes to create this, and the result is so encouraging! But wait…
Various issues can easily crop up with this approach, as the image below shows:
Out of 7 comments, only 3 were categorized correctly here. The "Billing" comment is actually about "Price", and three other comments missed additional themes. Would you bet your customer insights on something that's at best 50% accurate?
Developed by QSR International, NVivo is a tool where you can store, organize, categorize and analyze your data and also create visualizations. NVivo lets you store and sort data within the platform, automatically sort sentiment, themes and attributes, and exchange data with SPSS for further statistical analysis. There's a transcription tool for quick transcription of voice data.
It’s a no-frills online tool, great for academics and researchers.
Interpris is another tool from QSR International, where you can import and store free-text data directly from platforms such as SurveyMonkey and keep all your data in one place. It has numerous features, for example automatically detecting and categorizing themes.
Favoured by government agencies and communities, it’s good for employee engagement, public opinion and community engagement surveys.
Other tools worth mentioning (for survey analysis but not open-ended questions) are SurveyMonkey, Tableau and DataCracker.
There are numerous tools on the market, and they all have different features and benefits. Choosing a tool that is right for you will depend on your needs, the amount of data, the time you have for your project and, of course, budget. The important part to get right is to choose a tool that is reliable, provides you with quick and easy analysis, and is flexible enough to adapt to your needs.
An idea is to check the list of existing clients of the product, which is often listed on their website. Crucially, you’ll want to test the tool, or at the least, get a demo from the sales team, ideally using your own data so that you can use the time to gather new insights.
Good surveys start with smart survey design. Firstly, you need to plan for survey design success. Here are a few tips:
1. Keep it short.
Only include questions that you are actually going to use. You might think there are lots of questions that seem useful, but they can actually negatively affect your survey results. Another reason is that often we ask redundant questions that don’t contribute to the main problem we want to solve. The survey can be as short as three questions.
To avoid enforcing your own assumptions, use open-ended questions first. Often, we start with a few checkboxes or lists, which can be intimidating for survey respondents. An open-ended question feels more inviting and warmer – it makes people feel like you want to hear what they have to say and actually starts a conversation. Open-ended questions give you more insightful answers; closed questions are easier to respond to and easier to analyze, but they do not create rich insights.
The best approach is to use a mix of both question types, as answering a variety of question types is also more engaging for respondents.
Your surveys will reveal what areas in your business need extra support or what creates bottlenecks in your service. Use your surveys as a way of presenting solutions to your audience and getting direct feedback on those solutions in a more consultative way.
It’s important to think about the timing of your survey. Take into account when your audience is most likely to respond to your survey and give them the opportunity to do it at their leisure, at the time that suits them.
It’s crucial to challenge your assumptions, as it’s very tempting to make assumptions about why things are the way they are. There is usually more than meets the eye about a person’s preferences and background which can affect the scenario.
Having multiple survey writers can be helpful: people reading each other's work and testing the questions helps address the fact that most questions can be interpreted in more than one way.
When you're choosing your survey questions, make each one really count. Only use those that can make a difference to your end outcomes.
Respondents want to know that their responses count, are reviewed, and make a difference. As an incentive, you can share the results with the participants, in the form of a benchmark or a measurement that you then report back to them.
Always think about what customers (or survey respondents) want and what’s in it for them. Many businesses don’t actually think about this when they send out their surveys.
If you can nail the “what’s in it for me”, you automatically solve many of the possible issues for the survey, such as whether the respondents have enough incentive or not, or if the survey is consistent enough.
For more pointers on how to design your survey for success, check out our blog on 4 Steps to Customer Survey Design – Everything You Need to Know .
Surveys are a special research tool with strengths, weaknesses, and a language all of their own. There are many different steps to designing and conducting a survey, and survey researchers have specific ways of describing what they do.
This handout, based on an annual workshop offered by the Program on Survey Research at Harvard, is geared toward undergraduate honors thesis writers using survey data.
Obtaining customer feedback is difficult. You need strong survey questions that effectively derive customer insights. Not to mention a distribution system that shares the survey with the right customers at the right time. However, survey data doesn't just sort and analyze itself. You need a team dedicated to sifting through survey results and highlighting key trends and behaviors for your marketing, sales, and customer service teams. In this post, we'll discuss not only how to analyze survey results, but also how to present your findings to the rest of your organization.
1. Understand the four measurement levels.
Before analyzing data, you should understand the four levels of measurement. These levels determine how survey questions should be measured and what statistical analysis should be performed. The four measurement levels are nominal scales, ordinal scales, interval scales, and ratio scales.
Nominal scales classify data without any quantitative value, similar to labels. An example of a nominal scale is, "Select your car's brand from the list below." The choices have no relationship to each other. Due to the lack of numerical significance, you can only keep track of how many respondents chose each option and which option was selected the most.
Ordinal scales are used to depict the order of values. For this scale, there's a quantitative value because one rank is higher than another. An example of an ordinal scale is, "Rank the reasons for using your laptop." You can analyze both mode and median from this type of scale, and ordinal scales can be analyzed through cross-tabulation analysis .
Interval scales depict both the order and difference between values. These scales have quantitative value because data intervals remain equivalent along the scale, but there's no true zero point. An example of an interval scale is in an IQ test. You can analyze mode, median, and mean from this type of scale and analyze the data through ANOVA , t-tests , and correlation analyses . ANOVA tests the significance of survey results, while t-tests and correlation analyses determine if datasets are related.
Ratio scales depict the order and difference between values, but unlike interval scales, they do have a true zero point. With ratio scales, there's quantitative value because the absence of an attribute can still provide information. For example, a ratio scale could be, "Select the average amount of money you spend online shopping." You can analyze mode, median, and mean with this type of scale and ratio scales can be analyzed through t-tests, ANOVA, and correlation analyses as well.
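To make the link between scale type and analysis concrete, here is a small pandas/SciPy sketch; the data and the grouping variable are invented purely to show which statistic suits which scale.

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "car_brand": ["Toyota", "Ford", "Toyota", "BMW", "Ford", "Toyota"],      # nominal
    "satisfaction": [4, 2, 5, 3, 2, 4],                                       # ordinal (1-5)
    "monthly_spend": [120.0, 40.0, 210.0, 80.0, 35.0, 150.0],                 # ratio
    "segment": ["new", "returning", "new", "returning", "returning", "new"],
})

# Nominal: frequencies / mode only
print(df["car_brand"].value_counts())

# Ordinal: the median is meaningful, the mean is debatable
print("median satisfaction:", df["satisfaction"].median())

# Interval / ratio: means plus a t-test comparing two segments
new = df.loc[df["segment"] == "new", "monthly_spend"]
returning = df.loc[df["segment"] == "returning", "monthly_spend"]
t_stat, p_value = stats.ttest_ind(new, returning, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```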
Once you understand how survey questions are analyzed, you should take note of the overarching survey question(s) that you're trying to solve. Perhaps, it's "How do respondents rate our brand?"
Then, look at survey questions that answer this research question, such as "How likely are you to recommend our brand to others?" Segmenting your survey questions will isolate data that are relevant to your goals.
Additionally, it's important to ask both close-ended and open-ended questions.
A close-ended survey question gives a limited set of answers. Respondents can't explain their answer and they can only choose from pre-determined options. These questions could be yes or no, multiple-choice, checkboxes, dropdown, or a scale question. Asking a variety of questions is important to get the best data.
An open-ended survey question will ask the respondent to explain their opinion. For example, in an NPS survey, you'll ask how likely a customer is to recommend your brand. After that, you might consider asking customers to explain their choice. This could be something like "Why or why wouldn't you recommend our product to your friends/family?"
Quantitative data is valuable because it uses statistics to draw conclusions. While qualitative data can bring more interesting insights about a topic, this information is subjective, making it harder to analyze. Quantitative data, however, comes from close-ended questions which can be converted into a numeric value. Once data is quantified, it's much easier to compare results and identify trends in customer behavior .
It's best to start with quantitative data when performing a survey analysis. That's because quantitative data can help you better understand your qualitative data. For example, if 60% of customers say they're unhappy with your product, you can focus your attention on negative reviews about user experience. This can help you identify roadblocks in the customer journey and correct any pain points that are causing churn.
If you analyze all of your responses in one group, you won't get entirely accurate information. Respondents who aren't your ideal customers can overrun your data and skew survey results. Instead, if you segment responses using cross-tabulation, you can analyze how your target audience responded to your questions.
Cross-tabulation records the relationships between variables. It compares two sets of data within one chart. This reveals specific insights based on your participants' responses to different questions. For example, you may be curious about customer advocacy among your customers based in Boston, MA. You can use cross-tabulation to see how many respondents said they were from Boston and said they would recommend your brand.
By pulling multiple variables into one chart, we can narrow down survey results to a specific group of responses. That way, you know your data is only considering your target audience.
Below is an example of a cross-tabulation chart. It records respondents' favorite baseball teams and what city they reside in.
If the statistical significance or p-value for a data point is equal to or lower than 0.05, it has moderate statistical significance since the probability for error is less than 5%. If the p-value is lower than 0.01, that means it has high statistical significance because the probability for error is less than 1%.
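One common way to obtain such a p-value for survey data is a chi-square test on a cross-tabulation. The sketch below uses invented counts (city versus "would recommend"), so the numbers are purely illustrative.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented cross-tab: rows = city, columns = would recommend the brand?
observed = pd.DataFrame(
    {"Yes": [72, 45], "No": [28, 55]},
    index=["Boston", "Chicago"],
)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

if p_value <= 0.01:
    print("High statistical significance (error probability below 1%)")
elif p_value <= 0.05:
    print("Moderate statistical significance (error probability below 5%)")
else:
    print("Not statistically significant at the 5% level")
```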
Another important aspect of survey analysis is knowing whether the conclusions you're drawing are accurate. For instance, let's say we observed a correlation between ice cream sales and car thefts in Boston. Over a month, as ice cream sales increased so did reports of stolen cars. While this data may suggest a link between these variables, we know that there's probably no relationship.
Just because the two are correlated doesn't mean one causes the other. In cases like these, there's typically a third variable — the independent variable — that influences the two dependent variables. In this case, it's temperature. As the temperature increases, more people buy ice cream. Additionally, more people leave their homes and go out, which leads to more opportunities for crime.
While this is an extreme example, you never want to draw a conclusion that's inaccurate or insufficient. Analyze all the data before assuming what influences a customer to think, feel, or act a certain way.
While current data is good for keeping you updated, it should be compared to data you've collected in the past. If you know 33% of respondents said they would recommend your brand, is that better or worse than last year? How about last quarter?
If this is your first year analyzing data, make these results the benchmark for your next analysis. Compare future results to this record and track changes over quarters, months, years, or whatever interval you prefer. You can even track data for specific subgroups to see if their experiences improve with your initiatives.
Now that you've gathered and analyzed all of your data, the next step is to share it with coworkers, customers, and other stakeholders. However, presentation is key in helping others understand the insights you're trying to explain.
The next section will explain how to present your survey results and share important customer data with the rest of your organization.
Graphs and charts are visually appealing ways to share data. Not only are the colors and patterns easy on the eyes, but data is often easier to understand when shared through a visual medium. However, it's important to choose a graph that highlights your results in a relevant way.
This Canva report template lets the data speak for itself. The minimal portrait layout offers plenty of negative space around the content so that it can breathe. Bold numbers and percentages can remain or be omitted depending on the needs you have for each page. One of the rare gems of this template is its ability to balance large, clear images that don't crowd out the important written information on the page. Use this template for hybrid text-visual designs.
This presentation template makes a great research report template due to its clean lines, contrasting graphic elements, and ample room for visuals. The headers in this template virtually jump off the page to grab the readers' attention. There aren't many ways to present quantitative data using this template example, but it works well for qualitative survey reports like focus groups or product design studies where original images will be discussed.
How to present survey results using infographics.
How can you present survey data in a way that won’t bore your audience to tears?
Well, we all know that unique visuals like infographics can make charts and graphs more engaging. Survey data is easily translated into graphs and charts, making survey results and infographics the perfect marriage!
So without further ado, let’s get into everything you need to know to make a survey results infographic .
First up, let's kick things off by checking out some survey results templates that match up with different types of data. After that, I'll guide you through creating eye-catching survey results infographics, spicing up your results with some handy tips.
Visualizing survey data effectively means using different types of charts for different types of survey results (i.e. binary, rating scale, multiple choice, single choice, or demographic results).
If your survey questions offer two binary options (for example, “yes” and “no”), a pie chart is the simplest go-to option.
Using pies for binary results is pretty self-explanatory. Basically, just use a single pie slice to highlight the proportion of “Yes” responses compared to “No” responses. For the “Yes” responses, use a brighter, more saturated color and start the segment at 12 o’clock on the pie chart:
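If you are building the chart in code rather than in a template, a matplotlib version of this advice might look like the sketch below; the 58/42 split is made up.

```python
import matplotlib.pyplot as plt

yes, no = 58, 42  # made-up binary results (percent)

fig, ax = plt.subplots()
ax.pie(
    [yes, no],
    labels=["Yes", "No"],
    colors=["#1f77b4", "#d3d3d3"],   # brighter, saturated color for "Yes"
    startangle=90,                   # start the "Yes" slice at 12 o'clock
    counterclock=False,              # sweep clockwise
    autopct="%1.0f%%",
)
ax.set_title("Would you attend again?")
plt.show()
```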
If you want to compare the response rates of multiple groups, skip the pies and go for a single bar chart. A bunch of aligned bars are much easier to compare than multiple pie charts. Don’t forget to label each bar with its percentage for clarity:
For a fun alternative that’s less information-dense, you can split up the bars to make a sort of modified 100% stacked bar chart. This frees up some space to add better labels for both the “Yes” responses and the “No” responses.
Or, forget about the extra notes and let the data speak for itself. Use a standard 100% stacked bar chart, color-coded to contrast the different responses, and sorted for readability.
In a rating scale question, survey takers are offered a spectrum of possible answers and are asked to select an answer along that spectrum.
This type of question is often found on customer satisfaction surveys, used to gain an understanding of customer sentiment about a product or service. It's also popular for post-event surveys, to gauge how much people enjoyed the event.
Most commonly it comes in one of two forms: the Likert scale ("Strongly Disagree," "Disagree," "Neutral," "Agree" and "Strongly Agree") or the Net Promoter Score (NPS, ranging from 0 to 10). The NPS is used to judge the willingness of a customer to recommend a product or service to others.
The 100% stacked bar chart is the simplest option for visualizing survey data from rating scale questions. It’s quick to make, and presents the proportion of responses in each category quite clearly.
With either of these scales, it's helpful to summarize the results into coarser categories. Take the five-point Likert and 0-to-10 NPS scales and collapse them into simpler three-point scales ("disagree", "neutral", and "agree" or "negative", "neutral", and "positive").
Presenting survey results in simplified categories goes a long way toward making the chart easier to read.
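Here is a small pandas/matplotlib sketch of that simplification; the response counts and question labels are invented.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented Likert counts per question
likert = pd.DataFrame(
    {"Strongly Disagree": [5, 2], "Disagree": [10, 8], "Neutral": [20, 15],
     "Agree": [40, 45], "Strongly Agree": [25, 30]},
    index=["The venue was good", "The talks were useful"],
)

# Collapse the five points into three coarser categories
summary = pd.DataFrame({
    "Negative": likert["Strongly Disagree"] + likert["Disagree"],
    "Neutral": likert["Neutral"],
    "Positive": likert["Agree"] + likert["Strongly Agree"],
})

# Convert to percentages and draw a 100% stacked horizontal bar chart
percentages = summary.div(summary.sum(axis=1), axis=0) * 100
percentages.plot(kind="barh", stacked=True, color=["#d62728", "#bbbbbb", "#2ca02c"])
plt.xlabel("% of respondents")
plt.tight_layout()
plt.show()
```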
If your survey gathers information about the respondents’ demographics in addition to other survey results, you may want to use that data as part of your analysis. Including factors like age, gender, income level, and even geographic location can make for an interesting infographic.
Visualizing survey data on a map is a fun way to include a demographic component in your infographic. A choropleth map, like the one you see below, can be used to show the distribution of some data by geographic location. Different values are represented by different shades of a given color, so no reading is required:
Histograms, on the other hand, can be used to show the age distribution of a particular population. They can easily incorporate data on gender, too:
While these specialized survey charts are great for more complex data, they won’t always be necessary. Consider using an icon chart when you want to make a simpler type of demographic data, like job or role, a feature of your design. They’re a fun way to add more impact to simple results.
Open-ended questions (questions that require respondents to write out their own answer, rather than selecting a preset answer) present a bit of a challenge. In order to visualize them, the answers need to be grouped in some way, either through common keywords, sentiments or some other factor.
Word clouds, though frowned upon by some data visualization experts, can be a quick way to get a summary of this type of qualitative data.
They’re great for audiences who don’t have experience with data-heavy tables or statistical analysis , and they’re easy to make. Just pick out the most frequently-used keywords from the comments and plug them into our word cloud generator.
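If you would rather script it than use an online generator, the widely used wordcloud Python package can do a similar job; below is a minimal sketch with made-up comments.

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud  # pip install wordcloud

comments = [
    "Loved the friendly staff and the speakers",
    "Venue was hard to find, parking was a nightmare",
    "Great speakers, would recommend to friends",
]

cloud = WordCloud(width=800, height=400, background_color="white",
                  collocations=False).generate(" ".join(comments))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```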
Otherwise you’ll have to do a more intensive manual qualitative analysis. Go through the open-ended responses and create categories.
Once you’ve quantified your answers, you’ll be able to present the results in a bar chart like this one, which shows the percent of comments that fall into each category.
Multiple choice questions allow respondents to select one or more answers from a list of possible answers.
The best visual for this kind of survey is a simple bar chart.
For the questions that allow respondents to make more than one selection, you’ll need to calculate the percentage of people who chose each answer, like you see in this chart from CoSchedule :
As always, bars should be sorted from greatest to least.
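The key detail for multi-select questions is that percentages are calculated over respondents, not over total selections, so they can legitimately add up to more than 100%. A small pandas sketch with invented responses:

```python
import pandas as pd

# Invented multi-select answers: each respondent could tick several options
responses = [
    ["Email", "Social media"],
    ["Email"],
    ["Social media", "Blog", "Email"],
    ["Blog"],
]

n_respondents = len(responses)
counts = pd.Series([option for r in responses for option in r]).value_counts()
percent_of_respondents = (counts / n_respondents * 100).round(1)

# Sort from greatest to least before charting
print(percent_of_respondents.sort_values(ascending=False))
# Email 75.0, Social media 50.0, Blog 50.0 -- totals exceed 100% by design
```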
Pie charts are a decent option for times when respondents can only select a single answer. Keep in mind, though, that they’re not ideal if you’ve got a lot of data. If you have more than a few different responses to show, try giving each one its own chart:
Now that we’ve covered the best chart types for each type of survey result, let’s get into how we might combine survey charts to make a complete infographic.
A survey results infographic should use a combination of charts, graphic elements, and annotations to tell a story.
The most popular type of survey results infographic is the single-column summary infographic. It sums up all of the major takeaways of a survey, explicitly stating the most important insights.
It might show the results of every survey question simply, using a large, bold number or basic chart for each question:
Or it might present a comprehensive overview of the data, with a more detailed, annotated chart for each survey question:
It might add some extra commentary after each question, too.
Either way, it presents the questions sequentially, in a single column, so that viewers can scroll through to read the results like a story.
To make your own single-column summary infographic, simply start at the top with the first question, and work your way down until you’ve covered each of the major survey insights. State each question, add the results in the form of a chart, and add notes about any interesting learnings.
To add some visual organization to a single-column infographic, use different background colors to create distinctions between sections. Add colored blocks behind each question to divide up the content.
Like you can see in the Netflix survey above, alternating red and black background colors adds a pleasing sense of rhythm and makes the infographic easier to scan.
If your survey is only a few questions long, a big single-column infographic is probably overkill. It might be better to stick with a basic 8.5”x11” page, and make it all about the numbers.
Forget about adding lots of notes, comments, and annotations. Just state each question in the simplest possible terms (i.e., “Where users are located”), and use simple survey charts to sum up the results.
Make sure you organize the charts based on an underlying grid , or you might end up with a jumbled mess.
Or you can even forget about charts altogether, and present the key takeaways as simply as possible. Use big, bold numbers to make a statement:
The last go-to option for presenting survey results is the one-page feature infographic. It couldn’t be more simple. It breaks down the results of a single survey question, in a single chart, on a single page.
We like to call this the “power stat” infographic. It combines a very simple chart with some big, bold text for a high-impact result:
Even if you have the most interesting survey data ever, no one will give it a second look if your infographic is poorly designed. Keep these best practices in mind when you make your next survey results infographic.
Your readers should be able to understand your survey charts with only a few seconds' glance. (Avoiding double-barrelled questions when you design the survey helps here, since they're difficult to chart clearly.) And if you ask me, that makes chart labels the most important chart elements (after the data itself, of course).
Descriptive labels can be used to add context to the data--to spell out the conclusions and implications of the data in the chart. This extra text will help to ensure that nothing is misinterpreted or lost in translation between you and your audience.
A well-labelled chart looks something like this:
The labels stand out against the background of the chart, with arrows clearly tying them to their respective data points.
It can be tempting to include every single data point in a visualization, but that won’t do you any good!
Be selective with your data. Just because you have a lot of data doesn’t mean your audience will want to spend hours scrolling through a mile-long infographic.
Select the most important results, and leave the rest for more in-depth summaries like white papers or reports . Include some supporting data if you need to, but remember--data visualization is all about cutting through the clutter .
Along the same lines, avoid adding unnecessary icons, hard-to-read fonts, gaudy colors, 3D effects, or any other forms of “chartjunk”--ornamental elements that don’t help clarify anything about the data itself.
While you might think that adding extra elements will make your infographic more appealing, they often only distract from the information you want to communicate.
The focus of your infographic should be A) the charts and B) your notes, labels, and annotations.
Regardless of what colors, fonts, images, or icons you use, be sure to apply styling consistently throughout the graphic.
Notice how color is used consistently (to represent the same response) in each section of this infographic?
That makes comparing responses across populations painless.
Cite your data sources, ideally in link form, in the footer of your infographic. Make it easy for the more curious members of your audience to find and peruse the original data for themselves.
Even if it’s your own original research, linking to the complete data will help your credibility and allow readers to make their own decisions about the data. And who knows--maybe they’ll find something interesting that you missed the first time around!
Sometimes tables and graphs alone just don’t cut it.
While an in-depth analysis of survey results is best presented in a comprehensive report, an infographic is an excellent medium for summarizing your findings for more immediate impact.
Now that you know how to present survey results with the right charts, the infographic design process should be painless. If you get stuck, check out this roundup of our most popular survey results templates .
When you’ve lovingly designed, built, and distributed your survey and responses start flooding in, it’s time to begin the process of sorting and analyzing the data you’ll be presenting to stakeholders.
Once you've weeded out the unusable responses, begin recording relevant responses through your survey platform or in a spreadsheet. If you use survey software like CheckMarket, you can easily transfer data into visuals with pre-built reports and dashboards.
Decide your data groups. Was the survey just answering one over-arching question? Or do you have multiple areas covered? Represent each data group separately.
For each result, provide additional information such as why you conducted the survey, what questions you were trying to answer, how the results help businesses, and any surprising answers.
When you have the data separated, the next step is to identify and prioritize the information your stakeholders will most want to see.
First things first: who is your audience? Is it your boss? Is it your peers? Is it your direct clients or customers? The information that clients want to see, for instance, may be completely different to what your boss is interested in. The information you choose to share will vary drastically depending on the campaign you’re working on.
For example, if you’re working on a new marketing campaign, your audience may be interested in how you plan on advertising your business and what perks that may bring them.
However, when it comes to your stakeholders, they will be less interested in the customer perks, and more interested in how this new campaign will work for the business. They might want to know:
When you’re presenting results, clearly define the purpose of the survey and why it matters to your stakeholders. Your story should be specific and concise.
Raise vital questions early on and have the answers ready to go. Your stakeholders have a limited amount of time to listen to what you have to say – make sure you are making the most of it.
This means you’ll have to pick and choose your data results carefully. All results need to be relevant and essential. Your stakeholders will be interested in information that makes a difference. And you’ll want the answers to be presented in the easiest way possible – which is why you want to choose your display method carefully.
When you present results, you are looking to be clear, simple, and memorable. So, viewers should not have to ask you to explain your results.
Here are five common ways to present your survey results to businesses, stakeholders, and customers.
Graphs and charts summarize survey results in a quick, easy graphic for people to understand. Some of the most common types of graphs include:
When creating a chart or graph, make the findings clear to read. Avoid too many intersecting lines and text options. If you can’t fit all the information into one graph, create several graphs rather than making one complex chart. Using colors to differentiate groups is another way to make results easy to read.
Infographics add a creative twist to otherwise bland charts and graphs. A good infographic will use images to enhance the message, not distract from the data.
One survey results presentation example is to use silhouettes of people to convey a percentage of the population instead of a bar graph. This image helps those who see it connect the statistic to real people.
A word cloud is a powerful way to display open-ended question responses graphically. As more people respond with a specific word, that word will appear in the cloud – emphasizing the most relevant answers.
People spend over 100 minutes a day watching videos – which is why marketers have tapped into this strategic area for reaching an audience. Nearly 88% of marketers say video marketing yields a strong return.
A video is a powerful tool for presenting information, including the results of your survey. You can capture your audience’s attention with motion, sound, and colorful statistics to help them remember information and react accordingly.
If you present findings through video, be aware that sharing options will be limited to platforms that can play a video – such as blog posts, websites, and PowerPoint presentations. Also, creating a PDF of the findings for people to look over at their leisure is a helpful way to support a video presentation.
Spreadsheets like Excel are not visually appealing, but they work well for organizing large amounts of information to create a survey results report.
While an image or video works best on websites, sometimes you may need to add more information than can fit in one picture.
Suppose you wanted to provide stakeholders or business partners with a detailed look at the survey and all the responses. A spreadsheet will allow the freedom to display all the necessary information at once. You can still use attractive infographics to summarize the findings and a video to present the report along with the spreadsheet.
Interactive results are a fun way to allow viewers to explore results. You can also organize the findings to help break up large amounts of information.
Interactive maps are a common way to display survey results graphically. For example, results can be viewed by region when viewers click on a specific map area. Interactive maps and displays work best for websites and blogs.
An infographic that summarizes all the data as a global average allows people who don’t have the time to explore the map to see the information.
Time is precious in the marketing industry. You don’t want to spend days analyzing and sorting through survey results.
And you don’t have to.
By using CheckMarket, you can create, gather, and present survey results with one easy-to-use platform.
Researching the white paper
The process of researching and composing a white paper shares some similarities with the kind of research and writing one does for a high school or college research paper. What’s important for writers of white papers to grasp, however, is how much this genre differs from a research paper. First, the author of a white paper already recognizes that there is a problem to be solved, a decision to be made, and the job of the author is to provide readers with substantive information to help them make some kind of decision--which may include a decision to do more research because major gaps remain.
Thus, a white paper author would not “brainstorm” a topic. Instead, the white paper author would get busy figuring out how the problem is defined by those who are experiencing it as a problem. Typically that research begins in popular culture--social media, surveys, interviews, newspapers. Once the author has a handle on how the problem is being defined and experienced, its history and its impact, what people in the trenches believe might be the best or worst ways of addressing it, the author then will turn to academic scholarship as well as “grey” literature (more about that later). Unlike a school research paper, the author does not set out to argue for or against a particular position, and then devote the majority of effort to finding sources to support the selected position. Instead, the author sets out in good faith to do as much fact-finding as possible, and thus research is likely to present multiple, conflicting, and overlapping perspectives. When people research out of a genuine desire to understand and solve a problem, they listen to every source that may offer helpful information. They will thus have to do much more analysis, synthesis, and sorting of that information, which will often not fall neatly into a “pro” or “con” camp: Solution A may, for example, solve one part of the problem but exacerbate another part of the problem. Solution C may sound like what everyone wants, but what if it’s built on a set of data that have been criticized by another reliable source? And so it goes.
For example, if you are trying to write a white paper on the opioid crisis, you may focus on the value of providing free, sterilized needles--which do indeed reduce disease, and also provide an opportunity for the health care provider distributing them to offer addiction treatment to the user. However, the free needles are sometimes discarded on the ground, posing a danger to others; or they may be shared; or they may encourage more drug usage. All of those things can be true at once; a reader will want to know about all of these considerations in order to make an informed decision. That is the challenging job of the white paper author. The research you do for your white paper will require that you identify a specific problem, seek popular culture sources to help define the problem, its history, its significance and impact for people affected by it. You will then delve into academic and grey literature to learn about the way scholars and others with professional expertise answer these same questions. In this way, you will create a layered, complex portrait that provides readers with a substantive exploration useful for deliberating and decision-making. You will also likely need to find or create images, including tables, figures, illustrations or photographs, and you will document all of your sources.
Background: Survey-driven research is a reliable method for large-scale data collection. Investigators incorporating mixed-mode survey designs report benefits for survey research including greater engagement, improved survey access, and higher response rates. Mixed-mode survey designs combine 2 or more modes for data collection, including web, phone, face-to-face, and mail. Types of mixed-mode survey designs include simultaneous (ie, concurrent), sequential, delayed concurrent, and adaptive. This paper describes a research protocol using mixed-mode survey designs to explore health IT (HIT) maturity and care environments reported by administrators and nurse practitioners (NPs), respectively, in US nursing homes (NHs).
Objective: The aim of this study is to describe a research protocol using mixed-mode survey designs in research using 2 survey tools to explore HIT maturity and NP care environments in US NHs.
Methods: We are conducting a national survey of 1400 NH administrators and NPs. Two data sets (ie, Care Compare and IQVIA) were used to identify eligible facilities at random. The protocol incorporates 2 surveys to explore how HIT maturity (survey 1, completed by administrators) impacts care environments where NPs work (survey 2, completed by NPs). Higher HIT maturity reported by administrators indicates greater IT capabilities, use, and integration in resident care, clinical support, and administrative activities. The NP care environment survey measures relationships, independent practice, resource availability, and visibility. The research team conducted 3 iterative focus groups, including 14 clinicians (NP and NH experts) and recruiters from 2 national survey teams experienced with these populations, to achieve consensus on which mixed-mode designs to use. During the focus groups, we identified the pros and cons of using mixed-mode designs in these settings. We determined that 2 mixed-mode designs with regular follow-up calls (Delayed Concurrent Mode and Sequential Mode) are effective for recruiting NH administrators, while a concurrent mixed-mode design is best for recruiting NPs.
Results: Participant recruitment for the project began in June 2023. As of April 22, 2024, a total of 98 HIT maturity surveys and 81 NP surveys have been returned. Recruitment of NH administrators and NPs is anticipated through July 2025. About 71% of the HIT maturity surveys have been submitted using the electronic link and 23% were submitted after a QR code was sent to the administrator. Approximately 95% of the NP surveys were returned with electronic survey links.
Conclusions: Pros of mixed-mode designs for NH research identified by the team were that delayed concurrent, concurrent, and sequential mixed-mode methods of delivering surveys to potential participants save on recruitment time compared to single mode delivery methods. One disadvantage of single-mode strategies is decreased versatility and adaptability to different organizational capabilities (eg, access to email and firewalls), which could reduce response rates.
International Registered Report Identifier (IRRID): DERR1-10.2196/56170
Survey use in clinical informatics research is ubiquitous. Surveys are often used to collect data and measure phenomena such as knowledge of clinical informatics specialties [ 1 ] or the use of electronic health records [ 2 ]. Benefits of using surveys include lower costs to conduct research, better population descriptions, flexibility, and dependability of study designs [ 3 ]. Surveys are used in many professions and across health care settings, including nursing homes, home health care, and hospitals [ 4 - 6 ]. The expansive use of surveys in clinical informatics research calls for a continued focus on training to improve the ability of researchers to design high-quality surveys, develop effective reporting mechanisms, maximize recruitment strategies, and adapt to recruitment challenges needed to enhance the results. Various modes of survey data collection exist across studies. Literature establishing a theoretical foundation for questionnaire response styles used in surveys when collecting data about public opinion indicate that mode of data collection (eg, mixed-modes) is an important stimulus for response [ 7 ]. In this paper, researchers describe a research protocol using mixed-mode survey designs in clinical informatics research using 2 survey tools to explore Health IT (HIT) maturity and nurse practitioner (NP) care environments in US nursing homes (NHs).
In this protocol, HIT maturity is defined in 3 dimensions including HIT capabilities, use, and integration. These HIT maturity dimensions are conceived within NH resident care, clinical support (eg, HIT use in laboratory, pharmacy, and radiology activities), and administrative activities [ 8 ]. The HIT maturity survey tool contains 27 content areas and 183 content items [ 9 ]. The tool will be used to survey NH administrators. The Nurse Practitioner Nursing Home Organizational Climate Questionnaire (NP-NHOCQ), used to measure NP care environments, contains 5 subscales and 41 items. This tool will be used to survey NPs in NHs. The NP-NHOCQ measures the care environment of NPs in NHs in 5 areas: (1) NP-Physician Relations, (2) NP-Administration Relations, (3) NP-Director of Nursing Relations, (4) Independent Practice and Support, and (5) Professional Visibility.
Survey-driven research is known as a reliable data collection method to capture individual perspectives on a large scale. However, there are many challenges related to survey-based data collection, such as low response rates and rising costs of human capital [ 10 ]. Previously, researchers have explored the use of mixed-mode survey designs combining methods such as web, phone, face-to-face, and mail administrations. Mixed-mode survey research involves using 2 or more of these modes for data collection [ 11 ]. A survey mode is defined as the communication channel used to collect survey data from one or more respondents [ 11 ]. Prior research has reported the benefits of mixed-mode surveys such as enhancing engagement [ 12 ], mitigating accessibility barriers [ 13 ], and increasing response rates [ 14 ].
Survey modes can be implemented individually or combined with other modes. A single mode approach deploys only one mode at a time. For example, a researcher may use postal mail services as the only method to contact study participants and collect data. Alternatively, mixed-mode designs use multiple modes to recruit respondents (see Figure 1 ). For instance, a simultaneous (also known as concurrent) mixed-mode approach allows respondents to choose their preference between multiple modes deployed at the same time. For instance, a researcher may offer study participants a choice to complete a survey using an electronic PDF version of the questionnaire that can be printed, scanned, and faxed back to researchers or an electronic survey link completed via web. Mixed-modes can also use a sequential approach. In this mode, researchers may offer 2 different modes, one mode at a time, with a second mode coming later, after the first. This mode is particularly useful when following up with participants who do not respond (nonrespondents) to provide alternative survey strategies that better suit their workflows. An example of sequential mode may include contacting participants initially via phone call and then, following no initial response, a second contact is made using a QR code that is sent via a mailed letter. Another mixed-mode useful for following up with nonrespondents is called a delayed concurrent mode. In this mode, participants are offered one mode, then nonrespondents are offered a choice between 2 other modes later during follow-up activities. An example of the delayed concurrent mode might include an initial mailed survey. Then when no response is received, potential participants are sent a choice between a face-to-face or a phone interview to complete the survey. Finally, an adaptive mixed-mode design incorporates different sampling units. In the adaptive modes, 2 different samples are each offered a different mode.
Mixed-mode survey research has long been identified as a means to improve participation in survey recruitment. For instance, a systematic review of 22 articles among nurses provided evidence that recruitment design strategies that include postal and telephone contacts are generally more successful than fax or web-based approaches [ 16 ]. In a more recent systematic review of 893 studies, mode of administration was a key factor in successful recruitment; however, in this review, electronic and postal modes of survey data collection were less likely to result in higher response rates [ 17 ]. In other mixed-mode research with clinicians, a multiple-contact protocol generated final response rates 10 percentage points higher than single-mode methods [ 18 ].
In this paper, we present mixed-mode methods used in a large survey of NHs in the United States. To achieve research goals, we must have a robust and effective recruitment plan. Therefore, we are using an innovative research protocol using mixed-modes to improve NH administrators’ and NPs’ engagement in survey data collection while increasing the response rates.
The US health care system has over 15,600 NHs serving over 1.3 million residents [ 19 ]. A growing strategy for improving the outcomes for NH residents is to effectively integrate HIT into care delivery to promote safer care environments for NH residents. HIT integration into NH resident care may improve care environments and by extension, better care quality [ 20 ]. Survey-driven research is a reliable method to capture the perspective of individuals about these phenomena on a large scale. Our team is conducting a national survey of NH administrators and NPs, incorporating 2 different survey tools to explore how HIT maturity (survey 1) impacts care environments (survey 2) where NPs work. A specific aim of this research is to provide comprehensive assessments of HIT maturity and NP care environments in NHs nationally. The goal of the National Institute of Aging funded research study (5R01AG080517, principal investigators: GLA and LP) is to assess differences in HIT maturity and care environments in NHs where NPs deliver care to residents with Alzheimer disease and related dementias and examine their impact on hospitalizations and emergency department visits among residents.
The sample for this study includes randomly selected NHs including administrators (ie, NH leaders responsible for HIT systems in their organization) and NPs from each NH. Our goal is to recruit participants from 1400 NHs in the United States. We use 2 national sources to identify NHs for this study. The first data source is called NH Compare (or Care Compare), a publicly available national data set containing information about organizational characteristics of US NHs and quality of care [ 21 ]. The second data source stems from IQVIA, a company that stores national data about NH location, contact information, and staff including administrators and NPs. In preparation for this proposal, IQVIA provided our team data to identify all US NHs with practicing NPs. According to these data, in 2021, a total of 11,222 unique NPs worked in 5000 NHs for an average of 2.2 NPs/NH. Based on this estimate, we expect to contact 3080 NPs within the 1400 NHs (1400 NHs X 2.2 NPs/NH).
We use NH Compare files to identify NHs for our study based on 2 specific inclusion criteria. First, we include all NHs located in the United States, including Alaska and Hawaii. Second, we include NHs with at least 1 NP working in the facility. NPs may be actual employees of a facility or may be employed by an external organization as a consultant for the facility rather than directly by the NH. Facilities are not eligible to participate if they meet any of the following 3 exclusion criteria. First, NHs that do not have an NP. Second, NHs with a hospital-based designation, as their HIT maturity is likely to differ due to national incentives for HIT adoption in acute care [ 22 , 23 ]. Approximately 6% (n=15,518) of NHs have a health system designation that includes common ownership or joint management [ 24 ]. Third, NHs designated as a special focus facility (SFF), which indicates an NH with a history of serious quality issues. NHs with an SFF designation are required to be in a program to stimulate quality-of-care improvements [ 25 ]. In October 2023, the Centers for Medicare & Medicaid Services indicated that approximately 0.5% of US NHs have an SFF designation [ 25 ].
The NH Compare website was downloaded in February 2023 to identify facilities for recruitment. We identified 4163 facilities that matched our criteria. In preliminary work, during 2 prior NH survey studies, we achieved approximately a 45% response rate of surveys returned from administrators. Therefore, for the current protocol, we oversampled by randomly selecting 3000 NHs, which we identified by linking the NH Compare and IQVIA data. We included at least 5 facilities in each state, except for Alaska (2 facilities) and Wyoming (3 facilities) which have few NHs with NPs identified. We will recruit all administrators from these 3000 NHs to complete a HIT maturity survey. For every NH that completes the HIT maturity survey, we will recruit all NPs from those facilities.
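As a rough sketch of how such a draw could be scripted (this is not the study team's actual code), assuming a merged pandas DataFrame with hypothetical columns state and facility_id:

```python
import pandas as pd

def sample_facilities(facilities: pd.DataFrame, total=3000, min_per_state=5, seed=42):
    """Randomly sample facilities while guaranteeing a minimum number per state.

    `facilities` is assumed to have one row per eligible facility, with
    hypothetical columns `state` and `facility_id`.
    """
    # First pass: guarantee the per-state minimum (or all facilities if fewer exist)
    guaranteed = (
        facilities.groupby("state", group_keys=False)
        .apply(lambda g: g.sample(n=min(len(g), min_per_state), random_state=seed))
    )
    # Second pass: fill the remainder from facilities not yet selected
    remaining = facilities.drop(guaranteed.index)
    n_extra = max(0, total - len(guaranteed))
    extra = remaining.sample(n=min(n_extra, len(remaining)), random_state=seed)
    return pd.concat([guaranteed, extra])

# Usage (hypothetical): sample = sample_facilities(merged_nh_data)
```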
After we generated the random sample from the merged files, we compared basic characteristics of the selected NHs with those of the rest of the NHs nationally to ensure that there was limited bias in sample representation; the characteristics compared included bed size, ownership, location, staffing hours, payer mix, and overall rating.
The research team conducted iterative focus groups that included NPs and survey recruitment experts to discuss the pros and cons of different recruitment strategies. To explore the pros and cons, members of the focus groups assessed recruitment strategies used during 2 prior national studies of long-term care NH sites [ 26 ]. The PI and some members of the focus groups led the national studies that were reviewed. Additionally, members of the focus groups reviewed and discussed potential mixed-mode strategies from the literature to incorporate into this protocol. Schouten's [ 15 ] work on mixed-mode survey research helped inform our protocol design.
We aim to survey administrators and NPs using 2 survey tools describing HIT maturity and care environments from each discipline, respectively. To prepare the protocol, the research team conducted 3 iterative focus groups with clinicians (NPs and NH experts), recruiters from 2 national survey teams experienced with recruitment in NHs and with NPs, and a statistician to achieve consensus on which mixed-mode designs to incorporate into this research. Our research protocol workflow is illustrated in Figure 2 . The following sections include descriptions of the mixed-mode workflows by discipline and the surveys being used in this protocol.
For each randomly selected NH, contact information for NH administrators has been obtained using the IQVIA data set. Our team searched NH websites to confirm the contact information of current administrators. During initial contact with each NH administrator (either by phone or a mailed letter), we describe the study's purpose and procedures. All administrators who are contacted and agree to participate in the study are sent a cover letter providing details about the study's purpose, instructions on how to complete the NH HIT maturity survey tool, and descriptions of the benefits and risks of participation. We provide a description of the HIT maturity survey for administrators, including that the survey measures HIT capabilities, extent of HIT use, and degree of HIT integration in resident care, clinical support, and administrative activities [ 9 ]. We incorporate 2 mixed-mode designs when recruiting NH administrators: a Delayed Concurrent Mode and a Sequential Mode, with regular follow-up phone contacts to stimulate engagement.
Our primary mode for this study is a Delayed Concurrent mixed-mode design, in which administrators are offered a choice between multiple modes. During the first contact (conducted by phone), we describe the project and obtain email addresses for administrators who agree to participate. We then follow up with administrators by email, sending an electronic survey link and a PDF simultaneously. This is important because the choice between the electronic survey and the PDF gives administrators the flexibility to select a mode that fits their needs. In nonresponse cases, administrators are later offered a different mode: a postal letter containing a QR code and a URL link to the survey tool.
As a secondary option, we incorporate a Sequential Mode for a minimum of 10% of the facilities in each state. In this mode, participants are offered only 1 mode at a time, and only a subset of nonrespondents is invited to the second mode. The first mode involves mailing a postal letter that describes the study and provides both a QR code and a URL link to the survey for the NH administrator. Recruiters make a series of follow-up calls to administrators after the letter is sent, and during these calls the recruitment team confirms email addresses. In this sequence, administrators who agree to take the survey and have provided their email addresses but have not returned a completed mailed or faxed survey after a minimum of 4 follow-up calls are offered a second mode: a URL link and a PDF of the survey sent via email.
The recruitment team asks administrators to confirm that at least 1 NP works in their facility (whether employed by the NH or by an external health organization that provides NP services to the facility) and to verify the NP's name and contact information. NPs' contact information listed in the IQVIA data is confirmed with administrators to ensure that it is current; information that is not current is updated by the recruitment team in the recruitment database. NHs that no longer meet the eligibility criterion (eg, the NP left and no new NP was hired) are excluded. The research team will incorporate a concurrent mixed-mode design to recruit NPs for the study.
NPs are contacted by email or phone by our recruitment team and are provided with information describing the study, its voluntary nature, and confidentiality per the institutional review board’s (IRB’s) protocol. NPs are sent links to both an electronic survey and PDF concurrently. We expect some NHs to have more than one NP complete a survey.
The research protocol and all procedures were approved by the Columbia University IRB (AAAU3845). Ethical issues addressed in the IRB protocol included confidentiality and anonymity to encourage honest responses, as well as data security, with access restricted to authorized research staff. Researchers also created plans for minimizing coercive behaviors during recruitment (eg, applying pressure) by creating systematic follow-up schedules and templates with recruitment language to use during contacts.
Up to 4 follow-up phone calls are conducted at specified 2-week intervals for administrators who have agreed to participate. Administrators and NPs who do not complete surveys are marked as “No Contact.” Administrators and NPs who complete a survey receive US $25 compensation in the form of a gift card.
All survey data collection is conducted through REDCap (Research Electronic Data Capture; Vanderbilt University), web-based software designed for data collection and management in research studies with an emphasis on data security and flexibility [ 27 ]. We maintain data about recruitment efforts in REDCap, including the number of facilities contacted, persons contacted at each facility, packets or links sent, surveys received, initial cannot reach, contact calls made, follow-up calls made, confirmations received (will complete and not completed), stated completions, and follow-up cannot reach. Recruitment staff, including a project coordinator and 4 research assistants, make recruitment calls and send surveys to NH administrators and NPs.
Data collected via the electronic survey are transferred automatically to the REDCap database. Data collected via PDF are manually entered into the REDCap system by our research staff. A meticulous data-cleaning strategy is used before formal statistical analysis to ensure data quality [ 28 ]. We use algorithms to check questionnaires for consistency and validity. For example, graphical exploration through boxplots, histograms, and scatter plots is used to help detect outliers and logically implausible data points. Any identified outlying observations undergo thorough examination to discern between potential data entry errors and genuinely extreme values. Data entry errors are corrected, and any systematic patterns are scrutinized. Every step of the data-cleaning process and the associated decisions are documented to ensure transparency.
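As a rough illustration of the outlier screening described above (not the study's actual cleaning code), numeric responses falling outside the conventional boxplot fences can be flagged for manual review; the column name is a placeholder.

```python
# Illustrative sketch: flag values outside the 1.5*IQR boxplot fences so they can be
# reviewed as possible data entry errors versus genuinely extreme values.
import pandas as pd

def flag_outliers(df: pd.DataFrame, column: str) -> pd.Series:
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (df[column] < lower) | (df[column] > upper)  # boolean mask of flagged rows
```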
There is a possibility that some NH administrators or NPs who agree to participate in the study will not fill out an HIT maturity or care environment survey tool completely. We anticipate that there may be some missing data on completed surveys. Based on prior national HIT maturity and NP studies, we have estimated that less than 3% of the data for surveys received was missing for both types of surveys. We plan to use all available data in our analyses.
NH HIT maturity [ 8 , 29 ] is measured using a total composite score corresponding to 7 HIT maturity stages. The 7 maturity stages range from the lowest, Stage 0—nonexistent HIT solutions or electronic health records, to Stage 6—use of data by residents and resident representatives to generate clinical data and drive self-management. A higher total HIT maturity score indicates greater IT capabilities, use, and integration in resident care, clinical support (including IT systems in pharmacy, radiology, and laboratory), and administrative activities in the NH. The overall standardized Cronbach α for this instrument in past research was 0.86 (high); each dimension or domain achieved a Cronbach α ranging from 0.7 to 0.9 [ 30 ].
NP care environment is measured by the 44-item Nurse Practitioner Nursing Home Organizational Climate Survey (NP-NHOCS) [ 31 ], which asks NPs to rate work attributes in NHs using a 5-point Likert scale. The NP-NHOCS has 5 subscales: (1) NP-Physician Relations (7 items)—measures the relationship, communication, and teamwork between NPs and physicians; (2) NP-Administration Relations (11 items)—measures collaboration and communication between NPs and managers; (3) NP-Director of Nursing Relations (8 items)—measures the relationship, communication, and teamwork between NPs and Directors of Nursing; (4) Independent Practice and Support (9 items)—measures the resources and support NPs have for their independent practice; and (5) Professional Visibility (9 items)—measures how visible the NP role is in the organization. We first compute NP-level mean scores and then NH-level mean scores by aggregating the responses of all NPs in each NH, as recommended [ 32 ]; a minimal sketch of this aggregation is shown below. Higher mean scores indicate better care environments. NPs are also asked to complete measures of demographics (eg, age, sex, and experience).
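The two-step aggregation could look like the following minimal sketch, assuming a long-format table of item responses; the column names (nh_id, np_id, item_score) are hypothetical.

```python
# Sketch of the NP-level then NH-level aggregation of NP-NHOCS responses.
import pandas as pd

def nh_care_environment_scores(responses: pd.DataFrame) -> pd.Series:
    # Step 1: NP-level mean across the 44 items (one score per NP).
    np_scores = responses.groupby(["nh_id", "np_id"])["item_score"].mean()
    # Step 2: NH-level mean across all NPs in the facility.
    return np_scores.groupby(level="nh_id").mean()
```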
A number of planned analyses will be performed. For the HIT maturity survey, we aim to understand which survey mode (Delayed Concurrent vs Sequential) maximizes NHs' engagement in our research project and which factors influence the survey completion method. First, descriptive statistics will be used to summarize the key variables of interest, including but not limited to response rates (agreeing to participate or not), completion rates, time taken to complete the survey, and the proportion of electronic surveys received. A chi-square test or Fisher exact test will be used to examine differences in response rates, completion rates, and the proportion of electronic surveys received between the NHs assigned to the Delayed Concurrent Mode and those assigned to the Sequential Mode. This analysis will determine whether one survey mode yields higher response and completion rates than the other. Second, if there are sufficient data, linear regression models will be used to test whether NH administrators' demographic characteristics (ie, age, sex, race or ethnicity), NH-level characteristics (eg, bed size and staffing hours), and HIT maturity level are associated with the choice of survey completion method (electronic or PDF format).
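A hedged sketch of the planned mode comparison is shown below using scipy; the counts are invented purely for illustration, and falling back to the Fisher exact test when expected cell counts are small is one common convention rather than the study's stated rule.

```python
# Illustrative 2x2 comparison of completion by assigned survey mode.
from scipy.stats import chi2_contingency, fisher_exact

#                completed  not_completed
table = [[120, 180],   # Delayed Concurrent Mode (made-up counts)
         [ 35,  65]]   # Sequential Mode (made-up counts)

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Small expected counts: use the exact test instead.
    odds_ratio, p = fisher_exact(table)
print(f"p-value for difference in completion rates: {p:.3f}")
```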
For the NP care environment survey, all NPs will be offered both an electronic survey and a PDF concurrently. The proportion of electronic surveys received among those who respond will be calculated to determine the preference for electronic over PDF surveys. If a sufficient number of electronic and PDF surveys are received, linear mixed effects models with NH as a random effect will be used to assess whether the choice of survey completion method is associated with NH-level characteristics (eg, HIT maturity score, geographical location, and ownership), NP-level characteristics (eg, age, race or ethnicity, years of experience, and job roles), and NP care environment scores, respectively.
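One possible specification of such a model, using statsmodels with a random intercept for each NH, is sketched below. The synthetic data, variable names, and the choice of the care environment score as the outcome are assumptions for illustration only, not the study's final model.

```python
# Sketch of a linear mixed-effects model with NH as a random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 40 NHs with 3 NPs each (the real analysis would use study data).
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "nh_id": np.repeat(np.arange(40), 3),
    "survey_mode": rng.choice(["electronic", "pdf"], size=n),
    "hit_maturity": rng.normal(50, 10, size=n),
    "np_experience": rng.integers(1, 30, size=n),
    "care_environment_score": rng.normal(3.5, 0.5, size=n),
})

# Random intercept for each NH; fixed effects for completion mode and covariates.
model = smf.mixedlm(
    "care_environment_score ~ C(survey_mode) + hit_maturity + np_experience",
    data=df,
    groups=df["nh_id"],
)
print(model.fit().summary())
```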
The research team conducted 3 iterative focus groups with a total of 14 clinicians including NPs and survey recruitment experts. The following pros and cons were used to determine our recruitment strategies.
The pros of mixed-mode designs identified by the team during focus groups were that delayed concurrent, concurrent, and sequential mixed-mode approaches can save recruitment time compared with single-mode delivery methods. Additionally, effort on the part of recruitment staff is minimized by using mixed modes. By using mixed survey modes, participants can immediately choose their preferred survey method, potentially enhancing their satisfaction with the survey process; this facilitates engagement, which leads to completed surveys and increased response rates. Another pro of the concurrent mode was that sending a QR code via the postal service, in addition to providing a URL link, gives respondents more choice and flexibility in how they access the survey, which could enhance engagement and responsiveness. A pro of single-mode designs is the potential for quick turnaround times and representative samples for projects with limited resources [ 33 ].
One disadvantage of single-mode strategies is that they decrease versatility and adaptability to different organizational capabilities (eg, access to email and system firewalls), which could reduce response rates. For example, a URL link sent via email might be difficult for NH administrators and NPs to open because of system firewalls put in place by organizations to meet the higher security standards of HIT systems. We also identified a con of the sequential mode: if only a single mode is offered at the start of recruitment, the respondent may not engage in the second wave. For example, respondents who are concerned about access to email may not engage with us in further calls if the first mode offered, which includes email, is perceived as a barrier to participation. Other cons related to NH infrastructure and environmental variables; for instance, NPs might have limited access or no workspace available to print a PDF and complete a survey. Other reported cons of mixed-mode designs (sequential modes [web then telephone]) compared with a single mode (telephone only) include higher missing data rates and more focal responses [ 34 ].
After randomization, we rigorously compared selected and nonselected NHs based on key NH level characteristics such as bed size, ownership, location, staffing hours, payer mix, and overall rating. Our analysis did not reveal statistically significant differences in these characteristics (See Table S1 in Multimedia Appendix 1 ).
The research study was funded in February 2023. Participant recruitment for the project began in June 2023. As of June 3, 2024, a total of 109 HIT maturity surveys and 83 NP surveys have been returned. About 69% of the HIT maturity surveys have been submitted using the electronic link and 27% were submitted after a QR code was sent to the administrator. About 95% of the NP surveys were returned with electronic survey links.
Our national study is, to our knowledge, the first to focus on NH HIT maturity and NP care environments where administrators and NPs work. Although NPs are a predominant provider in NHs [ 35 ], no study to date has focused on NP care environments and the resources (eg, technology) available to this discipline, leading to limited understanding of how NPs conduct their work and how HIT maturity contributes to an NP's ability to improve care and outcomes for NH residents with serious chronic conditions. Furthermore, a primary objective of this study is to provide evidence of how administrators and NPs codesign technologies that can transform care delivery in NHs. Our team anticipates that using mixed modes will enhance our ability to work with participants at different stages of HIT maturity, which we believe is an important factor in how care environments are perceived by employees (eg, NPs) in these settings.
To achieve this goal, we first must be able to maximize engagement in this survey research with strong representation by both NH administrators and NPs from all US states. Second, we must mitigate barriers to NH administrators and NPs accessing surveys so that they can participate. Finally, we must achieve acceptable response rates by generating different modes of support, providing choice and flexible means for NH administrators and NPs to participate in the survey process. In this protocol, we have identified mixed-mode recruitment strategies based on the expert opinion of experienced survey recruitment staff that should enable us to meet our goals and to achieve a representative national sample of NH administrators and NPs.
This study may have limitations. In prior work, we have identified great variability in HIT capabilities among NHs, such as access to external email and connectivity challenges where NH staff work [ 36 ]. Depending on the survey mode used during data collection, this variation may create differences in response rates between facilities. We have incorporated various mixed-mode methods in this research protocol that should allow respondents to choose their preferred method and complete a survey in a way that suits their institutional characteristics. The use of mixed modes has been shown to improve participation in survey research, thus reducing barriers for less well-resourced NHs (eg, NHs with lower HIT maturity levels). Less resourced NHs are typically those with greater resident ethnic and racial diversity [ 37 ], so improving their participation is critical to enhancing representation of these communities, which is a benefit of the design.
This research protocol describes a study using 2 survey tools to measure HIT maturity and NP care environments in the US NHs as perceived by administrators and NPs. We have identified the pros and cons of survey recruitment strategies experienced by our team in past work. We reviewed evidence-based recruitment strategies using mixed-modes which are defined in the literature as methods that incorporate the use of 2 or more modes to recruit respondents. In this protocol, we have incorporated a delayed concurrent mode, sequential mode, and a concurrent mode to enhance engagement, mitigate barriers to survey access, and to increase response rates in collecting survey data both from NH administrators and NPs to have robust data for future analysis.
The authors wish to acknowledge Dr Richard Chan and Ms Hana Amer for their contributions in the initial stages of determining the steps in the research protocol. Research reported in this publication was supported by the National Institute on Aging of the National Institutes of Health (award R01AG080517). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
GLA, LP, YZ, MH, SK, AAN, KW, MBS, AK, TB, and ST contributed to the design, acquisition, interpretation, writing, and revision of this manuscript.
None declared.
Supplementary Table.
| Abbreviation | Definition |
|---|---|
| HIT | health IT |
| IRB | institutional review board |
| NH | nursing home |
| NP | nurse practitioner |
| NP-NHOCS | Nurse Practitioner Nursing Home Organizational Climate Survey |
| REDCap | Research Electronic Data Capture |
| SFF | special focus facility |
Edited by S Ma; submitted 08.01.24; peer-reviewed by T Mujirishvili; comments to author 23.03.24; revised version received 22.04.24; accepted 28.06.24; published 29.08.24.
©Gregory L Alexander, Lusine Poghosyan, Yihong Zhao, Mollie Hobensack, Sergey Kisselev, Allison A Norful, John McHugh, Keely Wise, M Brooke Schrimpf, Ann Kolanowski, Tamanna Bhatia, Sabrina Tasnova. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 29.08.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.
Jiayun Zhang
Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China, fudan.edu.cn
Corresponding Author
Qingyuan Gong
Research Institute of Intelligent Complex Systems, Fudan University, Shanghai, China, fudan.edu.cn
Department of Information and Communications Engineering, Aalto University, Espoo, Finland, aalto.fi
Aaron Yi Ding
Department of Engineering Systems and Services, Delft University of Technology, Delft, Netherlands, tudelft.nl
The temporal patterns of code submissions, denoted as work rhythms, provide valuable insight into the work habits and productivity in software development. In this paper, we investigate the work rhythms in software development and their effects on technical performance by analyzing the profiles of developers and projects from 110 international organizations and their commit activities on GitHub. Using clustering, we identify four work rhythms among individual developers and three work rhythms among software projects. Strong correlations are found between work rhythms and work regions, seniority, and collaboration roles. We then define practical measures for technical performance and examine the effects of different work rhythms on them. Our findings suggest that moderate overtime is related to good technical performance, whereas fixed office hours are associated with receiving less attention. Furthermore, we survey 92 developers to understand their experience with working overtime and the reasons behind it. The survey reveals that developers often work longer than required. A positive attitude towards extended working hours is associated with situations that require addressing unexpected issues or when clear incentives are provided. In addition to the insights from our quantitative and qualitative studies, this work sheds light on tangible measures for both software companies and individual developers to improve the recruitment process, project planning, and productivity assessment.
The time allocation for work activities is closely related to a software developer's daily routine and reflects her/his work habits. We define work rhythms in software development as the temporal patterns shown in developers' code submission activities. A typical work rhythm of a developer could be described as follows: the developer may start work at 9 a.m. on working days and concentrate on writing and submitting code during working hours. She/he would take a short break at noon for lunch, during which code submissions pause as well. After finishing the tasks at 6 p.m., the code will not be updated until 9 a.m. on the next working day. Developers working in companies with diverse cultures follow different work rhythms. It has been reported that one third of software developers do not adopt a typical working-hour rhythm (e.g., from 10 a.m. to 6 p.m.) [ 1 ]. The issues of developers' work rhythms have been discussed extensively. Some Chinese tech companies have adopted an unofficial work schedule known as the "996 working hour system," which requires employees to work from 9 a.m. to 9 p.m., 6 days a week. The public quickly took notice of these extreme working hours as they were shared on social media ( https://github.com/996icu/996.ICU ). This abnormal work schedule has received criticism, with critics arguing that developers cannot keep focusing on programming during such long working hours and that their efficiency and productivity decrease after working for long hours ( https://www.scmp.com/tech/start-ups/article/3005947/quantity-or-quality-chinas-996-work-culture-comes-under-scrutiny ). However, global leading news media, such as Cable News Network (CNN; https://edition.cnn.com/2019/04/15/business/jack-ma-996-china/index.html ) and British Broadcasting Corporation (BBC) News ( https://www.bbc.com/news/business-47934513 ), reported another perspective: many successful entrepreneurs emphasized the advantages of long-hour work schedules for their companies. These heated discussions with conflicting perspectives underscore an urgent need to understand developers' work rhythms and their effects on practical technical performance.
Studying work rhythms in software development yields many important implications. For example, the profiles and activities in online developer communities are considered as reliable indicators of technical performance during the hiring process [ 2 ]. However, having more commits during off-hours does not necessarily equate to better code quality. Instead of assessing based on the quantity of commits, it is crucial to acquire a deeper understanding of work rhythms and their effects. Such insights can help employers gain deeper knowledge about job applicants’ work habits before hiring. In addition, software development teams can rely on more rational assessments of technical performance rather than judging merely by the time spent in the office. With an understanding of the effects of work rhythms on technical performance, both project teams and individual developers can better allocate and schedule their time in development.
The existing studies on the work rhythms of people in different occupations often cover their effects on work performance. Alternative work schedules, such as flexible and compressed work schedules, had positive effects on work-related criteria including productivity and job satisfaction [ 3 , 4 ]. Conversely, sustained work during long working hours was associated with an increased risk of errors and decreased work performance [ 5 , 6 , 7 , 8 , 9 ]. In the field of software engineering, multiple studies have examined the relationship between code quality and the time when the work is performed. It has been found that the bugginess of commits is related to the time (i.e., the hour of the day) when those commits have been made, but there are large variations among individuals and projects [ 10 , 11 , 12 ].
Previous studies have primarily focused on the effects of work hours on code quality, within the contexts of limited organizations and have primarily considered code bugginess as a quality metric. In addition, they have not sufficiently addressed the circadian and weekly patterns that characterize developers’ work habits. Our study leverages a large-scale real-world dataset from GitHub to explore how work rhythms correlate with multiple dimensions of technical performance. Considering that project-level working behaviors often involve collaborative efforts of multiple contributors and do not necessarily reflect the work patterns of individual developers, our study analyzes both project- (in our study, the term “project” is used synonymously with “repository”) and individual-level metrics. We aim to provide a more comprehensive understanding of work patterns from two different yet interconnected perspectives. Specifically, we apply spectral biclustering [ 13 ] to identify the work rhythms from both the individual and project perspectives. The biclustering algorithm simultaneously groups both rows and columns of a data matrix, allowing us to understand the groups of similar subjects (i.e., developers/repositories) and their typical commit behaviors at the same time. We analyze the relationship between the identified work rhythms and demographics (such as region and account/repository age) and collaboration roles (i.e., whether a developer is a structural hole spanner (SHS) [ 14 ]). We use popularity metrics (such as followers, stars, forks, and issues on GitHub) and code productivity (measured by lines of code changed per week) as indicators of technical performance. Then, we perform a comprehensive analysis to investigate how these work rhythms influence technical performance. Furthermore, we conduct a survey study to complement the results of empirical data analysis.
We design an approach based on the spectral biclustering algorithm to identify the work rhythms of repositories and individual developers. This method reveals four distinct work rhythms among individuals and three among repositories.
We present an empirical analysis of the correlations between work rhythms and demographics including regions, age, and collaboration roles. We define multiple practical measures for technical performance and study the effects of work rhythms on them.
We conduct a survey involving 92 respondents to gain insights into developers’ experiences and the reasons and attitudes towards overtime work.
We introduce the background and related works in Section 2 and research questions in Section 3 , followed by our research methods (Section 4 ) and results (Section 5 ). We discuss the significance of our contributions in Section 6 and offer some concluding remarks in Section 7 .
Developers are engaged in multiple work activities in a given week and follow some rules in the time usage in software development [ 15 , 16 , 17 ]. Sequential analysis of the generated contents is crucial for understanding the behavior patterns of online users [ 18 , 19 ]. The widely used development tools such as version control systems and online developer communities ensure the transparency of the workflows, which provide researchers with abundant resources to investigate developers’ work practices [ 20 , 21 , 22 , 23 ]. By exploring the data from these development tools, multiple studies have examined developers’ work practices and contributions.
First, work time in software development has been studied. For example, Claes et al. [ 1 ] defined work rhythm as the circadian and weekly patterns of commits. They analyzed the commit timestamps of 86 open source software projects and reported that two-thirds of the developers follow a standard work schedule and rarely work nights and weekends. In addition, Traulle and Dalle [ 24 ] investigated the evolution of developers' work rhythms and observed a trend in which developers adopt more regular work patterns over time and start working increasingly early. Furthermore, this study is related to our previous work [ 25 ], which examined the commit activities of tech companies in China and the United States and compared the differences in working hours between companies in the two countries. Compared with our previous work, this study expands the scope and introduces new research questions about the correlations between work rhythms and technical performance. In addition, we enlarge the dataset to include a wider range of regions and analyze working behaviors at more granular levels by examining both project- and individual-level behaviors.
Second, the relationships between work quality and work time have been investigated. For example, Khomh et al. [ 26 ] studied the impact of Firefox’s rapid release cycle on software quality. They found that the fast release cycle did not lead to more bugs but accelerated the process of fixing bugs. In addition, several studies focused on the relationships between the bugginess of code and the hour of the day when the code is submitted. For instance, Eyolfson et al. [ 10 , 11 ] studied three well-known open source projects and found that more bugs are contained in commits made during midnight and early morning, while commits made in the morning have the best quality. Prechelt and Pepper [ 12 ] investigated a closed-source industry project and proposed that 8 p.m. is the hour with the highest error rate. It is observed that results vary across different projects.
Previous research on the effects of work time often investigates projects from limited organizations and only considers the bugginess of code as the metric of code quality. In addition, these studies typically focus on the effects of specific hours of the day rather than the circadian and weekly patterns. There is not yet sufficient investigation with solid evidence showing the relationship between work rhythms and technical performance from multiple aspects. In this paper, we perform data analysis on a real-world code submission dataset collected from GitHub, a prominent online developer platform with more than 100 million developers hosting more than 420 million repositories ( https://github.com/about , accessed on May 18, 2024).
During software development, people often use Git, a distributed version control system, to track modifications to the code. To submit code changes to Git, people make commits that include details such as authorship, timestamp, and the code changes made. The temporal distribution of a developer's commit logs reflects her/his rhythm of submitting code changes. These commit logs can be accessed if the projects are uploaded to GitHub and set to be publicly visible. Figure 1 shows the time distribution of developers' code submissions on GitHub. The statistics are generated from the GitHub User Dataset [ 27 , 28 ], which consists of the information and activities of more than 10 million randomly selected GitHub users. We focus on the users who have more than 100 commits and have submitted code on more than 100 different days. Among these users, we select 13,201 developers with 5,406,933 commits. In general, developers commit more frequently on weekdays than on weekends. There are peak hours of code submissions at 11 a.m., 4 p.m., and 10 p.m., and an off-peak period during the early morning, which is consistent with typical daily routines. The aggregated commit logs in Figure 1 show that developers exhibit temporal regularities in code submissions. However, given the differences in the adoption of work practices, such a general work rhythm cannot effectively represent the work habits of each developer.
We aim to study the work rhythms of developers and software projects to have a comprehensive view of work rhythms in software development from both the individual and group levels. Our study is guided by the following four research questions:
RQ1. What are the work rhythms of individual developers and software projects?
RQ2. Are work rhythms related to demographics and collaboration roles?
The first two RQs intend to reveal representative work rhythms among individual developers and software projects and examine discrepancies in the demographics of the developers with different work rhythms.
RQ3. What are the correlations between different work rhythms and technical performance?
The third RQ is to seek a deeper understanding of the relationships of different work rhythms with the outcome of work by considering various metrics for technical performance.
RQ4. What are developers’ attitudes towards work rhythms and productivity?
The last RQ investigates developers’ actual work experience and their views on productivity.
In this section, we present the data collection and analysis methods in our study. A summary of the research subjects, variables, and the methods of data analysis for each research question is provided in Table 1 . The overview of the methodology is presented in Figure 2 .
| Research question | Subject | Variable | Analysis method |
|---|---|---|---|
| RQ1 | Developer/repository | Commit frequency during the week | Spectral biclustering |
| RQ2 | Developer | Account creation time | Mann–Whitney U test |
| | | Structural hole spanner | APGreedy, Pearson's chi-squared test |
| | Repository | Regions | Pearson's chi-squared test |
| | | Repository creation time | Mann–Whitney U test |
| RQ3 | Developer | Number of followers | Mann–Whitney U test |
| | | Average number of stars | |
| | | h-index of stars | |
| | Repository | Number of stars | Mann–Whitney U test |
| | | Number of forks | |
| | | Number of open issues | |
| | | Lines of code changed per week | |
| RQ4 | Developer | Required and actual working hours | User study |
| | | Time allocation for work activities | |
| | | Attitude towards working overtime | |
The commit logs of public projects on GitHub are publicly visible and can be retrieved using the GitHub API. Our data collection adhered to GitHub's terms of service ( https://help.github.com/articles/github-terms-of-service/ ). The data collection took place from May 1 to May 27, 2019. The dataset covers the commit activities of the source repositories of 110 organizations since the repositories were created. The locations of the companies span a wide range, from the United States (such as Facebook, Amazon, and Google) to China (such as Baidu, Tencent, and Alibaba) and Europe (such as SAP, Nokia, and Spotify). To accurately assess work rhythms, we used the local time of each commit log to avoid the potential influence of the different time zones in which the commits were made. Commit logs without time zone information (9.03% of the total) were excluded. Following the data cleaning, a total of 1,532,439 commits remained. We then grouped these commits by repository and by committer, forming the following two datasets for our analysis.
Company repositories. We scanned the repository lists of the 110 organization accounts and crawled descriptive information about the repositories and commit logs submitted into the repositories. We selected repositories with at least 300 commits and formed the repository dataset with a total of 1,131 repositories and 1,111,685 commits.
Individual developers. To study the work rhythms of individual developers, we first merged the different identities of the same developer, as a developer may have multiple identities on GitHub and in the version control system. We extracted the email from the version control system's author field and the GitHub account ID from the author field recorded in GitHub commit activity. We created a mapping from email addresses to GitHub accounts and grouped together identities that shared the same account ID or email address. Following this dealiasing process, 47.1% of the committer identities were merged. Then, we chose the core developers by selecting those with at least 30 commits. These developers are the top 12.5% of the committers and have made 85% of all commits in our dataset. We further crawled the GitHub account information of the developers, including the number of followers and the number of stars of each of their own repositories. Finally, we formed our developer dataset with 7,509 individual developers and 1,296,715 commits; of these developers, 2,754 have detailed information about their GitHub accounts.
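A minimal sketch of the dealiasing step is shown below, assuming each commit author record carries an optional GitHub account ID and an optional email address; a union-find structure keeps the merging transitive. This illustrates the idea rather than reproducing the authors' actual pipeline.

```python
# Merge commit author records that share a GitHub account ID or an email address.
from collections import defaultdict

def merge_identities(records):
    """records: iterable of (record_id, github_id, email); github_id or email may be None."""
    records = list(records)
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for rid, gid, email in records:
        if gid is not None:
            union(rid, ("gid", gid))        # tie the record to its GitHub account key
        if email is not None:
            union(rid, ("email", email))    # tie the record to its email key

    groups = defaultdict(set)
    for rid, _, _ in records:
        groups[find(rid)].add(rid)
    return list(groups.values())            # each set is one merged developer identity

# Example: records sharing an email or account ID collapse into a single identity.
# merge_identities([(1, "alice", "a@x.com"), (2, None, "a@x.com"), (3, "alice", None)])  -> [{1, 2, 3}]
```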
To profile how commits are created by a developer or in a project repository, we compute the frequencies of commit activities across different time intervals and apply clustering to identify patterns.
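As an illustration of this profiling step, the sketch below builds one commit-frequency profile per subject. It assumes, based on the 48-dimensional input mentioned in Section 4.2.2, that the profile consists of 24 weekday-hour bins followed by 24 weekend-hour bins, and that commit timestamps are already in local time; the exact binning used by the authors may differ.

```python
# Build a 48-dimensional weekly commit-frequency profile from local-time commit timestamps.
import numpy as np

def commit_profile(commit_datetimes):
    """commit_datetimes: iterable of datetime objects in the committer's local time."""
    vec = np.zeros(48)
    for dt in commit_datetimes:
        offset = 0 if dt.weekday() < 5 else 24   # Mon-Fri -> bins 0-23, Sat/Sun -> bins 24-47
        vec[offset + dt.hour] += 1
    total = vec.sum()
    return vec / total if total > 0 else vec     # relative frequency per hour bin
```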
4.2.2. Biclustering Model
Among various classical clustering methods, such as K-means [ 30 ] and DBSCAN [ 31 ], and state-of-the-art ones designed for specific applications such as topic models (latent Dirichlet allocation) [ 32 , 33 ], we choose the spectral biclustering [ 13 ] algorithm to discover the work rhythms in our dataset. Spectral biclustering is a clustering technique that generates biclusters—groups of samples (in rows) that show similar behavior across a subset of features (in columns), or vice versa. In our scenario, we group both developers/repositories and their commit behaviors at the same time to understand the groups of similar subjects and their typical behaviors. Specifically, developers/repositories grouped into different row clusters show different commit behaviors. In addition, the column clusters output by the algorithm enable us to infer how developers/repositories in different row clusters behave in each subset of hours. Developers/repositories with the same rhythm have similar commit frequencies in each subset of hours.
The model takes the 48-dimensional vectors as input and automatically discovers the clusters of work rhythms by measuring the similarities between them. To implement the clustering model, we used Scikit-learn [ 34 ], a widely used machine-learning library. To determine the optimal parameter setting, we perform an iterative search for the number of work rhythms k from 2 to 8 with empirical experiments. For each k , we visualize the rhythms and examine the number of samples in clusters to ensure that the clusters have sufficient individuals and exhibit distinct patterns beyond mere time shifting. We choose k as the largest value among those tested that yields stable and distinctive work rhythms.
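A minimal sketch of this step with scikit-learn's SpectralBiclustering is shown below; the data matrix is a random placeholder, and the exact parameter settings used by the authors are not reproduced here.

```python
# Cluster 48-dimensional commit-frequency profiles with spectral biclustering.
import numpy as np
from sklearn.cluster import SpectralBiclustering

X = np.random.rand(500, 48)  # placeholder for real developer/repository profiles

# Iterative search over the number of rhythms k, inspecting cluster sizes for each k.
for k in range(2, 9):
    model = SpectralBiclustering(n_clusters=k, random_state=0)
    model.fit(X)
    print(k, np.bincount(model.row_labels_))   # rhythms would then be visualized as heatmaps

# Final clustering with the chosen k (e.g., 4 rhythms, as reported for developers).
rhythm_labels = SpectralBiclustering(n_clusters=4, random_state=0).fit(X).row_labels_
```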
4.3.1. Demographics of Developers and Repositories
We intend to explore whether developers or repositories with specific demographic information tend to follow specific work rhythms.
First, local cultures may have an impact on work rhythms. To investigate whether there is a difference among developers who work on repositories from different regions in terms of work rhythms, we examine the countries of the repositories that the developers worked on. For each developer, we group the repositories that she/he has made contributions to and check which countries the organizations of the repositories belong to. If a developer has contributions to repositories from more than one country, we set the work region of the developer as “multiple countries.” We target four different regions: the United States, China, Europe, and multiple countries.
In addition, considering that senior developers may take charge of more projects than junior developers, we assume that senior developers have different work rhythms from junior developers. For this purpose, we investigate whether there is a correlation between the type of work rhythm and the seniority of the developers. We use the number of days since the creation of the GitHub account as a proxy for one's seniority in programming.
Furthermore, according to Vasilescu et al.’s [ 35 ] study, there are differences in terms of productivity between younger repositories and older ones. As a result, repositories with longer histories may have different work rhythms from newly created ones. We count the number of days since a project was created on GitHub as the measure of repository age.
Collaboration is an important feature of software engineering. The developer’s participation in project collaboration is a testament to her/his technical ability.
The structural hole theory [ 14 , 36 , 37 , 38 ] in social network analytics suggests that people who are positioned in structural holes, known as SHS, play a critical role in the collaboration and management of the teams. A structural hole is perceived as a gap between two closely connected groups. SHS fill in the gaps among different groups. They control the diffusion of valuable information across groups and come up with new ideas by combining ideas from multiple sources [ 14 ]. Bhowmik et al. [ 39 ] studied the role of structural holes in requirements identification of open-source software development and found that structural holes are positively related to the contribution of a larger amount of new requirements and play an important role in new requirement identification.
We intend to see whether there is a difference in work rhythms between SHS developers and ordinary developers. We build a collaboration graph from our dataset, in which each node represents a developer and an edge between two nodes indicates that the two developers have committed to the same repository. We apply an advanced SHS identification algorithm called APGreedy [ 40 ] (there are several SHS identification algorithms [ 37 , 41 , 42 ], and APGreedy is a representative one) to find the SHS in the collaboration graph and choose the top 500 developers as the SHS developers. After filtering out developers with fewer than 30 commits, we obtain 246 SHS developers in total. Accordingly, we select 246 non-SHS developers from the rest using random sampling to represent the ordinary developers.
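The collaboration graph construction can be sketched with networkx as below. APGreedy itself is a specialized algorithm from the cited work and is not reproduced here; Burt's effective size is shown only as a simpler, commonly used structural-hole proxy for illustration.

```python
# Build the developer collaboration graph and rank developers by a structural-hole proxy.
import itertools
import networkx as nx

def build_collaboration_graph(repo_to_developers):
    """repo_to_developers: dict mapping repository name -> set of developer IDs."""
    G = nx.Graph()
    for devs in repo_to_developers.values():
        G.add_nodes_from(devs)
        # Connect every pair of developers who committed to the same repository.
        G.add_edges_from(itertools.combinations(devs, 2))
    return G

G = build_collaboration_graph({"repo_a": {"d1", "d2", "d3"}, "repo_b": {"d3", "d4"}})
effective_size = nx.effective_size(G)          # larger values suggest spanning more structural holes
top_spanners = sorted(effective_size, key=effective_size.get, reverse=True)[:500]
```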
We define the following measures for evaluating the technical performance of a developer:
Average number of stars. GitHub provides starring function for users to mark their interest in projects. We count the average number of stars received by the repositories owned by the developer. Receiving more stars indicates a higher popularity of a project [ 43 ].
Number of followers. We use the number of followers a GitHub user has at the time of data collection as a signal of standing [ 44 ] within the community. Users with lots of followers are influential in the developer community as many people are paying attention to their activities.
H-index of Stars. The h-index [ 45 ] was originally introduced as a metric to evaluate both the productivity and citation impact of a scholar’s research publications. It has been used to measure the influence of users’ generated contents in social networks [ 46 ]. We define h-index of a developer as the maximum value of c such that the given developer has published c repositories that have each been starred at least c times. We use this metric to measure both the productivity and influence of a developer on GitHub.
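The h-index of stars defined above translates directly into a short computation; the example values are invented for illustration.

```python
# h-index of stars: the largest c such that the developer owns c repositories
# with at least c stars each.
def h_index_of_stars(star_counts):
    stars = sorted(star_counts, reverse=True)
    h = 0
    for i, s in enumerate(stars, start=1):
        if s >= i:
            h = i
        else:
            break
    return h

assert h_index_of_stars([10, 4, 3, 1, 0]) == 3   # three repositories with >= 3 stars each
```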
To examine the technical performance of repositories, we define the following measures:
Number of stars. We use the number of stars a repository has received to evaluate the popularity of a repository. A repository with many stars implies that many people show their interests in it [ 35 , 47 ].
Number of forks. The “forking” function on GitHub enables developers to create a copy of a repository as their personal repository and then they can make changes to the code freely. Similar to the number of stars discussed above, the number of forks a repository has received is another important indicator that a repository is popular [ 35 , 44 , 48 ].
Number of open issues. Issues can be used to track bugs, enhancements, or other requests. In cases where the project's problem was suspect, submitters and core members often engaged in extended discussions about the appropriateness of the code [ 49 , 50 ]. Repositories with more open issues receive more attention than those with fewer.
Lines of code changed per week (LOC changed ). This measure is defined as the average number of lines of code changed (the sum of additions and deletions) in all commits in a repository per week. It is a measure of outputs produced per unit time, which serves as a proxy for productivity [ 35 , 51 , 52 , 53 ].
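As an illustration, LOC changed per week can be computed from per-commit statistics as follows; the column names are assumptions, and weeks without commits are counted as zero changed lines, which is one reasonable reading of the definition.

```python
# Average lines of code changed per week for one repository.
import pandas as pd

def loc_changed_per_week(commits: pd.DataFrame) -> float:
    """commits: one row per commit with 'timestamp', 'additions', and 'deletions' columns."""
    loc = commits["additions"] + commits["deletions"]
    loc.index = pd.to_datetime(commits["timestamp"])
    weekly = loc.resample("W").sum()      # weeks without commits contribute 0 changed lines
    return float(weekly.mean())           # average LOC changed per week over the repository's history
```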
To accurately identify behavioral differences among different populations, we conduct statistical hypothesis testing on different groups.
First, we conduct Pearson’s chi-squared test [ 54 ] to examine if there are significant differences in the work rhythms among different groups (i.e., regions and collaboration roles) of projects or developers. The Pearson’s chi-squared test is commonly used for evaluating the significance of the association between two categories in sets of categorical data.
Second, we statistically validate if there are significant differences in the demographics and technical performance among different groups of software projects and developers. We compute the measures of each subject within the group and the measures of the population outside the group. Then, we apply the Mann–Whitney U test [ 55 ], which is commonly used to determine whether two independent samples are from populations with the same distribution.
The results of Pearson's chi-squared test and the Mann–Whitney U test are reported as p-values, where a smaller p-value indicates stronger evidence for rejecting the null hypothesis H 0 . A p-value below 0.05 indicates a significant difference between the two populations in terms of the selected measure. Cramer's V and Cliff's delta effect sizes are used to supplement the results of Pearson's chi-squared test and the Mann–Whitney U test, respectively.
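The group comparisons can be sketched as below, using scipy's Mann–Whitney U test and deriving Cliff's delta from the U statistic via delta = 2U/(n1 n2) − 1; the sample values are invented for illustration.

```python
# Compare a measure between one rhythm group and the rest of the population.
from scipy.stats import mannwhitneyu

def compare_groups(in_group, out_group):
    u, p = mannwhitneyu(in_group, out_group, alternative="two-sided")
    # Cliff's delta derived from the U statistic, ranging from -1 to 1.
    delta = 2 * u / (len(in_group) * len(out_group)) - 1
    return p, delta

p, d = compare_groups([120, 85, 60, 200, 15], [30, 22, 41, 18, 25, 9])
print(f"p = {p:.3f}, Cliff's delta = {d:.2f}")
```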
To investigate how developers experience and think of their work rhythms and productivity, we designed an online survey and sent it to developers in selected tech companies. The selected companies included a mix of large corporations and startups.
Our survey was reviewed and approved by the Institute of Science and Technology, Fudan University. Prior to the launch of the survey, we invited seven developers from different tech companies and did a pilot test. These participants completed the questionnaire and provided feedback, which we used to refine the survey. Next, we performed an undeclared pilot test involving 10 participants from selected companies in our dataset. We reviewed and discussed their responses to ensure that the questionnaire was free of major issues. After finalizing the survey, we distributed it online and asked the pilot participants to share the link to the survey with others. The survey had 1,516 views and received 92 responses from eligible respondents who identified their current job as software development. The survey questions are given in the appendix.
First, to validate our result on work rhythms, we asked survey participants about their required working hours and actual working hours on a typical work day. Participants were asked to provide both their required and actual start and end times of work, or to indicate that they have no required working hours.
Next, we asked participants about the time they spent on different work activities and programming themes. According to Meyer et al. [ 56 ], developers primarily identified coding-related tasks as productive, whereas activities such as attending meetings, reading, and writing emails were often considered as unproductive. To gain insight into productivity both during and outside office hours, we asked participants to indicate the percentage of time they spent on various work activities during these periods, including coding, studying, project planning, writing documents, contacting colleagues, meeting, social activities, and others. Participants could choose one among the following five options to indicate the percentage of time they spent on each work activity or programming theme: “less than 5%,” “between 5% and 20%,” “between 20% and 35%,” “between 35% and 50%,” and “more than 50%.” In addition, according to Meyer et al.’s [ 56 ] work, different types of programming tasks impact productivity differently. For instance, activities such as development and bug fixing were perceived as productive, whereas testing was considered as unproductive. We also asked participants about the percentage of time they spent on different programming themes in off-hours, using the same options as in the previous question. We asked participants to specify the detailed information if they had been involved in activities or programming theme other than those we listed.
Moreover, to understand whether developers believe extra working hours can contribute to productivity, we included a question asking whether extra working hours increase productivity. Participants were given the option to select either "agree," "neutral," or "disagree." Then, we cross-checked their ideas with their motivations for working overtime. Beckers et al. [ 57 ] proposed that the outcome of extra working hours is affected by motivation; highly motivated workers might have a more positive attitude towards extra working hours. To see how participants' perspectives on extra working hours differ with motivations, we included a multiple-choice question listing nine common reasons for working overtime. These options were derived from initial interviews with several developers, who explained why they worked overtime. Their reasons were used as initial options in pilot tests. During the pilot tests, participants were asked to provide additional reasons if theirs were not listed. We then reviewed their answers and adjusted the options to ensure that the given reasons covered all cases. Finally, we distilled nine reasons from their responses: (1) handling emergencies (such as application crashes), (2) meeting deadlines, (3) making up for the time wasted on programming-independent work activities during office hours, (4) taxi reimbursement (some companies covered taxi expenses within specific hours), (5) a good company environment (such as free snacks and air conditioning), (6) peer pressure (participants mentioned they stayed in the office after work because most of their colleagues did not leave), (7) company requirements, (8) enjoying coding in spare time, and (9) working for a bonus. One or more options could be selected. Participants could also specify their reasons if theirs were not given as options.
5.1.1. Work Rhythms of Developers
We apply clustering analysis to the commit behavior of the developers in our dataset. Four work rhythms are detected among the developers. We visualize the four detected work rhythms as heatmaps, as shown in Figures 3(a) , 3(b) , 3(c) , and 3(d) , with the x-axis representing the hours and the y-axis representing the days of the week. The color intensity of each time slot shows the aggregated commit frequency among developers, where a darker color indicates higher commit frequencies. The detected work rhythms exhibit unique characteristics. The 48 hours across weekdays and weekends are divided into four subsets, as shown in Table 2 . We observe the commit behavior in the subsets of hours and summarize the following characteristics:
Subset | Weekday | Weekend |
---|---|---|
1 | 9 a.m. to 5 p.m. | — |
2 | 7 p.m. to 12 a.m. (mid night) | 3 p.m. to 11 p.m. |
3 | — | 9 a.m. to 2 p.m. and 12 a.m. |
4 | 1 a.m. to 8 a.m. and 6 p.m. | 1 a.m. to 8 a.m. |
#1: Nine-to-five worker. As shown in Figure 3(a) , developers with work rhythm #1 concentrate on programming during regular office hours (9 a.m. to 5 p.m.) on weekdays. They submit code changes less frequently after work hours or on weekends.
#2: Flex timers. As shown in Figure 3(b) , the code submissions of developers with rhythm #2 are uniformly distributed on almost every hour on weekdays. Developers with this rhythm are likely to submit code changes at any time of the day and do not display fixed work and rest time.
#3: Overnight developers. As shown in Figure 3(c) , developers with rhythm #3 submit their code from 9 a.m. to 12 a.m. They also make code submissions on weekends, following a similar daily schedule as on weekdays, although the commit frequency on weekends is lower than that on weekdays.
#4: Off-hour developers. As shown in Figure 3(d) , the peak time of the code submissions of developers with rhythm #4 is weekday nights and weekends, instead of regular working hours on weekdays.
We also apply clustering analysis to the commit behavior of repositories. Three work rhythms are detected among the repositories in our dataset. Figures 4(a) and 4(b) present the temporal distributions of commit frequency for the identified rhythms. The 48 hours across weekdays and weekends are divided into three subsets, as shown in Table 3 . We summarize the features of the three identified rhythms as follows:
Subset | Weekday | Weekend |
---|---|---|
1 | 9 a.m. to 5 p.m. | — |
2 | 7 p.m. to 12 a.m. (midnight) | 9 a.m. to 12 a.m. (midnight) |
3 | 1 a.m. to 8 a.m. and 6 p.m. | 1 a.m. to 8 a.m. |
#1: Typical office hours. Repositories with work rhythm #1 adopt typical work time, usually from 9 a.m. to 5 p.m. on weekdays. Code changes are rarely submitted to these repositories on weekends.
#2: Slightly extended working hours. Repositories with rhythm #2 extend the typical work time to 6 p.m. on weekdays. Compared with repositories with rhythm #1, repositories with rhythm #2 usually have more code submissions on weekends.
#3: Working overnight and on weekends. Repositories with rhythm #3 endure longer working hours than the other two rhythms. Developers of these repositories work equally on weekdays and weekends, starting from 9 a.m. and continuing until midnight.
The percentages of developers and repositories in each detected work rhythm are shown in Figures 5(a) and 5(b) , respectively. Among the four work rhythms detected in the developer dataset, about two-thirds of the developers follow rhythm #1 (typical working hours), which conforms to Claes et al.'s [ 1 ] finding. Among the three work rhythms detected in the repository dataset, rhythm #1 covers half of the repositories, rhythm #2 takes up 40% of the repositories, and the remaining 10% follow rhythm #3.
Do work rhythms vary across different regions? We examine the work regions of the developers. The percentages of developers per rhythm in each region are shown in Figure 6 . Developers working for organizations in the United States and Europe mainly follow rhythm #1, whereas rhythms #3 and #4 are more prevalent among developers working for organizations in China or “multiple countries”. We divide developers into two groups according to their work regions: the United States and Europe as a group and China and “multiple countries” as another group. We apply chi-square test to check the frequency of the two groups in each of the four rhythms. We find a significant difference between the two groups of developers in terms of the four work rhythms ( p -value < 0.001, Cramer’s V = 0.325).
Is there a correlation between work rhythm and developer seniority? We investigate the account age of developers in each rhythm and perform Mann–Whitney U tests. Figure 7(a) shows the account ages of the developers for each work rhythm in box plots. Developers with rhythms #3 ( p -value < 0.001, Cliff’s delta d = 0.20) and #4 ( p -value = 0.004, d = 0.13) tend to have created their GitHub accounts earlier than those with other rhythms, which indicates that developers with rhythms #3 and #4 became engaged in software development earlier than those with the other two rhythms. Developers with rhythm #1 created their GitHub accounts later than others ( p -value < 0.001, d = −0.20).
Is there a correlation between work rhythm and project maturity? We investigate the repository age in each rhythm and perform Mann–Whitney U tests. As shown in Figure 7(b) , repositories with the three rhythms do not show a significant difference in repository age ( p -values > 0.05).
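The comparisons above pair a Mann–Whitney U test with Cliff's delta as the effect size. The following is a minimal sketch of that pairing for a single group-versus-rest comparison; the arrays are made-up placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs."""
    x, y = np.asarray(x), np.asarray(y)
    greater = sum((xi > y).sum() for xi in x)
    less = sum((xi < y).sum() for xi in x)
    return (greater - less) / (len(x) * len(y))

# Hypothetical account ages (years) inside vs. outside one rhythm.
in_group = np.array([6.1, 7.3, 5.8, 8.0, 6.9])
out_group = np.array([4.2, 5.1, 6.0, 3.9, 5.5])

u_stat, p_value = mannwhitneyu(in_group, out_group, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3g}, Cliff's delta = {cliffs_delta(in_group, out_group):.2f}")
```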
Do SHS developers have specific work rhythms? The percentages of developers in each rhythm among SHS developers and ordinary developers are shown in Figure 8 . There are more developers with rhythm #1 and fewer developers with rhythm #3 among ordinary developers than among SHS developers. We apply a chi-square test and find a significant difference between SHS and non-SHS developers in terms of rhythms #1 and #3 ( p -value = 0.006, Cramer’s V = 0.128). Compared with ordinary developers, SHS developers tend to be overnight developers rather than working fixed office hours.
Next, we examine the effects of work rhythms on various measures of technical performance. Figures 9(a) , 9(b) , and 9(c) present the performance of developers on the three measures. We perform Mann–Whitney U tests, and the results are shown in Table 4 . Each entry of the table is the ratio between the median value of the measure within the group and the median value outside the group. A value less than 1 indicates that developers with the selected rhythm have a smaller value on the chosen measure, and a value greater than 1 indicates the opposite. In addition, ∗ marks a significant difference with p -value ≤ 0.05, ∗∗ marks p -value ≤ 0.01, and ∗∗∗ marks p -value ≤ 0.001. As shown in Table 4 , developers with rhythms #3 and #4 have more followers (Cliff’s delta d = 0.30 and 0.16, respectively), receive more stars on their own repositories ( d = 0.228 and 0.158, respectively), and have higher h-indexes ( d = 0.239 and 0.169, respectively). In contrast, developers with rhythm #1 perform the worst on all three measures: average number of stars ( d = −0.235), number of followers ( d = −0.282), and h-index ( d = −0.243).
| Rhythm | Average number of stars | Number of followers | h-Index |
|---|---|---|---|
| #1 | 0.30 | 0.34 | 0.50 |
| #2 | 1.25 | 0.63 | 1.00 |
| #3 | 3.12 | 2.95 | 2.00 |
| #4 | 2.17 | 1.85 | 2.00 |
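To make the construction of Table 4 concrete, below is a minimal sketch of the within-group versus outside-group median ratio that each entry reports, assuming every developer carries a measure value and a rhythm label; the data and names are hypothetical.

```python
import numpy as np

def median_ratio(values, labels, rhythm):
    """Ratio of the median measure inside a rhythm to the median outside it."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    inside = np.median(values[labels == rhythm])
    outside = np.median(values[labels != rhythm])
    return inside / outside

# Hypothetical follower counts and rhythm labels for eight developers.
followers = [12, 45, 3, 90, 7, 33, 150, 5]
rhythms = [1, 3, 1, 3, 2, 4, 3, 1]
print(median_ratio(followers, rhythms, rhythm=3))  # > 1: rhythm #3 developers have more followers
```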
We also examine the effect of repositories’ work rhythms on technical performance and apply Mann–Whitney U tests. The results are shown in Figures 10(a) , 10(b) , 10(c) , and 10(d) and Table 5 . Repositories with rhythm #2 receive more stars ( d = 0.085) and have more forks ( d = 0.090) than those with the other two rhythms. Repositories with rhythm #3 receive more stars than others ( d = 0.151). As for the number of open issues, there is no significant difference among the three work rhythms.
| Rhythm | Number of stars | Number of forks | Number of open issues | LOC |
|---|---|---|---|---|
| #1 | 0.51 | 0.71 | 1.00 | 1.17 |
| #2 | 1.66 | 1.50 | 1.03 | 0.91 |
| #3 | 1.55 | 0.99 | 0.93 | 0.78 |
It is interesting that although repositories with rhythm #1 have larger LOC changed than those with the other two rhythms, their values on the other measures of technical performance, including stars ( d = −0.133) and forks ( d = −0.10), turn out to be lower. To discover the reason for this phenomenon, we further check the number of lines of code added and deleted per commit in each hour of the day. As shown in Figures 11(a) and 11(b) , during typical office hours, both the lines of code added and the lines deleted per commit submitted to repositories with rhythm #1 are larger than those with the other two rhythms. The commit sizes peak between 4 and 5 p.m., the largest of any hour of the day, suggesting a hypothesis that developers working on repositories with rhythm #1 may submit larger commits just before leaving the office to finish their workday on time. However, this practice might lead to lower code quality, necessitating deletions and rewrites the next day. As a result, these repositories have more frequent code changes but receive fewer stars and forks.
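The per-hour commit-size check described above can be sketched as a simple aggregation over a commit table; the column names and the three sample rows below are illustrative, not taken from the dataset.

```python
import pandas as pd

# Hypothetical commit records: timestamp plus lines added/deleted per commit.
commits = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-03-01 16:40", "2023-03-01 10:05", "2023-03-02 16:55"]),
    "lines_added": [420, 35, 510],
    "lines_deleted": [120, 10, 260],
})

commits["hour"] = commits["timestamp"].dt.hour
per_hour = commits.groupby("hour")[["lines_added", "lines_deleted"]].mean()
print(per_hour)  # average commit size for each hour of the day
```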
5.4.1. Required Working Hours vs. Actual Working Hours.
We ask participants about their companies’ required working hours and their actual working hours on a typical workday. As shown in Figure 12 , most participants reply that their companies require an 8-hour workday. However, they usually work longer hours than required.
Figure 13 presents the distribution of activities during office hours and off-hours. Coding occupies the majority of time in both periods. The rankings of time spent on different tasks are mostly consistent, except for meetings and studying: during office hours, meetings and studying rank third and sixth, respectively, whereas during off-hours, studying moves up to second and meetings drop to sixth. As shown in Figure 14 , the most common programming activity during off-hours is developing, followed by testing, bug fixing, and creating backups.
Apart from 25 participants (27.17%) who claim they do not work extra hours, 38 participants (41.30%) believe that additional working hours enhance productivity, 26 participants (28.26%) believe that additional work time does not boost productivity, and three participants (3.26%) are neutral.
We ask participants why they work overtime. Among all the options, “deadline” receives the most votes (33.3%). “Emergency” is the second most popular reason, with 32.3% of responses. In addition, 24.7% mention that they work overtime to make up for time wasted on programming-independent work activities during office hours, 19.4% say that their companies require extra working hours, 16.1% agree that they work overtime because of peer pressure, 15.1% claim that they work overtime because they enjoy coding in their spare time, 7.5% say that they stay in the office after work because their companies provide a good environment, and 6.5% mention that they work overtime because their companies provide taxi reimbursement. Only 1.1% cite the bonus that their companies offer for overtime work.
We cross-check their motivations and their views on the productivity of additional working hours. The results are shown in Figure 15 , in which the height of a rectangle represents the proportion of participants who agree with the option and a flow represents the proportion of participants who agree with both options it connects. According to the results, more respondents agree that extra working hours could increase productivity if they work overtime for emergencies (19 agree and 8 disagree), deadlines (18 agree and 10 disagree), making up for time wasted on programming-independent work activities (13 agree and 10 disagree), taxi reimbursement (4 agree and 2 disagree), or a good company environment (3 agree and 2 disagree). In contrast, fewer respondents agree with the idea if they work overtime because of company requirements (8 agree and 9 disagree), peer pressure (7 agree and 8 disagree), or a bonus (0 agree and 1 disagrees). Among the respondents who work overtime because they enjoy coding in their spare time, the two views are held in equal numbers (4 agree and 4 disagree).
6.1. Implications for Software Practice.
The purpose of this paper is to investigate the work rhythms in software development and their effects on technical performance. We identify four typical work rhythms in the developer dataset. The typical working hours (9 a.m. to 5 p.m. on weekdays) cover 64% of the developers in the dataset. The remaining three rhythms represent an aperiodic work rhythm, an overnight work rhythm, and an off-hour work rhythm, respectively. In addition, three work rhythms are detected among repositories in the dataset: one typical work rhythm covering half of the repositories and two different types of overtime work rhythms.
Work rhythms are correlated with demographics and collaboration roles. Work rhythms with moderately extended working hours are more popular among senior developers. The maturity of a repository does not decrease the chance that its developers are required to work extra hours. Developers who bridge collaboration groups include a higher proportion of “overnight developers” than others.
Work rhythms with a moderate amount of extended working hours appear to be associated with good technical performance. According to our results, projects and developers following work rhythms with moderate overtime (rhythms #3 and #4 for developers and rhythms #2 and #3 for repositories) turn out to have better technical performance than those following other rhythms. Projects and developers following fixed-hour work rhythms (rhythm #1 for both developers and repositories) show poorer technical performance. Developers who follow an aperiodic work rhythm (rhythm #2 for developers) do not present better performance than others.
Developers’ perspectives on productivity during extended working hours are influenced by their motivations for working overtime. They feel that extended working hours increase their productivity when the time for coding is insufficient due to unexpected circumstances (such as an approaching deadline) or when companies give clear incentives (such as reimbursing taxi fares), whereas fewer believe that extended working hours could increase productivity if they work overtime because the company requires it, for a bonus, or simply because their colleagues do. Tech companies and teams could benefit from practices such as not forcing members to work extra hours and providing employees with a better work environment and clear incentives.
As one of the first studies to reveal work rhythms in software development and their effects on technical performance, our work has a few limitations. First, the data analysis in our study is limited to public open-source projects hosted on GitHub. Therefore, our conclusions are specific to open-source projects and their contributors. Although our findings demonstrate notable distinctions between work rhythms, we cannot guarantee their broader applicability to the entire industry, as comprehensive data on a wider range of companies and closed-source projects would be necessary. We note that alternative platforms such as GitLab exist on which organizations also release their projects. In addition, while we aim to capture an authentic snapshot of developer activity in open-source projects by preserving the actual distribution of repositories across the companies, the variation in the number of repositories across these companies could introduce bias into the results. In future work, we plan to explore other data sources to validate and expand our findings.
Second, our quantitative analysis of work rhythms focuses primarily on commit activities. Analysis of a more comprehensive dataset could better reveal the rules of a field of research [ 58 ]. Other activities, such as meetings and document writing, also occupy developers’ working hours; therefore, the time spent on programming might not fully represent their work schedule. However, because programming is a major part of developers’ work, the temporal pattern of commits is a strong indicator of work time, and our findings could provide insights into developers’ working status. We also acknowledge that there might be a delay between the time a commit is made and the actual time a coding task is completed. However, because our analysis is based on aggregated commits rather than individual ones, the impact of such delays should be negligible.
Third, the metrics we use to measure technical performance are indirect. For developers, we use the average number of stars, the number of followers, and the h-index of stars as indicators of reputation. For repositories, we consider the number of stars, the number of forks, and the number of issues as proxies for user attention. More user attention and discussion mean that the repositories and developers are recognized by more people, which indicates good technical performance. In addition, we use the lines of code changed per week to measure code productivity. Although these measures are intuitively reasonable, they capture technical performance only partially. More metrics, such as code quality, should be considered to obtain a comprehensive understanding of technical performance.
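For readers unfamiliar with applying the h-index to repositories, the sketch below shows one common reading of an "h-index of stars": the largest h such that a developer owns at least h repositories with at least h stars each. This interpretation and the star counts are ours for illustration, not verified against the paper's exact definition.

```python
def h_index(star_counts):
    """Largest h such that at least h repositories have at least h stars each."""
    stars = sorted(star_counts, reverse=True)
    h = 0
    for i, s in enumerate(stars, start=1):
        if s >= i:
            h = i
        else:
            break
    return h

print(h_index([50, 12, 7, 3, 1]))  # -> 3: three repositories have at least 3 stars each
```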
In this paper, we aim to discover work rhythms in software development and investigate their effects on technical performance. We found four work rhythms among individual developers and three work rhythms among repositories in our dataset. The findings indicate that developers working for organizations in China or in multiple countries tend to follow long-hour work rhythms, whereas those working for organizations in the United States and Europe tend to follow the typical work rhythm. Regarding the effects of work rhythms on technical performance, we found that a moderate amount of overtime work is related to good technical performance, whereas fixed office hours appear to be associated with projects and developers that receive less attention. In addition, our survey study indicates that developers tend to work longer than their companies’ required working hours. A positive attitude towards overtime work is often linked to situations that require addressing unexpected issues, such as approaching deadlines, or to the presence of clear incentives.
For future work, we aim to delve deeper into the underlying mechanisms behind developers’ work. We wish to understand the underlying causes for different working rhythms by considering the interplay between work rhythms and other factors, such as technical roles and collaboration patterns. Furthermore, we plan to investigate the causal relationship between work rhythms and technical performance by conducting experimentation and incremental studies.
The authors declare that they have no conflicts of interest.
This work has been sponsored by National Natural Science Foundation of China (nos. 62072115 and 62102094), Shanghai Science and Technology Innovation Action Plan Project (no. 22510713600), European Union’s Horizon 2020 Research and Innovation Programme under the grant agreement no. 101021808, and Marie Skłodowska Curie grant agreement no. 956090.
What is the country of your company?
How long have you been employed at your current company?
What is the type of your current job? (e.g., development, testing, product management, etc.)
What are your company’s designated working hours for workdays? (Please fill in the start and end time in 24-hr format.)
What are your actual working hours for workdays? (Please fill in the start and end time in 24-hr format.)
I work on both Saturday and Sunday every weekend
I work on either Saturday or Sunday every weekend
I sometimes work on weekends (less than once a week, please specify how many days per month on average)
I never work on weekends
Other (please specify)
Most of my colleagues work overtime.
My company provides benefits for overtime workers.
I enjoy working overtime.
I work during holidays.
I work more before/after holidays.
Project planning
Reading/writing documents and preparing reports
Handling other work tasks, e.g., reading/writing emails, etc.
Learning software, tools, skills, etc.
Business entertainment, e.g., hosting colleagues, etc.
Leisure activities
Development
I do not work overtime.
Handling emergencies (such as application crashes).
Making up for the time wasted on programming-independent work activities during office hours.
Company requirements.
Peer pressure (most of my colleagues have not left).
Enjoying coding in spare time.
The company provides a good environment, e.g., free snacks and air conditioning.
The company provides taxi reimbursements within specific hours.
Working for bonus.
Other. (Please specify the reason.)
Agree—overall, working overtime increases my work output.
Disagree—overtime work does not compensate for my extra working hours.
Data availability.
As the data used in this work are publicly visible and accessible on GitHub, researchers interested in accessing the data can retrieve it directly from the GitHub platform with its official API. To ensure transparency and facilitate further research, the list of organizations and repositories in our dataset is publicly available on GitHub: https://github.com/jiayunz/Work-Rhythms-in-Software-Development . Researchers can refer to this repository to gain access to the specific projects and repositories included in the dataset. For any inquiries or requests related to the dataset, researchers can contact the corresponding author through email.
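Since the statement points to the official GitHub API for retrieving the underlying data, the following is a minimal sketch of how such a retrieval could look using the public REST endpoints for organization repositories and repository commits; the organization name is a placeholder, only the first page of results is fetched, and an access token would be needed in practice to avoid rate limits.

```python
import requests

API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}  # add {"Authorization": "Bearer <token>"} to raise rate limits

def list_org_repos(org):
    """Return full names of an organization's public repositories (first page only)."""
    resp = requests.get(f"{API}/orgs/{org}/repos", headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return [repo["full_name"] for repo in resp.json()]

def list_commit_dates(full_name):
    """Return author timestamps of a repository's commits (first page only)."""
    resp = requests.get(f"{API}/repos/{full_name}/commits", headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return [c["commit"]["author"]["date"] for c in resp.json()]

# Example with a placeholder organization name:
# for name in list_org_repos("example-org"):
#     print(name, len(list_commit_dates(name)))
```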